Chairperson’s Corner by Douglas W. Brooks _____________________ 2
Update From the International Committee of ERMAP and INARM by David Ingram _________________________ 4
Operational Risk Management and Business Continuity Planning by Camilo Salazar _______________________ 6
A Report on the CAS COTOR Risk Premium Project by J. David Cummins, Richard A. Derrig and Richard D. Phillips ___________________ 11
Highlights from the 5th Annual ERM Symposium by Valentina A. Isakina and Max J. Rudolph ___ 17
Second Annual Scientific Track Continues to Push the Envelope at the ERM Symposium by Steven C. Siegel ______________________ 18
Information Conveyed in Hiring Announcements of Senior Executives Overseeing Enterprise-Wide Risk Management Process by Mark Beasley, Don Pagach, Richard Warr ___ 21
Multivariate Operational Risk: Dependence Modeling with Lévy Copulas by Klaus Böcker and Claudia Klüppelberg _____ 26
Capital Allocation by Percentile Layer by Neil Bodoff _________________________ 32
ERMAP Annual Meeting 2007 by Todd Henderson _____________________ 38
Risk Management
September 2007, Issue No. 11
Risk Management Section: a joint section of the Society of Actuaries, Casualty Actuarial Society and Canadian Institute of Actuaries
Published in Schaumburg, Ill., by the Society of Actuaries
What’s in a Name (Reprise)?
by Douglas W. Brooks

A year ago, the Joint Risk Management Section of the Society of Actuaries and the Casualty Actuarial Society (as it then was) looked at the possibility of a new nickname for the section, finding the foregoing somewhat clumsy and not given to a catchy acronym. Dave Ingram’s Chairperson’s Corner article in the July 2006 issue, “What’s in a (Nick) Name,” discusses this. The name chosen at the time was Enterprise Risk Management Actuarial Professionals (ERMAP). For various reasons, the name was not adopted at the time, in part because there were discussions about the new risk management designation, and concern over potential confusion between the designation and the name of the section. Since then, the section has added the Canadian Institute of Actuaries as a sponsoring organization, making the official name even more of a mouthful. As well, the new designation, Chartered Enterprise Risk Analyst (CERA), has been developed and announced, and that name is distinct enough that the use of ERMAP as a moniker for the section is not a problem. Therefore, we are pleased to announce that the section will now be known as ERMAP (for all but the most official purposes).
Does a name make a difference? Juliet’s point, in Romeo and Juliet, is that a name should not matter: “What’s in a name? That which we call a rose / By any other name would smell as sweet.” The play goes on to demonstrate that, unfortunately in that case, names and labels do matter, at least to many people. Names are an important part of communication, and communication is a critical success factor of any initiative involving groups of people. The name of the section is not likely to make a significant difference in our ability to communicate our goals, but it can nevertheless help give a group an identity. It also serves as a reminder of the broader importance of communication, and of tools for improving communication and, as a result, the effectiveness of organizations.

A more significant attempt to develop a tool for communication in risk management is the paper recently published by the CAS-CIA-SOA Risk Management Section Research Team on risk management terminology. The study looks at risk terminology across industries and was developed by a research team from the University of Wisconsin-Madison. The motivation for the research was the desire to improve communications among risk professionals both within and across organizations. The study can be found at http://www.soa.org/research/risk-management/risk-mngt-terms.aspx.

Why is common terminology important?

It improves our ability to communicate about risk management without having to clarify meaning at each stage, or, more usually, assuming that meaning is clear without validating it, only to find out at a later stage that different parties in fact understood things quite differently. The results can range from the inefficient (gaps or duplication of work) to the dangerous (two parties each assume the other is taking care of a particular threat when in fact neither is doing so).
Common terminology is one of the building blocks of a strong ERM framework. Other components include a common currency (i.e., measurement basis), a strong culture and aligned incentives. A common terminology enables communication among risk managers within an organization. Different professions, and even different areas within the same profession, often develop their own language for dealing with their specific issues. Often, significant insights can be gained by learning from the experiences of others in dealing with situations whose underlying characteristics are similar to those we are grappling with. Language differences, however, can make it difficult to get at the common underlying issues: things may sound like two entirely different problems when in fact they are very similar!

Doug W. Brooks, FSA, CERA, FCIA, MAAA, is senior vice president and chief financial officer at Equitable Life Insurance Company of Canada.
Common terminology also assists significantly in taking a true enterprise-wide view of an organization by aggregating similar risks. If terminology is different, it is much harder to identify when a risk should be aggregated with another. A common basis of risk classification, applied consistently across an organization, on the other hand, facilitates both aggregation and understanding.

Communication with business managers, and finding ways to make business leaders aware of risk management tools, methods and measures, is also critical. Common language, including common terminology for risks and a common risk classification, is one of the tools that can assist in communicating with business managers.

The actuarial profession has been criticized for being less than fully effective in communication. This is an area that we must continue to work at proactively, and in fact an area in which we can make a significant contribution to the development of effective risk management practices. ✦
International Committee Update
Update from the International Committee of ERMAP & INARM
by David Ingram
The International Committee is off to a roaring start, both because of outstanding support from ERMAP member volunteers and because of very warm international responses. The activities have focused in three areas: newsletter translations, network building and joint projects.

Thanks to the Canadian Institute of Actuaries (CIA), the newsletter translation effort started with one non-English edition, the French edition. This was produced for the French-speaking members of the CIA, and it has also been circulated to actuaries in France. In addition, Ken Seng Tan, the editor of the newsletter, has been working with the China Institute of Actuarial Science, Central University of Finance and Economics, to produce an edition in Mandarin Chinese. That edition should be available in September and consists of selected articles from the past year that are thought to be of high interest to non-North American actuaries working in risk management. A third project is a Spanish translation of summaries of the articles from the newsletters. That translation is being undertaken by the Mexican College of Actuaries. We are in discussion with actuaries in other parts of the world about possible translations of the international edition or the summaries into other languages.

Our efforts to form affiliations with other actuaries around the world have had such an enthusiastic response that we decided to give the effort a name. The International Network of Actuarial Risk Managers (INARM) now has affiliates from seven countries (Australia, China, Germany, Hong Kong, Korea, Mexico, United Kingdom), plus Canada and the United States via ERMAP, and we are in active discussion with a dozen more. Representatives for some of these groups join our monthly working group calls, and others interact through an ERMAP volunteer who acts as the ambassador.

We are positioning this network similarly to a section within the SOA: that is, as a bottom-up, membership-driven group of volunteer practitioners and other interested parties. We hope to augment the IAA Enterprise and Financial Risk Committee (EFIRC) and are working toward a statement of understanding regarding the relationship. We expect to be able to support IAA activities in risk management and, in addition, to pursue member-initiated projects. We hope that this will work just as SOA sections work with the SOA and the AAA, recruiting volunteers for some projects and taking complete responsibility for others.

We have started two projects already. The first is a compilation of information about the status of regulatory and actuarial roles regarding ERM throughout the world. This project is being undertaken for the IAA EFIRC and is being led by Ian Laughlin of Australia. The second is the development of a beginner’s guide to ERM, in the form of an FAQ. This project is being conducted via a Web-based tool and is led by Geraldine Kaye of the U.K. We are hoping to get participation from as many different countries as possible in both of these projects, whether network affiliates or not.
This is a truly amazing level of activity. However, there is more that could be done. We could use more volunteers to act as ambassadors, and volunteers to support projects and future activities. We expect that future projects will be developed from member and IAA requests, so if you have any suggestions for future projects, we would be happy to have those ideas as well. Please note that our intention with the network is to promote risk management developments as our first priority. There is not a requirement that volunteers have an international job or work for an international company. Volunteers who are interested in risk management and who would be willing to cooperate with practitioners and learners in other parts of the world are what we are looking for. At this point in time, everyone seems to agree that risk management is a borderless discipline. There is not a unique U.S. or Canadian or European or Chinese or Japanese version of risk management. Through INARM, we hope to continue to develop the unique actuarial contribution to risk management. The borderless condition of risk management allows us to work together to use all of our collective skills, knowledge and experiences to build up the actuarial contribution. To volunteer, [email protected] or [email protected].

David Ingram, FSA, CERA, MAAA, is director of enterprise risk management with Standard & Poor’s in New York, N.Y. He can be reached at david_ingram@standardandpoors.com.
Finally, we realize that not everyone is able to volunteer. We would like to invite anyone who would like to follow our efforts on a real-time basis to join an e-mail group that will get at least monthly updates on INARM activities. This can be done by connecting to the SOA Web site page http://www.soa.org/news-and-publications/listservs/list-public-listservs.aspx. There you can sign up for the e-mail group. ✦
Business Continuity Planning
Operational Risk Management and Business Continuity Planning
by Camilo Salazar

Camilo Salazar, ASA, MAAA, is a senior consultant with Milliman, Inc. He can be reached at [email protected].
Hurricane Katrina came ashore on Aug. 29, 2005, wreaking havoc on the Gulf States of Louisiana, Alabama, and Mississippi. The huge storm and its destructive power shone a spotlight on the critical relationship between operational risk management and business continuity planning.

Hurricane Katrina was a devastating natural disaster. Its effects (physical, emotional, and political) are still being felt almost two years later. The business disruption experienced by many companies in the area was compounded by the personal hardships of their employees.

Although no two disasters are precisely alike, all present a similar core of critical issues. These include the challenges that a business is likely to encounter during the crisis, what a business continuity plan can and cannot do, and how individuals typically respond, both personally and professionally.
The ultimate business interruption

Hurricane Katrina exposed not only the frailty of the people, but also the confusion and lack of leadership at many levels in government. It forced many companies to change how they do business and reconsider how they work internally.

Hurricane Katrina was three correlated events, one after another, occurring within a 72-hour window:
1. On Sunday, Aug. 28, at 10:00 a.m., anticipating the storm, a total and mandatory evacuation order of the entire city of New Orleans was issued.

2. On Monday morning, Aug. 29, Hurricane Katrina made landfall as a Category III hurricane with sustained winds of 125 mph and storm surges up to 25 feet high.

3. On Monday evening, three Lake Pontchartrain levees in New Orleans began to give way; at peak flooding, 80 percent of the city was under water, in some places 20 to 25 feet deep by Tuesday morning.

The combination of these three events wrapped into one changed everything. During the first few days, there was no information coming from anyone at the local, state or federal government as to when or how things would begin to return to normal. There was simply no similar precedent, and no clear plan for where to start recovery operations.
As the disaster unfolded for many companies based in the New Orleans area, their management teams began to grasp that returning to normal operations from the home office could not happen soon and would not be easy. The challenge presented itself starkly: how to operate the enterprise without a home office for an extended period of time, with significantly reduced staff, all of whom were themselves displaced and experiencing severe personal hardship? A business continuity plan designed around the concept of a short-term evacuation would not sustain the company for more than a few weeks. Companies that had planned only for such short-term events needed to change their approach and plan for a much longer “return to normalcy.”

For individual employees, the situation looked bleak. There might not be a home to return to. How can you live or work out of a cramped motel for weeks or months, with the kids and maybe extra relatives or even pets sharing the space, when you never intended to stay there for more than a few days? What about work and getting a paycheck? Where will the kids go to school? These questions were increasingly difficult to answer as it became clear that the displacement would go on indefinitely.
How companies respond to a business interruption

Any catastrophic event affects different people in different ways. The same can be said about businesses and companies, depending on the type of business, how it operates, its size, where it is located, and other factors.

From the perspective of an insurance company, the lessons from Hurricane Katrina provide a reference point for planning, preparation and recovery. These lessons can be divided into internally and externally focused management decisions:
• Internal: Make sure you can continue to operate and serve your clients. Internal management is directed at employees and the processes needed to keep the business whole. Internal activities relate to being able to process new and existing business in terms of handling new applications, premiums, claims, paying commissions, investments, accounting, and actuarial matters.

• External: Make sure the business keeps coming in. External management extends to vendors, suppliers and other organizations that are crucial to a business’s ability to generate revenue. External activities are driven by the need to make sure the company can continue to receive new business and is able to serve its existing clients.

A consistent and delicate balance of both is needed to ensure the client continues to be served. Focusing mostly internally might result in employees who are capable and ready, but unable to perform because of the lack of business coming in. Likewise, focusing mostly externally will leave critical weaknesses internally that eventually will disrupt the business coming in, as client service deteriorates and new business goes elsewhere.
Hurricanes and the Gulf region
In the Gulf region, when people think of disasters, they think of hurricanes. As such, the planning at home typically includes three stages:

• Evacuate for two or three days to a motel, to a relative’s or friend’s home, or to some other location out of the hurricane’s path.
• Return home; clean up the yard and repair the house as needed.
• Go back to work; the kids go back to school.

At work, the approach is typically more sophisticated. In one particular instance, the business continuity plan was based on a three-day regressive countdown approach:

• Three days before a potential impact: Begin to make basic preparations to evacuate (making tape backups, shipping data to an off-site location, preparing basic office facilities off-site).
• Two days before potential impact: Start preparing essential personnel to leave the city.
• One day before potential impact: Begin to evacuate.

A successful business continuity plan

A disaster can be externally driven, such as a hurricane, an earthquake, or a terrorist act; or it can be internally driven, such as a fire, a system failure, or a disgruntled employee who does something to damage the organization.

Any basic business continuity plan must address four essential elements:
• Operating infrastructure: A remote facility to operate the business. Such a facility will likely have scaled-down capabilities (using reduced staff); it requires well-documented processes, policies and procedures, including accounting processes that can document extraordinary expenditures during the disaster.

• Information technology infrastructure: Redundant networks with multiple points of access. Data should be regularly and safely backed up and stored remotely.

• Human resources: Regular employee training to establish awareness of how to respond, at least in the first few hours of a disaster. These first few hours are critical.

• Executive participation: Business continuity cannot be merely an operational matter; it is a strategic concern that should be discussed at the executive level. Senior decision-makers need to know where their revenue-generating capability is most vulnerable and should plan to address these vulnerabilities.
The plan can also be organized along functional lines, such as operations and administration, actuarial, accounting and investments, marketing, sales, and distribution. Regardless of how the plan is organized, there are internally driven and externally driven issues that need to be considered. This brings us to some considerations with regard to risk management and business continuity planning:

• Do not think it cannot happen to you or your organization. A company can certainly take steps to minimize internal sources of a potential disaster, but it cannot control external events.

• Do not dismiss planning under the pretext that no amount of planning is perfect. Although every disaster is different and each situation is quite fluid, a complete lack of planning, even for basic and obvious steps, could lead to the collapse of the company.
Key ingredients of a successful business continuity plan

The essential components of any business continuity plan, regardless of the industry or the circumstances, must address three things that could destroy or seriously damage a company. These are:

1. Loss of intellectual capital. Intellectual capital is the collective know-how that defines the company. This capital resides in the data, in the systems, in the documented processes, and in the minds of employees.

2. Loss of human capital. Human capital consists of employees as well as all those who do business with the company: those essential to maintaining the organization’s ability to generate revenue. This includes external relationships with producers, providers, regulators, and rating agencies.

3. Leadership and communication. Leadership and communication make a business continuity plan work during and in the aftermath of a disaster. Communication with those affected, both internally and externally, is crucial to the resiliency of the business.
Internally, most employees are likely to be distraught during a major crisis, experiencing loss and a lack of direction. They fear not only for their potential personal losses, but also for their jobs.

Senior management has some sense of control as it stays connected and makes decisions that affect the organization; but for most employees, as was the case during Hurricane Katrina, there is a sense of isolation because they have been displaced from their homes and jobs.

Employees need to hear from senior management in order to get a sense of direction and belonging, and the sooner they hear from business leaders, the better. Employees are ready and willing to do whatever the company asks of them that can validate their sense of value to the company and give them a sense of belonging to a greater effort, of being part of a process and a team.

Externally, producers and providers also need to hear from senior management to understand what is really happening and whether or not it makes sense to continue doing business with the company. If senior management does not talk to them promptly, the competitors will.

A business continuity plan needs to address some basic elements aimed at recovering and protecting the intellectual and human capital of the company, and examine how to manage leadership and the communication process during a crisis.

With regard to data and other vital information, these basic elements include things such as redundancy in systems and accessibility to them from various points, as well as the alternative of outsourcing certain functions, such as systems administration or valuation processes, especially for a small or medium-sized company. Business processes need to be documented, maintained in more than one medium, and accessible from more than one location.

With regard to human capital, the best way to ensure business continuity is to maintain connections among people. When Hurricane Katrina hit, many companies lost their e-mail systems for several days. Suddenly, they could not reach or communicate with their employees. Likewise, employees could not communicate with their employers to find out what was happening.

One way to preserve these connections in the event of a disaster is to print wallet-sized cards for all employees with the following information, to be used following a business interruption:

• A company Web site, which can be activated during an event and which will provide information and updates to employees.

• A telephone conference call process, with a different password for each functional area. A conference call can be held every day at a specific time following a disaster declaration. Employees can dial in and obtain and exchange information, and thus feel connected with their peers.

Hurricane Katrina was unique in that the employees also suffered personal hardship on a large scale, so companies had to deal not only with how to facilitate a working environment in alternative office space, but also with how to help employees and their families with housing needs.

Finally, there is the question of leadership. Senior management needs to lead and provide examples. Managers should make decisions based on the best facts available, and change
them if the facts change. Senior management needs to be seen and heard.

As for producers, providers and anyone the company does business with externally, senior management needs to communicate with them, in person if possible, address their concerns, communicate the desire to continue doing business together, and extract a commitment to continue the relationship. External constituents include regulators and rating agencies, who may need some objective explanation of how management is addressing the crisis.

In terms of communications, it is best to control the message whenever possible by providing timely, frequent updates. This extends not only to customers but also to the media. In certain instances, effective communication may require the company to dispel the image the media has created of the disaster in general (by rational extrapolation, people think a company is likely affected, regardless of how the media has portrayed the wider event). An objective, simple, and clear message needs to be consistently delivered when addressing outside contacts.

Communication strategies must also be mindful of the potential for limited access to the usual means of communication. Other available means of communication need to be used: newspaper and radio ads, e-mails, conference calls, business contacts, and even connections with board members.
Coming home
Nothing lasts forever. Eventually the crisis begins to settle down, and the nature of the crisis changes. In the case of Hurricane Katrina, after three months of operating in remote locations, the challenge for many companies was how to start migrating back to the home office when New Orleans began to reopen. But which functions should they migrate back, and in what order should they do it?

This challenge was not so simple to address. After three months, many employees had “settled” in alternative locations and were not able or willing to move back or commute to the city. Many of their children had been enrolled in new local schools, and New Orleans schools were not going to reopen anytime soon. For many employees, their homes in New Orleans were destroyed or severely damaged, so they had no place to live, even if they did want to go back to work in the city.

Still, over the following months, local businesses started to return and the city began to function with some sense of normalcy. For the companies and businesses in the area, both small and large, Hurricane Katrina was an event that changed them in significant ways. Hopefully, the lessons discussed here can be applied to any business anywhere and can help strengthen a company’s operational risk management practices for the future. ✦
The Risk Premium Project (RPP) represented the most extensive, thorough and up-to-date analysis of the theory and empirics of risk assessment for property-casualty insurance through 2000. The Project began as a response by industry and academic researchers to a request for proposals issued by the Committee on the Theory of Risk (COTOR) of the Casualty Actuarial Society (CAS), due April 1, 1999.¹ At the time, the discounting of loss reserves was an important topic of debate among casualty actuaries, and COTOR was interested in providing the members of the CAS with a review of the seemingly disparate academic and actuarial literature, with the express hope of revealing appropriate discount rates for liabilities. The members of the RPP proposed to address the topic using a three-phased approach:

Phase I: Provide a compilation of the most relevant academic and actuarial literature on risk assessment for the prior 20 years, approximately 1980-2000.

Phase II: Discuss the equilibrium pricing of insurance risk in light of the Phase I literature.

Phase III: Propose empirical projects to quantify some of the Phase II theoretical conclusions.

COTOR accepted the RPP proposal on Aug. 13, 1999, and the researchers presented their final report on Phases I and II to COTOR on June 30, 2000.²

Phase I produced an annotated bibliography of 138 items, consisting of 14 referenced books and 124 articles and papers from 37 financial and actuarial publications, including the Proceedings of the Casualty Actuarial Society, Journal of Finance, Journal of Portfolio Management, the ASTIN Bulletin, and National Bureau of Economic Research working papers. The bibliography is searchable by author, title and keyword online at www.casact.org/cotor/index.cfm?fa=rpp. The articles and papers are separated into themes that cover general finance, asset pricing, insurance risk, surplus allocation, the history of applications in finance and insurance, and some miscellaneous topics.

The abstract of the Phase II report is as follows:

This report summarizes the authors' review of the actuarial and finance literature on the subject of risk adjustments for discounting liabilities in property-liability insurance. The authors find that the actuarial and financial views of risk priced in the market are converging: systematic or non-diversifiable risk still plays a central role in equilibrium pricing, but non-systematic costs arising from market frictions such as taxes and financial risk management also contribute to market valuations. Recent advances in risk assessment and capital allocation techniques are noted. Several empirical follow-up projects are identified.
A Report on the CAS COTOR Risk Premium Project
by J. David Cummins, Richard A. Derrig and Richard D. Phillips
¹ There were four members of the RPP. The academic researchers included J. David Cummins and Richard D. Phillips. The industry researchers were Robert P. Butsic from Fireman's Fund and Richard A. Derrig who, at the time of the project, was with the Automobile Insurers Bureau of Massachusetts. Both industry members have recently retired, while the academic members remain with their universities.

² The report is available on the CAS Web site at www.casact.org/cotor/index.cfm?fa=rpp. Alternatively, you can search the CAS Web site using the key words Risk Premium Project.
The specific conclusions the researchers reached included:

(1) Although actuaries have long argued that non-systematic (non-market) risk plays a role in insurance pricing, financial economists have recently developed various theories that provide sound justification for this conclusion.

(2) Systematic risk plays a role in valuing liabilities, either because the loss cash flows are contemporaneously correlated with market-wide returns or because unexpected changes in the interest rate used to discount long-term liabilities are correlated across all future cash flows in the economy. This latter effect is expected to be a more important factor for longer-duration insurance liabilities.³

(3) Returns to financial assets cannot be adequately explained by the Capital Asset Pricing Model (CAPM) beta. Additional factors have been identified that significantly enhance the explanatory power of the models in general. Unfortunately, no such research focuses on insurance company returns.

(4) Equity capital can be allocated in a theoretically consistent way. Unfortunately, no such research focusing on actual insurance companies exists.

(5) The issue of insurer default should be recognized in pricing.⁴
The Phase II report concluded with four specific suggestions for follow-up Phase III empirical research that were focused on two broad areas of inquiry. The proposed areas of inquiry included studies to investigate the relevance of multi-factor asset pricing models for the property-casualty insurance industry and to empirically investigate the role of capital allocation in insurance pricing given the recent theoretical advances in the literature. The projects were recommended based upon the observations in the report that recently developed methods, such as the Fama-French three-factor extension of CAPM and the Myers-Read capital allocation formulas, allowed for new empirical estimates of the cost and allocation of capital, both lacking for the property-casualty industry as a whole.

Ultimately, two of the four proposed projects were selected to be funded: one on the cost of equity capital for insurers by line of insurance and one on the allocation of capital for insurers.5 The result of the cost of capital study is a peer-reviewed paper that appeared in the Journal of Risk and Insurance (Cummins and Phillips, 2005). The paper was recently awarded the CAS prize for the best paper of interest to CAS members in the 2005 volume of the Journal of Risk and Insurance. A session on the paper will be scheduled for the joint CAS/ASTIN meetings in June 2007 in Orlando, Fla. The second Phase III study has resulted in a working paper (Cummins, Lin, and Phillips, 2006). Both papers are available on the same CAS/RPP Web site as the Phase I and II report. The principal results of those two studies are discussed next.
3 As an example of the latter effect, the Phase II report cites research that shows supposedly “riskless” Treasury bonds have positive betas: 0.14 for intermediate maturity bonds and 0.42 for long-term maturity bonds (see Cornell, 1999).
4 Insurer default is generally recognized as a pay-as-you-go system through guaranty funds.
5 The two studies were jointly funded by the Casualty Actuarial Society and the Insurance Research Council.
Richard A. Derrig, Ph.D., is now president of OPAL Consulting LLC and a Visiting Scholar at the Wharton School. He can be reached at [email protected].

J. David Cummins, Ph.D., is the Joseph E. Boettner Professor of Risk Management, Insurance, and Financial Institutions at Temple University's Fox School of Business. He can be reached

Richard D. Phillips, Ph.D., is the Bruce A. Palmer Professor of Risk Management and Insurance and Department Chair at Georgia State University. He can be reached
Cost of Capital
The study produces CAPM and Fama-French cost of capital estimates for a sample of 117 companies writing property-casualty insurance over the sample period 1997-2000. Cost of capital is estimated for each year of the sample period based on 60 monthly observations. The Fama-French model augments the CAPM market systematic risk factor with two additional factors, for firm size and financial distress, respectively. The size factor is based on the firm's market capitalization (number of shares multiplied by share price) and the financial distress factor is represented by the ratio of the book value of equity to the market value of equity.6 In addition, the cost of capital is estimated specifically for the property-casualty, automobile, and workers' compensation insurance lines of business using both the full information industry beta (FIIB) and sum beta methodologies. These methods are fully explained in Cummins and Phillips (2005).7 The cost of capital estimates are presented in several tables in the article. The results of the study demonstrate the statistical inadequacy of the single-factor CAPM in estimating the cost of capital for property-casualty insurers; that is, the study demonstrates the importance of including the Fama-French adjustments for size and financial distress when estimating the cost of capital for property-casualty insurers.8 In particular, the CAPM tends to significantly under-estimate the cost of capital for firms in this industry.
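As a minimal sketch of how such a three-factor cost of capital is assembled (risk-free rate plus each factor beta times its risk premium), consider the following; all numeric inputs are hypothetical illustrations, not the study's estimates:

```python
def fama_french_cost_of_capital(rf, betas, premiums):
    """Cost of equity as the risk-free rate plus each factor beta times its
    market risk premium: market, size, and financial distress factors."""
    return rf + sum(b * p for b, p in zip(betas, premiums))

# Hypothetical inputs: a 3% 30-day T-bill rate, betas with magnitudes
# similar to those reported for P&C insurers, illustrative factor premiums.
cost = fama_french_cost_of_capital(
    rf=0.03,
    betas=[1.0, 1.6, 0.95],           # market, size, financial distress
    premiums=[0.08, 0.0235, 0.0385],  # corresponding factor risk premiums
)
```

With these illustrative inputs the size and distress terms together add roughly 7 percentage points over the CAPM-only estimate, which is the qualitative point of the study.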
Although there is considerable empirical evidence supporting the use of the financial distress factor in asset pricing, researchers have yet to agree on the rationale for the presence of this effect. The financial distress effect is also often called the value effect because the ratio of book-to-market equity is often used to identify so-called “value” stocks. Theoretically, the literature on the value effect seems to have split into two camps: (1) the “rationalist” camp, which argues that the value factor is a rational pricing factor consistent with Merton's intertemporal capital asset pricing model (ICAPM) or Ross' arbitrage pricing theory (APT); and (2) the “behavioralist” camp, which argues that the value factor may be a behavioral effect reflecting irrational investor behavior (e.g., Lakonishok, Schleifer and Vishny, JF 1994).
Principal Findings
(1) Sum betas are larger than raw betas
As expected, sum beta estimates are consistently larger than ordinary beta coefficients. For the sample as a whole, the raw beta averaged 0.677 versus the sum beta 0.836, or a 23 percent larger equity beta. The reason for the increase in averages is the dominance of smaller insurers with relatively infrequent trading (which sum beta corrects) in the 117 company sample (see Table 2 of Cummins and Phillips (2005)).
(2) The Fama-French market systematic risk beta is about 1.0 for P&C insurers
The overall market beta for P&C insurers, with (1.04) or without (0.98) the sum beta correction, is about the market average of 1.0. This result indicates that the low raw beta average (0.67) arises primarily from the omitted variables problem in the single simple factor regression identified 10 years earlier by financial researchers,9 namely size and financial distress (see Table 3 of Cummins and Phillips (2005)).

6 The Fama-French cost of capital is obtained by adding the risk-free rate, usually the 30-day Treasury bill rate, to the market systematic risk beta multiplied by the market risk premium for systematic risk, plus the size beta multiplied by the market risk premium for size, plus the financial distress beta multiplied by the market risk premium for financial distress.
7 The current status of the asset pricing literature is reviewed in Fama and French (2004).
8 Additional specifications of multifactor models specific to insurer returns are discussed in the Recent Developments section here.
(3) The FIIB market systematic risk beta for auto insurance averages 0.92 based on unweighted regressions but averages 0.64 when the regressions are weighted by market values.
Based on equally weighted regressions, the market systematic risk betas are about the same for automobile insurance (0.92) and workers' compensation (0.86). However, based on market value weighted regressions, the market systematic risk beta for auto insurance (0.64) is significantly different from the beta for workers' compensation (0.882) (see Table 8 of Cummins and Phillips (2005)). Because the market value weighted regressions give greater weight to larger insurers, the results provide clear evidence that systematic risk betas vary by firm size.
(4) The FIIB Fama-French market systematic risk beta with the sum beta correction for auto insurance is about 1.0 based on market value weighted regressions.
Applying all methodologies to estimate the underlying market equity beta yields a market weighted average of 1.031 and a panel estimate of 0.965, neither significantly different from 1.0, for automobile insurance. The contrast with conclusion (3) indicates a strong case for using the sum beta adjustment (see Table 9 of Cummins and Phillips (2005)).
(5) The size factor in the FIIB Fama-French estimation is significantly different from zero at about 1.6.
The FIIB Fama-French three-factor estimates in Table 9 show size betas for auto insurance of 1.686. Applying a size beta of 1.6 to the long-term excess market premium for size of 2.35 percent yields an average size adjustment of about 3.8 percent. Ibbotson (2006, p. 31) graphically displays the difference in the realized return distributions between large company stocks and small company stocks in general. Small stocks have larger mean returns and a larger variance of returns. Of course, the notion that a (non-systematic) small stock premium exists has been known since at least the 1980s. The Cummins-Phillips study extends that result specifically to the insurance industry as a whole and to property-casualty insurers in particular.
(6) The financial distress betas for property-casualty insurance are substantially greater than for firms on average from other industries.
Based on market value weighted regressions, Table 7 of Cummins and Phillips (2005) shows financial distress betas of 0.917 for personal lines and 0.992 for commercial lines. Based on the financial distress risk premium for 2000 (3.85 percent), the financial distress factor adds more than 3 percent to the cost of capital. Based on estimates for other industries, financial distress or, conversely, proxies of the quality of insurer risk management affect returns for insurers much more significantly than for most other firms in the economy.

9 The fact that significant explanatory variables omitted in a structural regression can cause distorted results is well known in statistics (see, for example, Maddala (1992), pp. 161-64).
10 The risk premium puzzle is simply that realized market risk premiums in excess of a risk-free rate for some recent time periods would imply risk aversion coefficients in standard models that exceed all prior (reasonable) estimates. Of course, those simple risk aversion/risk premium models could be misspecified, just as the simple CAPM is misspecified with significant omitted variables.
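Finding (1)'s sum beta correction can be sketched as a regression of stock returns on contemporaneous and lagged market returns whose slope coefficients are then summed. This is a generic implementation of that idea, assumed for illustration rather than taken from the authors' code:

```python
import numpy as np

def sum_beta(stock_ret, market_ret):
    """Sum beta: regress stock returns on contemporaneous and one-period
    lagged market returns, then sum the two slope coefficients. This
    corrects the downward bias in ordinary betas caused by the relatively
    infrequent trading of smaller stocks."""
    stock_ret = np.asarray(stock_ret, dtype=float)
    market_ret = np.asarray(market_ret, dtype=float)
    y = stock_ret[1:]
    X = np.column_stack([
        np.ones(len(y)),
        market_ret[1:],   # contemporaneous market return
        market_ret[:-1],  # lagged market return
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1] + coef[2]
```

For a thinly traded stock whose reaction to market moves is partly delayed by one period, the lagged coefficient picks up the delayed response, so the sum beta exceeds the raw single-factor beta.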
Risk Premium Puzzle
The theoretical developments above paralleled a flurry of papers that attempted to make progress on the empirical estimation of the systematic or market factor risk premium used in all the extended models. Addressing what has been known as the “risk premium puzzle” since the mid-1980s,10 many researchers promoted market risk premium estimates based on varieties of data series, both U.S. and international, methods of interpreting the data, and even surveys of “experts” such as finance professors. Derrig and Orr (2004) surveyed a representative sample of such efforts and found a wide disparity of prospective market risk premiums in the studies, ranging from -0.9 percent to 8.5 percent. A fair amount of the numerical disparity was based on a lack of a common definition for “the market risk premium.” Rather, the risk premiums in the studies varied by the use of real or nominal interest rates, arithmetic or geometric averaging, short, intermediate and long horizons, short or long run averages, and conditional or unconditional estimates. Put on a common definitional basis of a short horizon, long run, arithmetic, and unconditional risk premium (the appropriate basis for valuation of quarterly flows in insurance pricing), the range of risk premium estimates narrowed considerably, to 5.0 to 9.0 percent. The risk premium puzzle is now incorporated into Part 8, Investments and Financial Analysis, of the CAS Fellowship Examinations.
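One source of the definitional disparity, arithmetic versus geometric averaging, can be illustrated with a few hypothetical annual excess returns:

```python
def arithmetic_mean(returns):
    """Simple average of period returns."""
    return sum(returns) / len(returns)

def geometric_mean(returns):
    """Compound average growth rate over the same periods."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(returns)) - 1.0

# Hypothetical annual excess returns over the risk-free rate.
excess = [0.25, -0.10, 0.18, -0.05, 0.12]
# Whenever returns vary, the arithmetic mean exceeds the geometric mean,
# so the averaging convention alone shifts the estimated premium.
```

Here the arithmetic mean is 8.0 percent while the geometric mean is closer to 7 percent, so two studies using the same data but different conventions would report different premiums.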
Allocation of Capital
The paper by Cummins, Lin, and Phillips (2006) provides an empirical test of the theories developed by Froot and Stein (1998), Froot (2005), and Zanjani (2002). The overall prediction of these papers is that prices of illiquid, imperfectly hedgeable intermediated risk products should depend upon firm capital structure: the covariability of the risks with the firm's other projects, their marginal effects on the firm's insolvency risk, and negative asymmetries of return distributions. In particular, prices should be higher for lines of insurance with higher covariability with the insurer's overall insurance portfolio and for lines that have a greater marginal effect on insurer insolvency risk. Cummins, Lin, and Phillips (2006) provide empirical tests of these theoretical predictions using data from the U.S. property-casualty insurance industry. The strategy in the paper is to estimate the price of insurance for a sample of property-casualty insurers and then to regress insurance price on variables representing firm solvency risk, capital allocations by line, and other firm characteristics.
The empirical tests in Cummins-Lin-Phillips are based on two pooled, cross-sectional, time-series samples of U.S. property-casualty insurers over the sample period 1997-2004. The first sample consists of the maximum number of insurers with usable data that report to the National Association of Insurance Commissioners (NAIC). The second sample consists of the subset of insurance firms that have traded equity capital.
To measure the price of insurance, Cummins-Lin-Phillips utilize the economic premium ratio (EPR) suggested by Winter (1994). The EPR, the ratio of the premium revenues net of expenses and policyholder dividends for a given insurer and line of insurance to the estimated present value of losses for the line, provides a measure of the insurer's return for underwriting a line of insurance. Theory predicts that the EPR will be related cross-sectionally to insurer capital structure, the covariability among lines of insurance and between insurance lines and assets, and the amount of capital allocated to each line of business.
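The EPR calculation itself is a simple ratio; this minimal sketch uses hypothetical dollar amounts:

```python
def economic_premium_ratio(premiums, expenses, dividends, pv_losses):
    """Winter's economic premium ratio: premium revenue net of expenses and
    policyholder dividends, divided by the estimated present value of losses
    for the line. Values above 1.0 indicate a positive underwriting margin."""
    return (premiums - expenses - dividends) / pv_losses

# Hypothetical line of business: $120 of premium revenue, $25 of expenses,
# $5 of policyholder dividends, $80 present value of losses.
epr = economic_premium_ratio(120.0, 25.0, 5.0, 80.0)
```

In this example the insurer collects $90 of net premium against $80 of discounted losses, for an EPR of 1.125.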
To estimate by-line capital allocations, Cummins, Lin and Phillips (2006) utilize the methodology developed by Myers and Read (2001). Myers and Read allocate capital marginally by taking the derivative of the firm's insolvency put option with respect to changes in loss liabilities for each project or line of business. The methodology provides a unique allocation of 100 percent of the firm's capital. Although the Myers and Read model is not dependent upon specific distributional assumptions for the returns on the firm's assets and liabilities, distributional assumptions are required to implement the methodology empirically. Cummins, Lin, and Phillips (2006) assume that assets and liabilities are jointly lognormally distributed so that capital allocation is based on the Black-Scholes exchange option model (Margrabe 1978).
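Under the joint lognormality assumption, the insolvency put that Myers-Read differentiates can be valued with the Margrabe exchange-option formula. The sketch below is a stylized single-period, aggregate version assumed for illustration, not the paper's implementation:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def insolvency_put(assets, liabilities, sigma_a, sigma_l, rho, t=1.0):
    """Margrabe exchange-option value of the insurer's insolvency put, i.e.
    the value of the policyholders' deficit E[(L - A)+] when assets A and
    liabilities L are jointly lognormal with correlation rho."""
    sigma = sqrt(sigma_a**2 + sigma_l**2 - 2.0 * rho * sigma_a * sigma_l)
    d1 = (log(assets / liabilities) + 0.5 * sigma**2 * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return liabilities * norm_cdf(-d2) - assets * norm_cdf(-d1)
```

Myers-Read then allocates capital by differentiating this put value with respect to each line's loss liabilities, with surplus ratios adjusted so that the marginal allocations sum to 100 percent of capital; that adding-up machinery is omitted here.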
Although the Myers-Read model clearly has normative implications for insurance management and regulation, Cummins-Lin-Phillips hypothesize that it also has positive implications for insurance markets. That is, an implicit underlying hypothesis in the paper is that cross-sectional differences in insurance prices can be partially explained by Myers-Read capital allocations. In order for this hypothesis to be correct, it is not necessary that insurance companies actually allocate capital according to the Myers-Read model. It is only necessary that, through the operation of insurance markets, risks are priced in such a way that prices reflect the marginal burden that specific risks place on the insolvency risk of insurers. This requires only that markets are sufficiently rational so that insurers are able to assess the riskiness of policies that are being priced and that their price quotes reflect these insolvency risk assessments.
The Cummins-Lin-Phillips tests support the theoretical predictions. The price of insurance as measured by the EPR is inversely related to insurer insolvency risk, consistent with prior research (Phillips et al. 1998). Moreover, prices are directly related to the amount of capital allocated to lines of insurance by the Myers-Read model and thus are also directly related to the covariability of losses across lines of insurance. Thus, the results support the predictions of Froot and Stein (1998) and the capital allocation literature (Myers and Read 2001, Zanjani 2002). The tests provide somewhat weaker evidence that prices reflect negative asymmetries of return distributions (Froot 2005).
References
Cummins, J. David, and Richard D. Phillips. 2005. Estimating the Cost of Capital for Property-Liability Insurers. The Journal of Risk and Insurance 72(3): 441-78.

Cummins, J. David, Yijia Lin, and Richard D. Phillips. 2006. Capital Allocation and the Pricing of Financially Intermediated Risks: An Empirical Investigation. Working Paper, Georgia State University, Atlanta, GA.

Derrig, Richard A., and Elisha D. Orr. 2004. Equity Risk Premium: Expectations Great and Small. North American Actuarial Journal 8(1): 45-69.
Due to space considerations, an expanded version and all remaining references are available at www.derrig.com. ✦
Highlights from the 5th Annual ERM Symposium
by Valentina A. Isakina and Max J. Rudolph
With sessions ranging from Decision Making under Extreme Events and Emerging Risks to Dashboards to Environmental Impacts, the 5th annual Enterprise Risk Management Symposium covered a wide range of topics with something for everyone interested in risk management. Over 500 attendees mingled with their counterparts from the financial services, energy, government, and manufacturing industries. The event is co-sponsored by the Joint Risk Management Section (Casualty Actuarial Society, Canadian Institute of Actuaries, and Society of Actuaries) and the Professional Risk Managers' International Association (PRMIA) in collaboration with Georgia State University. The attendees came to Chicago from across the world to network and learn more about Enterprise Risk Management (ERM) and how to implement it in their venue.
The first day featured three workshops, covering very distinct topics:
• Operational Risk Management
• ERM Essentials for Decision Makers, and
• Banks and Insurers: Separate Paths But a Common Destination
These daylong seminars allowed participants to dive deeper into specific issues than a single session allows. The Operational Risk workshop was a particular success, having completely sold out with standing room only.
The overarching goal of the symposium is to show how ERM principles, tools, and techniques compare across industries, allowing risk managers to talk to and teach each other. In this way, best practices can be shared and disseminated faster.
Terrence Odean set the tone for the meeting as he discussed behavioral finance and its link to risk management. Only by knowing our own biases can we truly understand current and emerging risks. Dr. Odean showed how investors generally act in the financial markets at inopportune times, making it hard to optimize their results.
Three other general sessions provided consistency to the meeting:
• Leading off on the first day was a session discussing the convergence of ERM practices internationally.
• The View from the Top shared views about how ERM is being implemented in boardrooms.
• The Role of ERM in Regulation got the second day off to an energetic start as representatives from rating agencies and others familiar with the regulatory process spoke about how ERM should be a by-product of work already being done, not something specifically created to check a regulator's proverbial box.
• The meeting concluded with a general session discussing ERM perspectives from practitioners as they discussed past, present and future states of ERM.
In addition, 35 concurrent sessions were organized around six different tracks, including a risk management research track organized by Georgia State University. All the sessions were taped to mp3 format and are available, along with the presentation slides, at www.ermsymposium.org.
For the second year, the symposium featured a call for scientific papers, along with a call for presentations. Both had strong response rates, and the symposium committees were challenged to choose those who would present. All of the submitted papers can also be found at www.ermsymposium.org.
The next symposium will be held in Chicago April 14-16, 2008. By building off past symposiums, keeping what worked, and bringing fresh ideas into the mix, it will surely be a highlight of the year for ERM continuing education events. ✦
Valentina A. Isakina, ASA, MAAA, is a senior consultant with McKinsey & Company, Inc. in Atlanta, Ga. She can be reached at [email protected].

Max J. Rudolph, FSA, CERA, MAAA, is the founder of Rudolph Financial Consulting, LLC, Omaha, Neb. He can be reached at [email protected].
Second Annual Scientific Track Continues to Push the Envelope at the ERM Symposium
by Steven C. Siegel
Steven C. Siegel, ASA, MAAA, is a staff research actuary with the Society of Actuaries in Schaumburg, Ill. He can be reached at [email protected].
The 2006 ERM Symposium established its first-ever annual call for ERM-related research papers to present the very latest in ERM thinking and move forward principles-based research. The overwhelming response and success of this first effort, originally the brainchild of Max Rudolph, set into motion high expectations for succeeding years. In light of these lofty expectations, I am pleased to report that the 2007 ERM Symposium scientific paper track did not disappoint.
With over 40 abstracts submitted for review, the level of response significantly exceeded the previous year's and again proves that ERM is something that companies ignore at their own peril. The papers review committee, chaired by Rudolph, included returning members Mark Abbott, Sam Cox, Emily Gilde, Krzysztof Jajuga, Nawal Roy, Fred Tavan, and Al Weller as well as newcomers Maria Coronado, Steve Craighead, Dan Oprescu, Matthieu Royer, and Richard Targett. Choosing from among the 40 abstracts for nine presentation slots was no small task and, given the quality and number of abstracts, the committee regretted that there were only nine slots available.
The final task of the committee was to select the prize-winning papers. This year, in addition to the Actuarial Foundation ERM Research Excellence Award for Best Overall Paper, two more prizes were awarded: the PRMIA Institute New Frontiers in Risk Management Award and the Risk Management Section Best Paper Award for Practical Risk Management Applications.
The award winners along with the paper abstracts are shown below. Awards were presented at the ERM Symposium General Luncheon session held on March 30th.
2007 Actuarial Foundation ERM Research Excellence Award for Best Overall Paper: Mark Beasley, Don Pagach, and Richard Warr of North Carolina State University for “Information Conveyed in Hiring Announcements of Senior Executives Overseeing Enterprise-Wide Risk Management Processes.”

ABSTRACT (see page 21 of this newsletter for an abridged version of the article)
Enterprise risk management (ERM) is the process of analyzing the portfolio of risks facing the enterprise to ensure that the combined effect of such risks is within an acceptable tolerance. While ERM adoption is on the rise, little academic research exists about the costs and benefits of ERM. Proponents of ERM claim that ERM is designed to enhance shareholder value; however, portfolio theory suggests that costly ERM implementation would be unwelcome by shareholders who can use less costly diversification to eliminate idiosyncratic risk. This study examines equity market reactions to announcements of appointments of senior executive officers overseeing the enterprise's risk management processes. Based on a sample of 120 announcements from 1992-2003, we find that the univariate average two-day market response is not significant, suggesting that a broad definitive statement about the benefit or cost of implementing ERM is not possible. However, our multivariate analysis reveals that market responses to such appointments are significantly positively associated with a firm's size and prior earnings volatility, and negatively associated with the amount of cash on hand relative to liabilities and leverage on the balance sheet. These results are confined to non-financial firms, possibly due to the regulatory requirements for enterprise risk management that already exist for financial firms. We conclude that the costs and benefits of ERM are firm-specific.

Mark Beasley and Don Pagach accept the second annual Actuarial Foundation award from David Cummings.
2007 PRMIA Institute Award for New Frontiers in Risk Management: Klaus Böcker and Claudia Klüppelberg of the Munich University of Technology for “Multivariate Models for Operational Risk.”

ABSTRACT (see page 26 of this newsletter for an abridged version of the article)

In Böcker and Klüppelberg (2005) we presented a simple approximation of OpVaR of a single operational risk cell. The present paper derives approximations of similar quality and simplicity for the multivariate problem. Our approach is based on modelling of the dependence structure of different cells via the new concept of a Lévy copula.
2007 Risk Management Section Best Paper Award for Practical Risk Management Applications: Neil Bodoff of Willis for “Capital Allocation by Percentile Layer.”
ABSTRACT (see page 32 of this newsletter for an abridged version of the article)

Motivation. Capital allocation can have substantial ramifications upon measuring risk adjusted profitability as well as setting risk loads for pricing. Current allocation methods that emphasize the tail allocate too much capital to extreme events; “capital consumption” methods, which incorporate relative likelihood, tend to allocate insufficient capital to highly unlikely yet extremely severe losses.

Method. In this paper I develop a new formulation of the meaning of holding capital equal to the Value at Risk. The new formulation views the total capital of the firm as the sum of many percentile layers of capital. Thus capital allocation varies continuously by layer and the capital allocated to any particular loss scenario is the sum of allocated capital across many percentile layers.

Results. Capital Allocation by Percentile Layer produces capital allocations that differ significantly from other common methods such as VaR, TVaR, and coTVaR.

Conclusions. Capital Allocation by Percentile Layer has important advantages over existing methods. It highlights a new formulation of Value at Risk and other capital standards, recognizes the capital usage of losses that do not extend into the tail, and captures the disproportionate capital usage of severe losses.

Klaus Böcker accepts the PRMIA Institute award from David Koenig.

Neil Bodoff accepts the Risk Management Section award from Kevin Dickson.
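For a discrete loss distribution, the percentile-layer idea in the abstract above can be sketched as follows. This is a hypothetical illustration of the general mechanism (each layer of capital is shared by the scenarios that penetrate it), not Bodoff's own formulation:

```python
def capital_by_percentile_layer(scenarios, capital):
    """Allocate `capital` across loss scenarios by percentile layer: the
    capital between consecutive loss amounts forms a layer, and each layer
    is shared among all scenarios whose loss penetrates it, in proportion
    to their probabilities. `scenarios` is a list of (loss, probability)
    pairs with distinct loss amounts whose probabilities sum to 1."""
    scen = sorted(scenarios)                 # ascending by loss amount
    alloc = {loss: 0.0 for loss, _ in scen}
    lower = 0.0
    for i, (loss_i, _) in enumerate(scen):
        upper = min(loss_i, capital)
        width = upper - lower
        if width <= 0.0:
            continue
        users = scen[i:]                     # scenarios reaching this layer
        total_p = sum(p for _, p in users)
        for loss_j, p_j in users:
            alloc[loss_j] += width * p_j / total_p
        lower = upper
    return alloc
```

With losses of 20, 50, and 100 at probabilities 0.5, 0.3, and 0.2 and capital of 100, the layer below 20 is shared by all three scenarios while the layer from 50 to 100 is borne by the worst scenario alone, so moderate losses receive some capital yet the severe loss still absorbs the most.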
As of this writing, an online monograph is being created to house the papers. A link to the monograph, when completed, will be found on the ERM Symposium Web site at www.ermsymposium.org. Papers that were not presented at the symposium will also be included in the monograph.

We encourage you to review the monograph and read papers of particular interest to you. You may not agree with everything you read in the monograph; it was our intent to procure papers that would not only inform, but also provoke discussion and spark debate.

We wish to thank all the organizations and committee members for their support and for making this a success. Planning for the 2008 ERM Symposium call for papers has already begun. I invite you to contact me if you have ideas or feedback for next year. Until then, watch the ERM Symposium site for the latest developments! ✦
Information Conveyed in Hiring Announcements of Senior Executives Overseeing Enterprise-Wide Risk Management Processes
by Mark Beasley, Don Pagach, Richard Warr

Editor's note: This article represents an abridged version of the submission to the 2007 ERM symposium, which received the 2007 Actuarial Foundation ERM Research Excellence Award for Best Overall Paper. The full paper is forthcoming in the Journal of Accounting, Auditing and Finance.

There has been a dramatic change in the role of risk management in corporations (Necco and Stulz, 2006). Recent corporate financial reporting scandals and evolving corporate governance requirements are increasing the expectation that boards of directors and senior executives effectively manage the risks facing their companies (Kleffner et al., 2003). To meet these expectations, an increasing number of enterprises are embracing an enterprise-wide risk management approach (ERM).

While there has been significant growth in the number of ERM programs, little empirical research has been conducted on the value of such programs (Tufano, 1996; Colquitt et al., 1999; Liebenberg and Hoyt, 2003; Beasley et al., 2005). In particular, there have been few challenges to the view that ERM provides a significant opportunity for competitive advantage (Stroh, 2005) and that ERM is designed to protect and enhance shareholder value. However, modern portfolio theory suggests that an ERM approach to risk management could be value destroying, as shareholders, through portfolio diversification, can eliminate idiosyncratic risk in a virtually costless manner. According to this view, expending corporate resources to reduce idiosyncratic risk will result in a reduction in firm value and shareholder wealth. However, there are circumstances, driven by market imperfections and agency issues, under which risk management may have a positive net present value (Stulz, 1996, 2003), and therefore the true effect of ERM on shareholder value is uncertain.

Background and Hypotheses Development

One of the challenges associated with ERM implementation is determining the appropriate leadership structure to manage the identification, assessment, measurement, and response to all types of risks that arise across the enterprise. For ERM to be successful, it is critical that the whole organization understand why ERM creates value (Necco and Stulz, 2006). Senior executive leadership over ERM helps communicate and integrate the entity's risk philosophy and strategy towards risk management consistently throughout the enterprise.

To respond to this challenge, many organizations are appointing a member of the senior executive team, often referred to as the chief risk officer or CRO, to oversee the enterprise's risk management process (Economist Intelligence Unit, 2005). Indeed, some argue that the appointment of a chief risk officer is being used to signal both internally and externally that senior management and the board are serious about integrating all of its risk management activities under a more powerful senior-level executive (Lam, 2001). In fact, rating agencies, such as Standard and Poor's, explicitly evaluate organizational structure and authority of the risk management function as part of their assessment of strength and independence of the risk management function (Standard & Poor's, 2005).

Recent empirical research documents that the presence of a CRO is associated with a greater stage of ERM deployment within an enterprise, suggesting that the appointment of senior executive leadership affects the extent to which ERM is embraced within an enterprise (Beasley et al., 2005). Despite the growth in the appointment of senior risk executives, little is known about factors that affect an organization's decision to appoint a CRO or equivalent, and whether these appointments create value.
Because corporations disclose only minimal details of their risk management programs (Tufano, 1996), our focus on hiring announcements of senior risk officers attempts to measure the valuation impact of the firm's signaling of an enterprise risk management process.
The basic premise that ERM is a value-creating activity actually runs counter to modern portfolio theory. Portfolio theory shows that, under certain assumptions, investors can fully diversify away all firm (or idiosyncratic) risk (Markowitz, 1952). This can usually be achieved costlessly by randomly adding stocks to an investment portfolio. Because investors can diversify away firm-specific risk, they should not be compensated for bearing such risk. As a result, investors should not value costly attempts by firms to reduce firm-specific risk, given an investor's costless ability to eliminate this type of risk. While portfolio theory might suggest a lack of value associated with ERM implementation, markets do not always operate in the manner presented by Markowitz (1952). Stulz (1996, 2003) presents arguments under which risk management activities could be value increasing for shareholders in the presence of agency costs and market imperfections. The motivation behind Stulz's work is to reconcile the apparent conflict between the current widespread corporate embrace of risk management practices and modern portfolio theory.
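The diversification argument is easy to verify numerically: for an equally weighted portfolio of n assets carrying only independent idiosyncratic risk, portfolio volatility falls roughly as σ/√n. A minimal Monte Carlo sketch (the volatility figure and simulation sizes are illustrative choices, not values from the study):

```python
import random

def portfolio_idiosyncratic_vol(n_assets, idio_vol=0.30, n_sims=5000):
    """Monte Carlo estimate of the volatility of an equally weighted
    portfolio whose assets carry only independent idiosyncratic risk."""
    returns = []
    for _ in range(n_sims):
        # each asset return is pure idiosyncratic noise, diversified 1/n
        r = sum(random.gauss(0.0, idio_vol) for _ in range(n_assets)) / n_assets
        returns.append(r)
    mean = sum(returns) / n_sims
    var = sum((r - mean) ** 2 for r in returns) / (n_sims - 1)
    return var ** 0.5

# volatility shrinks toward zero as the portfolio grows
for n in (1, 10, 100):
    print(n, round(portfolio_idiosyncratic_vol(n), 3))
```

Each tenfold increase in portfolio size cuts the remaining idiosyncratic volatility by roughly a factor of √10, which is the "virtually costless" elimination of firm-specific risk the paragraph above describes.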
Stulz (1996, 2003) argues that any potential value-creation role for risk management lies in the reduction or elimination of "costly lower-tail outcomes." Lower-tail outcomes are those events in which a decline in earnings or a large loss would result in severe negative consequences for the firm. Thus, when a firm is faced with the likelihood of lower-tail outcomes, engaging in risk management that reduces the likelihood of the real costs associated with such outcomes could represent a positive net present value project. Only firms facing an increased likelihood of these actual negative consequences associated with lower-tail events are expected to benefit from risk management (Stulz, 1996, 2003).
Our study of equity market responses to announcements of appointments of CROs builds upon Stulz (1996, 2003) to examine firm-specific variables that reflect the firm's likelihood of experiencing a lower-tail event. These variables reflect firm-specific factors that finance theory suggests should explain the value effects of corporate risk management. These variables are described more fully below, and include the extent of the firm's growth options, intangible assets, cash reserves, earnings volatility, leverage, and firm size.
Growth Options. Firms with extensive growth options require consistent capital investment and may face greater asymmetric information regarding their future earnings (Myers, 1984; Myers and Majluf, 1984). In financial distress, growth options are likely to be undervalued, and that distress may lead to underinvestment in profitable growth opportunities. We hypothesize that firms with greater growth options will have a positive abnormal return around hiring announcements of CROs.
Intangible Assets. Firms that have more opaque assets, such as goodwill, are more likely to benefit from an ERM program because these assets are likely to be undervalued in times of financial distress (Smith and Stulz, 1985). Although this benefit directly accrues to debtholders, stockholders should benefit through the lower interest expense charged by the debtholders. We hypothesize that firms with a large amount of intangible assets will have a positive abnormal return around hiring announcements of CROs.

Mark Beasley, Ph.D., is professor of accounting and the ERM Initiative director at North Carolina State University.

Don Pagach, Ph.D., is professor of accounting at North Carolina State University.

Richard Warr, Ph.D., is associate professor of finance at North Carolina State University.
Cash Ratio: Firms with greater amounts of cash on hand are less likely to benefit from a risk management program, as these firms can protect themselves against a liquidity crisis that might result from some lower-tail outcomes. Less cash on hand can increase the likelihood of financial distress for levered firms (Smith and Stulz, 1985). We hypothesize that firms with greater amounts of cash will have a negative abnormal return around announcements of CRO appointments.
Earnings Volatility: Firms with a history of greater earnings volatility are more likely to benefit from ERM. Firms that have large amounts of earnings volatility have a greater likelihood of seeing a lower-tail earnings outcome, missing analysts' earnings forecasts, and violating accounting-based debt covenants (Bartov, 1993). In addition, managers may smooth earnings to increase the firm's share price by reducing the potential loss shareholders may suffer when they trade for liquidity reasons (Goel and Thakor, 2003). We hypothesize that firms experiencing a high variance of earnings per share (EPS) will have a positive abnormal return around hiring announcements of CROs.
Leverage: Greater financial leverage increases the likelihood of financial distress. Under financial distress, firms are likely to face reductions in debt ratings and consequently higher borrowing costs. More robust ERM practices may lead to lower financing costs. We hypothesize that firms with high leverage will have a positive abnormal return around hiring announcements of CROs.
Size: Research examining the use of financial derivatives finds that large companies make greater use of derivatives than smaller companies. Such findings confirm the experience of risk management practitioners that the corporate use of derivatives requires considerable upfront investment in personnel, training, and computer hardware and software, which might discourage smaller firms from engaging in their use (Stulz, 2003). We hypothesize that larger firms will have a positive abnormal return around hiring announcements of CROs.
Data and Results
Our study method examines the impact of firm-specific characteristics on the equity market response to announcements of appointments of CROs within the enterprise. We searched the period 1992 through 2003 and identified 120 unique observations. Each observation is unique to a firm, in that it represents a firm's first announcement during the period searched; subsequent announcements by a firm are excluded. By starting our search in 1992, we hope to capture the initial creation of a CRO position, as CRO positions became more prevalent in the late 1990s.
The security market's response is measured by examining the cumulative abnormal return (CAR) around the CRO announcement. Our study focuses on the cross-sectional firm characteristics previously discussed that we hypothesize may determine the value effects of risk management. In addition, due to the large number of financial service firms in our sample, we disaggregate our sample into financial service industry firms and non-financial service industry firms.
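For readers unfamiliar with event-study mechanics, a two-day CAR such as CAR(0,+1) can be sketched with a simple market model. The alpha, beta, and daily returns below are invented placeholders, not data from the study:

```python
def market_model_car(stock_returns, market_returns, alpha, beta, event_window):
    """Cumulative abnormal return: sum of (actual - expected) returns over
    the event window, where the expected return comes from the market model
    R_expected = alpha + beta * R_market."""
    car = 0.0
    for t in event_window:  # e.g. (0, 1) for CAR(0,+1)
        expected = alpha + beta * market_returns[t]
        car += stock_returns[t] - expected
    return car

# hypothetical firm: day 0 is the CRO announcement day;
# alpha and beta would be estimated over a pre-event window
stock = {0: 0.021, 1: 0.008}
market = {0: 0.004, 1: -0.002}
print(market_model_car(stock, market, alpha=0.0001, beta=1.1,
                       event_window=(0, 1)))  # ≈ 0.0266
```

A positive CAR means the stock outperformed its market-model benchmark over the announcement window, which is the "market response" the study aggregates across its 120 observations.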
To examine whether there are cross-sectional differences in our hypothesized associations between firm-specific characteristics and the equity market reaction to announcements of appointments of CROs, we use multivariate regression analysis; the general form of the model is shown below.
Given that a large portion (39.1 percent) of our sample is from the financial services industries, and that these firms have different regulatory expectations with respect to ERM, we examine whether the predicted associations described by our hypotheses differ between financial and non-financial service firms. The results of this analysis are reported separately in Table 1.
We find that, of the six independent variables, only the cash ratio variable is significantly associated with the market reaction to announcements of appointments of CROs for the financial services firms in our sample, with the overall model not significant. This result is consistent with the belief that regulatory pressures and requirements, not firm-specific financial characteristics, drive financial services institutions to embrace enterprise-wide risk management processes.
In contrast, the results shown in Table 1 for the sample of firms in industries other than financial services indicate that, in the absence of regulatory expectations, several of the firm's financial characteristics may explain the firm's value enhancement due to ERM adoption.
For our sample of non-financial firms, we find that announcement-period market returns are positively associated with the firm's prior earnings volatility and size, while negatively associated with the extent of cash on hand and leverage. There is no statistical association between the announcement-period returns and the firm's growth options or extent of intangible assets.
While the results for earnings volatility, size, and cash on hand are consistent with our expectations, the findings for leverage are opposite to our expectations. One explanation for this result is that shareholders of highly leveraged firms may not want risk reduction, as it reduces the value of the option written to them by debtholders. In this case, the option value outweighs the deadweight costs of bankruptcy that increase with high leverage.
Conclusions and Limitations
This study provides evidence on how the perceived value of enterprise risk management processes varies across companies. While ERM practices are being widely embraced within the corporate sector, not all organizations are embracing those practices, and little academic research exists about the benefits and costs of ERM.
CAR(0,+1) = a0 + a1·Market/Book + a2·Intangibles + a3·Cash Ratio + a4·EPS Volatility + a5·Leverage + a6·Size + e
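A cross-sectional model of this form is estimated by ordinary least squares. The sketch below uses simulated stand-in data with the study's sample size; the coefficient values are arbitrary illustrations, not the estimates reported in Table 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # matches the number of unique observations in the study

# simulated firm characteristics (stand-ins, not the study's data):
# Market/Book, Intangibles, Cash Ratio, EPS Vol, Leverage, Size
X = rng.normal(size=(n, 6))
true_coefs = np.array([0.0, 0.0, -0.01, 0.02, -0.01, 0.01])  # invented
car = 0.005 + X @ true_coefs + rng.normal(scale=0.02, size=n)

# OLS fit of CAR(0,+1) = a0 + a1..a6 * characteristics + e
design = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(design, car, rcond=None)
print(coefs.round(3))
```

With real data one would also report standard errors and t-statistics (e.g. via a statistics package) to judge which associations are significant, which is how results such as those in Table 1 are read.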
In cross-sectional analysis, we find that a firm's shareholders respond largely in accordance with our expectations and value ERM where the program can enhance value by overcoming market distortions or agency costs. Specifically, we find that shareholders of large firms that have little cash on hand value ERM. Furthermore, shareholders of large non-financial firms with volatile earnings, low amounts of leverage, and low amounts of cash on hand also react favorably to the implementation of ERM. These findings are consistent with the idea that a well-implemented ERM program can create value when it reduces the likelihood of costly lower-tail outcomes such as financial distress.
References

Bartov, E. (1993). The timing of asset sales and earnings manipulation. The Accounting Review, 68(4), 840-855.

Beasley, M.S., R. Clune, and D.R. Hermanson. (2005). Enterprise risk management: An empirical analysis of factors associated with the extent of implementation. Journal of Accounting and Public Policy, 24(6), 521-531.

Colquitt, L.L., R.E. Hoyt, and R.B. Lee. (1999). Integrated risk management and the role of the risk manager. Risk Management and Insurance Review, 2, 43-61.

Economist Intelligence Unit. (2005). The evolving role of the CRO. The Economist Intelligence Unit, London/New York/Hong Kong (May).

Goel, A., and A. Thakor. (2003). Why do firms smooth earnings? The Journal of Business, 76(1), 151-192.

Kleffner, A.E., R.B. Lee, and B. McGannon. (2003). The effect of corporate governance on the use of enterprise risk management: Evidence from Canada. Risk Management and Insurance Review, 6(1), 53-73.

Lam, J. (2001). The CRO is here to stay. Risk Management, 48(4) (April), 16-22.

Liebenberg, A., and R. Hoyt. (2003). The determinants of enterprise risk management: Evidence from the appointment of chief risk officers. Risk Management and Insurance Review, 6(1), 37-52.

Markowitz, H.M. (1952). Portfolio selection. The Journal of Finance, 7, 77-91.

Myers, S.C. (1984). The capital structure puzzle. Journal of Finance, 39, 575-592.

Myers, S.C., and N.S. Majluf. (1984). Corporate financing and investment decisions when firms have information that investors do not have. Journal of Financial Economics, 13, 187-221.

Nocco, B.W., and R.M. Stulz. (2006). Enterprise risk management: Theory and practice. The Ohio State University. Working paper.

Smith, C.W., and R. Stulz. (1985). The determinants of firms' hedging policies. Journal of Financial and Quantitative Analysis, 20, 391-405.

Standard & Poor's. (2005). Enterprise risk management for financial institutions. Standard & Poor's, Inc., New York, NY (November 18, 2005). www.standardandpoors.com

Stroh, P.J. (2005). Enterprise risk management at United Healthcare. Strategic Finance, (July), 27-35.

Stulz, R. (1996). Rethinking risk management. Journal of Applied Corporate Finance, 9(3), 8-24.

Stulz, R. (2003). Rethinking risk management. In The Revolution in Corporate Finance, 4th Edition, Blackwell Publishing, 367-384.

Tufano, P. (1996). Who manages risk? An empirical examination of risk management practices in the gold mining industry. The Journal of Finance, 51(4), 1097-1137. ✦
New Frontiers Award Winner
Multivariate Operational Risk: Dependence Modelling with Lévy Copulas
by Klaus Böcker and Claudia Klüppelberg
Klaus Böcker, Ph.D., is senior risk controller at HypoVereinsbank AG, München.

Claudia Klüppelberg, Ph.D., holds the chair of Mathematical Statistics at the Center for Mathematical Sciences of the Munich University of Technology, Germany.
Abstract: Simultaneous modelling of operational risks occurring in different event type/business line cells poses the challenge for operational risk quantification. Invoking the new concept of Lévy copulas for dependence modelling yields simple approximations of high quality for multivariate operational VAR.
A required feature of any advanced measurement approach (AMA) of Basel II for measuring operational risk is that it allows for explicit correlations between different operational risk events. The core problem here is multivariate modelling encompassing all different event type/business line cells, and thus the question how their dependence structure affects a bank's total operational risk. The prototypical loss distribution approach (LDA) assumes that, for each cell i = 1, ..., d, the cumulated operational loss S_i(t) up to time t is described by an aggregate loss process

S_i(t) = Σ_{k=1}^{N_i(t)} X_k^i ,  t ≥ 0 ,   (1.1)

where for each i the sequence (X_k^i)_{k∈N} are independent and identically distributed (iid) positive random variables with distribution function F_i describing the magnitude of each loss event (loss severity), and (N_i(t))_{t≥0} counts the number of losses in the time interval [0, t] (called frequency), independent of (X_k^i)_{k∈N}. The bank's total operational risk is then given as

S^+(t) := S_1(t) + S_2(t) + ... + S_d(t) ,  t ≥ 0 .   (1.2)
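An aggregate loss process of the form (1.1) is straightforward to simulate. The sketch below assumes a Poisson frequency (as the article does in the next section) and, purely for illustration, a lognormal severity; all parameter values are invented:

```python
import random

def simulate_cell_loss(t, lam, mu, sigma, rng):
    """One draw of S_i(t) as in equation (1.1): N ~ Poisson(lam * t)
    losses, each X ~ Lognormal(mu, sigma), summed over the cell."""
    # Poisson count via exponential inter-arrival times
    n, clock = 0, rng.expovariate(lam)
    while clock <= t:
        n += 1
        clock += rng.expovariate(lam)
    return sum(rng.lognormvariate(mu, sigma) for _ in range(n))

rng = random.Random(1)
losses = sorted(simulate_cell_loss(t=1.0, lam=25, mu=10, sigma=2, rng=rng)
                for _ in range(1000))
print("99% OpVAR estimate:", losses[int(0.99 * len(losses))])
```

Sorting many simulated one-year aggregate losses and reading off a high quantile is the usual crude Monte Carlo estimate of a single cell's OpVAR; the article's contribution is handling several such cells with dependence.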
The present literature suggests modelling dependence between different operational risk cells by means of different concepts, which basically split into models for frequency dependence on the one hand and for severity dependence on the other hand.

Here we suggest a model based on the new concept of Lévy copulas (see e.g. Cont & Tankov (2004)), which models dependence in frequency and severity simultaneously, yielding a model with comparably few parameters. Moreover, our model has the same advantage as a distributional copula: the dependence structure between different cells can be separated from the marginal processes S_i for i = 1, ..., d. This approach allows for closed-form approximations for operational VAR (OpVAR).
Dependent Operational Risks and Lévy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about AMA practices at financial services firms, we assume that each loss frequency process N_i in (1.1) follows a homogeneous Poisson process with rate λ_i > 0. Then the aggregate loss (1.1) constitutes a compound Poisson process and is therefore a Lévy process (actually, the compound Poisson process is the only Lévy process with piecewise constant sample paths).
A key element in the theory of Lévy processes is the notion of the so-called Lévy measure. A Lévy measure controls the jump behaviour of a Lévy process and, therefore, has an intuitive interpretation, in particular in the context of operational risk. The Lévy measure of a single operational risk cell measures the expected number of losses per unit time with a loss amount in a prespecified interval. For our compound Poisson model, the Lévy measure Π_i of the cell process S_i is completely determined by the frequency parameter λ_i > 0 and the distribution function F_i of the cell's severity: Π_i([0, x)) := λ_i P(X^i ≤ x) = λ_i F_i(x) for x ∈ [0, ∞). The corresponding one-dimensional tail integral is defined as

Π̄_i(x) := Π_i([x, ∞)) = λ_i P(X^i > x) = λ_i F̄_i(x) .   (2.1)

Our goal is modelling multivariate operational risk. Hence, the question is how different one-dimensional compound Poisson processes S_i can be used to construct a d-dimensional compound Poisson process S = (S_1, S_2, ..., S_d) with, in general, dependent components.
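The tail integral in (2.1) has a concrete reading: it is the expected number of losses per unit time exceeding a threshold x. A small sketch, assuming a lognormal severity (an illustrative distributional choice and invented parameters, not ones prescribed by the article):

```python
import math

def tail_integral(x, lam, mu, sigma):
    """One-dimensional tail integral of equation (2.1):
    Pi_bar(x) = lam * P(X > x) for X ~ Lognormal(mu, sigma),
    i.e. the expected number of losses per unit time exceeding x."""
    if x <= 0:
        return lam  # severities are positive, so P(X > x) = 1
    z = (math.log(x) - mu) / sigma
    survival = 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)
    return lam * survival

# expected number of losses above 1 million per unit time for lam = 25
print(tail_integral(1e6, lam=25, mu=10, sigma=2))
```

Because the tail integral factors into frequency (λ_i) times severity tail (F̄_i), it packages exactly the two ingredients whose joint dependence the Lévy copula is then used to model across cells.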
[0, t] (called frequency), independent of (X ik)k∈N. The bank’s total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests to model dependence between different operational risk
cells by means of different concepts, which basically split into models for frequency de-
pendence on the one hand and for severity dependence on the other hand.
Here we suggest a model based on the new concept of Levy copulas (see e.g. Cont
& Tankov (2004)), which models dependence in frequency and severity simultaneously,
yielding a model with comparably few parameters. Moreover, our model has the same
advantage as a distributional copula: the dependence structure between different cells can
be separated from the marginal processes Si for i = 1, . . . , d. This approach allows for
closed-form approximations for operational VAR (OpVAR).
2 Dependent Operational Risks and Levy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about
AMA practices at financial services firms, we assume that the loss frequency processes Ni
in (1.1) follows a homogeneous Poisson process with rate λi > 0. Then the aggregate loss
(1.1) constitutes a compound Poisson process and is therefore a Levy process (actually,
the compound Poisson process is the only Levy process with piecewise constant sample
paths).
A key element in the theory of Levy processes is the notion of the so-called Levy
measure. A Levy measure controls the jump behaviour of a Levy process and, therefore,
has an intuitive interpretation, in particular in the context of operational risk. The Levy
measure of a single operational risk cell measures the expected number of losses per unit
time with a loss amount in a prespecified interval. For our compound Poisson model,
the Levy measure Πi of the cell process Si is completely determined by the frequency
parameter λi > 0 and the distribution function Fi of the cell’s severity: Πi([0, x)) :=
λiP (X i ≤ x) = λiFi(x) for x ∈ [0,∞). The corresponding one-dimensional tail integral
is defined as
Πi(x) := Πi([x,∞)) = λiP (X i > x) = λiF i(x) . (2.1)
Our goal is modelling multivariate operational risk. Hence, the question is how dif-
ferent one-dimensional compound Poisson processes Si(·) =∑Ni(·)
k=1 X ik can be used to
construct a d-dimensional compound Poisson process S = (S1, S2, . . . , Sd) with in general
dependent components. It is worthwhile to recall the similar situation in the case of the
2
[0, t] (called frequency), independent of (X ik)k∈N. The bank’s total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests to model dependence between different operational risk
cells by means of different concepts, which basically split into models for frequency de-
pendence on the one hand and for severity dependence on the other hand.
Here we suggest a model based on the new concept of Levy copulas (see e.g. Cont
& Tankov (2004)), which models dependence in frequency and severity simultaneously,
yielding a model with comparably few parameters. Moreover, our model has the same
advantage as a distributional copula: the dependence structure between different cells can
be separated from the marginal processes Si for i = 1, . . . , d. This approach allows for
closed-form approximations for operational VAR (OpVAR).
2 Dependent Operational Risks and Levy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about
AMA practices at financial services firms, we assume that the loss frequency processes Ni
in (1.1) follows a homogeneous Poisson process with rate λi > 0. Then the aggregate loss
(1.1) constitutes a compound Poisson process and is therefore a Levy process (actually,
the compound Poisson process is the only Levy process with piecewise constant sample
paths).
A key element in the theory of Levy processes is the notion of the so-called Levy
measure. A Levy measure controls the jump behaviour of a Levy process and, therefore,
has an intuitive interpretation, in particular in the context of operational risk. The Levy
measure of a single operational risk cell measures the expected number of losses per unit
time with a loss amount in a prespecified interval. For our compound Poisson model,
the Levy measure Πi of the cell process Si is completely determined by the frequency
parameter λi > 0 and the distribution function Fi of the cell’s severity: Πi([0, x)) :=
λiP (X i ≤ x) = λiFi(x) for x ∈ [0,∞). The corresponding one-dimensional tail integral
is defined as
Πi(x) := Πi([x,∞)) = λiP (X i > x) = λiF i(x) . (2.1)
Our goal is modelling multivariate operational risk. Hence, the question is how dif-
ferent one-dimensional compound Poisson processes Si(·) =∑Ni(·)
k=1 X ik can be used to
construct a d-dimensional compound Poisson process S = (S1, S2, . . . , Sd) with in general
dependent components. It is worthwhile to recall the similar situation in the case of the
2
[0, t] (called frequency), independent of (X ik)k∈N. The bank’s total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests to model dependence between different operational risk
cells by means of different concepts, which basically split into models for frequency de-
pendence on the one hand and for severity dependence on the other hand.
Here we suggest a model based on the new concept of Levy copulas (see e.g. Cont
& Tankov (2004)), which models dependence in frequency and severity simultaneously,
yielding a model with comparably few parameters. Moreover, our model has the same
advantage as a distributional copula: the dependence structure between different cells can
be separated from the marginal processes Si for i = 1, . . . , d. This approach allows for
closed-form approximations for operational VAR (OpVAR).
2 Dependent Operational Risks and Levy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about
AMA practices at financial services firms, we assume that the loss frequency processes Ni
in (1.1) follows a homogeneous Poisson process with rate λi > 0. Then the aggregate loss
(1.1) constitutes a compound Poisson process and is therefore a Levy process (actually,
the compound Poisson process is the only Levy process with piecewise constant sample
paths).
A key element in the theory of Levy processes is the notion of the so-called Levy
measure. A Levy measure controls the jump behaviour of a Levy process and, therefore,
has an intuitive interpretation, in particular in the context of operational risk. The Levy
measure of a single operational risk cell measures the expected number of losses per unit
time with a loss amount in a prespecified interval. For our compound Poisson model,
the Levy measure Πi of the cell process Si is completely determined by the frequency
parameter λi > 0 and the distribution function Fi of the cell’s severity: Πi([0, x)) :=
λiP (X i ≤ x) = λiFi(x) for x ∈ [0,∞). The corresponding one-dimensional tail integral
is defined as
Πi(x) := Πi([x,∞)) = λiP (X i > x) = λiF i(x) . (2.1)
Our goal is modelling multivariate operational risk. Hence, the question is how dif-
ferent one-dimensional compound Poisson processes Si(·) =∑Ni(·)
k=1 X ik can be used to
construct a d-dimensional compound Poisson process S = (S1, S2, . . . , Sd) with in general
dependent components. It is worthwhile to recall the similar situation in the case of the
2
[0, t] (called frequency), independent of (X ik)k∈N. The bank’s total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests to model dependence between different operational risk
cells by means of different concepts, which basically split into models for frequency de-
pendence on the one hand and for severity dependence on the other hand.
Here we suggest a model based on the new concept of Levy copulas (see e.g. Cont
& Tankov (2004)), which models dependence in frequency and severity simultaneously,
yielding a model with comparably few parameters. Moreover, our model has the same
advantage as a distributional copula: the dependence structure between different cells can
be separated from the marginal processes Si for i = 1, . . . , d. This approach allows for
closed-form approximations for operational VAR (OpVAR).
2 Dependent Operational Risks and Levy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about
AMA practices at financial services firms, we assume that the loss frequency processes Ni
in (1.1) follows a homogeneous Poisson process with rate λi > 0. Then the aggregate loss
(1.1) constitutes a compound Poisson process and is therefore a Levy process (actually,
the compound Poisson process is the only Levy process with piecewise constant sample
paths).
A key element in the theory of Levy processes is the notion of the so-called Levy
measure. A Levy measure controls the jump behaviour of a Levy process and, therefore,
has an intuitive interpretation, in particular in the context of operational risk. The Levy
measure of a single operational risk cell measures the expected number of losses per unit
time with a loss amount in a prespecified interval. For our compound Poisson model,
the Levy measure Πi of the cell process Si is completely determined by the frequency
parameter λi > 0 and the distribution function Fi of the cell’s severity: Πi([0, x)) :=
λiP (X i ≤ x) = λiFi(x) for x ∈ [0,∞). The corresponding one-dimensional tail integral
is defined as
Πi(x) := Πi([x,∞)) = λiP (X i > x) = λiF i(x) . (2.1)
Our goal is modelling multivariate operational risk. Hence, the question is how dif-
ferent one-dimensional compound Poisson processes Si(·) =∑Ni(·)
k=1 X ik can be used to
construct a d-dimensional compound Poisson process S = (S1, S2, . . . , Sd) with in general
dependent components. It is worthwhile to recall the similar situation in the case of the
2
[0, t] (called frequency), independent of (X ik)k∈N. The bank’s total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests to model dependence between different operational risk
cells by means of different concepts, which basically split into models for frequency de-
pendence on the one hand and for severity dependence on the other hand.
Here we suggest a model based on the new concept of Levy copulas (see e.g. Cont
& Tankov (2004)), which models dependence in frequency and severity simultaneously,
yielding a model with comparably few parameters. Moreover, our model has the same
advantage as a distributional copula: the dependence structure between different cells can
be separated from the marginal processes Si for i = 1, . . . , d. This approach allows for
closed-form approximations for operational VAR (OpVAR).
2 Dependent Operational Risks and Levy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about
AMA practices at financial services firms, we assume that the loss frequency processes Ni
in (1.1) follows a homogeneous Poisson process with rate λi > 0. Then the aggregate loss
(1.1) constitutes a compound Poisson process and is therefore a Levy process (actually,
the compound Poisson process is the only Levy process with piecewise constant sample
paths).
A key element in the theory of Levy processes is the notion of the so-called Levy
measure. A Levy measure controls the jump behaviour of a Levy process and, therefore,
has an intuitive interpretation, in particular in the context of operational risk. The Levy
measure of a single operational risk cell measures the expected number of losses per unit
time with a loss amount in a prespecified interval. For our compound Poisson model,
the Levy measure Πi of the cell process Si is completely determined by the frequency
parameter λi > 0 and the distribution function Fi of the cell’s severity: Πi([0, x)) :=
λiP (X i ≤ x) = λiFi(x) for x ∈ [0,∞). The corresponding one-dimensional tail integral
is defined as
Πi(x) := Πi([x,∞)) = λiP (X i > x) = λiF i(x) . (2.1)
Our goal is modelling multivariate operational risk. Hence, the question is how dif-
ferent one-dimensional compound Poisson processes Si(·) =∑Ni(·)
k=1 X ik can be used to
construct a d-dimensional compound Poisson process S = (S1, S2, . . . , Sd) with in general
dependent components. It is worthwhile to recall the similar situation in the case of the
2
[0, t] (called frequency), independent of (X ik)k∈N. The bank’s total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests to model dependence between different operational risk
Multivariate Operational Risk:
Dependence Modelling with Lévy Copulas

Klaus Böcker∗    Claudia Klüppelberg†

Abstract

Simultaneous modelling of operational risks occurring in different event type/business
line cells poses a central challenge for operational risk quantification. Invoking the new
concept of Lévy copulas for dependence modelling yields simple approximations of
high quality for multivariate operational VAR.
1 Introduction

A required feature of any advanced measurement approach (AMA) of Basel II for measuring
operational risk is that it allows for explicit correlations between different operational
risk events. The core problem here is multivariate modelling encompassing all different
event type/business line cells, and thus the question of how their dependence structure
affects a bank's total operational risk. The prototypical loss distribution approach (LDA)
assumes that, for each cell $i = 1, \ldots, d$, the cumulated operational loss $S_i(t)$ up to time $t$
is described by an aggregate loss process

\[ S_i(t) = \sum_{k=1}^{N_i(t)} X^i_k , \qquad t \ge 0 , \tag{1.1} \]

where for each $i$ the sequence $(X^i_k)_{k \in \mathbb{N}}$ consists of independent and identically distributed
(iid) positive random variables with distribution function $F_i$ describing the magnitude of each
loss event (loss severity), and $(N_i(t))_{t \ge 0}$ counts the number of losses in the time interval
∗Risk Reporting & Risk Capital, HypoVereinsbank AG, München, email: [email protected]
†Center for Mathematical Sciences, Munich University of Technology, D-85747 Garching bei München, Germany, email: [email protected], www.ma.tum.de/stat/
$[0, t]$ (called frequency), independent of $(X^i_k)_{k \in \mathbb{N}}$. The bank's total operational risk is then
given as

\[ S^+(t) := S_1(t) + S_2(t) + \cdots + S_d(t) , \qquad t \ge 0 . \tag{1.2} \]
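The aggregate loss model above lends itself to a short simulation. The following sketch is ours, not part of the paper: it draws the cell losses $S_i(t)$ of (1.1) and sums them to the bank total $S^+(t)$ of (1.2). All Poisson rates and lognormal severity parameters are hypothetical.

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Sample N ~ Poisson(lam) via Knuth's algorithm (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def aggregate_loss(lam, t, severity):
    """Simulate S_i(t) = sum_{k=1}^{N_i(t)} X_k^i for one cell (eq. 1.1):
    N_i(t) ~ Poisson(lam * t), severities drawn iid from `severity`."""
    return sum(severity() for _ in range(poisson(lam * t)))

# Two hypothetical cells; rates and lognormal parameters are purely illustrative.
cells = [(5.0, lambda: random.lognormvariate(2.0, 1.0)),
         (3.0, lambda: random.lognormvariate(1.5, 0.8))]

t = 1.0
S = [aggregate_loss(lam, t, sev) for lam, sev in cells]
S_plus = sum(S)   # total operational loss S^+(t), eq. (1.2)
print(S, S_plus)
```

Note that this simulates the cells independently; modelling their joint occurrence is exactly what the Lévy copula construction below addresses.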
The existing literature suggests modelling dependence between different operational risk
cells by means of various concepts, which essentially split into models for frequency
dependence on the one hand and for severity dependence on the other.
Here we suggest a model based on the new concept of Lévy copulas (see e.g. Cont
& Tankov (2004)), which models dependence in frequency and severity simultaneously,
yielding a model with comparatively few parameters. Moreover, our model has the same
advantage as a distributional copula: the dependence structure between different cells can
be separated from the marginal processes $S_i$ for $i = 1, \ldots, d$. This approach allows for
closed-form approximations for operational VAR (OpVAR).
2 Dependent Operational Risks and Lévy Copulas

In accordance with a recent survey of the Basel Committee on Banking Supervision on
AMA practices at financial services firms, we assume that each loss frequency process $N_i$
in (1.1) follows a homogeneous Poisson process with rate $\lambda_i > 0$. Then the aggregate loss
(1.1) constitutes a compound Poisson process and is therefore a Lévy process (in fact,
the compound Poisson process is the only Lévy process with piecewise constant sample
paths).
A key element in the theory of Lévy processes is the notion of the so-called Lévy
measure. A Lévy measure controls the jump behaviour of a Lévy process and, therefore,
has an intuitive interpretation, in particular in the context of operational risk: the Lévy
measure of a single operational risk cell measures the expected number of losses per unit
time with a loss amount in a prespecified interval. For our compound Poisson model,
the Lévy measure $\Pi_i$ of the cell process $S_i$ is completely determined by the frequency
parameter $\lambda_i > 0$ and the distribution function $F_i$ of the cell's severity: $\Pi_i([0, x]) :=
\lambda_i P(X^i \le x) = \lambda_i F_i(x)$ for $x \in [0, \infty)$. The corresponding one-dimensional tail integral
is defined as

\[ \overline{\Pi}_i(x) := \Pi_i([x,\infty)) = \lambda_i P(X^i > x) = \lambda_i \overline{F}_i(x) . \tag{2.1} \]
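To make (2.1) concrete, here is a small sketch of ours (not from the paper), assuming an exponential severity distribution so that the survival function $\overline{F}_i$ has a closed form; the rate and mean are invented for the example.

```python
import math

def tail_integral(lam, sf, x):
    """One-dimensional tail integral (2.1):
    Pi_bar_i(x) = lam_i * P(X^i > x) = lam_i * F_bar_i(x),
    i.e. the expected number of losses per unit time exceeding x."""
    return lam * sf(x)

# Illustrative cell: 10 losses/year on average, exponential severity with mean 2
# (monetary units arbitrary), so F_bar(x) = exp(-x / mean).
lam, mean = 10.0, 2.0
sf = lambda x: math.exp(-x / mean)

print(tail_integral(lam, sf, 0.0))   # at x = 0 this is just lam: the total loss rate
print(tail_integral(lam, sf, 4.0))   # expected yearly number of losses above 4
```

The second value shows the interpretation in the text: the tail integral counts, per unit time, how many losses are expected to exceed a given threshold.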
Our goal is modelling multivariate operational risk. Hence, the question is how different
one-dimensional compound Poisson processes $S_i(\cdot) = \sum_{k=1}^{N_i(\cdot)} X^i_k$ can be used to
construct a $d$-dimensional compound Poisson process $S = (S_1, S_2, \ldots, S_d)$ with, in general,
dependent components. It is worthwhile to recall the similar situation in the case of the
more restrictive setting of static random variables. It is well known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral

\[ \overline{\Pi}(x_1, \ldots, x_d) = \Pi([x_1,\infty) \times \cdots \times [x_d,\infty)) , \qquad x \in [0,\infty]^d , \tag{2.2} \]

can be constructed from the marginal tail integrals (2.1) by means of a Lévy copula. This
representation is the content of Sklar's theorem for Lévy processes with positive jumps,
which basically says that every multivariate tail integral $\overline{\Pi}$ can be decomposed into its
marginal tail integrals and a Lévy copula $C$ according to

\[ \overline{\Pi}(x_1, \ldots, x_d) = C\bigl(\overline{\Pi}_1(x_1), \ldots, \overline{\Pi}_d(x_d)\bigr) , \qquad x \in [0,\infty]^d . \tag{2.3} \]

For a precise formulation of this theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1 (Multivariate compound Poisson model).

(1) All aggregate loss processes $S_i$ for $i = 1, \ldots, d$ are compound Poisson processes with
tail integral $\overline{\Pi}_i(\cdot) = \lambda_i \overline{F}_i(\cdot)$.

(2) The dependence between different cells is modelled by a Lévy copula $C : [0,\infty)^d \to
[0,\infty)$, i.e., the tail integral of the $d$-dimensional compound Poisson process $S = (S_1, \ldots, S_d)$
is defined by

\[ \overline{\Pi}(x_1, \ldots, x_d) = C\bigl(\overline{\Pi}_1(x_1), \ldots, \overline{\Pi}_d(x_d)\bigr) . \]
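The construction in Definition 2.1 can be sketched numerically. This is our illustration with hypothetical parameters, using the complete-positive-dependence Lévy copula min(u, v) that appears in Section 3 below:

```python
import math

def marginal_tail(lam, sf):
    """Return the marginal tail integral Pi_bar_i(x) = lam_i * F_bar_i(x)."""
    return lambda x: lam * sf(x)

def joint_tail(C, tails, xs):
    """Joint tail integral per Definition 2.1(2):
    Pi_bar(x_1, ..., x_d) = C(Pi_bar_1(x_1), ..., Pi_bar_d(x_d))."""
    return C(*(tail(x) for tail, x in zip(tails, xs)))

# Two illustrative cells with exponential severities (all parameters invented).
t1 = marginal_tail(10.0, lambda x: math.exp(-x / 2.0))
t2 = marginal_tail(5.0,  lambda x: math.exp(-x / 3.0))

# Complete positive dependence Levy copula: C(u, v) = min(u, v).
C_dep = lambda u, v: min(u, v)

print(joint_tail(C_dep, (t1, t2), (4.0, 4.0)))
print(joint_tail(C_dep, (t1, t2), (0.0, 0.0)))  # min(lam_1, lam_2): joint loss rate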
3 The Bivariate Clayton Model

A bivariate model is particularly useful to illustrate how dependence modelling via Lévy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Lévy copula, which is similar to the
well-known Clayton copula for distribution functions and is parameterized by $\vartheta > 0$ (see
Cont & Tankov (2004), Example 5.5):

\[ C_\vartheta(u, v) = \bigl(u^{-\vartheta} + v^{-\vartheta}\bigr)^{-1/\vartheta} , \qquad u, v \ge 0 . \]
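A quick numerical check of the two limiting regimes discussed next (our sketch; the input values u, v are arbitrary):

```python
def clayton_levy(u, v, theta):
    """Clayton Levy copula C_theta(u, v) = (u^-theta + v^-theta)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta) ** (-1.0 / theta)

u, v = 3.0, 7.0
# Small theta: tends to 0, the independence case (no simultaneous jumps).
print(clayton_levy(u, v, 0.01))
# Large theta: tends to min(u, v), complete positive dependence.
print(clayton_levy(u, v, 50.0))
```

The first value is numerically negligible while the second is essentially min(u, v), matching the two limits described in the text.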
This copula covers the whole range of positive dependence. For $\vartheta \to 0$ we obtain
independence, and then, as we will see below, losses in different cells never occur at the
same time. For $\vartheta \to \infty$ we get the complete positive dependence Lévy copula given by
$C_\|(u, v) = \min(u, v)$. We now decompose the two cells' aggregate loss processes into
different components (where the time parameter $t$ is dropped for simplicity):
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Levy copula. This
representation is the content of Sklar’s theorem for Levy processes with positive jumps,
which basically says that every multivariate tail integral Π can be decomposed into its
marginal tail integrals and a Levy copula C according to
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]d . (2.3)
For a precise formulation of this Theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with
tail integral Πi(·) = λiFi(·).
(2) The dependence between different cells is modelled by a Levy copula C : [0,∞)d →
[0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd)
is defined by
Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)).
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Levy
copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1).
The dependence structure is modelled by a Clayton Levy copula, which is similar to the
well-known Clayton copula for distribution functions and parameterized by ϑ > 0 (see
Cont & Tankov (2004), Example 5.5):
Cϑ(u, v) = (u−ϑ + v−ϑ)−1/ϑ , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain in-
dependence and then, as we will see below, losses in different cells never occur at the
same time. For ϑ → ∞ we get the complete positive dependence Levy copula given by
C‖(u, v) = min(u, v). We decompose now the two cells’ aggregate loss processes into
3
[0, t] (called frequency), independent of (X^i_k)_{k∈N}. The bank's total operational risk is then
given as
S+(t) := S1(t) + S2(t) + · · · + Sd(t) , t ≥ 0 . (1.2)
The present literature suggests modelling dependence between different operational risk cells by means of various concepts, which basically split into models for frequency dependence on the one hand and for severity dependence on the other.
Here we suggest a model based on the new concept of Lévy copulas (see e.g. Cont & Tankov (2004)), which models dependence in frequency and severity simultaneously, yielding a model with comparably few parameters. Moreover, our model has the same advantage as a distributional copula: the dependence structure between different cells can be separated from the marginal processes Si for i = 1, . . . , d. This approach allows for closed-form approximations for operational VaR (OpVaR).
2 Dependent Operational Risks and Lévy Copulas
In accordance with a recent survey of the Basel Committee on Banking Supervision about AMA practices at financial services firms, we assume that each loss frequency process Ni in (1.1) follows a homogeneous Poisson process with rate λi > 0. Then the aggregate loss (1.1) constitutes a compound Poisson process and is therefore a Lévy process (in fact, the compound Poisson process is the only Lévy process with piecewise constant sample paths).
A key element in the theory of Lévy processes is the notion of the so-called Lévy measure. A Lévy measure controls the jump behaviour of a Lévy process and, therefore, has an intuitive interpretation, in particular in the context of operational risk. The Lévy measure of a single operational risk cell measures the expected number of losses per unit time with a loss amount in a prespecified interval. For our compound Poisson model, the Lévy measure Πi of the cell process Si is completely determined by the frequency parameter λi > 0 and the distribution function Fi of the cell's severity: Πi([0, x)) := λi P(X^i ≤ x) = λi Fi(x) for x ∈ [0,∞). The corresponding one-dimensional tail integral is defined as

Πi(x) := Πi([x,∞)) = λi P(X^i > x) = λi F̄i(x) , (2.1)

where F̄i := 1 − Fi denotes the severity's survival function.
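To make (2.1) concrete, here is a minimal numerical sketch that evaluates a cell's tail integral. The lognormal severity, the function name and the parameter values are our own illustrative assumptions, not part of the paper.

```python
import math

def tail_integral(lam, x, mu=0.0, sigma=1.0):
    """One-dimensional tail integral Pi_i(x) = lambda_i * P(X^i > x)
    for a cell with Poisson rate lam and an assumed lognormal severity."""
    if x <= 0:
        return lam  # at x = 0 the tail integral recovers the loss frequency
    # survival function of Lognormal(mu, sigma) via the standard normal CDF
    z = (math.log(x) - mu) / sigma
    survival = 0.5 * math.erfc(z / math.sqrt(2.0))
    return lam * survival

# A cell with 10 expected losses per year: the expected number of losses per
# year exceeding the median severity exp(mu) = 1 is half of that.
print(tail_integral(10.0, 1.0))  # → 5.0
```

Read as a function of the threshold x, this is exactly the "expected number of losses per unit time with a loss amount above x" interpretation given above.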
Our goal is modelling multivariate operational risk. Hence, the question is how different one-dimensional compound Poisson processes Si(·) = Σ_{k=1}^{Ni(·)} X^i_k can be used to construct a d-dimensional compound Poisson process S = (S1, S2, . . . , Sd) with in general dependent components. It is worthwhile to recall the similar situation in the case of the
more restrictive setting of static random variables. It is well-known that the dependence
structure of a random vector can be disentangled from its marginals by introducing a
distributional copula. Similarly, a multivariate tail integral
Π(x1, . . . , xd) = Π([x1,∞) × · · · × [xd,∞)) , x ∈ [0,∞]d , (2.2)
can be constructed from the marginal tail integrals (2.1) by means of a Lévy copula. This representation is the content of Sklar's theorem for Lévy processes with positive jumps, which basically says that every multivariate tail integral Π can be decomposed into its marginal tail integrals and a Lévy copula C according to

Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) , x ∈ [0,∞]^d . (2.3)

For a precise formulation of this theorem we refer to Cont & Tankov (2004), Theorem 5.6.
Now we can define the following prototypical LDA model.
Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes Si for i = 1, . . . , d are compound Poisson processes with tail integral Πi(·) = λiF̄i(·).
(2) The dependence between different cells is modelled by a Lévy copula C : [0,∞)^d → [0,∞), i.e., the tail integral of the d-dimensional compound Poisson process S = (S1, . . . , Sd) is defined by

Π(x1, . . . , xd) = C(Π1(x1), . . . , Πd(xd)) .
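The composition in Definition 2.1(2) is mechanical, as the following sketch shows. The exponential severities, the rates, and the use of the complete positive dependence copula min (extended to d dimensions) are purely illustrative choices of ours.

```python
import math

def complete_dependence(*u):
    """Complete positive dependence Levy copula: C(u1,...,ud) = min(u1,...,ud)."""
    return min(u)

def multivariate_tail(xs, lams, survs, copula):
    """Tail integral of Definition 2.1(2):
    Pi(x1,...,xd) = C(Pi_1(x1), ..., Pi_d(xd)), with Pi_i(x) = lam_i * survival_i(x)."""
    return copula(*(lam * s(x) for lam, s, x in zip(lams, survs, xs)))

# Three cells, all with (assumed) Exp(1) severities
survs = [lambda x: math.exp(-x)] * 3
pi = multivariate_tail([1.0, 0.5, 2.0], [5.0, 8.0, 3.0], survs, complete_dependence)
print(pi)  # the smallest marginal tail integral, here 3 * exp(-2)
```

Swapping `copula` for another Lévy copula changes the joint tail integral while leaving the marginals untouched, which is precisely the separation of dependence and margins described above.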
3 The Bivariate Clayton Model
A bivariate model is particularly useful to illustrate how dependence modelling via Lévy copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1). The dependence structure is modelled by a Clayton Lévy copula, which is similar to the well-known Clayton copula for distribution functions and is parameterized by ϑ > 0 (see Cont & Tankov (2004), Example 5.5):

Cϑ(u, v) = (u^{−ϑ} + v^{−ϑ})^{−1/ϑ} , u, v ≥ 0 .
This copula covers the whole range of positive dependence. For ϑ → 0 we obtain independence and then, as we will see below, losses in different cells never occur at the same time. For ϑ → ∞ we get the complete positive dependence Lévy copula given by C‖(u, v) = min(u, v). We now decompose the two cells' aggregate loss processes into
different components (where the time parameter t is dropped for simplicity):

S1 = S⊥1 + S‖1 = Σ_{k=1}^{N⊥1} X^{1⊥}_k + Σ_{l=1}^{N‖} X^{1‖}_l ,

S2 = S⊥2 + S‖2 = Σ_{m=1}^{N⊥2} X^{2⊥}_m + Σ_{l=1}^{N‖} X^{2‖}_l , (3.1)
where S‖1 and S‖2 describe the aggregate losses of cells 1 and 2 that are generated by “common shocks”, and S⊥1 and S⊥2 describe the aggregate losses of one cell only. Note that apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by

Cϑ(λ1, λ2) = lim_{x↓0} Π‖1(x) = lim_{x↓0} Π‖2(x) = (λ1^{−ϑ} + λ2^{−ϑ})^{−1/ϑ} =: λ‖ ,

which shows that the number of simultaneous loss events is controlled by the Lévy copula. Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2), where the left and right bounds refer to ϑ → 0 and ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at the same instant of time.
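The dependence of λ‖ on ϑ is easy to check numerically. In the sketch below the rates λ1, λ2 and the grid of ϑ values are arbitrary choices of ours:

```python
def simultaneous_rate(lam1, lam2, theta):
    """lambda_par = C_theta(lam1, lam2): the Poisson rate of common shocks
    under the Clayton Levy copula with parameter theta > 0."""
    return (lam1 ** -theta + lam2 ** -theta) ** (-1.0 / theta)

lam1, lam2 = 5.0, 8.0
for theta in (0.05, 1.0, 2.0, 50.0):
    print(theta, simultaneous_rate(lam1, lam2, theta))
# small theta: rate near 0 (independence, no simultaneous losses);
# large theta: rate approaches min(lam1, lam2) = 5 (complete dependence)
```

The printed rates sweep monotonically through the interval (0, min(λ1, λ2)), matching the bounds stated above.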
Also the severity distributions of X^{1‖} and X^{2‖}, as well as their dependence structure, are determined by the Lévy copula. To see this, define the joint survival function as

F̄‖(x1, x2) := P(X^{1‖} > x1, X^{2‖} > x2) = (1/λ‖) Cϑ(Π1(x1), Π2(x2)) , (3.2)

with marginals

F̄‖1(x1) = lim_{x2↓0} F̄‖(x1, x2) = (1/λ‖) Cϑ(Π1(x1), λ2) , (3.3)

F̄‖2(x2) = lim_{x1↓0} F̄‖(x1, x2) = (1/λ‖) Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F̄‖1 and F̄‖2 are different from F1 and F2, respectively. To explicitly extract the dependence structure between the severities of simultaneous losses X^{1‖} and X^{2‖}, we use the concept of a distributional survival copula. Using (3.2)–(3.4) we see that the survival copula Sϑ for the severity tail distributions F̄‖1(·) and F̄‖2(·) is the well-known distributional Clayton copula; i.e.,

Sϑ(u, v) = (u^{−ϑ} + v^{−ϑ} − 1)^{−1/ϑ} , 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2 we obtain, for x1, x2 ≥ 0,

Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,

Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,

so that λ⊥1 = λ1 − λ‖ and λ⊥2 = λ2 − λ‖.
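These relations can be verified numerically: evaluating Π⊥1 at zero must return λ⊥1 = λ1 − λ‖. In the sketch below, the exponential severities and the parameter values are illustrative assumptions of ours.

```python
import math

def clayton(u, v, theta):
    """Clayton Levy copula C_theta(u, v) = (u^-theta + v^-theta)^(-1/theta)."""
    if u == 0.0 or v == 0.0:
        return 0.0
    return (u ** -theta + v ** -theta) ** (-1.0 / theta)

lam1, lam2, theta = 5.0, 8.0, 2.0
surv = lambda x: math.exp(-x)              # assumed Exp(1) severities
tail1 = lambda x: lam1 * surv(x)           # Pi_1(x)
tail2 = lambda x: lam2 * surv(x)           # Pi_2(x)

lam_sim = clayton(lam1, lam2, theta)       # lambda_par, rate of common shocks

# tail integrals of the independent parts, as derived above
tail1_perp = lambda x: tail1(x) - clayton(tail1(x), lam2, theta)
tail2_perp = lambda x: tail2(x) - clayton(lam1, tail2(x), theta)

print(tail1_perp(0.0), lam1 - lam_sim)     # both give lambda_perp_1
print(tail2_perp(0.0), lam2 - lam_sim)     # both give lambda_perp_2
```

Because Π1(0) = λ1 exactly, the two printed values in each line agree to machine precision; the common-shock rate λ‖ is simply carved out of each cell's total frequency.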
4
more restrictive setting of static random variables. It is well known that the dependence structure of a random vector can be disentangled from its marginals by introducing a distributional copula. Similarly, a multivariate tail integral
\[
\Pi(x_1, \ldots, x_d) = \Pi([x_1, \infty) \times \cdots \times [x_d, \infty)), \qquad x \in [0, \infty]^d, \tag{2.2}
\]
can be constructed from the marginal tail integrals (2.1) by means of a Lévy copula. This representation is the content of Sklar's theorem for Lévy processes with positive jumps, which basically says that every multivariate tail integral $\Pi$ can be decomposed into its marginal tail integrals and a Lévy copula $C$ according to
\[
\Pi(x_1, \ldots, x_d) = C(\Pi_1(x_1), \ldots, \Pi_d(x_d)), \qquad x \in [0, \infty]^d. \tag{2.3}
\]
For a precise formulation of this theorem we refer to Cont & Tankov (2004), Theorem 5.6. Now we can define the following prototypical LDA model.

Definition 2.1. [Multivariate Compound Poisson model]
(1) All aggregate loss processes $S_i$ for $i = 1, \ldots, d$ are compound Poisson processes with tail integral $\Pi_i(\cdot) = \lambda_i \overline{F}_i(\cdot)$.
(2) The dependence between different cells is modelled by a Lévy copula $C \colon [0, \infty)^d \to [0, \infty)$; i.e., the tail integral of the $d$-dimensional compound Poisson process $S = (S_1, \ldots, S_d)$ is defined by
\[
\Pi(x_1, \ldots, x_d) = C(\Pi_1(x_1), \ldots, \Pi_d(x_d)).
\]

3 The Bivariate Clayton Model

A bivariate model is particularly useful to illustrate how dependence modelling via Lévy copulas works. Therefore, we now focus on two operational risk cells as in Definition 2.1(1). The dependence structure is modelled by a Clayton Lévy copula, which is similar to the well-known Clayton copula for distribution functions and is parameterized by $\vartheta > 0$ (see Cont & Tankov (2004), Example 5.5):
\[
C_\vartheta(u, v) = (u^{-\vartheta} + v^{-\vartheta})^{-1/\vartheta}, \qquad u, v \geq 0.
\]
This copula covers the whole range of positive dependence. For $\vartheta \to 0$ we obtain independence, and then, as we will see below, losses in different cells never occur at the same time. For $\vartheta \to \infty$ we obtain the complete positive dependence Lévy copula given by $C_{\parallel}(u, v) = \min(u, v)$. We now decompose the two cells' aggregate loss processes into
different components (where the time parameter $t$ is dropped for simplicity):
\[
S_1 = S_1^{\perp} + S_1^{\parallel} = \sum_{k=1}^{N_1^{\perp}} X_k^{1\perp} + \sum_{l=1}^{N^{\parallel}} X_l^{1\parallel}, \qquad
S_2 = S_2^{\perp} + S_2^{\parallel} = \sum_{m=1}^{N_2^{\perp}} X_m^{2\perp} + \sum_{l=1}^{N^{\parallel}} X_l^{2\parallel}, \tag{3.1}
\]
where $S_1^{\parallel}$ and $S_2^{\parallel}$ describe the aggregate losses of cells 1 and 2 that are generated by "common shocks," and $S_1^{\perp}$ and $S_2^{\perp}$ describe aggregate losses of one cell only. Note that apart from $S_1^{\parallel}$ and $S_2^{\parallel}$, all compound Poisson processes on the right-hand side of (3.1) are mutually independent. The frequency of simultaneous losses is given by
\[
C_\vartheta(\lambda_1, \lambda_2) = \lim_{x \downarrow 0} \Pi_2^{\parallel}(x) = \lim_{x \downarrow 0} \Pi_1^{\parallel}(x) = (\lambda_1^{-\vartheta} + \lambda_2^{-\vartheta})^{-1/\vartheta} =: \lambda_{\parallel},
\]
which shows that the number of simultaneous loss events is controlled by the Lévy copula. Obviously, $0 \leq \lambda_{\parallel} \leq \min(\lambda_1, \lambda_2)$, where the left and right bounds refer to $\vartheta \to 0$ and $\vartheta \to \infty$, respectively. Consequently, in the case of independence, losses never happen at the same instant of time.

Also the severity distributions of $X^{1\parallel}$ and $X^{2\parallel}$, as well as their dependence structure, are determined by the Lévy copula. To see this, define the joint survival function as
\[
\overline{F}_{\parallel}(x_1, x_2) := P(X^{1\parallel} > x_1,\, X^{2\parallel} > x_2) = \frac{1}{\lambda_{\parallel}}\, C_\vartheta(\Pi_1(x_1), \Pi_2(x_2)) \tag{3.2}
\]
with marginals
\[
\overline{F}_1^{\parallel}(x_1) = \lim_{x_2 \downarrow 0} \overline{F}_{\parallel}(x_1, x_2) = \frac{1}{\lambda_{\parallel}}\, C_\vartheta(\Pi_1(x_1), \lambda_2), \tag{3.3}
\]
\[
\overline{F}_2^{\parallel}(x_2) = \lim_{x_1 \downarrow 0} \overline{F}_{\parallel}(x_1, x_2) = \frac{1}{\lambda_{\parallel}}\, C_\vartheta(\lambda_1, \Pi_2(x_2)). \tag{3.4}
\]
In particular, it follows that $F_1^{\parallel}$ and $F_2^{\parallel}$ are different from $F_1$ and $F_2$, respectively. To explicitly extract the dependence structure between the severities of simultaneous losses $X^{1\parallel}$ and $X^{2\parallel}$, we use the concept of a distributional survival copula. Using (3.2)–(3.4) we see that the survival copula $S_\vartheta$ for the tail severity distributions $\overline{F}_1^{\parallel}(\cdot)$ and $\overline{F}_2^{\parallel}(\cdot)$ is the well-known distributional Clayton copula; i.e.,
\[
S_\vartheta(u, v) = (u^{-\vartheta} + v^{-\vartheta} - 1)^{-1/\vartheta}, \qquad 0 \leq u, v \leq 1.
\]
For the tail integrals of the independent loss processes $S_1^{\perp}$ and $S_2^{\perp}$ we obtain, for $x_1, x_2 \geq 0$,
\[
\Pi_1^{\perp}(x_1) = \Pi_1(x_1) - \Pi_1^{\parallel}(x_1) = \Pi_1(x_1) - C_\vartheta(\Pi_1(x_1), \lambda_2),
\]
\[
\Pi_2^{\perp}(x_2) = \Pi_2(x_2) - \Pi_2^{\parallel}(x_2) = \Pi_2(x_2) - C_\vartheta(\lambda_1, \Pi_2(x_2)),
\]
so that $\lambda_1^{\perp} = \lambda_1 - \lambda_{\parallel}$ and $\lambda_2^{\perp} = \lambda_2 - \lambda_{\parallel}$.
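The bounds on $\lambda_{\parallel}$ and the superposition in (3.1) can be checked numerically. The sketch below uses illustrative values $\lambda_1 = 5$, $\lambda_2 = 8$, $\vartheta = 2$ (assumptions, not figures from the paper): it evaluates the Clayton Lévy copula at the cell frequencies, inspects the $\vartheta \to 0$ and $\vartheta \to \infty$ limits, and superposes three independent Poisson streams with rates $\lambda_1 - \lambda_{\parallel}$, $\lambda_2 - \lambda_{\parallel}$ and $\lambda_{\parallel}$ to confirm that each cell's total event rate recombines to $\lambda_i$.

```python
import random

def clayton_levy_copula(u, v, theta):
    """Clayton Levy copula C_theta(u, v) = (u**-theta + v**-theta)**(-1/theta)."""
    return (u ** -theta + v ** -theta) ** (-1.0 / theta)

# Illustrative cell frequencies and dependence parameter (assumptions).
lam1, lam2, theta = 5.0, 8.0, 2.0

# Frequency of simultaneous losses and its limiting behaviour.
lam_par = clayton_levy_copula(lam1, lam2, theta)
near_indep = clayton_levy_copula(lam1, lam2, 0.01)   # ~0: no common shocks
near_comon = clayton_levy_copula(lam1, lam2, 100.0)  # ~min(lam1, lam2)

lam1_perp, lam2_perp = lam1 - lam_par, lam2 - lam_par

def poisson_count(rate, horizon, rng):
    """Number of events of a Poisson process on [0, horizon],
    generated from exponential interarrival times."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return n
        n += 1

# Superpose the three independent streams of (3.1) and check that each
# cell's total event rate recombines to lambda_i.
rng, T = random.Random(1), 10_000.0
n1_perp = poisson_count(lam1_perp, T, rng)
n2_perp = poisson_count(lam2_perp, T, rng)
n_par = poisson_count(lam_par, T, rng)

print(f"lambda_par = {lam_par:.4f}  (bounds: 0 .. {min(lam1, lam2)})")
print(f"cell 1 rate ~ {(n1_perp + n_par) / T:.3f}  (target {lam1})")
print(f"cell 2 rate ~ {(n2_perp + n_par) / T:.3f}  (target {lam2})")
```

Simulating the decomposition rather than the two dependent marginals directly is exactly what makes the Lévy-copula construction convenient: only three independent streams are needed.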
Figure 4.1. Decomposition of the domain of the tail integral $\Pi^+(z)$ for $z = 6$ into a simultaneous loss part $\Pi_{\parallel}^{+}(z)$ (orange area) and independent parts $\Pi_1^{\perp}(z)$ (solid black line) and $\Pi_2^{\perp}(z)$ (dashed black line).

4 Analytical Approximations for Operational VAR

In this section we turn to the quantification of total operational loss encompassing all operational risk cells and, therefore, we focus on the total aggregate loss process $S^+$ defined in (1.2). Our goal is to provide some general insight into multivariate operational risk and to find out how different dependence structures (modelled by Lévy copulas) affect OpVAR, which is the standard metric in operational risk measurement. The tail integral associated with $S^+$ is given by
\[
\Pi^+(z) = \Pi\Big(\Big\{(x_1, \ldots, x_d) \in [0, \infty)^d : \sum_{i=1}^{d} x_i \geq z\Big\}\Big), \qquad z \geq 0. \tag{4.1}
\]
For $d = 2$ we can write
\[
\Pi^+(z) = \Pi_1^{\perp}(z) + \Pi_2^{\perp}(z) + \Pi_{\parallel}^{+}(z), \qquad z \geq 0, \tag{4.2}
\]
where $\Pi_1^{\perp}(\cdot)$ and $\Pi_2^{\perp}(\cdot)$ are the independent jump parts and
\[
\Pi_{\parallel}^{+}(z) = \Pi(\{(x_1, x_2) \in (0, \infty)^2 : x_1 + x_2 \geq z\}), \qquad z \geq 0,
\]
describes the dependent part due to simultaneous loss events; the situation is depicted in Figure 4.1.
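The decomposition (4.2) lends itself to a small Monte Carlo estimate of $\Pi^+(z)$. The sketch below is an illustration only, under made-up assumptions (exponential severities $\overline{F}_i(x) = e^{-x/\mu_i}$ and the same toy parameters as before): the independent parts are computed in closed form, while the common-shock severity pair is sampled from the Clayton survival copula $S_\vartheta$ (by conditional inversion) pushed through numerical inverses of the marginals (3.3)–(3.4).

```python
import math
import random

# Illustrative parameters (assumptions, not from the paper): two cells with
# exponential severities bar F_i(x) = exp(-x / mu_i).
lam1, lam2, theta = 5.0, 8.0, 2.0
mu1, mu2 = 1.0, 2.0

def C(u, v):
    """Clayton Levy copula."""
    return (u ** -theta + v ** -theta) ** (-1.0 / theta)

def Pi1(x): return lam1 * math.exp(-x / mu1)  # marginal tail integrals
def Pi2(x): return lam2 * math.exp(-x / mu2)

lam_par = C(lam1, lam2)                             # rate of simultaneous losses

def Fbar1_par(x): return C(Pi1(x), lam2) / lam_par  # severity marginal (3.3)
def Fbar2_par(x): return C(lam1, Pi2(x)) / lam_par  # severity marginal (3.4)

def invert(fbar, p, hi=100.0):
    """Bisection inverse of a decreasing survival function on [0, hi]."""
    lo = 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fbar(mid) > p else (lo, mid)
    return 0.5 * (lo + hi)

def common_shock_pair(rng):
    """Severity pair of one simultaneous loss: draw (u, v) from the Clayton
    copula S_theta by conditional inversion, then map through the inverse
    marginal survival functions."""
    u, w = rng.random(), rng.random()
    v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return invert(Fbar1_par, u), invert(Fbar2_par, v)

# Assemble the three terms of (4.2) at level z.
rng, z, n = random.Random(7), 4.0, 5_000
hits = sum(1 for _ in range(n) if sum(common_shock_pair(rng)) >= z)
pi_plus_par = lam_par * hits / n          # dependent part, by Monte Carlo
pi_perp1 = Pi1(z) - C(Pi1(z), lam2)       # independent parts, closed form
pi_perp2 = Pi2(z) - C(lam1, Pi2(z))
pi_plus = pi_perp1 + pi_perp2 + pi_plus_par
print(f"Pi+({z}) ~ {pi_plus:.3f}")
```

Note that only the common-shock term requires simulation; the independent jump parts follow directly from the tail-integral formulas of Section 3.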
Since for every compound Poisson process with intensity $\lambda > 0$ and positive jumps with distribution function $F$ the tail integral is given by $\Pi(\cdot) = \lambda \overline{F}(\cdot)$, it follows from
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter t is dropped for simplicity):
S1 = S⊥1 + S‖1 =
N⊥1∑
k=1
X1⊥k +
N‖∑
l=1
X1‖l ,
S2 = S⊥2 + S‖2 =
N⊥2∑
m=1
X2⊥m +
N‖∑
l=1
X2‖l ,
(3.1)
where S‖1 and S‖2 describe the aggregate losses of cell 1 and 2 that is generated by
“common shocks”, and S⊥1 and S⊥2 describe aggregate losses of one cell only. Note that
apart from S‖1 and S‖2, all compound Poisson processes on the right-hand side of (3.1)
are mutually independent. The frequency of simultaneous losses is given by
Cϑ(λ1, λ2) = limx↓0
Π‖2(x) = limx↓0
Π‖1(x) = (λ−θ1 + λ−θ
2 )−1/θ =: λ‖ ,
which shows that the number of simultaneous loss events is controlled by the Levy copula.
Obviously, 0 ≤ λ‖ ≤ min(λ1, λ2),where the left and right bounds refer to ϑ → 0 and
ϑ → ∞, respectively. Consequently, in the case of independence, losses never happen at
the same instant of time.
Also the severity distributions of X1‖ and X2
‖ as well as their dependence structure are
determined by the Levy copula. To see this, define the joint survival function as
F ‖(x1, x2) := P (X1‖ > x1, X
2‖ > x2) =
1
λ‖
Cϑ(Π1(x1), Π2(x2)) (3.2)
with marginals
F ‖1(x1) = limx2↓0
F ‖(x1, x2) =1
λ‖
Cϑ(Π1(x1), λ2) (3.3)
F ‖2(x2) = limx1↓0
F ‖(x1, x2) =1
λ‖
Cϑ(λ1, Π2(x2)) . (3.4)
In particular, it follows that F‖1 and F‖2 are different from F1 and F2, respectively. To
explicitly extract the dependence structure between the severities of simultaneous losses
X1‖ and X2
‖ we use the concept of a distributional survival copula. Using (3.2)–(3.4) we
see that the survival copula Sϑ for the tail severity distributions F ‖1(·) and F ‖2(·) is the
well-known distributional Clayton copula; i.e.
Sϑ(u, v) = (u−ϑ + v−ϑ − 1)−1/ϑ, 0 ≤ u, v ≤ 1 .
For the tail integrals of the independent loss processes S⊥1 and S⊥2. we obtain for
x1, x2 ≥ 0
Π⊥1(x1) = Π1(x1) − Π‖1(x1) = Π1(x1) − Cϑ(Π1(x1), λ2) ,
Π⊥2(x2) = Π2(x2) − Π‖2(x2) = Π2(x2) − Cϑ(λ1, Π2(x2)) ,
so that λ⊥1 = λ1 − λ‖ , λ⊥2 = λ2 − λ‖.
4
different components (where the time parameter $t$ is dropped for simplicity):
$$
S_1 = S_1^{\perp} + S_1^{\parallel} = \sum_{k=1}^{N_1^{\perp}} X_k^{1\perp} + \sum_{l=1}^{N^{\parallel}} X_l^{1\parallel}\,, \qquad
S_2 = S_2^{\perp} + S_2^{\parallel} = \sum_{m=1}^{N_2^{\perp}} X_m^{2\perp} + \sum_{l=1}^{N^{\parallel}} X_l^{2\parallel}\,,
\tag{3.1}
$$
where $S_1^{\parallel}$ and $S_2^{\parallel}$ describe the aggregate losses of cells 1 and 2 that are generated by "common shocks," and $S_1^{\perp}$ and $S_2^{\perp}$ describe aggregate losses of one cell only. Note that apart from $S_1^{\parallel}$ and $S_2^{\parallel}$, all compound Poisson processes on the right-hand side of (3.1) are mutually independent. The frequency of simultaneous losses is given by
$$
C_{\vartheta}(\lambda_1, \lambda_2) = \lim_{x \downarrow 0} \overline{\Pi}_2^{\parallel}(x) = \lim_{x \downarrow 0} \overline{\Pi}_1^{\parallel}(x) = \bigl(\lambda_1^{-\vartheta} + \lambda_2^{-\vartheta}\bigr)^{-1/\vartheta} =: \lambda^{\parallel}\,,
$$
which shows that the number of simultaneous loss events is controlled by the Lévy copula. Obviously, $0 \le \lambda^{\parallel} \le \min(\lambda_1, \lambda_2)$, where the left and right bounds refer to $\vartheta \to 0$ and $\vartheta \to \infty$, respectively. Consequently, in the case of independence, losses never happen at the same instant of time.

The severity distributions of $X^{1\parallel}$ and $X^{2\parallel}$, as well as their dependence structure, are also determined by the Lévy copula. To see this, define the joint survival function as
$$
\overline{F}^{\parallel}(x_1, x_2) := P\bigl(X^{1\parallel} > x_1,\, X^{2\parallel} > x_2\bigr) = \frac{1}{\lambda^{\parallel}}\, C_{\vartheta}\bigl(\overline{\Pi}_1(x_1), \overline{\Pi}_2(x_2)\bigr)
\tag{3.2}
$$
with marginals
$$
\overline{F}_1^{\parallel}(x_1) = \lim_{x_2 \downarrow 0} \overline{F}^{\parallel}(x_1, x_2) = \frac{1}{\lambda^{\parallel}}\, C_{\vartheta}\bigl(\overline{\Pi}_1(x_1), \lambda_2\bigr)
\tag{3.3}
$$
$$
\overline{F}_2^{\parallel}(x_2) = \lim_{x_1 \downarrow 0} \overline{F}^{\parallel}(x_1, x_2) = \frac{1}{\lambda^{\parallel}}\, C_{\vartheta}\bigl(\lambda_1, \overline{\Pi}_2(x_2)\bigr)\,.
\tag{3.4}
$$
In particular, it follows that $F_1^{\parallel}$ and $F_2^{\parallel}$ are different from $F_1$ and $F_2$, respectively. To explicitly extract the dependence structure between the severities of simultaneous losses $X^{1\parallel}$ and $X^{2\parallel}$, we use the concept of a distributional survival copula. Using (3.2)–(3.4), we see that the survival copula $S_{\vartheta}$ for the tail severity distributions $\overline{F}_1^{\parallel}(\cdot)$ and $\overline{F}_2^{\parallel}(\cdot)$ is the well-known distributional Clayton copula; i.e.,
$$
S_{\vartheta}(u, v) = \bigl(u^{-\vartheta} + v^{-\vartheta} - 1\bigr)^{-1/\vartheta}, \qquad 0 \le u, v \le 1\,.
$$
For the tail integrals of the independent loss processes $S_1^{\perp}$ and $S_2^{\perp}$ we obtain, for $x_1, x_2 \ge 0$,
$$
\overline{\Pi}_1^{\perp}(x_1) = \overline{\Pi}_1(x_1) - \overline{\Pi}_1^{\parallel}(x_1) = \overline{\Pi}_1(x_1) - C_{\vartheta}\bigl(\overline{\Pi}_1(x_1), \lambda_2\bigr)\,,
$$
$$
\overline{\Pi}_2^{\perp}(x_2) = \overline{\Pi}_2(x_2) - \overline{\Pi}_2^{\parallel}(x_2) = \overline{\Pi}_2(x_2) - C_{\vartheta}\bigl(\lambda_1, \overline{\Pi}_2(x_2)\bigr)\,,
$$
so that $\lambda_1^{\perp} = \lambda_1 - \lambda^{\parallel}$ and $\lambda_2^{\perp} = \lambda_2 - \lambda^{\parallel}$.
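The frequency split above is easy to sanity-check numerically. The following sketch (a hypothetical illustration, not from the article; the frequencies and the $\vartheta$ grid are made-up values) computes $\lambda^{\parallel} = (\lambda_1^{-\vartheta} + \lambda_2^{-\vartheta})^{-1/\vartheta}$ for the Clayton Lévy copula and verifies the bounds $0 \le \lambda^{\parallel} \le \min(\lambda_1, \lambda_2)$:

```python
def clayton_levy_simultaneous_rate(lam1: float, lam2: float, theta: float) -> float:
    """Frequency of simultaneous losses, lambda_par = C_theta(lam1, lam2),
    for the Clayton Levy copula with parameter theta > 0."""
    return (lam1 ** -theta + lam2 ** -theta) ** (-1.0 / theta)

lam1, lam2 = 8.0, 5.0           # hypothetical cell frequencies (losses per year)
for theta in (0.1, 1.0, 10.0):  # weak -> strong frequency dependence
    lam_par = clayton_levy_simultaneous_rate(lam1, lam2, theta)
    lam1_perp = lam1 - lam_par  # frequency of cell-1-only losses
    lam2_perp = lam2 - lam_par  # frequency of cell-2-only losses
    assert 0.0 <= lam_par <= min(lam1, lam2)  # bounds stated in the text
    print(f"theta={theta:5.1f}  lambda_par={lam_par:.4f}  "
          f"lambda1_perp={lam1_perp:.4f}  lambda2_perp={lam2_perp:.4f}")
```

As $\vartheta$ grows, $\lambda^{\parallel}$ moves from near 0 (almost no simultaneous losses) toward $\min(\lambda_1, \lambda_2)$, matching the limiting cases described above.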
total aggregate loss process defined in (1.2). Our goal is to provide some general insight into multivariate operational risk and to find out how different dependence structures (modelled by Lévy copulas) affect OpVAR, which is the standard metric in operational risk measurement. The tail integral associated with $S^+$ is given by
$$
\overline{\Pi}^{+}(z) = \Pi\bigl(\{(x_1, \ldots, x_d) \in [0, \infty)^d : \textstyle\sum_{i=1}^{d} x_i \ge z\}\bigr)\,, \qquad z \ge 0\,.
\tag{4.1}
$$
For $d = 2$ we can write
$$
\overline{\Pi}^{+}(z) = \overline{\Pi}_1^{\perp}(z) + \overline{\Pi}_2^{\perp}(z) + \overline{\Pi}^{\parallel}(z)\,, \qquad z \ge 0\,,
\tag{4.2}
$$
where $\overline{\Pi}_1^{\perp}$ and $\overline{\Pi}_2^{\perp}$ are the independent jump parts and $\overline{\Pi}^{\parallel}(z) = \lambda^{\parallel} P\bigl(X^{1\parallel} + X^{2\parallel} > z\bigr)$ describes the dependent part due to simultaneous loss events; the situation is depicted in Figure 4.1.

Since for every compound Poisson process with intensity $\lambda > 0$ and positive jumps with distribution function $F$ the tail integral is given by $\overline{\Pi}(\cdot) = \lambda \overline{F}(\cdot)$, it follows from (4.2) that the total aggregate loss process $S^+$ is again compound Poisson with frequency parameter and severity distribution
$$
\lambda^{+} = \lim_{z \downarrow 0} \overline{\Pi}^{+}(z) \qquad \text{and} \qquad F^{+}(z) = 1 - \overline{F}^{+}(z) = 1 - \frac{\overline{\Pi}^{+}(z)}{\lambda^{+}}\,, \qquad z \ge 0\,.
\tag{4.3}
$$
This result now proves useful for determining a bank's total operational risk across several cells. Before doing that, recall the definition of OpVAR for a single operational risk cell (henceforth called stand-alone OpVAR). For each cell, stand-alone OpVAR at confidence level $\kappa \in (0, 1)$ and time horizon $t$ is the $\kappa$-quantile of the aggregate loss distribution, i.e. $\mathrm{VAR}_t(\kappa) = G_t^{\leftarrow}(\kappa)$ with $G_t^{\leftarrow}(\kappa) = \inf\{x \in \mathbb{R} : P(S(t) \le x) \ge \kappa\}$.

In Böcker & Klüppelberg (2005, 2006, 2007) it was shown that OpVAR at high confidence levels can be approximated by a closed-form expression if the loss severity is subexponential, i.e. heavy-tailed. As this is commonly believed to hold for operational losses, we consider in the sequel this approximation, which can be written as
$$
\mathrm{VAR}_t(\kappa) \sim F^{\leftarrow}\Bigl(1 - \frac{1 - \kappa}{E N(t)}\Bigr)\,, \qquad \kappa \uparrow 1\,,
\tag{4.4}
$$
where the symbol "$\sim$" means that the ratio of the left- and right-hand sides converges to 1. Moreover, $E N(t)$ is the cell's expected number of losses in the time interval $[0, t]$. Important examples of subexponential distributions are the lognormal, Weibull, and Pareto distributions.

Here, we extend the idea of an asymptotic OpVAR approximation to the multivariate problem. In doing so, we exploit the fact that $S^+$ is a compound Poisson process with parameters as in (4.3). In particular, if $F^+$ is subexponential, we can apply (4.4) to estimate total OpVAR. Consequently, if we are able to specify the asymptotic behaviour of $\overline{F}^{+}(x)$ as $x \to \infty$, we automatically have an approximation of $\mathrm{VAR}_t^{+}(\kappa)$ as $\kappa \uparrow 1$.

To make more precise statements about OpVAR, we focus our analysis on Pareto distributed severities with distribution function
$$
F(x) = 1 - \Bigl(1 + \frac{x}{\theta}\Bigr)^{-\alpha}\,, \qquad x > 0\,,
$$
with scale parameter $\theta > 0$ and tail parameter $\alpha > 0$. Pareto's law is the prototypical parametric example of a heavy-tailed distribution and is suitable for operational risk modelling. As a simple consequence of (4.4), in the case of a compound Poisson model with Pareto severities (a Pareto–Poisson model) the analytic OpVAR is given by
$$
\mathrm{VAR}_t(\kappa) \sim \theta \Biggl[\Bigl(\frac{\lambda t}{1 - \kappa}\Bigr)^{1/\alpha} - 1\Biggr] \sim \theta \Bigl(\frac{\lambda t}{1 - \kappa}\Bigr)^{1/\alpha}\,, \qquad \kappa \uparrow 1\,.
\tag{4.5}
$$
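To illustrate (4.4) and (4.5), here is a short sketch (with made-up cell parameters, not taken from the article) that inverts the Pareto distribution function and confirms that plugging the Pareto quantile into the single-loss approximation (4.4) reproduces the bracketed form of (4.5) exactly:

```python
def pareto_quantile(p: float, theta: float, alpha: float) -> float:
    """Inverse of F(x) = 1 - (1 + x/theta)^(-alpha)."""
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0)

def opvar_sla(kappa: float, t: float, lam: float, theta: float, alpha: float) -> float:
    """Single-loss approximation (4.4) with Pareto severity and EN(t) = lam * t."""
    return pareto_quantile(1.0 - (1.0 - kappa) / (lam * t), theta, alpha)

def opvar_closed_form(kappa: float, t: float, lam: float, theta: float, alpha: float) -> float:
    """Closed-form Pareto-Poisson OpVAR, bracketed version of (4.5)."""
    return theta * ((lam * t / (1.0 - kappa)) ** (1.0 / alpha) - 1.0)

# hypothetical cell: 20 losses/year, theta = 1 (monetary units), alpha = 1.2
kappa, t, lam, theta, alpha = 0.999, 1.0, 20.0, 1.0, 1.2
v_sla = opvar_sla(kappa, t, lam, theta, alpha)
v_cf = opvar_closed_form(kappa, t, lam, theta, alpha)
assert abs(v_sla - v_cf) < 1e-9 * v_cf  # (4.4) with the Pareto inverse equals (4.5)
```

The agreement is exact here because the Pareto quantile has a closed form; the "$\sim$" in (4.4) only becomes an approximation relative to the true aggregate loss quantile.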
To demonstrate the kind of results we obtain by such approximation methods, we consider a Pareto–Poisson model where the severity distributions $F_i$ of the first (say) $b \le d$ cells are tail equivalent with tail parameter $\alpha > 0$ and dominant to all other cells, i.e.
$$
\lim_{x \to \infty} \frac{\overline{F}_i(x)}{\overline{F}_1(x)} = \Bigl(\frac{\theta_i}{\theta_1}\Bigr)^{\alpha}, \quad i = 1, \ldots, b\,, \qquad
\lim_{x \to \infty} \frac{\overline{F}_i(x)}{\overline{F}_1(x)} = 0\,, \quad i = b + 1, \ldots, d\,.
\tag{4.6}
$$
In the important cases of complete positive dependence and independence, closed-form results can be found and may serve as extreme cases concerning the dependence structure of the model.

Theorem 4.2. Consider a compound Poisson model with cell processes $S_1, \ldots, S_d$ with Pareto distributed severities satisfying (4.6). Let $\mathrm{VAR}_t^{i}(\cdot)$ be the stand-alone OpVAR of cell $i$.

(a) If all cells are completely dependent with the same frequency $\lambda$ for all cells, then $S^+$ is compound Poisson with parameters
$$
\lambda^{+} = \lambda \qquad \text{and} \qquad \overline{F}^{+}(z) \sim \Bigl(\sum_{i=1}^{b} \theta_i\Bigr)^{\alpha} z^{-\alpha}\,, \qquad z \to \infty\,,
$$
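Combining part (a) with the asymptotic form of (4.5) suggests that under complete positive dependence, total OpVAR behaves asymptotically like the stand-alone OpVAR of a single cell with scale $\sum_i \theta_i$, i.e. the stand-alone OpVARs simply add up. A numeric sketch of that consequence (all parameter values hypothetical):

```python
def asymptotic_opvar(kappa: float, t: float, lam: float, theta: float, alpha: float) -> float:
    """Asymptotic Pareto-Poisson OpVAR, right-hand form of (4.5)."""
    return theta * (lam * t / (1.0 - kappa)) ** (1.0 / alpha)

# hypothetical dominant cells: common frequency lam and tail parameter alpha
kappa, t, lam, alpha = 0.999, 1.0, 10.0, 1.3
thetas = [2.0, 1.0, 0.5]

# Theorem 4.2(a): S+ is compound Poisson with lambda+ = lam and a Pareto-type
# tail F+bar(z) ~ (sum theta_i)^alpha z^(-alpha), i.e. scale sum(thetas)
total = asymptotic_opvar(kappa, t, lam, sum(thetas), alpha)
standalone = [asymptotic_opvar(kappa, t, lam, th, alpha) for th in thetas]
assert abs(total - sum(standalone)) < 1e-9 * total  # complete dependence: OpVARs add
```

The additivity falls out because the asymptotic OpVAR in (4.5) is linear in the scale parameter $\theta$; independence, by contrast, would generally give a different (and for $\alpha > 1$ smaller) total.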
consider a Pareto-Poisson model, where the severity distributions Fi of the first (say)
6
(4.2) that the total aggregate loss process S+ is again compound Poisson with frequency
parameter and severity distribution
λ+ = limz↓0
Π+(z) and F+(z) = 1 − F
+(z) = 1 −
Π+(z)
λ+, z ≥ 0 . (4.3)
This result proves now useful to determine a bank’s total operational risk consisting of
several cells. Before doing that, recall the definition of OpVAR for a single operational
risk cell (henceforth called stand-alone OpVAR.) For each cell, stand-alone OpVAR at
confidence level κ ∈ (0, 1) and time horizon t is the κ-quantile of the aggregate loss
distribution, i.e. VARt(κ) = G←t (κ) with G←t (κ) = inf{x ∈ R : P (S(t) ≤ x) ≥ κ}.
In Bocker & Kluppelberg (2005, 2006, 2007) it was shown that OpVAR at high con-
fidence level can be approximated by a closed-form expression, if the loss severity is
subexponential, i.e. heavy-tailed. As this is common believe we consider in the sequel this
approximation, which can be written as
VARt(κ) ∼ F←(
1 −1 − κ
EN(t)
), κ ↑ 1 , (4.4)
where the symbol “∼” means that the ratio of left and right hand side converges to
1. Moreover, EN(t) is the cell’s expected number of losses in the time interval [0, t].
Important examples for subexponential distributions are lognormal, Weibull, and Pareto.
Here, we extend the idea of an asymptotic OpVAR approximation to the multivariate
problem. In doing so, we exploit the fact that S+ is a compound Poisson process with
parameters as in (4.3). In particular, if F+ is subexponential, we can apply (4.4) to
estimate total OpVAR. Consequently, if we are able to specify the asymptotic behaviour
of F+(x) as x → ∞ we have automatically an approximation of VARt(κ) as κ ↑ 1.
To make more precise statements about OpVAR, we focus our analysis on Pareto
distributed severities with distribution function
F (x) =(1 +
x
θ
)−α
, x > 0 ,
with shape parameters θ > 0 and tail parameter α > 0. Pareto’s law is the prototypical
parametric example for a heavy-tailed distribution and suitable for operational risk mod-
elling. As a simple consequence of (4.4), in the case of a compound Poisson model with
Pareto severities (Pareto-Poisson model) the analytic OpVAR is given by
VARt(κ) ∼ θ
[(λ t
1 − κ
)1/α
− 1
]∼ θ
(λ t
1 − κ
)1/α
, κ ↑ 1 . (4.5)
To demonstrate the kind of results we obtain by such approximation methods we
consider a Pareto-Poisson model, where the severity distributions Fi of the first (say)
6
(4.2) that the total aggregate loss process S+ is again compound Poisson with frequency
parameter and severity distribution
λ+ = limz↓0
Π+(z) and F+(z) = 1 − F
+(z) = 1 −
Π+(z)
λ+, z ≥ 0 . (4.3)
This result proves now useful to determine a bank’s total operational risk consisting of
several cells. Before doing that, recall the definition of OpVAR for a single operational
risk cell (henceforth called stand-alone OpVAR.) For each cell, stand-alone OpVAR at
confidence level κ ∈ (0, 1) and time horizon t is the κ-quantile of the aggregate loss
distribution, i.e. VARt(κ) = G←t (κ) with G←t (κ) = inf{x ∈ R : P (S(t) ≤ x) ≥ κ}.
In Bocker & Kluppelberg (2005, 2006, 2007) it was shown that OpVAR at high con-
fidence level can be approximated by a closed-form expression, if the loss severity is
subexponential, i.e. heavy-tailed. As this is common believe we consider in the sequel this
approximation, which can be written as
VARt(κ) ∼ F←(
1 −1 − κ
EN(t)
), κ ↑ 1 , (4.4)
where the symbol “∼” means that the ratio of left and right hand side converges to
1. Moreover, EN(t) is the cell’s expected number of losses in the time interval [0, t].
Important examples for subexponential distributions are lognormal, Weibull, and Pareto.
Here, we extend the idea of an asymptotic OpVAR approximation to the multivariate
problem. In doing so, we exploit the fact that S+ is a compound Poisson process with
parameters as in (4.3). In particular, if F+ is subexponential, we can apply (4.4) to
estimate total OpVAR. Consequently, if we are able to specify the asymptotic behaviour
of F+(x) as x → ∞ we have automatically an approximation of VARt(κ) as κ ↑ 1.
To make more precise statements about OpVAR, we focus our analysis on Pareto
distributed severities with distribution function
F (x) =(1 +
x
θ
)−α
, x > 0 ,
with shape parameters θ > 0 and tail parameter α > 0. Pareto’s law is the prototypical
parametric example for a heavy-tailed distribution and suitable for operational risk mod-
elling. As a simple consequence of (4.4), in the case of a compound Poisson model with
Pareto severities (Pareto-Poisson model) the analytic OpVAR is given by
VARt(κ) ∼ θ
[(λ t
1 − κ
)1/α
− 1
]∼ θ
(λ t
1 − κ
)1/α
, κ ↑ 1 . (4.5)
To demonstrate the kind of results we obtain by such approximation methods we
consider a Pareto-Poisson model, where the severity distributions Fi of the first (say)
6
4 Analytical Approximations for Operational VAR

In this section we turn to the quantification of the total operational loss encompassing all operational risk cells, and therefore focus on the total aggregate loss process S+ defined in (1.2). Our goal is to provide some general insight into multivariate operational risk and to find out how different dependence structures (modelled by Lévy copulas) affect OpVAR, the standard metric in operational risk measurement. The tail integral associated with S+ is given by

    \Pi^{+}(z) = \Pi\Bigl(\Bigl\{(x_1, \dots, x_d) \in [0, \infty)^d : \sum_{i=1}^{d} x_i \ge z\Bigr\}\Bigr), \qquad z \ge 0. \quad (4.1)

For d = 2 we can write

    \Pi^{+}(z) = \Pi_{\perp 1}(z) + \Pi_{\perp 2}(z) + \Pi^{+}_{\parallel}(z), \qquad z \ge 0, \quad (4.2)

where \Pi_{\perp 1}(\cdot) and \Pi_{\perp 2}(\cdot) are the independent jump parts and

    \Pi^{+}_{\parallel}(z) = \Pi\bigl(\{(x_1, x_2) \in (0, \infty)^2 : x_1 + x_2 \ge z\}\bigr), \qquad z \ge 0,

describes the dependent part due to simultaneous loss events; the situation is depicted in Figure 4.1.

[Figure 4.1: Decomposition of the domain of the tail integral \Pi^{+}(z) for z = 6 into a simultaneous loss part \Pi^{+}_{\parallel}(z) (orange area) and independent parts \Pi_{\perp 1}(z) (solid black line) and \Pi_{\perp 2}(z) (dashed black line).]
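To make the decomposition (4.2) concrete, the following sketch simulates a bivariate compound Poisson model with an independent loss part in each cell plus a simultaneous-loss part, and compares the empirical tail integral of S+ = S1 + S2 at z = 6 with the three-part sum in (4.2). All parameters, the exponential severities, and the simplifying assumption that the two components of a simultaneous loss are drawn independently are illustrative choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bivariate compound Poisson cell model (illustrative only):
# independent losses in cell 1 and cell 2, plus simultaneous losses in both.
lam1, lam2, lam12 = 3.0, 2.0, 1.0   # intensities of the three jump parts
r1, r2 = 0.5, 1.0                   # exponential severity rates
t = 50_000.0                        # long horizon for stable empirical rates

n1, n2, n12 = (rng.poisson(lam * t) for lam in (lam1, lam2, lam12))
x1 = rng.exponential(1 / r1, n1)    # jumps (x1, 0): cell 1 only
x2 = rng.exponential(1 / r2, n2)    # jumps (0, x2): cell 2 only
y1 = rng.exponential(1 / r1, n12)   # simultaneous jumps (y1, y2);
y2 = rng.exponential(1 / r2, n12)   # components drawn independently here

z = 6.0
# Empirical tail integral of S+: rate of jumps with total size >= z.
pi_plus_emp = (np.sum(x1 >= z) + np.sum(x2 >= z) + np.sum(y1 + y2 >= z)) / t

# Decomposition (4.2): Pi+(z) = Pi_perp1(z) + Pi_perp2(z) + Pi_par(z); for
# independent exponential components, P(Y1 + Y2 >= z) has the hypoexponential tail.
tail12 = (r2 * np.exp(-r1 * z) - r1 * np.exp(-r2 * z)) / (r2 - r1)
pi_plus_ana = lam1 * np.exp(-r1 * z) + lam2 * np.exp(-r2 * z) + lam12 * tail12

print(pi_plus_emp, pi_plus_ana)  # empirical vs analytic estimate of Pi+(6)
```

Here the empirical rate and the analytic decomposition agree up to Monte Carlo noise, illustrating that the total process accumulates jumps from all three parts of (4.2).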
Since for every compound Poisson process with intensity λ > 0 and positive jumps with distribution function F the tail integral is given by \Pi(\cdot) = \lambda \overline{F}(\cdot), it follows from
(4.2) that the total aggregate loss process S+ is again compound Poisson with frequency parameter and severity distribution

    \lambda^{+} = \lim_{z \downarrow 0} \Pi^{+}(z) \quad \text{and} \quad F^{+}(z) = 1 - \overline{F}^{+}(z) = 1 - \frac{\Pi^{+}(z)}{\lambda^{+}}, \qquad z \ge 0. \quad (4.3)

This result now proves useful for determining a bank's total operational risk across several cells. Before doing that, recall the definition of OpVAR for a single operational risk cell (henceforth called stand-alone OpVAR). For each cell, stand-alone OpVAR at confidence level κ ∈ (0, 1) and time horizon t is the κ-quantile of the aggregate loss distribution, i.e. \mathrm{VAR}_t(\kappa) = G_t^{\leftarrow}(\kappa) with G_t^{\leftarrow}(\kappa) = \inf\{x \in \mathbb{R} : P(S(t) \le x) \ge \kappa\}.

In Böcker & Klüppelberg (2005, 2006, 2007) it was shown that OpVAR at high confidence levels can be approximated by a closed-form expression if the loss severity is subexponential, i.e. heavy-tailed. Since this is commonly believed to hold for operational loss data, we use this approximation in the sequel; it can be written as

    \mathrm{VAR}_t(\kappa) \sim F^{\leftarrow}\!\left(1 - \frac{1 - \kappa}{E N(t)}\right), \qquad \kappa \uparrow 1, \quad (4.4)

where the symbol "∼" means that the ratio of the left- and right-hand sides converges to 1, and E N(t) is the cell's expected number of losses in the time interval [0, t]. Important examples of subexponential distributions are the lognormal, Weibull and Pareto distributions.

Here, we extend the idea of an asymptotic OpVAR approximation to the multivariate problem. In doing so, we exploit the fact that S+ is a compound Poisson process with parameters as in (4.3). In particular, if F+ is subexponential, we can apply (4.4) to estimate total OpVAR. Consequently, if we can specify the asymptotic behaviour of \overline{F}^{+}(x) as x → ∞, we automatically obtain an approximation of \mathrm{VAR}_t(\kappa) as κ ↑ 1.

To make more precise statements about OpVAR, we focus our analysis on Pareto distributed severities with tail

    \overline{F}(x) = \left(1 + \frac{x}{\theta}\right)^{-\alpha}, \qquad x > 0,

with scale parameter θ > 0 and tail parameter α > 0. Pareto's law is the prototypical parametric example of a heavy-tailed distribution and is suitable for operational risk modelling. As a simple consequence of (4.4), in the case of a compound Poisson model with Pareto severities (the Pareto-Poisson model) the analytic OpVAR is given by

    \mathrm{VAR}_t(\kappa) \sim \theta \left[\left(\frac{\lambda t}{1 - \kappa}\right)^{1/\alpha} - 1\right] \sim \theta \left(\frac{\lambda t}{1 - \kappa}\right)^{1/\alpha}, \qquad \kappa \uparrow 1. \quad (4.5)
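As a quick plausibility check of the closed-form OpVAR (4.5), one can compare it with a Monte Carlo quantile of the simulated aggregate loss. The cell parameters below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical single-cell Pareto-Poisson parameters (illustration only).
lam, t = 2.0, 5.0            # frequency: E N(t) = lam * t = 10 losses
theta, alpha = 1.0, 1.0      # Pareto scale and tail parameter
kappa = 0.999                # confidence level

# Closed-form approximation (4.5); equals 9999.0 for these parameters.
var_analytic = theta * ((lam * t / (1 - kappa)) ** (1 / alpha) - 1)

# Monte Carlo: S(t) is a sum of N ~ Poisson(lam*t) iid Pareto losses,
# sampled by inversion: X = theta * (U**(-1/alpha) - 1).
n_sims = 300_000
counts = rng.poisson(lam * t, size=n_sims)
losses = theta * (rng.random(counts.sum()) ** (-1 / alpha) - 1)
sim_id = np.repeat(np.arange(n_sims), counts)   # which simulation each loss belongs to
totals = np.bincount(sim_id, weights=losses, minlength=n_sims)

var_mc = np.quantile(totals, kappa)
print(var_analytic, var_mc)  # analytic approximation vs simulated 99.9% quantile
```

For this heavy-tailed cell the simulated quantile should lie in the same ballpark as the analytic value, though Monte Carlo noise at the 99.9% level is substantial.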
To demonstrate the kind of results we obtain by such approximation methods, we consider a Pareto-Poisson model where the severity distributions F_i of the first (say)
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells, i.e.

    \lim_{x \to \infty} \frac{\overline{F}_i(x)}{\overline{F}_1(x)} = \left(\frac{\theta_i}{\theta_1}\right)^{\alpha}, \quad i = 1, \dots, b, \qquad \lim_{x \to \infty} \frac{\overline{F}_i(x)}{\overline{F}_1(x)} = 0, \quad i = b + 1, \dots, d. \quad (4.6)

In the important cases of complete positive dependence and independence, closed-form results can be found; they may serve as extreme cases for the dependence structure of the model.

Theorem 4.2. Consider a compound Poisson model with cell processes S_1, \dots, S_d with Pareto distributed severities satisfying (4.6). Let \mathrm{VAR}^{i}_{t}(\cdot) be the stand-alone OpVAR of cell i.

(1) If all cells are completely dependent with the same frequency λ for all cells, then S+ is compound Poisson with parameters

    \lambda^{+} = \lambda \quad \text{and} \quad \overline{F}^{+}(z) \sim \left(\sum_{i=1}^{b} \theta_i\right)^{\alpha} z^{-\alpha}, \qquad z \to \infty,

and total OpVAR is asymptotically given by

    \mathrm{VAR}^{+}_{\parallel t}(\kappa) \sim \sum_{i=1}^{b} \mathrm{VAR}^{i}_{t}(\kappa), \qquad \kappa \uparrow 1. \quad (4.7)

(2) If all cells are independent, then S+ is compound Poisson with parameters

    \lambda^{+} = \lambda_1 + \cdots + \lambda_d \quad \text{and} \quad \overline{F}^{+}(z) \sim \frac{1}{\lambda^{+}} \sum_{i=1}^{b} \left(\frac{\theta_i}{z}\right)^{\alpha} \lambda_i, \qquad z \to \infty, \quad (4.8)

and total OpVAR is asymptotically given by

    \mathrm{VAR}^{+}_{\perp t}(\kappa) \sim \left[\sum_{i=1}^{b} \bigl(\mathrm{VAR}^{i}_{t}(\kappa)\bigr)^{\alpha}\right]^{1/\alpha}, \qquad \kappa \uparrow 1. \quad (4.9)

On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson model, total asymptotic OpVAR is simply the sum of the dominating cells' asymptotic stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where the standard procedure for calculating capital charges for operational risk is just the simple-sum VAR. To put it another way, regulators implicitly assume complete dependence between different cells, meaning that losses within different business lines or risk categories always happen at the same instants of time.

Very often, the simple-sum OpVAR (4.7) is considered to be the worst-case scenario and hence an upper bound for total OpVAR in general, which in the heavy-tailed case
can be grossly misleading. To see this, assume the same frequency λ in all cells also for the independent model, and denote by \mathrm{VAR}^{+}_{\parallel}(\kappa) and \mathrm{VAR}^{+}_{\perp}(\kappa) the completely dependent and independent total OpVAR, respectively. In this case we obtain from (4.9), in the situation (4.6) of Theorem 4.2(2),

    \mathrm{VAR}^{+}_{\perp}(\kappa) \sim \left(\frac{\lambda t}{1 - \kappa}\right)^{1/\alpha} \left(\sum_{i=1}^{b} \theta_i^{\alpha}\right)^{1/\alpha}, \qquad \kappa \uparrow 1,

whereas \mathrm{VAR}^{+}_{\parallel}(\kappa) is given by (4.7). Then, as a consequence of the convexity (α > 1) and concavity (α < 1) of the function x \mapsto x^{\alpha},

    \frac{\mathrm{VAR}^{+}_{\perp}(\kappa)}{\mathrm{VAR}^{+}_{\parallel}(\kappa)} = \frac{\left(\sum_{i=1}^{b} \theta_i^{\alpha}\right)^{1/\alpha}}{\sum_{i=1}^{b} \theta_i} \;\; \begin{cases} < 1, & \alpha > 1, \\ = 1, & \alpha = 1, \\ > 1, & \alpha < 1. \end{cases} \quad (4.10)
This result says that for very heavy-tailed severity data with \overline{F}_i(x_i) \sim (x_i/\theta_i)^{-\alpha} as x_i \to \infty and α < 1, subadditivity of OpVAR is violated, because the sum of the stand-alone OpVARs is smaller than the independent total OpVAR.
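The ratio in (4.10) is easy to evaluate numerically. The sketch below, with hypothetical scale parameters θ_i for b = 3 dominant cells, shows the three regimes:

```python
import numpy as np

# Hypothetical scale parameters for b = 3 tail-dominant Pareto cells.
theta = np.array([1.0, 2.0, 3.0])

def var_ratio(alpha: float) -> float:
    """Asymptotic ratio VAR+_indep / VAR+_dep from (4.10)."""
    return np.sum(theta ** alpha) ** (1 / alpha) / np.sum(theta)

# alpha > 1: independence diversifies, ratio < 1
# alpha = 1: simple sum and independent total OpVAR coincide
# alpha < 1: independent total OpVAR exceeds the comonotone simple sum
for a in (2.0, 1.0, 0.5):
    print(a, var_ratio(a))
```

For these θ_i the ratio drops below 1 for α = 2, equals 1 for α = 1, and rises above 1 for α = 0.5, confirming that the simple-sum VAR is not a universal upper bound.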
About the Authors/Disclaimer

Klaus Böcker is senior risk controller at HypoVereinsbank in Munich. Claudia Klüppelberg holds the chair of Mathematical Statistics at the Center for Mathematical Sciences of the Munich University of Technology. The opinions expressed in this article are those of the authors and do not reflect the views of HypoVereinsbank.

References

Böcker, K. and Klüppelberg, C. (2005) Operational VAR: a closed-form solution. RISK Magazine, December, 90-93.

Böcker, K. and Klüppelberg, C. (2006) Multivariate models for operational risk. Preprint, Munich University of Technology. Available at www.ma.tum.de/stat/

Böcker, K. and Klüppelberg, C. (2007) Modelling and measuring multivariate operational risk with Lévy copulas. Preprint, Munich University of Technology. Available at www.ma.tum.de/stat/

Cont, R. and Tankov, P. (2004) Financial Modelling With Jump Processes. Chapman & Hall/CRC, Boca Raton.
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells,
i.e.
limx→∞
F i(x)
F 1(x)=
(θi
θ1
)α
, i = 1, . . . , b , limx→∞
F i(x)
F 1(x)= 0 , i = b + 1, . . . , d . (4.6)
In the important cases of complete positive dependence and independence, closed-form
results can be found and may serve as extreme cases concerning the dependence structure
of the model.
Theorem 4.2. Consider a compound Poisson model with cell processes S1, . . . , Sd with
Pareto distributed severities satisfying (4.6). Let VARit(·) be the stand-alone OpVAR of
cell i.
(1) If all cells are completely dependent with the same frequency λ for all cells, then S+
is compound Poisson with parameters
λ+ = λ and F+(z) ∼
(b∑
i=1
θi
)α
z−α , z → ∞ ,
and total OpVAR is asymptotically given by
VAR+‖t(κ) ∼
b∑
i=1
VARit(κ), κ ↑ 1 . (4.7)
(2) If all cells are independent, then S+ is compound Poisson with parameters
λ+ = λ1 + · · ·+ λd and F+(z) ∼
1
λ+
b∑
i=1
(θi
z
)α
λi , z → ∞ , (4.8)
and total OpVAR is asymptotically given by
VAR+⊥t(κ) ∼
[b∑
i=1
(VARi
t(κ))α]1/α
, κ ↑ 1 . (4.9)
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson
model, total asymptotic OpVAR is simply the sum of the dominating cell’s asymptotic
stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where
the standard procedure for calculating capital charges for operational risk is just the
simple-sum VAR. To put it another way, regulators implicitly assume complete depen-
dence between different cells, meaning that losses within different business lines or risk
categories always happen at the same instants of time.
Very often, the simple-sum OpVAR (4.7) is considered to be the worst case scenario
and, hence, as an upper bound for total OpVAR in general, which in the heavy-tailed case
7
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells,
i.e.
limx→∞
F i(x)
F 1(x)=
(θi
θ1
)α
, i = 1, . . . , b , limx→∞
F i(x)
F 1(x)= 0 , i = b + 1, . . . , d . (4.6)
In the important cases of complete positive dependence and independence, closed-form
results can be found and may serve as extreme cases concerning the dependence structure
of the model.
Theorem 4.2. Consider a compound Poisson model with cell processes S1, . . . , Sd with
Pareto distributed severities satisfying (4.6). Let VARit(·) be the stand-alone OpVAR of
cell i.
(1) If all cells are completely dependent with the same frequency λ for all cells, then S+
is compound Poisson with parameters
λ+ = λ and F+(z) ∼
(b∑
i=1
θi
)α
z−α , z → ∞ ,
and total OpVAR is asymptotically given by
VAR+‖t(κ) ∼
b∑
i=1
VARit(κ), κ ↑ 1 . (4.7)
(2) If all cells are independent, then S+ is compound Poisson with parameters
λ+ = λ1 + · · ·+ λd and F+(z) ∼
1
λ+
b∑
i=1
(θi
z
)α
λi , z → ∞ , (4.8)
and total OpVAR is asymptotically given by
VAR+⊥t(κ) ∼
[b∑
i=1
(VARi
t(κ))α]1/α
, κ ↑ 1 . (4.9)
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson
model, total asymptotic OpVAR is simply the sum of the dominating cell’s asymptotic
stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where
the standard procedure for calculating capital charges for operational risk is just the
simple-sum VAR. To put it another way, regulators implicitly assume complete depen-
dence between different cells, meaning that losses within different business lines or risk
categories always happen at the same instants of time.
Very often, the simple-sum OpVAR (4.7) is considered to be the worst case scenario
and, hence, as an upper bound for total OpVAR in general, which in the heavy-tailed case
7
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells,
i.e.
limx→∞
F i(x)
F 1(x)=
(θi
θ1
)α
, i = 1, . . . , b , limx→∞
F i(x)
F 1(x)= 0 , i = b + 1, . . . , d . (4.6)
In the important cases of complete positive dependence and independence, closed-form
results can be found and may serve as extreme cases concerning the dependence structure
of the model.
Theorem 4.2. Consider a compound Poisson model with cell processes S1, . . . , Sd with
Pareto distributed severities satisfying (4.6). Let VARit(·) be the stand-alone OpVAR of
cell i.
(1) If all cells are completely dependent with the same frequency λ for all cells, then S+
is compound Poisson with parameters
λ+ = λ and F+(z) ∼
(b∑
i=1
θi
)α
z−α , z → ∞ ,
and total OpVAR is asymptotically given by
VAR+‖t(κ) ∼
b∑
i=1
VARit(κ), κ ↑ 1 . (4.7)
(2) If all cells are independent, then S+ is compound Poisson with parameters
λ+ = λ1 + · · ·+ λd and F+(z) ∼
1
λ+
b∑
i=1
(θi
z
)α
λi , z → ∞ , (4.8)
and total OpVAR is asymptotically given by
VAR+⊥t(κ) ∼
[b∑
i=1
(VARi
t(κ))α]1/α
, κ ↑ 1 . (4.9)
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson
model, total asymptotic OpVAR is simply the sum of the dominating cell’s asymptotic
stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where
the standard procedure for calculating capital charges for operational risk is just the
simple-sum VAR. To put it another way, regulators implicitly assume complete depen-
dence between different cells, meaning that losses within different business lines or risk
categories always happen at the same instants of time.
Very often, the simple-sum OpVAR (4.7) is considered to be the worst case scenario
and, hence, as an upper bound for total OpVAR in general, which in the heavy-tailed case
7
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells,
i.e.
limx→∞
F i(x)
F 1(x)=
(θi
θ1
)α
, i = 1, . . . , b , limx→∞
F i(x)
F 1(x)= 0 , i = b + 1, . . . , d . (4.6)
In the important cases of complete positive dependence and independence, closed-form
results can be found and may serve as extreme cases concerning the dependence structure
of the model.
Theorem 4.2. Consider a compound Poisson model with cell processes S1, . . . , Sd with
Pareto distributed severities satisfying (4.6). Let VARit(·) be the stand-alone OpVAR of
cell i.
(1) If all cells are completely dependent with the same frequency λ for all cells, then S+
is compound Poisson with parameters
λ+ = λ and F+(z) ∼
(b∑
i=1
θi
)α
z−α , z → ∞ ,
and total OpVAR is asymptotically given by
VAR+‖t(κ) ∼
b∑
i=1
VARit(κ), κ ↑ 1 . (4.7)
(2) If all cells are independent, then S+ is compound Poisson with parameters
λ+ = λ1 + · · ·+ λd and F+(z) ∼
1
λ+
b∑
i=1
(θi
z
)α
λi , z → ∞ , (4.8)
and total OpVAR is asymptotically given by
VAR+⊥t(κ) ∼
[b∑
i=1
(VARi
t(κ))α]1/α
, κ ↑ 1 . (4.9)
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson
model, total asymptotic OpVAR is simply the sum of the dominating cell’s asymptotic
stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where
the standard procedure for calculating capital charges for operational risk is just the
simple-sum VAR. To put it another way, regulators implicitly assume complete depen-
dence between different cells, meaning that losses within different business lines or risk
categories always happen at the same instants of time.
Very often, the simple-sum OpVAR (4.7) is considered to be the worst case scenario
and, hence, as an upper bound for total OpVAR in general, which in the heavy-tailed case
7
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells,
i.e.
limx→∞
F i(x)
F 1(x)=
(θi
θ1
)α
, i = 1, . . . , b , limx→∞
F i(x)
F 1(x)= 0 , i = b + 1, . . . , d . (4.6)
In the important cases of complete positive dependence and independence, closed-form
results can be found and may serve as extreme cases concerning the dependence structure
of the model.
Theorem 4.2. Consider a compound Poisson model with cell processes S1, . . . , Sd with
Pareto distributed severities satisfying (4.6). Let VARit(·) be the stand-alone OpVAR of
cell i.
(1) If all cells are completely dependent with the same frequency λ for all cells, then S+
is compound Poisson with parameters
λ+ = λ and F+(z) ∼
(b∑
i=1
θi
)α
z−α , z → ∞ ,
and total OpVAR is asymptotically given by
VAR+‖t(κ) ∼
b∑
i=1
VARit(κ), κ ↑ 1 . (4.7)
(2) If all cells are independent, then S+ is compound Poisson with parameters
λ+ = λ1 + · · ·+ λd and F+(z) ∼
1
λ+
b∑
i=1
(θi
z
)α
λi , z → ∞ , (4.8)
and total OpVAR is asymptotically given by
VAR+⊥t(κ) ∼
[b∑
i=1
(VARi
t(κ))α]1/α
, κ ↑ 1 . (4.9)
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson
model, total asymptotic OpVAR is simply the sum of the dominating cell’s asymptotic
stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where
the standard procedure for calculating capital charges for operational risk is just the
simple-sum VAR. To put it another way, regulators implicitly assume complete depen-
dence between different cells, meaning that losses within different business lines or risk
categories always happen at the same instants of time.
Very often, the simple-sum OpVAR (4.7) is considered to be the worst case scenario
and, hence, as an upper bound for total OpVAR in general, which in the heavy-tailed case
7
b ≤ d cells are tail equivalent with tail parameter α > 0 and dominant to all other cells,
i.e.
limx→∞
F i(x)
F 1(x)=
(θi
θ1
)α
, i = 1, . . . , b , limx→∞
F i(x)
F 1(x)= 0 , i = b + 1, . . . , d . (4.6)
In the important cases of complete positive dependence and independence, closed-form
results can be found and may serve as extreme cases concerning the dependence structure
of the model.
Theorem 4.2. Consider a compound Poisson model with cell processes S1, . . . , Sd with
Pareto distributed severities satisfying (4.6). Let VARit(·) be the stand-alone OpVAR of
cell i.
(1) If all cells are completely dependent with the same frequency λ for all cells, then S+
is compound Poisson with parameters
λ+ = λ and F+(z) ∼
(b∑
i=1
θi
)α
z−α , z → ∞ ,
and total OpVAR is asymptotically given by
VAR+‖t(κ) ∼
b∑
i=1
VARit(κ), κ ↑ 1 . (4.7)
(2) If all cells are independent, then S+ is compound Poisson with parameters
λ+ = λ1 + · · ·+ λd and F+(z) ∼
1
λ+
b∑
i=1
(θi
z
)α
λi , z → ∞ , (4.8)
and total OpVAR is asymptotically given by
VAR+⊥t(κ) ∼
[b∑
i=1
(VARi
t(κ))α]1/α
, κ ↑ 1 . (4.9)
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson
model, total asymptotic OpVAR is simply the sum of the dominating cell’s asymptotic
stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where
the standard procedure for calculating capital charges for operational risk is just the
simple-sum VAR. To put it another way, regulators implicitly assume complete depen-
dence between different cells, meaning that losses within different business lines or risk
categories always happen at the same instants of time.
Very often, the simple-sum OpVAR (4.7) is considered to be the worst case scenario
and, hence, as an upper bound for total OpVAR in general, which in the heavy-tailed case
7
(4.2) that the total aggregate loss process $S^+$ is again compound Poisson with frequency parameter and severity distribution

$$\lambda^+ = \lim_{z \downarrow 0} \overline{\Pi}^+(z) \qquad \text{and} \qquad F^+(z) = 1 - \overline{F}^+(z) = 1 - \frac{\overline{\Pi}^+(z)}{\lambda^+}, \qquad z \ge 0. \tag{4.3}$$

This result now proves useful for determining a bank's total operational risk across several cells. Before doing that, recall the definition of OpVAR for a single operational risk cell (henceforth called stand-alone OpVAR). For each cell, stand-alone OpVAR at confidence level $\kappa \in (0,1)$ and time horizon $t$ is the $\kappa$-quantile of the aggregate loss distribution, i.e. $\mathrm{VAR}_t(\kappa) = G_t^{\leftarrow}(\kappa)$ with $G_t^{\leftarrow}(\kappa) = \inf\{x \in \mathbb{R} : P(S(t) \le x) \ge \kappa\}$.
In Böcker & Klüppelberg (2005, 2006, 2007) it was shown that OpVAR at a high confidence level can be approximated by a closed-form expression if the loss severity is subexponential, i.e. heavy-tailed. As this is commonly believed to be the case for operational losses, we use this approximation in the sequel; it can be written as

$$\mathrm{VAR}_t(\kappa) \sim F^{\leftarrow}\left(1 - \frac{1-\kappa}{E N(t)}\right), \qquad \kappa \uparrow 1, \tag{4.4}$$

where the symbol "$\sim$" means that the ratio of the left-hand and right-hand sides converges to 1, and $E N(t)$ is the cell's expected number of losses in the time interval $[0, t]$. Important examples of subexponential distributions are the lognormal, Weibull (with shape parameter less than 1) and Pareto distributions.
Here, we extend the idea of an asymptotic OpVAR approximation to the multivariate problem. In doing so, we exploit the fact that $S^+$ is a compound Poisson process with parameters as in (4.3). In particular, if $F^+$ is subexponential, we can apply (4.4) to estimate total OpVAR. Consequently, if we are able to specify the asymptotic behaviour of $\overline{F}^+(x)$ as $x \to \infty$, we automatically obtain an approximation of $\mathrm{VAR}_t(\kappa)$ as $\kappa \uparrow 1$.

To make more precise statements about OpVAR, we focus our analysis on Pareto distributed severities with tail distribution function

$$\overline{F}(x) = \left(1 + \frac{x}{\theta}\right)^{-\alpha}, \qquad x > 0,$$

with scale parameter $\theta > 0$ and tail parameter $\alpha > 0$. Pareto's law is the prototypical parametric example of a heavy-tailed distribution and is suitable for operational risk modelling. As a simple consequence of (4.4), in the case of a compound Poisson model with Pareto severities (a Pareto-Poisson model) the analytic OpVAR is given by

$$\mathrm{VAR}_t(\kappa) \sim \theta\left[\left(\frac{\lambda t}{1-\kappa}\right)^{1/\alpha} - 1\right] \sim \theta\left(\frac{\lambda t}{1-\kappa}\right)^{1/\alpha}, \qquad \kappa \uparrow 1. \tag{4.5}$$
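Approximation (4.5) is easy to check numerically. The following sketch (not part of the original article; all parameter values are illustrative) simulates a compound Poisson process with Pareto severities and compares the empirical $\kappa$-quantile of the aggregate loss with the closed-form value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Pareto-Poisson cell (assumed parameters, not from the article).
lam, theta, alpha, t = 10.0, 1.0, 1.2, 1.0
kappa = 0.995  # high confidence level

# Closed-form asymptotic OpVAR, equation (4.5).
opvar_closed = theta * (lam * t / (1.0 - kappa)) ** (1.0 / alpha)

# Monte Carlo check: simulate S(t) and take the empirical kappa-quantile.
n_sim = 100_000
counts = rng.poisson(lam * t, size=n_sim)      # number of losses per scenario
u = 1.0 - rng.random(counts.sum())             # uniforms in (0, 1]
losses = theta * (u ** (-1.0 / alpha) - 1.0)   # Pareto severities by inversion of F
agg = np.bincount(np.repeat(np.arange(n_sim), counts),
                  weights=losses, minlength=n_sim)  # aggregate loss per scenario
opvar_mc = np.quantile(agg, kappa)

print(opvar_closed, opvar_mc)
```

Since (4.5) is a first-order asymptotic as $\kappa \uparrow 1$, the two numbers agree in order of magnitude rather than exactly; the simulated quantile sits slightly above the closed-form value because of lower-order terms.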
To demonstrate the kind of results we obtain by such approximation methods, we consider a Pareto-Poisson model in which the severity distributions $F_i$ of the first (say)
$b \le d$ cells are tail equivalent with tail parameter $\alpha > 0$ and dominant to all other cells, i.e.

$$\lim_{x\to\infty} \frac{\overline{F}_i(x)}{\overline{F}_1(x)} = \left(\frac{\theta_i}{\theta_1}\right)^{\alpha}, \quad i = 1,\ldots,b, \qquad \lim_{x\to\infty} \frac{\overline{F}_i(x)}{\overline{F}_1(x)} = 0, \quad i = b+1,\ldots,d. \tag{4.6}$$

In the important cases of complete positive dependence and independence, closed-form results can be found and may serve as extreme cases for the dependence structure of the model.

Theorem 4.2. Consider a compound Poisson model with cell processes $S_1, \ldots, S_d$ with Pareto distributed severities satisfying (4.6). Let $\mathrm{VAR}_t^i(\cdot)$ be the stand-alone OpVAR of cell $i$.

(1) If all cells are completely dependent with the same frequency $\lambda$ for all cells, then $S^+$ is compound Poisson with parameters

$$\lambda^+ = \lambda \qquad \text{and} \qquad \overline{F}^+(z) \sim \left(\sum_{i=1}^{b} \theta_i\right)^{\alpha} z^{-\alpha}, \qquad z \to \infty,$$

and total OpVAR is asymptotically given by

$$\mathrm{VAR}_t^{+\,\|}(\kappa) \sim \sum_{i=1}^{b} \mathrm{VAR}_t^i(\kappa), \qquad \kappa \uparrow 1. \tag{4.7}$$

(2) If all cells are independent, then $S^+$ is compound Poisson with parameters

$$\lambda^+ = \lambda_1 + \cdots + \lambda_d \qquad \text{and} \qquad \overline{F}^+(z) \sim \frac{1}{\lambda^+} \sum_{i=1}^{b} \left(\frac{\theta_i}{z}\right)^{\alpha} \lambda_i, \qquad z \to \infty, \tag{4.8}$$

and total OpVAR is asymptotically given by

$$\mathrm{VAR}_t^{+\,\perp}(\kappa) \sim \left[\sum_{i=1}^{b} \left(\mathrm{VAR}_t^i(\kappa)\right)^{\alpha}\right]^{1/\alpha}, \qquad \kappa \uparrow 1. \tag{4.9}$$
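The two aggregation rules (4.7) and (4.9) can be evaluated directly from the stand-alone OpVARs of (4.5). The sketch below (illustrative parameters, not from the article; a common frequency $\lambda$ is used, as part (1) requires) contrasts the two extreme dependence cases:

```python
import numpy as np

# Illustrative dominant cells: Pareto scales theta_i, common frequency lam, common tail alpha.
thetas = np.array([1.0, 2.0, 0.5])
lam, alpha, t, kappa = 5.0, 1.5, 1.0, 0.999

# Stand-alone asymptotic OpVARs from (4.5): VAR_i ~ theta_i * (lam*t/(1-kappa))^(1/alpha).
standalone = thetas * (lam * t / (1.0 - kappa)) ** (1.0 / alpha)

# Complete dependence, (4.7): total OpVAR is the simple sum of stand-alone OpVARs.
var_dep = standalone.sum()

# Independence, (4.9): an l^alpha-type aggregation of the stand-alone OpVARs.
var_indep = (standalone ** alpha).sum() ** (1.0 / alpha)

print(var_dep, var_indep)  # with alpha > 1, the independent total lies below the simple sum
```

With $\alpha > 1$, as here, the independent total OpVAR is smaller than the simple sum, i.e. independence yields a diversification benefit; equation (4.10) below shows when this ordering reverses.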
On the one hand, Theorem 4.2 states that for the completely dependent Pareto-Poisson model, total asymptotic OpVAR is simply the sum of the dominating cells' asymptotic stand-alone OpVARs. Recall that this is similar to the new proposals of Basel II, where the standard procedure for calculating capital charges for operational risk is just the simple-sum VAR. To put it another way, regulators implicitly assume complete dependence between different cells, meaning that losses within different business lines or risk categories always happen at the same instants of time.

Very often, the simple-sum OpVAR (4.7) is considered to be the worst-case scenario and, hence, an upper bound for total OpVAR in general, which in the heavy-tailed case
can be grossly misleading. To see this, assume the same frequency λ in all cells also for the independent model, and denote by VAR⁺_‖(κ) and VAR⁺_⊥(κ) the completely dependent and independent total OpVAR, respectively. In this case we obtain from (4.9) in the situation (4.6) of Theorem 4.2(2)

VAR⁺_⊥(κ) ∼ (λt / (1 - κ))^{1/α} (∑_{i=1}^b θ_i^α)^{1/α}, κ ↑ 1,

whereas VAR⁺_‖(κ) is given by (4.7). Then, as a consequence of convexity (α > 1) and concavity (α < 1) of the function x ↦ x^α,

VAR⁺_⊥(κ) / VAR⁺_‖(κ) = (∑_{i=1}^b θ_i^α)^{1/α} / ∑_{i=1}^b θ_i, which is < 1 for α > 1, = 1 for α = 1, and > 1 for α < 1. (4.10)

This result says that for heavy-tailed severity data with F̄^i(x_i) ∼ (x_i/θ_i)^{-α} as x_i → ∞ and 0 < α < 1, subadditivity of OpVAR is violated because the sum of stand-alone OpVARs is smaller than independent total OpVAR.
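A quick numerical sketch makes the case distinction in (4.10) tangible. The parameters below (thetas for the θ_i and alpha for α) are illustrative values invented for this example, not figures from the article:

```python
# Ratio of independent to completely dependent total OpVAR, cf. (4.10):
# (sum_i theta_i^alpha)^(1/alpha) / sum_i theta_i.
thetas = [1.0, 2.0, 4.0]  # hypothetical Pareto scale parameters


def opvar_ratio(alpha, thetas):
    """Asymptotic VAR+_perp(k) / VAR+_par(k) for the Pareto-Poisson model."""
    numerator = sum(t ** alpha for t in thetas) ** (1.0 / alpha)
    return numerator / sum(thetas)


for alpha in (0.5, 1.0, 2.0):
    # alpha < 1: ratio > 1 (subadditivity violated); alpha > 1: ratio < 1
    print(f"alpha = {alpha}: ratio = {opvar_ratio(alpha, thetas):.3f}")
```

For α < 1 the independent total OpVAR exceeds the simple sum, matching the subadditivity violation noted above, while for α > 1 the simple sum is indeed conservative.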
About the Authors/Disclaimer
Klaus Böcker is senior risk controller at HypoVereinsbank in Munich. Claudia Klüppelberg holds the chair of Mathematical Statistics at the Center for Mathematical Sciences of the Munich University of Technology. The opinions expressed in this article are those of the authors and do not reflect the views of HypoVereinsbank.
References
Böcker, K. and Klüppelberg, C. (2005) Operational VAR: a closed-form solution. RISK Magazine, December, 90-93.
Böcker, K. and Klüppelberg, C. (2006) Multivariate models for operational risk. Preprint, Munich University of Technology. Available at www.ma.tum.de/stat/
Böcker, K. and Klüppelberg, C. (2007) Modelling and measuring multivariate operational risk with Lévy copulas. Preprint, Munich University of Technology. Available at www.ma.tum.de/stat/
Cont, R. and Tankov, P. (2004) Financial Modelling With Jump Processes. Chapman & Hall/CRC, Boca Raton.
Best Paper Award Winner

2007 Risk Management Section Award for Best Paper Award for Practical Risk Management Applications: Capital Allocation by Percentile Layer
by Neil Bodoff

Neil Bodoff, FCAS, MAAA, is senior vice president at Willis Re in New York, N.Y. He can be reached at
Editor's note: This article represents an abridged version of the submission to the 2007 ERM Symposium, which received the 2007 Risk Management Section Award for Best Paper Award for Practical Risk Management Applications. The full paper can be found at http://www.ermsymposium.org/papers.php
The goal of this paper is to propose a new approach to capital allocation within a financial conglomerate.
Context
What motivates a financial firm (e.g., a commercial bank, insurance company, hedge fund, investment bank, etc.) to allocate capital? Usually, the firm comprises many business units, products, asset managers, traders, underwriters, etc., and the firm desires to measure the profitability of these various components. However, because these units engage in activities of varying degrees of risk, the firm wishes to deploy a methodology that accounts for risk when measuring profitability. Thus, the challenge of capital allocation usually arises within the context of a firm wishing to measure "risk-adjusted profitability." Additionally, implementing a framework to measure profitability retrospectively can ultimately lead to its integration into prospective situations. In other words, the methodology for measuring profitability can also become part and parcel of how a firm's business units set prices. Thus, capital allocation can affect both pricing methodology and profitability measurement.
Scope
In this paper we will narrow the scope in order to streamline the discussion. Therefore, we make the following assumptions:

• The financial firm we will deal with is a publicly traded insurance company that writes property catastrophe business.
• The company writes only single-year policies.
• The company's regulators, rating agencies and investors demand that the firm hold capital equal to Value-at-Risk (VaR) at the 99th percentile (with a one-year time horizon).
• The company's investors demand that the firm achieve profit from its underwriting activities; the required profit equals the amount of the firm's capital multiplied by a required percentage rate of return.

Therefore, while there is in theory an important question of how much capital a firm should hold, we assume that the operating environment (regulators, rating agencies, investors) imposes an answer. Similarly, while there is in theory an important question regarding what the required rate of return on capital should be, we assume that the operating environment (investors) imposes an answer. Thus, forces external to the firm require it both to hold an amount of capital and to achieve a required rate of return on this capital. The questions of how much capital to hold and what rate of return to earn on this capital are therefore outside the scope of our discussion; the only question we seek to address is how the firm should allocate capital.
Background
Mango1, among others, has highlighted that all of the capital of the firm is available to pay any one claim of any single insurance policy. Therefore, by definition, all of the capital relates to the total firm and cannot be allocated to its components. This concept has two important ramifications. First, as Kreps2 has clarified, when we talk about
1 Mango, Donald F., "Capital Consumption: An Alternative Methodology for Pricing Reinsurance," CAS 2003 Winter Forum, 351-378.
2 Kreps, Rodney, "Riskiness Leverage Models," paper accepted by Casualty Actuarial Society (CAS), 2005.
allocating capital we are really talking about taking the cost of the total firm's capital and allocating this cost. Essentially, we are taking the required rate of return on capital and allocating it to components of the firm; we say "capital allocation" merely as shorthand.

Second, because capital relates to the total firm, any discussion of risk, capital and capital allocation must relate to loss scenarios for the total firm. For example, many existing methods of capital allocation begin by measuring capital for each business unit on a standalone basis and then attempt to somehow "roll up" all of these capital amounts into a total capital figure. Essentially, all of these methods are inappropriate because they fail to measure risk and capital in the proper context of the loss to the total firm.

On the other hand, some capital allocation methods do properly account for the fact that individual business units or other components of the firm must be evaluated based upon their contribution to the total firm loss. One notable example of such an approach is the "co-measures" framework developed by Kreps3. Essentially, Kreps requires that one evaluate risk and capital for each scenario of total loss to the firm and, simultaneously, keep track of which components contribute to the total firm loss. Once capital is allocated to loss scenarios at the total firm level, it can then be further allocated to those business units, perils, policies, product types, etc. that are the "culprits" that contribute to these loss scenarios. The question that remains, then, is how one should allocate the cost of capital to the various "total firm loss scenarios." In particular, what about our situation in which the firm holds capital based upon VaR (99 percent)?
Current Approach
Here one often encounters the following logic. When a firm holds capital based upon VaR (99 percent), it means the firm is holding capital for the 99th percentile loss scenario, or the "1-in-100 year" loss scenario. Therefore, we allocate the cost of capital to those business units, products, perils, policies, underwriters, etc. that produce losses that contribute to this loss scenario. Or, similarly, if the firm is holding capital based upon Tail Value at Risk (TVaR (99 percent)), then it means that the firm is holding capital for the average loss beyond the 99th percentile; therefore, according to this reasoning, allocate capital to those business units that contribute to these loss scenarios beyond the 99th percentile. The result is to allocate capital to components of the firm only to the extent that they contribute to extremely severe losses in the "tail" of the distribution.

Problems with Current Approach
As Mango has pointed out, however, there are many different loss scenarios that are less severe than the 99th percentile loss that require the use of the company's capital. For example, if the company holds 900 million of capital based upon VaR (99 percent), consider the scenario in which the firm sustains only a "moderately severe" downside scenario, a loss of 500 million. While this loss scenario is not a 99th percentile loss, it certainly requires, uses and "consumes" capital. Shouldn't we allocate, therefore, at least some portion of the cost of the firm's capital to these loss scenarios and the units that contribute to them?
A New Formulation of the Meaning of VaR
It therefore appears that we need to clarify what it means for a firm to hold capital equal to VaR. The conventional wisdom suggests that when a firm holds capital based upon VaR (99 percent), the firm is holding capital "for the 99th percentile loss." This imprecise formulation leads to the flawed assumption that the firm is holding capital "only for the 99th percentile loss" and leads to the inappropriate allocation of capital only to the components of the firm that contribute to the 99th percentile loss. I believe, however, that a better formulation of the meaning of the VaR capital requirement is that the firm holds sufficient capital "even for the 99th percentile loss." In other words, much of a firm's capital is intended not only for the 99th percentile loss scenario, but for
3 Kreps, Rodney, "Riskiness Leverage Models," paper accepted by Casualty Actuarial Society (CAS), 2005.
other moderately severe losses as well. Similarly, we can also apply this logic to TVaR; using TVaR (99 percent) to set capital means we are holding capital "even for the average loss beyond the 99th percentile," but not "only for" these events.
Ramifications of New Formulation of VaR
Based upon our new formulation that the firm holds capital "even for" the 99th percentile loss, we can begin to develop a new approach to allocating the cost of capital. Why does the firm hold capital equal to the 99th percentile rather than the lesser amount of the 98th percentile? For most losses, capital equal to the 98th percentile would be sufficient. Only loss scenarios (for the total firm) that exceed the 98th percentile require that the firm hold more capital than VaR (98 percent). Therefore, we can see that the incremental difference between capital at the 99th percentile and the 98th percentile is solely attributable to loss scenarios that exceed the 98th percentile. By similar logic, the amount of capital equal to the 98th percentile minus the 97th percentile can be allocated only to the loss scenarios that exceed the 97th percentile. Let's call this difference between capital amounts at sequential percentiles a "percentile layer of capital." One can apply this procedure sequentially to all "percentile layers of capital" and thus allocate the entire capital of the firm.

The key to the allocation procedure is to view the firm's capital as the sum of many granular pieces of capital, or "layers of capital." Each layer of capital is needed and potentially used by a different set of loss scenarios. Therefore, we must allocate each layer of capital individually to those loss scenarios that will use it; in other words, to those loss scenarios that exceed the lower bound of the layer of capital (which can also be described as hitting the layer or penetrating the layer). Of course, many layers of capital will be potentially used by many different loss scenarios. In such a case, we can use conditional probability to allocate the layer of capital to the various loss scenarios. In other words, for any given layer of capital, each loss scenario receives an allocation based upon the ratio of 1) the probability of that loss scenario exceeding the lower bound of the layer of capital to 2) the probability of any loss scenario exceeding the lower bound of the layer.
Graphical Depiction and Numerical Example
A numerical example, together with a graphical depiction, may help clarify the approach. The graph below (also known as a "Lee Diagram," based upon the contribution of Lee4) plots 20 loss scenarios in size order from smallest to largest (in the continuous case, this is the inverse of the cumulative distribution function). The loss amount is plotted on the Y-axis:
In Exhibit 1 there are 20 loss scenarios, and we'll assume the firm holds capital equal to the 19th worst loss scenario, 360 million. Why is it that the firm needs to hold 360 million of capital rather than just 100 million of capital? It appears that loss scenarios one through 10, which are all less than or equal to 100 million, do not require this layer of capital. In contradistinction, loss scenarios 11 through 20, which exceed 100 million, clearly do utilize this layer of capital in excess of 100 million. Examining in further detail, we see that all of scenarios 11 through 20 utilize the 1 million x 100 million layer, but not all of them require the 1 million x 200 million layer, and even fewer require the 1 million x 300 million layer. Thus, we must allocate each individual layer of capital to the loss events that penetrate the layer in proportion to the relative usage of the layer of capital; i.e., in proportion to the relative exceedance probability, as per Exhibit 2:
4 Lee, Yoong-Sin, "The Mathematics of Excess of Loss Coverages and Retrospective Rating-A Graphical Approach," Proceedings of the CAS (PCAS) LXXV, 1988, 49-77.
Exhibit 1: Lee Diagram
Numerical example:
• Loss scenario #19 is one of two events (scenarios 19 and 20) that require the 35 million x 325 million layer of capital.
  o Thus scenario #19 receives a 1/2 allocation of this 35 million of capital.
• Loss scenario #19 also is one of five events (scenarios 16 through 20) that require the firm to hold the 25 million x 230 million layer of capital.
  o Thus it receives a 1/5 allocation of this 25 million of capital.
• Apply the procedure to all layers, allocating each layer to all loss events that exceed its lower bound via conditional exceedance probability.
Note that a loss scenario tends to receive a larger percentage allocation in the upper layers than in the lower layers for two reasons:

1) In the upper layers, we are allocating a full layer of capital to fewer loss events (i.e., the exceedance probability decreases as the loss amount increases); therefore, each loss scenario gets a larger share of the "overhead" of the total layer of capital.

2) In the upper layers, we are allocating a wider layer of capital because the severity of each loss scenario tends to outstrip the prior loss scenario by a greater amount (i.e., the percentile layer of capital tends to widen as the loss amount increases). There can be situations, however, in which this relationship between layers does not hold; this behavior depends on the particular shape of the size-of-loss distribution.
Implementation
The methodology described above, "Capital Allocation by Percentile Layer," can easily be implemented using loss scenario output within several common contexts. One example is using simulated loss output from a property catastrophe model (e.g., one has annual aggregate losses for one year). Another example is loss output from a DFA model or other simulation engine. Of course, the capital allocation procedure described here relates only to allocating total firm capital to each total loss scenario; but once we allocate capital to each total loss scenario, we can then (per Kreps and others) further allocate the capital for each total loss scenario to those individual components (operating units, lines of business, insurance policies, etc.) that are the "culprits" that contribute to each total loss scenario.
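As a rough sketch of such an implementation, the percentile-layer procedure can be run directly on simulated annual-loss scenarios. The loss values and capital amount below are hypothetical inputs invented for illustration (not the article's exhibit data), and each scenario is assumed equally likely:

```python
def allocate_by_layer(losses, capital):
    """Capital Allocation by Percentile Layer for equally likely, discrete
    total-firm loss scenarios. Each layer of capital between consecutive
    scenario amounts (capped at total capital) is shared equally among
    the scenarios that exceed its lower bound."""
    n = len(losses)
    order = sorted(range(n), key=lambda i: losses[i])
    alloc = [0.0] * n
    lower = 0.0
    for rank, i in enumerate(order):
        upper = min(losses[i], capital)
        if upper > lower:
            # conditional exceedance probability is uniform across the
            # (n - rank) scenarios that penetrate this layer
            share = (upper - lower) / (n - rank)
            for j in order[rank:]:
                alloc[j] += share
            lower = upper
    return alloc


# hypothetical simulated losses and a VaR-based capital amount
losses = [10.0, 40.0, 80.0, 150.0, 360.0]
alloc = allocate_by_layer(losses, capital=360.0)
print(alloc)
```

Each scenario's allocation is the sum, over the layers it penetrates, of the layer width divided by the number of penetrating scenarios, so the allocations always sum to the capital held and the largest scenarios absorb the widest, least-shared upper layers.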
Additional Areas of Application
The application highlighted here focuses on property catastrophe risk and allocating the cost of equity capital, but the reformulation of the meaning of VaR should have ramifications in other areas as well.

1) Assets: risk and capital for assets such as equities and fixed income securities have traditionally been defined based upon VaR metrics; as a result, methods that allocate capital among various asset classes and operating units may benefit from implementing capital allocation by percentile layer.

2) Other sources of capital: capital allocation by percentile layer may also be germane when the firm's total capital does not reside in one "indivisible bucket of equity capital," but rather is split into different types of capital.

a. Multiple tranches of capital: firms often have sources of capital beyond equity capital, sometimes in the form of tranches. Because these tranches sustain capital depletion in a predetermined sequential order and, as a result, carry different cost of capital rates, it would seem appropriate to
Exhibit 2: Lee Diagram
allocate capital with a procedure that explicitly accounts for the varying layers of capital and their costs. Thus, capital allocation by percentile layer, which provides a framework for explicitly allocating capital by layer, appears to offer an appealing alternative to almost all traditional methods (VaR, TVaR, TCE, coTVaR, etc.), which do not seem to adapt well to such a situation (with the notable exception of transformed probability functions).

b. In addition, alternative forms of capital that apply on a "layered" basis (e.g., excess of loss reinsurance) and their costs (e.g., the amount of "risk load" or "margin" in the reinsurance price) would also appear to be candidates for capital allocation by percentile layer.
Interpretation and Comments
The procedure for capital allocation by percentile layer outlined above generates allocations that differ from those of many other methods, with ramifications for measuring the relative risk and profitability of various lines of business. Tail-based methods, such as coTVaR, tend to allocate the overwhelming amount of capital only to perils that contribute to the very worst scenarios; capital allocation by percentile layer, however, recognizes that when the firm holds capital even for an extremely catastrophic scenario, some of the capital also benefits other, more likely, more moderately severe downside events. On the other hand, other methods (e.g., Mango's "capital consumption," XTVaR at the mean, etc.) allocate capital to a broader range of loss events that consume capital; the allocation varies proportionately based upon conditional probability. These methods, however, often allocate insufficient capital to unlikely yet most severe events. They fail to note that the potentially extreme loss of such scenarios causes firms to hold an amount of capital that far outstrips the amount required by other loss events; although the actual occurrence of one of these events is very unlikely, the cost of holding precautionary capital is quite definite. Capital allocation by percentile layer, on the other hand, appropriately allocates more capital cost to those unlikely, severe events that require the firm to hold additional capital.

Capital allocation by percentile layer as delineated above assumes that required capital is based upon VaR, but a similar model can also apply to TVaR. In other words, we can view TVaR as saying we want to hold enough capital "even for {the 99th percentile loss + the average amount by which losses above the 99th percentile tend to exceed the 99th percentile}." In such a case, capital allocation by layer would be nearly the same, allocating capital up to the 99th percentile. The only additional step would then be to allocate one additional layer of capital (i.e., TVaR - VaR) to the losses that exceed the TVaR threshold. Consistent with TVaR's meaning as well as the layer allocation approach, this additional layer of capital should be allocated to loss events in proportion to each event's average amount of loss in excess of the TVaR threshold.
Extension to Continuous Formulas

The approach to capital allocation discussed above essentially entails allocating capital on many discrete layers of capital, from zero to VaR(99 percent). By viewing the width of each layer of capital as infinitesimally small, we can express capital allocation by percentile layer in continuous formulas.
First we will take the perspective of allocating a layer of capital to the various loss scenarios that potentially use this capital. Let x represent the amount of the loss scenario and let y represent the capital. First we take an infinitesimally small layer of capital spanning from y to y + dy and allocate it across loss scenarios. The amount of capital we wish to allocate, the "width" of the layer, is dy. We allocate this capital based upon each loss scenario's conditional probability of "penetrating the layer," namely, exceeding the lower bound of the layer. Thus the allocation weight to a loss scenario of amount x, given that x > y, equals

f(x) / [1 − F(y)]
Best Paper Award Winner
The sum of all allocation weights on this particular layer of capital is then

∫_{x=y}^{∞} f(x) / [1 − F(y)] dx = [1 − F(y)] / [1 − F(y)] = 1

Note that the allocation weights sum to 100 percent. Then we perform this procedure for all layers of capital, from y = 0 to y = total capital = VaR(99 percent); summing the layer widths, ∫_{y=0}^{VaR(99%)} dy, the sum equals VaR(99 percent).
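The discrete layer procedure described above can be sketched numerically: on each thin layer, the penetrating scenarios share the layer's width according to their conditional probabilities (equal weights for equally likely simulated scenarios), so the weights on each layer sum to 100 percent and the total allocated across all layers equals VaR(99 percent). The lognormal loss sample and the layer count are assumptions for illustration.

```python
import random
from bisect import bisect_right

# Discrete sketch of capital allocation by percentile layer; the lognormal
# loss distribution and the number of layers are illustrative assumptions.
random.seed(1)
n = 5_000
losses = sorted(random.lognormvariate(0, 1) for _ in range(n))
var99 = losses[int(0.99 * n)]

num_layers = 500
dy = var99 / num_layers                  # width of each thin layer
allocated = [0.0] * n                    # capital allocated to each scenario

for k in range(num_layers):
    y = k * dy                           # lower bound of this layer
    first = bisect_right(losses, y)      # scenarios with loss > y "penetrate"
    share = dy / (n - first)             # equal conditional weight per scenario
    for i in range(first, n):
        allocated[i] += share

# Every layer is fully allocated, so the total equals VaR(99 percent).
assert abs(sum(allocated) - var99) < 1e-6
```

More severe scenarios penetrate more layers and so accumulate shares from each, which is exactly the mechanism the continuous formulas below make precise.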
We now can realize that each loss scenario of amount x receives allocations of varying portions from many different layers of capital. We can express the total capital allocated to a loss scenario of amount x as follows. As before, the allocation weight to a loss scenario on a layer of capital at y, given that x > y, equals f(x) / [1 − F(y)]. Now, if we sum across all layers of capital that the loss scenario penetrates, we have

AC(x) = ∫_{y=0}^{x} f(x) / [1 − F(y)] dy
Of course, we remember that if the loss amount x exceeds the total capital amount of VaR(99 percent), then (because the firm's capital is a finite amount) the formula for the loss scenario's total allocated capital must be amended to:

AC(x) = ∫_{y=0}^{VaR(99%)} f(x) / [1 − F(y)] dy
In general, though, for loss amounts below the VaR threshold, we can say that the allocated capital to loss amount x is

AC(x) = f(x) · ∫_{y=0}^{x} 1 / [1 − F(y)] dy
According to this equation, the procedure of capital allocation by percentile layer says that any loss scenario's allocated capital depends upon:

1) The probability of the event occurring (i.e., f(x)).

2) The severity of the loss event, or the extent to which the loss event penetrates layers of capital (i.e., the upper bound of integration is x, the loss amount).

3) The loss event's inability to share the burden of its required capital with other loss events (i.e., 1 / [1 − F(y)]). We can think of this factor as a mathematical measurement of the loss scenario's "dissimilarity" in severity to other potential loss scenarios.
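The continuous formula above can be checked numerically under an assumed severity distribution. For a unit exponential, F(y) = 1 − e^(−y), so the "dissimilarity" factor 1/[1 − F(y)] is e^y and the integral has the closed form e^x − 1; the exponential choice is purely illustrative.

```python
import math

# Numerical check of AC(x) = f(x) * integral from 0 to x of dy / (1 - F(y)),
# under an assumed unit-exponential severity: f(x) = e^(-x), F(x) = 1 - e^(-x).
# Then 1/(1 - F(y)) = e^y, so AC(x) = e^(-x) * (e^x - 1) = 1 - e^(-x).
def allocated_capital(x, steps=10_000):
    """Capital allocated to loss amount x, by midpoint-rule integration."""
    dy = x / steps
    integral = sum(math.exp((k + 0.5) * dy) for k in range(steps)) * dy
    return math.exp(-x) * integral

x = 2.0
closed_form = 1.0 - math.exp(-x)
assert abs(allocated_capital(x) - closed_form) < 1e-4

# Integrating allocations over all loss amounts (capping the upper bound of
# integration at VaR for losses beyond it) recovers the full capital, VaR(99%):
var99 = -math.log(0.01)                  # 99th percentile of Exp(1)
below = var99 - (1 - math.exp(-var99))   # losses up to the VaR threshold
above = 1 - math.exp(-var99)             # losses beyond it, integral capped at VaR
assert abs((below + above) - var99) < 1e-12
```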
We can also formulate the allocated cost of capital as a utility function. If we let r = required percent rate of return on capital, then the cost of capital equals r multiplied by allocated capital. Then the cost of capital associated with a loss scenario of amount x is

r · f(x) · ∫_{y=0}^{x} 1 / [1 − F(y)] dy
Of course, the loss scenario also contributes expected loss of f(x) · x, so the total cost attributable to a loss scenario of amount x is

f(x) · ( x + r · ∫_{y=0}^{x} 1 / [1 − F(y)] dy )
We note that f(x) is simply the probability of the loss scenario occurring. Using conditional probability, we can then say that conditional on the loss scenario, or "given" the loss scenario, the utility of a loss amount x equals:

x + r · ∫_{y=0}^{x} 1 / [1 − F(y)] dy
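A minimal sketch of this cost-of-capital formulation, again under the illustrative unit-exponential assumption (so the integral ∫_0^x dy/[1 − F(y)] equals e^x − 1) and with an assumed required return r, neither of which is prescribed by the article:

```python
import math

r = 0.10   # assumed required rate of return on capital (illustrative)

def utility(x):
    """Cost 'given' a loss of amount x: the loss itself plus the cost of the
    capital it requires, x + r * (e^x - 1) under the unit-exponential assumption."""
    return x + r * (math.exp(x) - 1.0)

def total_cost(x):
    """Unconditional cost attributable to loss amount x: f(x) times the utility."""
    return math.exp(-x) * utility(x)

# The capital-cost term e^x - 1 is convex, so a loss twice as large carries
# more than twice the cost of capital, reflecting the precautionary capital
# that severe scenarios force the firm to hold.
assert utility(2.0) > 2 * utility(1.0)
```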
Conclusion

Capital Allocation by Percentile Layer has several advantages, both conceptual and functional, over existing methods for allocating capital. It emerges organically from a new formulation of the meaning of holding Value at Risk capital; allocates capital to the entire range of loss events, not only the most extreme events in the tail of the distribution; tends to allocate more capital, all else equal, to those events that are more likely; tends to allocate disproportionately more capital to those loss events that are more severe; renders moot the question of which arbitrary percentile threshold to select for allocation purposes by using all relevant percentile thresholds; produces allocation weights that always add up to 100 percent; explicitly allocates the entire amount of the firm's capital, in contrast to other methods that allocate based upon the last dollar of "marginal" capital; and provides a framework for allocating capital by layer and by tranche.
Capital Allocation by Percentile Layer has the potential to generate significantly different allocations than existing methods, with ramifications for calculating risk load and for measuring risk-adjusted profitability. ✦
A Full Line-Up of ERMAP Sponsored Sessions is Slated for the Annual Meeting in Washington, D.C.
by Todd Henderson
ERMAP (the Joint Risk Management Section) is planning an exciting program for the 2007 Annual Meeting that will be held at the Marriott Wardman Park Oct. 14-17. The sessions for this year's meeting cover current, timely topics that will be useful to the risk management professional as well as those interested in learning more about ERM. Current thought leaders from within the actuarial profession, in addition to other experts, will discuss a range of issues from current pandemic research to economic capital to defining and classifying operational risks.
Look For …
Pandemic Research Study

A panel of industry experts will discuss the findings of the recently released (summer of 2007) Pandemic Research Project sponsored by the Society of Actuaries. Attendees will gain a focused insight into the pandemic threat and better understand how this risk can affect the earnings, balance sheets and operations of their organizations.
Economic Capital Models

This session discusses current trends for developing company-specific EC, as well as best practices for its uses and applications. Panelists will cover recent changes in the regulatory and rating agency landscape for determining capital adequacy.
Effective Stress Testing

What are those scenarios that should be keeping us up at night? How will your company fare should they materialize? In this session you'll learn about identifying and modeling the outlying possibilities and correlated events. Also discussed will be how to model the impact of these scenarios on your organization, as well as how to incorporate this analysis into your risk management framework.
Operational Risk of Hedging Programs

Hedging programs are put in place to hedge risk, but there are many aspects of their execution which put the effectiveness of the program at risk. In this session you'll learn about the key practical considerations to understand when modeling these programs, how to quantify the associated operational risk and, most importantly, how to manage it.
Defining and Classifying Operational Risk

In this session you'll learn the true nature of the word risk, the fundamental characteristics of operational risk, the differences in definitional standards, the evolution of this field over the past decade, and different approaches and methods used in an advanced measurement and management framework.
ERMAP is also sponsoring a Hot Breakfast and the Chief Risk Officer's Forum. See you in D.C. ✦
Todd Henderson, FSA, CERA, MAAA, is vice president & chief risk officer with Western & Southern Financial Group in Cincinnati, Ohio. He can be reached at [email protected].
Risk Management
Issue Number 11, September 2007

Published by the Society of Actuaries
475 N. Martingale Road, Suite 600
Schaumburg, IL 60173-2226
phone: 847.706.3500  fax: 847.706.3599
www.soa.org
This newsletter is free to section members. Current-year issues are available from the Communications Department. Back issues of section newsletters have been placed in the SOA library and on the SOA Web site (www.soa.org). Photocopies of back issues may be requested for a nominal fee.
2006-2007 SECTION LEADERSHIP
Editor
Ken Seng Tan, ASA, CERA
e: [email protected]

Co-Editor
Ron Harasym, FSA, CERA, FCIA, MAAA
e: [email protected]

Council Members
Douglas W. Brooks, FSA, CERA, FCIA, MAAA
Anthony Dardis, FSA, FIA, MAAA
Kevin Dickson, FCAS, MAAA
David Gilliland, FSA, FCIA, MAAA
Ron Harasym, FSA, CERA, FCIA, MAAA
Todd Henderson, FSA, CERA, MAAA
Valentina Isakina, ASA, MAAA
Hank McMillan, FSA, CERA, MAAA
Larry Rubin, FSA, MAAA
Ken Seng Tan, ASA, CERA
Fred Tavan, FSA, FCIA
Bob Wolf, FCAS, MAAA
Society Staff Contacts
Kara Clark, Staff Partner
e: [email protected]

Kathryn Wiener, Staff Editor
e: [email protected]
Newsletter Design
Angie Godlewska
Julissa Sweeney
e: [email protected]
Facts and opinions contained herein are the sole responsibility of the persons expressing them and should not be attributed to the Society of Actuaries, its committees, the Risk Management Section or the employers of the authors. We will promptly correct errors brought to our attention.
This newsletter was printed on recycled paper.
Copyright © 2007 Society of Actuaries.
All rights reserved.
Printed in the United States of America.
Articles Needed for Risk Management

Your help and participation is needed and welcomed. All articles will include a byline to give you full credit for your effort. If you would like to submit an article, please contact Ken Seng Tan, editor, at [email protected] or Ron Harasym, co-editor, at [email protected].
The next issue of Risk Management will be published:
Publication Date: December 2007
Submission Deadline: October 1, 2007
Preferred Format

In order to efficiently handle articles, please use the following format when submitting articles:
Please e-mail your articles as attachments in either MS Word (.doc) or Simple Text (.txt) files. We are able to convert most PC-compatible software packages. Headlines are typed upper and lower case. Please use a 10-point Times New Roman font for the body text. Carriage returns are put in only at the end of paragraphs. The right-hand margin is not justified.
If you must submit articles in another manner, please call Kathryn Wiener, (847) 706-3501, at the Society of Actuaries for help.
Please send an electronic copy of the article to:
Ken Seng Tan, ASA, CERA, Ph.D.
University of Waterloo
Waterloo, Ontario
Canada N2L 3G1
phone: (519) 888-4567 ext. 36688
e-mail: [email protected]
Ron Harasym, FSA, CERA, FCIA, MAAA
New York Life Insurance Company
51 Madison Avenue, 7th Floor
New York, NY 10010
phone: (212) 576-5345
e-mail: [email protected]