
Welcome to MCQMC 2016 at Stanford University

We are pleased to welcome you to Stanford University for the 12th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, MCQMC 2016. This conference series is the major event for researchers in the Monte Carlo and Quasi-Monte Carlo community. Our scientific committee identified ten extraordinary researchers to give plenary talks. The broader community of researchers submitted an astonishing variety of talks in minisymposia, special sessions and contributed sessions. We have a full schedule, and you will often find yourself wishing to be at more than one talk at a time.

We hope that you are able to take some time to explore our corner of the world. We are pleased to bring MCQMC back to the United States, back to California, and for the first time, to the San Francisco Bay Area. The conference takes place on the campus of Stanford University, which is celebrating its 125th anniversary this year. The main venue is the Li Ka Shing Center (LKSC) in the School of Medicine. From here you are a short distance from Pacific Ocean beaches, from the city of San Francisco and from the head offices of our local industry, commonly called Silicon Valley. If you extend your stay some days, you may see the wine country to the North, the Sierra mountains to the East, or the Monterey peninsula to the South.

We wish you a pleasant and productive week,

Art B. Owen, Peter W. Glynn, Wing H. Wong, Kay Giesecke
MCQMC 2016 Conference Organizers

1


Contents

Welcome 1

Contents 2

The MCQMC Conference Series 3

Scientific Committee 4

Acknowledgments 5

Practical Information 6

Schedule 9

Tutorials 24

Plenary Abstracts 28

Special Session Abstracts 64

Contributed Abstracts 103

List of Session Chairs 131

Speaker and Co-author Index 133

2


The MCQMC Conference Series

History

The MCQMC conference series is a biennial meeting focused on Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods in scientific computing. The conference typically attracts between 150 and 200 participants. This year we are just over 200. Its aim is to provide a forum where leading researchers and users can exchange information on the latest theoretical developments and important applications of these methods.

Recent conferences have attracted researchers in Markov chain Monte Carlo (MCMC). In a nutshell, MC methods study complex systems by simulations fed by computer-generated pseudorandom numbers. QMC methods replace these random numbers by more evenly distributed (carefully selected) numbers to improve their effectiveness. A large variety of special techniques are developed and used to make these methods more effective in terms of speed and accuracy. The conference focuses primarily on the mathematical study of these techniques, their implementation and adaptation for concrete applications, and their empirical assessment.
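As a minimal illustration of the contrast sketched above (our own example, not drawn from the booklet), the snippet below estimates the integral of x² over [0,1] (exact value 1/3) once with pseudorandom points and once with the base-2 van der Corput sequence, one standard low-discrepancy construction:

```python
# Sketch: plain Monte Carlo vs. a quasi-Monte Carlo estimate of
# the integral of f(x) = x^2 on [0,1], whose exact value is 1/3.
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence
    (digit reversal of the index about the radix point)."""
    points = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += (k % base) * f   # append next reversed digit
            k //= base
            f /= base
        points.append(x)
    return points

def estimate(points):
    """Equal-weight quadrature: average of f over the point set."""
    return sum(x * x for x in points) / len(points)

random.seed(0)
n = 1024
mc_err = abs(estimate([random.random() for _ in range(n)]) - 1 / 3)
qmc_err = abs(estimate(van_der_corput(n)) - 1 / 3)
print(f"MC error: {mc_err:.2e}, QMC error: {qmc_err:.2e}")
```

With 1024 points the van der Corput estimate is within about 5e-4 of the exact value, because the points stratify the interval far more evenly than independent draws; the pseudorandom estimate's typical error is roughly an order of magnitude larger, though any single random run can get lucky.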

This conference series was initiated by Harald Niederreiter. He co-chaired the first seven conferences. In 2006 Harald Niederreiter announced his wish to step down from the organizational role and a Steering Committee was formed to ensure and oversee the continuation of the conference series. The locations of the first 12 conferences are set out below.

Year Location

1994 Las Vegas, NV USA
1996 Salzburg, Austria
1998 Claremont, CA USA
2000 Hong Kong
2002 Singapore
2004 Juan-Les-Pins, France
2006 Ulm, Germany
2008 Montréal, Canada
2010 Warsaw, Poland
2012 Sydney, Australia
2014 KU Leuven, Belgium
2016 Stanford, CA USA
2018 ????, ????

Steering Committee

The MCQMC steering committee consists of:

Stefan Heinrich (Germany) (chair)
Fred J. Hickernell (USA)
Alexander Keller (Germany)
Frances Y. Kuo (Australia)
Pierre L'Ecuyer (Canada)
Art B. Owen (USA)
Friedrich Pillichshammer (Austria)

3


MCQMC 2016 Scientific Committee

The scientific committee members nominate plenary speakers, and are active in organizing special sessions and minisymposia as well as refereeing proceedings submissions. It consists of the local organizers, the MCQMC Steering Committee and the following researchers:

Dmitriy Bilyk (USA, Minnesota)

Nicolas Chopin (France, ENSAE)

Ronald Cools (Belgium, KU Leuven)

Josef Dick (Australia, UNSW)

Arnaud Doucet (UK, Oxford)

Henri Faure (France, Marseille)

Mike Giles (UK, Oxford)

Paul Glasserman (USA, Columbia)

Michael Gnewuch (Germany, Kiel)

Aicke Hinrichs (Austria, JKU Linz)

Christophe Hery (USA, Pixar)

Wenzel Jakob (Switzerland, ETH and Disney)

Dirk Kroese (Australia, U Queensland)

Erich Novak (Germany, Friedrich Schiller University, Jena)

Gareth Peters (UK, University College London)

Dirk Nuyens (Belgium, KU Leuven)

Aneta Karaivanova (Bulgaria, Bulgarian Academy of Sciences)

Faming Liang (USA, U Florida, Gainesville)

Gerhard Larcher (Austria, JKU Linz)

Christiane Lemieux (Canada, Waterloo)

Makoto Matsumoto (Japan, Hiroshima)

Thomas Mueller-Gronbach (Germany, Passau)

Harald Niederreiter (Austria, Austrian Academy of Sciences)

Klaus Ritter (Germany, Kaiserslautern)

Wolfgang Schmid (Austria, Salzburg)

Steve Scott (USA, Google Inc)

Xiaoqun Wang (China, Tsinghua)

Yazhen Wang (USA, Wisconsin)

Grzegorz Wasilkowski (USA, U Kentucky)

Dawn Woodard (USA, Uber)

Henryk Wozniakowski (USA, Columbia and Poland, Warsaw)

Qing Zhou (USA, UCLA)

4


Acknowledgments

The local organizing committee would like to acknowledge the many organizations and people who have supported or contributed to MCQMC 2016.

Sponsors

We are very grateful for the financial support from the following sponsors:
• Stanford Statistics Department,
• SAMSI,
• Google Inc.,
• Yi Zhou (PhD 1998) and Brice Rosenzweig,
• Intel,
• Two Sigma, and
• Uber.

Promotional

The following kindly helped to promote MCQMC 2016:
• SIAM,
• The Institute of Mathematical Statistics, and
• Xian's Og.

MCQMC Community

Previous organizers helped this conference by offering their advice and even LaTeX source files. Among them are Fred Hickernell, Pierre L'Ecuyer, Dirk Nuyens, Ronald Cools and Frances Kuo. The Scientific Committee and others too contributed talks and sessions and agreed to introduce speakers. Michael Giles and Ian Sloan reported some website issues very early on and that was super helpful. Some of the prose in this booklet describing the MCQMC conference has been handed down from meeting to meeting and we acknowledge their wording.

Stanford Statistics Department

We have had considerable help from the Statistics department here at Stanford. The following people have been instrumental:
• Ellen van Stone,
• Emily Lauderdale,
• Heather Murthy,
• Joanna Yu,
• Amir Sepehri,
• Keli Liu, and
• Zeyu Zheng (actually MS&E).

We are very grateful to Gunther Walther for his strong support when this project was being conceived.

Stanford Conference Services

It is a pleasure to acknowledge the hard work done on our behalf by Stanford Conference Services. They are tireless, cheerful and professional, and we could not have organized it without their help:
• Brigid Neff (team lead),
• Suzette Escobar,
• Meredith Noe,
• John Ventrella, and
• Dixee Kimball.

5


Practical Information

Wireless

Most of the practical information you need, gathered from all those emails we sent you, is online at mcqmc2016.stanford.edu. That website contains numerous links to Google maps of the Stanford campus and surrounding areas. So the primary practical information is about how to connect to wireless.

Stanford participates in Eduroam, so if your home institution has joined you should have access. Alternatively you can connect to one of Stanford's wireless networks, called Stanford Visitor. The access is free and you do not have to log in. You do have to accept their terms and conditions.

Venue

Our primary venue is the Li Ka Shing Center (LKSC) in the medical school. The lecture rooms we will use are on the first and second floors. Berg Hall A, B and C are also known as rooms LK 230, LK 240 and LK 250.

On Wednesday morning, the medical school needs the space we are using for plenary talks. So for that talk only, we will be in room 200 of the Hewlett Learning Center, about a five minute walk from LKSC.

Registration

The registration desk is on the second floor of LKSC. It will be staffed throughout the entire meeting, including Sunday afternoon during the tutorial sessions.

Hotel to venue and back

It is about a 30 minute walk to LKSC from downtown Palo Alto, where the closest hotels are. This may be a nice walk, but it could also be hot. We have chartered some shuttles from the Cardinal Hotel to LKSC each morning and back each afternoon. You do not have to be a hotel guest to use those shuttles.

The website also has a link to a bicycle rental shop on campus.

Finally, if you are in a car and have pre-purchased parking, information about how to use your parking pass is given on the conference website.

Tutorials

We have QMC tutorials on Sunday afternoon from2pm to 6pm.

Audiovisual

In case of any difficulty, there is an A/V technician in the building. The people at the registration desk can locate the technician for you. Here is a description of our A/V setup:
• Each LKSC meeting room has a projector and projection screen.
• The control panel is connected to a Mac. Presentations should be brought on a flash drive to use with the existing Mac. Your file name should be titled with the following information, "PresentationDay,LastName,FirstName" (e.g. Tuesday,Smith,Jane). Please don't bring a file called stanford.pdf!
• In an effort to reduce set-up time and avoid taking time away from any of the talks, speakers are strongly encouraged to use the existing Mac.

• Speakers whose presentations were created on a PC and who plan to use the existing Mac should make sure to use standard fonts. As a back-up, it's suggested to also bring a version in PDF format, with any slide animations or embedded video removed.
• Presenters bringing their own computer are required to bring along their own adapter. The venue has adapters only for Thunderbolt/DVI (Mac) and VGA (PC). The venue cannot accommodate HDMI at this time.
• In order to load and test the presentations, presenters should visit the podium of the room where they are presenting one half hour before the program begins for the day, or at any of the breaks prior to their scheduled talk. Also, you may ask your room moderator for assistance with loading your presentations, if needed.

Welcome reception

There will be a welcome reception on the LKSC lawn on Monday evening. It will include drinks and light food. Partners are welcome to join.

Plenary lectures

Most plenary lectures are held in Berg Hall, on the second floor of LKSC. The one exception is Wednesday morning's 9 AM lecture by Jose Blanchet in Hewlett 200. These lectures are in 60 minute time slots. The talks should last for 50 minutes to leave adequate time for discussion, questions and answers.

Regular lectures

The other lectures from minisymposia, special sessions and contributed talks all take place in 30 minute time slots. The talks should last 25 minutes. This allows time for questions, as well as speaker transitions. Many audience members will want to move between sessions. It is therefore essential to start and stop on time.

Coffee

The coffee breaks include light snacks. They will be served in Berg Hall of the LKSC. On Wednesday morning there will be a coffee service at the Hewlett building just ahead of the plenary talk there.

Lunch

Lunch is not provided. There are a number of cafeterias very close to LKSC and they know that we are coming. No single one of them is large enough to handle our conference, but if we disperse among them we should be fine. The conference takes place the week after summer quarter final exams, so there should be fewer students. The website has some maps and descriptions.

Dinner

There are lots of interesting restaurants on University Avenue in Palo Alto. The website has a map showing directions there. It also describes dining at Stanford Shopping Center or in Mountain View, and it gives a short list of local grocery stores.

Wednesday group photo

On Wednesday we finish a little earlier than usual. We will have a group photo on or near the steps of LKSC at about 5pm.

Reception and banquet

On Wednesday we will have a reception and banquet at the Arrillaga Alumni Center. The reception starts at 6pm. The banquet starts at 7pm. After the group photo, we have one hour to walk to Arrillaga. The conference web page contains maps for three routes that take you past different campus sights.

Your conference ID badge is your ticket to the banquet. Guests need a ticket to attend the banquet. If you paid for such a ticket when you registered, then you will get your ticket along with your ID badge from the desk.

There will be two surprises at the banquet. One of them is the announcement of the location of MCQMC 2018. What the other surprise will be is a surprise.


Steering Committee Meeting

The MCQMC Steering Committee will have a closed meeting on Thursday evening. If you are interested in organizing a future MCQMC conference, please approach any member of the committee prior to their meeting.

Emergency

In case of any life threatening emergency, call 911. The LKSC is adjacent to Stanford University Hospital. It is prudent to ensure that your insurance covers events here.

Conference Close

On the last day, Friday, there will be a brief presentation in the morning from the MCQMC 2018 organizers. In the afternoon there will be some brief closing remarks.

Conference Statistics

Registrants 222
Tutorials 3
Plenary Lectures 10
Talks 169
Special and Minisymposium 115
Contributed 54

Return to SFO

If you are flying home, departing from San Francisco International Airport, then the conference web page has a link to their ground transportation web site. It is also possible to get there on Uber.

Miscellaneous

If you have additional questions or need any special local information, please ask at the reception desk.

MCQMC 2016 Schedule

Sunday PM, August 14

130–500  Registration desk is open
200–300  Pierre L'Ecuyer, QMC Tutorial I: Introduction to (Randomized) Quasi-Monte Carlo Methods, p. 25
300–320  Break
320–420  Fred J. Hickernell, QMC Tutorial II: Error Analysis for Quasi-Monte Carlo Methods, p. 26
420–440  Break
440–450  Richard Smith: Introduction to SAMSI
450–550  Frances Y. Kuo, QMC Tutorial III: Application of QMC to PDEs with Random Coefficients – a Survey of Analysis and Implementation, p. 27
550–600  Ilse Ipsen: Vision for the 2017-18 SAMSI QMC Program

Monday AM, August 15

855–900   Opening Ceremony, Berg BC

900–1000  Plenary Lecture, Berg BC: Dirk Nuyens, Lattice Rules and Beyond, p. 29. Chair: Ronald Cools

1000–1025 Coffee

Parallel sessions:
Berg A, Special session: Hamiltonian Monte Carlo. Chair: Wing Wong
Berg B, Mini-symposium: Stochastic Computation and Complexity I. Chair: Thomas Mueller-Gronbach
Berg C, Mini-symposium: Multilevel Monte Carlo I. Chair: Raul Tempone
LK120, Special session: Experimental Design for Simulation Experiments. Chair: Simon Mak
LK101, Contributed session: QMC in Noncubical Spaces. Chair: Ronald Cools

1025–1055
Berg A: Michael Betancourt, Scalable Bayesian Inference with Hamiltonian Monte Carlo, p. 83
Berg B: Stefan Heinrich, Complexity of Parametric Stochastic Integration in Hölder Classes, p. 59
Berg C: Wei Fang, Adaptive Timestepping for SDEs with Non-Globally Lipschitz Drift, p. 47
LK120: Jeremy Staum, Multi-Level Monte Carlo for Metamodeling of Stochastic Simulations with Discontinuous Output, p. 79
LK101: Giancarlo Travaglini, Lp Results for the Discrepancy Associated to a Large Convex Body, p. 103

1055–1125
Berg A: Andria Dawson, Efficient Computation of Uncertainty in Spatio-Temporal Forest Composition Inferred from Fossil Pollen Records, p. 83
Berg B: Martin Hutzenthaler, A Coercivity-Type Condition for SPDEs with Additive White Noise, p. 59
Berg C: Michael Giles, Multilevel MC for Multi-Dimensional Reflected Brownian Diffusions, p. 47
LK120: Roshan Joseph Vengazhiyil, Simulating Probability Distributions Using Minimum Energy Designs, p. 79
LK101: Kinjal Basu, The Variance Lower Bound and Asymptotic Normality of the Scrambled Geometric Net Estimator, p. 104

1125–1155
Berg A: David Rubin, Confronting Supernova Cosmology's Statistical and Systematic Uncertainties with Hamiltonian Monte Carlo, p. 84
Berg B: André Herzwurm, Adaptive Approximation of the Minimum of Brownian Motion, p. 60
Berg C: Juho Häppölä, Mean Square Error Adaptive Euler–Maruyama Method with Applications in Multilevel Monte Carlo, p. 47
LK120: Peter Qian, Samurai Sudoku-Based Space-Filling Designs for Data Pooling, p. 80
LK101: Hozumi Morohosi, Generating Mixture Quasirandom Points with Geometric Consideration, p. 104

1155–115  Lunch

Monday early PM, August 15

115–215   Plenary Lecture, Berg BC: Michael I. Jordan, A Variational Perspective on Accelerated Methods in Optimization, p. 30. Chair: Lester Mackey
          Room separators close.

Parallel sessions:
Berg A, Mini-symposium: Multilevel Monte Carlo II. Chair: Michael Giles
Berg B, Mini-symposium: Stochastic Computation and Complexity II. Chair: Stefan Heinrich
Berg C, Mini-symposium: Graphics and Transport I. Chair: Alex Keller
LK120, Contributed session: The Gibbs Sampler. Chair: Steve Scott
LK101, Contributed session: Nuclear Science. Chair: Yazhen Wang

225–255
Berg A: Alexander Gilbert, The Multivariate Decomposition Method, p. 48
Berg B: Andreas Neuenkirch, The Order Barrier for Strong Approximation of Rough Volatility Models, p. 60
Berg C: Christophe Hery & Ryusuke Villemin, On MCQMC Issues in Production Rendering, p. 43
LK120: David Draper, Fast Bayesian Non-Parametric Inference, Prediction and Decision-Making at Big-Data Scale, p. 105
LK101: Ying Yang-Jun, Progress in Simulation of Stochastic Neutron Dynamics, p. 106

255–325
Berg A: Aretha Teckentrup, Quasi and Multilevel Monte Carlo Methods for Bayesian Inverse Problems, p. 48
Berg B: Mario Hefter, Strong Convergence Rates for Cox-Ingersoll-Ross Processes: Full Parameter Range, p. 60
Berg C: Leonid Pekelis, Data-Driven Light Scattering Models: An Application to Rendering Hair, p. 43
LK120: Cyril Chimisov, Adapting the Gibbs Sampler, p. 105
LK101: Danhua Shangguan, Detailed Comparison of Uniform Tally Density Algorithm and Uniform Fission Site Algorithm, p. 106

325–350   Coffee

Monday late PM, August 15

Parallel sessions:
Berg A, Special session: High Dimensional Numerical Integration. Chair: Markus Weimar
Berg B, Mini-symposium: Nonlinear Monte Carlo I. Chair: Denis Belomestny
Berg C, Special session: Preasymptotics. Chair: Tino Ullrich
LK120, Contributed session: Hamiltonian MCMC. Chair: Thomas Catanach
LK101, Contributed session: Finance. Chair: Adam Kolkiewicz

350–420
Berg A: Josef Dick, Quasi-Monte Carlo Methods and PDEs with Random Coefficients, p. 85
Berg B: Lukasz Szpruch, Multilevel Monte Carlo for McKean-Vlasov SDEs, p. 53
Berg C: Thomas Kühn, Preasymptotic Estimates for Approximation in Periodic Function Spaces, p. 93
LK120: Andrew Berger, A Markov Jump Process for More Efficient Hamiltonian Monte Carlo, p. 107
LK101: Tomáš Tichý, Strong Law of Large Numbers for Interval-Valued Gradual Random Variables, p. 108

420–450
Berg A: Ralph Kritzinger, A Reduced Fast Component-by-Component Construction of Lattice Point Sets with Small Weighted Star Discrepancy, p. 85
Berg B: Plamen Turkedjiev, Monte Carlo Method for Optimal Management in Energy Markets, p. 53
Berg C: Sebastian Mayer, Preasymptotics via Entropy, p. 94
LK120: Mayur Mudigonda, Hamiltonian Monte Carlo without Detailed Balance, p. 107
LK101: Mingbin Feng, Green Simulation for Repeated Experiments, p. 108

450–520
Berg A: Ian H. Sloan, High Dimensional Integration of Kinks and Jumps – Smoothing by Preintegration, p. 86
Berg B: Christian Bender, A Primal-Dual Algorithm for Backward SDEs, p. 54
Berg C: Dmitriy Bilyk, Point Distributions on the Sphere, Almost Isometric Embeddings, One-Bit Sensing, and Energy Optimization, p. 94

520–550
Berg A: Christian Irrgeher, On the Optimal Order of Integration in Hermite Spaces with Finite Smoothness, p. 86
Berg B: Julien Guyon, Particle Methods for the Smile Calibration of Cross-Dependent Volatility Models, p. 54
Berg C: Glenn Byrenheid, Non-Optimality of Rank-1 Lattice Sampling in Spaces of Mixed Smoothness, p. 94

600–730   Welcome reception, LKSC Lawn

NB: The only seminar rooms we have past 5pm are Berg A, B, C. Please do not linger in the others after the last sessions end. We may use the lobby area outside of Berg A, B, C past 5pm.

Tuesday AM, August 16

900–1000  Plenary Lecture, Berg BC: Christiane Lemieux, Dependence Sampling and Quasi-Random Numbers, p. 31. Chair: Pierre L'Ecuyer

1000–1025 Coffee

Parallel sessions:
Berg A, Special session: Approximate Bayesian Computation. Chair: Christian Robert
Berg B, Special session: Combinatorial Discrepancy. Chair: Kunal Talwar
Berg C, Special session: Quadrature Constructions. Chair: Kosuke Suzuki
LK120, Special session: Systemic Risks and Financial Networks. Chair: Paul Glasserman
LK203/4, Contributed session: Diffusions. Chair: Krzysztof Łatuszyński

1025–1055
Berg A: Christopher Drovandi, Bayesian Synthetic Likelihood, p. 67
Berg B: Aleksandr Nikolov, Combinatorial Discrepancy – Introduction, p. 71
Berg C: Aicke Hinrichs, Lp-Discrepancy and Beyond of Higher Order Digital Sequences, p. 75
LK120: Alex Shkolnik, Importance Sampling for Default Timing Models, p. 101
LK203/4: Thomas Catanach, Second Order Langevin Markov Chain Monte Carlo, p. 109

1055–1125
Berg A: Minh-Ngoc Tran, Synthetic Likelihood Variational Bayes, p. 67
Berg B: Shachar Lovett, Minimizing Discrepancy by Walking on the Edges, p. 71
Berg C: Takashi Goda, Antithetic Sampling for Quasi-Monte Carlo Integration Using Digital Nets, p. 75
LK120: Kyoung-Kuk Kim, Revisiting Eisenberg-Noe: A Dual Perspective, p. 101
LK203/4: Frank van der Meulen, MCMC for Estimation of Incompletely Discretely Observed Diffusions, p. 109

1125–1155
Berg A: Umberto Picchini, Inference via Bayesian Synthetic Likelihoods for a Mixed-Effects SDE Model of Tumor Growth and Response Treatment, p. 68
Berg B: Shashwat Garg, An Algorithm for Komlos Conjecture Matching Banaszczyk's Bound, p. 72
Berg C: Adrian Ebert, Successive Coordinate Search and Component-by-Component Construction of Rank-1 Lattice Rules in Weighted RKHS, p. 76
LK120: Wanmo Kang, How Informative is Eigenvector Centrality?, p. 102
LK203/4: Fred Daum, Optimal Diffusion for Stochastic Bayesian Particle Flow, p. 110

1155–115  Lunch

Tuesday early PM, August 16

115–215   Plenary Lecture, Berg BC: Nicolas Chopin, Sequential Quasi-Monte Carlo in Infinite Dimension, p. 32. Chair: Wing Wong
          Room separators close.

Parallel sessions:
Berg A, Mini-symposium: Multilevel Monte Carlo III. Chair: Michael Giles
Berg B, Mini-symposium: Stochastic Computation and Complexity III. Chair: Pawel Przybylowicz
Berg C, Mini-symposium: Graphics and Transport II. Chair: Alex Keller
LK120, Contributed session: Quadrature at non-QMC Points. Chair: Kinjal Basu
LK203/4, Contributed session: MCMC Output Analysis. Chair: Michael Betancourt

225–255
Berg A: Lukas Herrmann, QMC for Parametric, Elliptic PDEs, p. 49
Berg B: Sotirios Sabanis, A New Generation of Explicit Numerical Schemes for SDEs with Superlinear Coefficients, p. 61
Berg C: Peter Shirley, Monte Carlo for Light Transport Including Linear Polarization, p. 44
LK120: James M. Hyman, High Accuracy Algorithms for Integrating Multivariate Functions Defined at Scattered Data Points, p. 111
LK203/4: Lester Mackey, Measuring Sample Quality with Stein's Method, p. 112

255–325
Berg A: Pieterjan Robbe, A Multi-Index Quasi-Monte Carlo Algorithm for Lognormal Diffusion Problems, p. 49
Berg B: Larisa Yaroslavtseva, On Non-Polynomial Lower Error Bounds for Adaptive Strong Approximation of SDEs with Bounded C∞-Coefficients, p. 61
Berg C: Jerome Spanier, Towards Real Time Monte Carlo Simulations for Biomedicine, p. 44
LK120: Simon Mak, Support Points, p. 111
LK203/4: Dootika Vats, Multivariate Output Analysis for Markov Chain Monte Carlo, p. 112

325–350   Coffee

Tuesday late PM, August 16

Parallel sessions:
Berg A, Special session: Higher Order QMC. Chair: Takashi Goda
Berg B, Contributed session: Probability. Chair: Sourav Chatterjee
Berg C, Special session: Multi-Modal MC/MCMC. Chair: Faming Liang
LK120, Contributed session: Uncertainty Quantification. Chair: Peter Qian
LK203/4, Contributed session: Faster Gibbs Sampling. Chair: David Draper

350–420
Berg A: Takehito Yoshiki, Optimal Order Quadrature Error Bounds for Higher Order Digital Sequences, p. 87
Berg B: Dario Azzimonti, Estimating Orthant Probabilities of High Dimensional Gaussian Vectors with an Application to Set Estimation, p. 113
Berg C: Hyungsuk Tak, Love or Hate: a RAM for Exploring Multi-Modality, p. 91
LK120: Sergei Kucherenko, Sobol' Indices for Problems Defined in Non-Rectangular Domains, p. 115
LK203/4: Chaitanya Joshi, Using Low Discrepancy Sequences in Grid Based Bayesian Methods, p. 117

420–450
Berg A: Jens Oettershagen, On Higher Order Monte Carlo Integration, p. 87
Berg B: Mark Huber, Monte Carlo with User-Specified Relative Error, p. 113
Berg C: Babak Shahbaba, Wormhole Hamiltonian Monte Carlo, p. 91
LK120: Jonas Šukys, Uncertainty Quantification in Multi-Phase Cloud Cavitation Collapse Using Multi-Level Monte Carlo, p. 116
LK203/4: Alexander Terenin, Asynchronous Gibbs Sampling, p. 117

450–520
Berg A: Christopher Kacwin, Orthogonality of the Chebychev-Frolov Lattice and its Implementation, p. 88
Berg B: Karthyek Murthy, Tail Probabilities of Infinite Sums of Random Variables – Exact and Efficient Simulation, p. 113
Berg C: Pierre Jacob, Exploring with Wang-Landau Algorithms and Sequential Monte Carlo, p. 92

520–550
Berg A: Dong T. P. Nguyen, Integration of Functions in Weighted Korobov Spaces with (Super)-Exponential Rate of Convergence, p. 88
Berg B: Rémi Bardenet, Faster Monte Carlo with Determinantal Point Processes, p. 114
Berg C: Christian Robert, Folding Markov Chains, p. 92

600       Adjourn

Wednesday AM, August 17

900–1000  Plenary Lecture, Hewlett 200: Jose H. Blanchet, Efficient Monte Carlo Methods for Spatial Extremes, p. 33. Chair: Peter Glynn

1000–1005 Walk from Hewlett to LKSC
1005–1025 Coffee (at LKSC)

Parallel sessions:
Berg A, Mini-symposium: Graphics and Transport III. Chair: Christophe Hery
Berg B, Special session: Importance Sampling. Chair: Nicolas Chopin
Berg C, Special session: Quantum Computing. Chair: Christiane Lemieux
LK203/4, Special session: Financial Sensitivity Analysis. Chair: Kay Giesecke
LK205/6, Mini-symposium: Nonlinear Monte Carlo II. Chair: Lukasz Szpruch

1025–1055
Berg A: Alexander Keller, Learning Light Transport the Reinforced Way, p. 45
Berg B: Víctor Elvira, Generalized Multiple Importance Sampling, p. 65
Berg C: Yazhen Wang, Quantum Computing Based on Quantum Annealing and Monte Carlo Simulations, p. 97
LK203/4: Michael Fu, Estimating Quantile Sensitivities, p. 89
LK205/6: Emmanuel Gobet, Small Size Data-Driven Regression Monte-Carlo, p. 55

1055–1125
Berg A: Tzu-Mao Li, Hessian-Hamiltonian Monte Carlo Rendering, p. 45
Berg B: Sourav Chatterjee, The Sample Size Required in Importance Sampling, p. 65
Berg C: Eric R. Johnston, Quantum Supersampling, p. 97
LK203/4: Jong Mun Lee, Unbiased Estimators of the Greeks for General Diffusion Processes, p. 89
LK205/6: Denis Belomestny, Variance Reduction Technique for Nonlinear Monte Carlo Problems, p. 55

1125–1155
Berg A: Anton Kaplanyan, Choosing the Path Integral Domain for MCMC Integration, p. 45
Berg B: Krzysztof Łatuszyński, Continuous Time Importance Sampling for Jump Diffusions with Application to Maximum Likelihood Estimation, p. 66
Berg C: Faming Liang, An Imputation-Consistency Algorithm for High-Dimensional Missing Data Problems, p. 98
LK203/4: Nan Chen, Monte Carlo Applications in Stochastic Dynamic Programming, p. 90

1155–115  Lunch

Wednesday early PM, August 17

115–215   Plenary Lecture, Berg BC: Frances Y. Kuo, Hot New Directions for QMC Research in Step with Applications, p. 34. Chair: Fred Hickernell
          Room separators close.

Parallel sessions:
Berg A, Mini-symposium: QMC on the Sphere and Other Manifolds I. Chair: Josef Dick
Berg B, Mini-symposium: Stochastic Computation and Complexity IV. Chair: Thomas Mueller-Gronbach
Berg C, Contributed session: Statistical. Chair: Mac Hyman
LK203/4, Contributed session. Chair: Christian Bender

225–255
Berg A: Johann S. Brauchart, Spherical Lattices: Explicit Construction of Low-Discrepancy Point Sets on the 2-Sphere, p. 56
Berg B: Andreas Rößler, A Derivative-Free Milstein Scheme for SPDEs with Commutative Noise, p. 62
Berg C: Kun Yang, Density Estimation via Discrepancy, p. 118
LK203/4: Gunther Leobacher, QMC-Simulation of Piecewise Deterministic Markov Processes, p. 119

255–325
Berg A: Wöden Kusner, Configurations of Points with Respect to Discrepancy and Uniform Distribution, p. 56
Berg B: Paweł Przybyłowicz, Minimal Asymptotic Errors for Strong Global Approximation of SDEs Driven by Poisson and Wiener Processes, p. 62
Berg C: Antonio Dalessandro, Multivariate Concordance Measures and Copulas via Tensor Approximation of Generalized Correlated Diffusions, p. 118
LK203/4: Michaela Szölgyenyi, A Numerical Method for Multidimensional SDEs with Discontinuous Drift and Degenerate Diffusion, p. 119

325–350   Coffee

Wednesday late PM, August 17

Berg A    Contributed session: Discrepancy. Chair: Dmitriy Bilyk
Berg B    Mini-symposium: Multilevel Monte Carlo IV. Chair: Raul Tempone
Berg C    Contributed session: Processes for Finance. Chair: Osnat Stramer
LK203/4   Contributed session. Chair: Jon Cockayne

350–420
  Berg A    Mario Neumüller, Probabilistic Star Discrepancy Bounds for Digital Kronecker-Sequences (p. 120)
  Berg B    Chiheb Ben Hammouda, Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic Reaction Networks (p. 50)
  Berg C    Alex Kreinin, Forward and Backward Simulation of Correlated Poisson Processes (p. 121)
  LK203/4   Anna Zaremba, Non-Parametric Gaussian Process Vector Valued Time Series Models and Testing for Causality (p. 122)

420–450
  Berg A    Florian Puchhammer, Sharp General and Metric Bounds for the Star Discrepancy of Perturbed Halton-Kronecker-Sequences (p. 120)
  Berg B    Klaus Ritter, Multilevel Monte Carlo for SDEs with Random Bits (p. 50)
  Berg C    Adam Kolkiewicz, Efficient Simulation for Diffusion Processes Using Integrated Gaussian Bridges (p. 121)
  LK203/4   Hiroshi Haramoto, A Method to Compute an Appropriate Sample Size of the Two-Level Test of NIST Test Suite for Frequency and Binary Matrix Rank Test (p. 122)

500        Group Photo at LKSC
510–600    Walk to Arrillaga Alumni Center
600–700    Reception at Arrillaga Alumni Center
700–1030   Banquet at Arrillaga Alumni Center; Location of MCQMC 2018 Announced

Thursday AM, August 18

900–1000   Plenary Lecture (Berg BC)
           Christoph Aistleitner, One Hundred Years of Uniform Distribution Theory: Hermann Weyl's Seminal Paper of 1916 (p. 35). Chair: Josef Dick

1000–1025  Coffee

Berg A    Special session: Stochastic Maps. Chair: Youssef Marzouk
Berg B    Mini-symposium: Bayesian Inverse Problems I. Chair: Christoph Schwab
Berg C    Mini-symposium: Multilevel Monte Carlo V. Chair: Mike Giles
LK101     Special session: Exact Sampling of Markov Processes. Chair: Jose Blanchet
LK102     Contributed session: Markov chain Monte Carlo. Chair: Pierre Jacob

1025–1055
  Berg A    Xiao-Li Meng, Warp Bridge Sampling: the Next Generation (p. 99)
  Berg B    Robert N. Gantner, Multilevel Higher-Order Quasi-Monte Carlo for Bayesian Inverse Problems (p. 40)
  Berg C    Chang-Han Rhee, Unbiased MLMC for Rare Event Simulation of Equilibrium Distributions of Stochastic Recurrence Equations (p. 50)
  LK101     Peter Glynn, Randomized MLMC for Markov Chains (p. 77)
  LK102     James Johndrow, Approximations of Markov Chains and Bayesian Inference (p. 123)

1055–1125
  Berg A    Alessio Spantini, Measure Transport, Variational Inference, and Low-Dimensional Maps (p. 100)
  Berg B    Viet Ha Hoang, Multilevel Markov Chain Monte Carlo Methods for Bayesian Inversion of Partial Differential Equations (p. 41)
  Berg C    Raul Tempone, Multi-index Monte Carlo (p. 51)
  LK101     Stephen Connor, "Omnithermal" Perfect Simulation for M/G/c Queues (p. 77)
  LK102     Aaditya Ramdas, Function Mixing Times and Concentration away from Equilibrium (p. 123)

1125–1155
  Berg A    Matthias Morzfeld, What the Collapse of the Ensemble Kalman Filter Tells us About Localization of Particle Filters (p. 100)
  Berg B    Christoph Schwab, Numerical Analysis of Posterior Concentration in Multilevel QMC Estimators for Bayesian Inverse Problems (p. 41)
  Berg C    Matti Vihola, Unbiased Estimators and Multilevel Monte Carlo (p. 51)
  LK101     Sandeep Juneja, Exact Sampling and Large Deviations for Spatial Birth and Death Processes (p. 78)
  LK102     Christian Davey, BAIS+L: An Adaptive MCMC Approach to Sampling from Probability Distributions with Many Modes (p. 123)

1155–115   Lunch

Thursday early PM, August 18

115–215    Plenary Lecture (Berg BC)
           Andrew M. Stuart, Hierarchical Bayesian Methods for the Solution of Inverse Problems (p. 36). Chair: Frances Kuo

Room separators close.

Berg A    Mini-symposium: Multilevel Monte Carlo VI. Chair: Michael Giles
Berg B    Mini-symposium: Stochastic Computation and Complexity V. Chair: Stefan Heinrich
Berg C    Mini-symposium: Graphics and Transport IV. Chair: Christophe Hery
LK101     Contributed session: Stochastic Volatility. Chair: Shin Harase

225–255
  Berg A    Mariana Olvera-Cravioto, The Population Dynamics Algorithm (p. 52)
  Berg B    Paweł Morkisz, Optimal Pointwise Solution of Stochastic Differential Equations in Presence of Informational Noise (p. 63)
  Berg C    Toshiya Hachisuka, Rise of the Machines in Monte Carlo Integration for Rendering (p. 46)
  LK101     Osnat Stramer, Bayesian Inference for Heston-STAR Models (p. 124)

255–325
  Berg A    Kody J. H. Law, Multilevel Sequential Monte Carlo for Normalizing Constants (p. 52)
  Berg B    Thomas Mueller-Gronbach, General Multilevel Adaptations for Stochastic Approximation Algorithms (p. 63)
  Berg C    Jennifer Nguyen, Perturbation Monte Carlo-Based Methods for Characterization of Micro-Architectural Changes in Tissue (p. 46)
  LK101     Jan Pospíšil, Novel Approach to Calibration of Stochastic Volatility Models using Quasi Evolutionary Algorithm with Local Search Optimization (p. 124)

325–350    Coffee

Thursday late PM, August 18

Berg A    Special session: Construction and Error Analysis for Quadrature Rules. Chair: Takehito Yoshiki
Berg B    Special session: Probabilistic Numerics. Chair: Arnaud Doucet
Berg C    Mini-symposium: QMC on the Sphere and Other Manifolds II. Chair: Johann Brauchart
LK203/4   Contributed session: Medical. Chair: Giray Okten
LK205/6   Contributed session: Interest Rates. Chair: Kay Giesecke

350–420
  Berg A    Kosuke Suzuki, Optimal Order Quasi-Monte Carlo Integration in Sobolev Spaces of Arbitrary Smoothness (p. 73)
  Berg B    Mark Girolami, Probabilistic Numerical Computation: A New Concept? (p. 95)
  Berg C    Masatake Hirao, QMC Designs and Determinantal Point Processes (p. 57)
  LK203/4   Evgenia Babushkina, Adaptive Multi-Level Monte-Carlo Method for Stochastic Variational Inequalities (p. 125)
  LK205/6   David Mandel, Robustness of Short Rate Models (p. 126)

420–450
  Berg A    Ronald Cools, Construction of Lattice Rules: from K-Optimal Lattices to Extremal Lattices (p. 73)
  Berg B    Francois-Xavier Briol, Probabilistic Integration: A Role for Statisticians in Numerical Analysis? (p. 95)
  Berg C    Yu Guang Wang, Fast Framelet Transforms on Manifolds (p. 57)
  LK203/4   Serdar Cellat, Enhancing Morphological Categorization via Monte Carlo Methods (p. 125)
  LK205/6   Dorota Toczydlowska, Dynamic State Space Panel Regression Models for Yield Curves (p. 126)

450–520
  Berg A    Gowri Suryanarayana, Rank 1 Lattice Rules for Integrands with Permutation Invariant Inputs (p. 74)
  Berg B    Chris Oates, Probabilistic Integration and Intractable Distributions (p. 96)
  Berg C    Giacomo Gigante, Diameter Bounded Equal Measure Partitions of Compact Manifolds (p. 58)

520–550
  Berg A    David Krieg, A Universal Algorithm for Multivariate Integration (p. 74)
  Berg B    Jon Cockayne, Probabilistic Meshless Methods for Partial Differential Equations and Bayesian Inverse Problems (p. 96)
  Berg C    Alexander Reznikov, Covering Radii of Various Point Configurations Distributed over the Unit Sphere (p. 58)

600        Adjourn

Friday AM, August 19

900–1000   Plenary Lecture (Berg BC)
           Arnaud Doucet, On Pseudo-Marginal Methods for Bayesian Inference in Latent Variable Models (p. 37). Chair: TBA

1000–1005  Presentation on MCQMC 2018
1005–1025  Coffee

Berg A    Mini-symposium: Bayesian Inverse Problems II. Chair: Andrew Stuart
Berg B    Special session: Intractable Normalizing Constants. Chair: Krzysztof Latuszynski
Berg C    Special session: Function Space Embeddings for High Dimensional Problems. Chair: Aicke Hinrichs
LK101     Contributed session: Rare Event Importance Sampling. Chair: Bruno Tuffin

1025–1055
  Berg A    Tan Bui-Thanh, Particle-based Approximate Monte Carlo Approaches for Large-Scale Bayesian Inverse Problems (p. 42)
  Berg B    Hani Doss, An MCMC Approach to Empirical Bayes Inference and Bayesian Sensitivity Analysis via Empirical Processes (p. 69)
  Berg C    Tino Ullrich, Sparse Approximation with Respect to the Faber-Schauder System (p. 81)
  LK101     Zdravko Botev, Reliability Estimation for Networks with Minimal Flow Demand and Random Link Capacities (p. 127)

1055–1125
  Berg A    Youssef Marzouk, Accelerating Metropolis Algorithms via Measure Transport (p. 42)
  Berg B    Ick Hoon Jin, A Bayesian Hierarchical Spatial Model for Dental Caries Assessment Using Non-Gaussian Markov Random Fields (p. 69)
  Berg C    Michael Gnewuch, High-Dimensional Problems: On the Universality of Weighted Anchored and ANOVA Spaces (p. 82)
  LK101     Ernest K. Ryu, Convex Optimization for Monte Carlo: Stochastic Optimization for Importance Sampling (p. 127)

1125–1155
  Berg A    Oliver Ernst, Analysis of the Ensemble and Polynomial Chaos Kalman Filters in Bayesian Inverse Problems (p. 42)
  Berg B    Qianyun Li, A Latent Potts Model for Cancer Pathological Data Analysis (p. 70)
  Berg C    Peter Kritzer, Integration in Weighted Anchored and ANOVA Spaces (p. 82)
  LK101     Paul Russell, Parallel Adaptive Importance Sampling (p. 127)

1155–115   Lunch

Friday early PM, August 19

115–215    Plenary Lecture (Berg BC)
           Peter Frazier, Bayesian Optimization in the Tech Sector: Experiences at Uber, Yelp, and SigOpt (p. 38). Chair: Dawn Woodard

215–220    MCQMC 2016 Closing Remarks

Room separators close.

Berg A    Contributed session: Rank One Lattices. Chair: Dirk Nuyens
Berg B    Contributed session: QMC for Finance. Chair: Alex Kreinin
Berg C    Contributed session. Chair: Ian Sloan

230–300
  Berg A    Toni Volkmer, Sparse High-Dimensional FFT Based on Rank-1 Lattice Sampling (p. 128)
  Berg B    Lluís Antoni Jiménez Rugama, Adaptive Quasi-Monte Carlo Where Each Integrand Value Depends on All Data Sites: the American Option Case (p. 129)
  Berg C    Art Owen, Permutation p-value Approximation via Generalized Stolarsky Invariance (p. 130)

300–330
  Berg A    Helene Laimer, On Combined Component-by-Component Constructions of Lattice Point Sets (p. 128)
  Berg B    Shin Harase, A Comparison Study of Sobol' Sequences in Financial Derivatives (p. 129)
  Berg C    Michael Feischl, Fast Random Field Generation with H-Matrices (p. 130)

330        Adjourn until 2018
330–430    Light refreshments for anybody who chooses to linger before heading home


Tutorials

At this MCQMC we have a series of tutorials on quasi-Monte Carlo (QMC) methods to introduce QMC to attendees who are new to it. Three members of the MCQMC steering committee are giving these tutorials.

Pierre L'Ecuyer and Fred Hickernell will describe the basics of QMC, but we can also expect some of their personal views on important future directions.

The Statistical and Applied Mathematical Sciences Institute (SAMSI) is planning a year on quasi-Monte Carlo and probabilistic numerics for 2017/18. We will hear from Richard Smith and Ilse Ipsen of SAMSI in the third session. That is also where Frances Kuo will speak on her work applying QMC methods to PDEs with random coefficients.


Sunday August 14. 200–300, Berg Hall

QMC Tutorial I: Introduction to (Randomized) Quasi-Monte Carlo Methods

Pierre L'Ecuyer
Université de Montréal

Monte Carlo (MC) methods are ubiquitous in many areas of science, engineering, management, entertainment, etc. MC is commonly used to estimate mathematical expectations viewed as integrals over the unit hypercube [0,1)^s in s dimensions, by averaging samples of the integrand taken at n independent random points in [0,1)^s. Quasi-Monte Carlo (QMC) replaces these points by a deterministic set of n points that covers the space more evenly, to reduce the integration error. Randomized QMC (RQMC) randomizes the n QMC points in a way that they retain their high uniformity while each point has a uniform distribution over [0,1)^s. This provides an unbiased estimator which, under appropriate conditions, has much lower variance than with MC. Variants of QMC and RQMC have also been designed for the simulation of Markov chains, for function approximation and optimization, for solving partial differential equations, etc. Highly successful applications are found in computational finance, statistics, physics, and computer graphics, for example. Huge variance reductions occur mostly when the integrand can be well approximated by a sum of low-dimensional functions, often after a clever (problem-dependent) multivariate change of variable.
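The randomly shifted lattice rule is one standard way to realize the RQMC idea described above. A minimal Python sketch follows; the generating vector z below is an arbitrary illustrative choice, not an optimized one.

```python
import numpy as np

def shifted_lattice_estimate(f, n, z, rng):
    """Randomly shifted rank-1 lattice estimate of the integral of f over [0,1)^s."""
    z = np.asarray(z)
    points = (np.outer(np.arange(n), z) / n) % 1.0    # deterministic QMC point set
    shift = rng.uniform(size=len(z))                  # one uniform random shift
    return float(np.mean(f((points + shift) % 1.0)))  # unbiased for the integral

# Smooth test integrand on [0,1)^4 whose exact integral is 1.
def f(x):
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

rng = np.random.default_rng(1)
est = shifted_lattice_estimate(f, n=1024, z=[1, 433, 229, 615], rng=rng)
```

Averaging several independent shifts gives both an unbiased estimate and an empirical measure of its error.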

In this tutorial, we summarize the main ideas and some basic results on QMC and RQMC methods, discuss their practical aspects, and give several examples and numerical illustrations. Key issues to be discussed include: How can good QMC point sets and sequences be constructed and randomized, and how should we measure their uniformity? How can these methods work for high-dimensional integrals? Can they work in the context of simulating Markov chains over a large number of steps? Can they be used to estimate an entire distribution instead of just an expectation? How can we extend point sets on the fly by adding new points or new coordinates? What about computing confidence intervals?

This tutorial will emphasize ideas, intuition, and examples. A follow-up tutorial by F. Hickernell will go more deeply into the theory.


Sunday August 14. 320–420, Berg Hall

QMC Tutorial II: Error Analysis for Quasi-Monte Carlo Methods

Fred J. Hickernell
Illinois Institute of Technology

Quasi-Monte Carlo methods can provide accurate approximations to high dimensional integrals, µ = ∫_X f(x) ν(dx), in terms of (weighted) sample means of the integrand values at well chosen sites, µ̂ = Σ_{i=1}^n w_i f(x_i). Such integrals arise in a variety of applications, including finance, statistical physics, sensitivity analysis, and Bayesian inference. The cubature error, |µ − µ̂|, depends on properties of the integrand f, and how well the sampling measure, ν̂ = Σ_{i=1}^n w_i δ_{x_i}, approximates the probability measure ν.

This tutorial highlights some of the major approaches to analyzing and reducing the cubature error that the conference presentations will develop in great detail. There are deterministic, randomized, and Bayesian ways of viewing cubature error, each making different assumptions about the integrand, f, and the sampling measure, ν̂. When the integrands lie in a Hilbert space or can be modeled as instances of a Gaussian process, the error analysis is particularly elegant. Some strategies for increasing efficiency are described. We also show how the dimension of the integration domain may affect the error analysis. In some cases this leads to a curse of dimensionality, while in other cases the computational cost does not explode as the dimension becomes arbitrarily large. Finally, we discuss progress in developing data-based error bounds, which can be used to determine the sample size, n, required to meet a desired error tolerance.
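To make these quantities concrete, the sketch below (with a one-dimensional test integrand of our own choosing, not one from the talk) computes the cubature error of an equal-weight rule, once with random sites and once with evenly spread sites:

```python
import numpy as np

# Test integrand on [0,1] with known integral mu = 1/3.
f = lambda x: x ** 2
mu = 1.0 / 3.0
n = 256

# Equal-weight cubature mu_hat = (1/n) * sum_i f(x_i) at two choices of sites.
rng = np.random.default_rng(0)
mu_hat_mc = float(f(rng.random(n)).mean())               # iid uniform sites
mu_hat_mid = float(f((np.arange(n) + 0.5) / n).mean())   # evenly spread midpoint sites

err_mc = abs(mu - mu_hat_mc)    # cubature error for random sites
err_mid = abs(mu - mu_hat_mid)  # cubature error for evenly spread sites
```

For this smooth integrand the evenly spread sites give a far smaller error, illustrating how the quality of the sampling measure drives the cubature error.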

This talk builds upon the preceding tutorial by Pierre L'Ecuyer, which describes how quasi-Monte Carlo point sets are constructed and employed in a variety of situations. Afterwards, Frances Kuo provides a tutorial on quasi-Monte Carlo methods for solving partial differential equations with random coefficients.


Sunday August 14. 450–550, Berg Hall

QMC Tutorial III: Application of QMC to PDEs with Random Coefficients – a Survey of Analysis and Implementation

Frances Y. Kuo
UNSW Australia

Many physical, biological or geological models involve spatially varying input data which may be subject to uncertainty. This induces a corresponding uncertainty in the outputs of the model and in any physical quantities of interest which may be derived from these outputs. A common way to deal with these uncertainties is by considering the input data to be a random field, in which case the derived quantity of interest will in general also be a random variable or a random field. The computational goal is usually to find the expected value, higher order moments or other statistics of these derived quantities. A prime example is the flow of water through a disordered porous medium.

In this tutorial I will provide a survey of recent research efforts on the application of quasi-Monte Carlo (QMC) methods to elliptic partial differential equations (PDEs) with random diffusion coefficients. As motivated above, such PDE problems occur in the area of uncertainty quantification. There is a huge body of literature on this topic using a variety of methods. QMC methods are relatively new to this application area. The aim of this tutorial is twofold:

• to introduce this interesting new application area to QMC experts, outlining the remaining theoretical difficulties and encouraging more QMC research in this direction; and

• to demonstrate to PDE analysts and practitioners how to tap into the contemporary QMC theory for the error analysis and how to apply the software to their problems.

The tutorial will be loosely based on my survey with Dirk Nuyens from KU Leuven. The survey and the accompanying software package are available from

http://people.cs.kuleuven.be/~dirk.nuyens/qmc4pde/

The survey covers my recent joint works with Ivan Graham and Robert Scheichl from Bath, Dirk Nuyens from KU Leuven, Christoph Schwab from ETH Zurich, Elizabeth Ullmann from TU Munich, and Josef Dick, Thong Le Gia, James Nichols and Ian Sloan from UNSW Australia.
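A toy version of the model problem above: a one-dimensional elliptic equation −(a(x)u′)′ = 1 with u(0) = u(1) = 0, a random log-coefficient, and the expected value of a quantity of interest estimated by averaging over sampled parameters. The two-term coefficient expansion and the plain random parameter sampling are illustrative simplifications of our own, not taken from the survey.

```python
import numpy as np

def solve_pde(y, m=64):
    """Finite-difference solve of -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0."""
    h = 1.0 / m
    xm = (np.arange(m) + 0.5) * h                         # cell midpoints
    # Illustrative two-term random log-coefficient, parameters y in [-1,1]^2.
    a = np.exp(y[0] * np.sin(np.pi * xm) + 0.5 * y[1] * np.cos(np.pi * xm))
    A = np.zeros((m - 1, m - 1))                          # stiffness matrix, interior nodes
    for i in range(m - 1):
        A[i, i] = (a[i] + a[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a[i] / h**2
        if i < m - 2:
            A[i, i + 1] = -a[i + 1] / h**2
    u = np.linalg.solve(A, np.ones(m - 1))
    return float(h * u.sum())                             # quantity of interest: integral of u

# Expected quantity of interest, averaged over sampled coefficient parameters.
rng = np.random.default_rng(0)
qoi = float(np.mean([solve_pde(rng.uniform(-1.0, 1.0, 2)) for _ in range(64)]))
```

In the methods surveyed, the uniform random parameter samples above would be replaced by a QMC point set, which is where the convergence gains come from.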


Plenary Lectures

Here is our roster of plenary lectures. Their abstracts are on the following pages. Note that Jose Blanchet's Wednesday talk is in Hewlett 200. The others are in Berg Hall at LKSC. The AM plenary lectures are at 900 and the PM plenary lectures are at 115.

Mon AM    D. Nuyens, Lattice Rules and Beyond
Mon PM    M. Jordan, A Variational Perspective on Accelerated Methods in Optimization
Tues AM   C. Lemieux, Dependence Sampling and Quasi-Random Numbers
Tues PM   N. Chopin, Sequential Quasi-Monte Carlo in Infinite Dimension
Weds AM   J. Blanchet, Efficient Monte Carlo Methods for Spatial Extremes
Weds PM   F. Kuo, Hot New Directions for QMC Research in Step with Applications
Thurs AM  C. Aistleitner, One Hundred Years of Uniform Distribution Theory: Hermann Weyl's Seminal Paper of 1916
Thurs PM  A. Stuart, Hierarchical Bayesian Methods for the Solution of Inverse Problems
Fri AM    A. Doucet, On Pseudo-Marginal Methods for Bayesian Inference in Latent Variable Models
Fri PM    P. Frazier, Bayesian Optimization in the Tech Sector: Experiences at Uber, Yelp and SigOpt


Monday August 15. 900–1000, Berg BC

Lattice Rules and Beyond

Dirk Nuyens
KU Leuven, Belgium

In this talk I will show the beautiful simplicity of approximating high-dimensional integrals using lattice rules. In current engineering applications it is relatively easy to come up with very high and ultra high dimensional integrals, which might be expectations, that depend on hundreds, thousands, or even an infinite number of variables. As the minimal non-trivial tensor product rule, constructed from a 2-point rule, in 100 dimensions already uses 2^100 = 1267650600228229401496703205376 ≈ 10^30 function evaluations (which would take 10^24 seconds at 1 function evaluation per microsecond, and we remember the age of the universe to be only approximately 10^17 seconds), we all know why to use Monte Carlo or quasi-Monte Carlo methods, where we can freely choose the number of function evaluations independent of the number of dimensions.

While plain vanilla Monte Carlo methods are very robust and easy to apply, in the case of quasi-Monte Carlo methods a lot more details can pop up. I will first introduce the most elegant (in my biased opinion, which I will defend) of all quasi-Monte Carlo methods: the rank-1 lattice rule. This method is defined by one single integer generating vector together with the total number of points in the rule. Approximating a multivariate integral by this method can be done in one line of Matlab code! As one is often interested in successive approximations I will next introduce a rank-1 lattice sequence. Maybe surprisingly, also here, one simple line of Matlab code is sufficient to calculate a segment of a lattice sequence (with a special property).
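The one-line construction carries over directly to other array languages; here it is in Python with numpy, as a stand-in for the Matlab one-liner mentioned above, with toy values of n and z:

```python
import numpy as np

n, z = 8, np.array([1, 3])   # illustrative point count and generating vector
# The whole rank-1 lattice point set in one line: x_i = (i * z / n) mod 1.
points = (np.outer(np.arange(n), z) / n) % 1.0
```

Each row of `points` is one lattice point in the unit cube; the quality of the rule rests entirely on choosing z well for the given n.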

While for Monte Carlo integration the usual assumption is L2 integrability, leading to an error bound of O(N^{-1/2}), for quasi-Monte Carlo methods we generally work with more involved function space settings, leading to error bounds of the form O(N^{-α}) with α ≥ 1/2. I will show the standard function space settings and dive into the weighted function spaces which are being used in current theory to be able to make the error bounds independent of the number of dimensions. We will gloss over the Korobov space of periodic smooth functions, the unanchored Sobolev space with first mixed derivatives, and the cosine space which is a special space of smooth non-periodic functions. All of these are spaces of so-called mixed dominating smoothness, which in terms of mixed derivatives would mean that ∂^{k_1+⋯+k_s} f/(∂x_1^{k_1} ⋯ ∂x_s^{k_s}) exists for all (k_1, …, k_s) ≤ (α, …, α). For non-periodic function spaces lattice rules and sequences can only reach O(N^{-1}) convergence.

To arrive at higher-order convergence of O(N^{-α}) for non-periodic functions we dive into higher-order Sobolev spaces (of mixed dominating smoothness) in combination with digital nets, but here disguised as a lattice rule, namely polynomial lattice rules. Here, all arithmetic is over a finite field (we will choose the one of order 2) and our lattice rules now take the form of a lattice of polynomials which we map back to points in the unit cube. Higher order can now be obtained by means of the interlacing method from Josef Dick. Although more complicated, again, given the generating matrices of the digital net (as a bunch of integers), and two lines for a magical de Bruijn sequence, we can approximate an integral by such a digital sequence with a single line of (complicated) Matlab code.

While Matlab/Octave one-liners might be fun, and somehow instructive, I will also point you to my online resources where code is available to (a) construct good lattice rules, lattice sequences and polynomial lattice rules (for product, order-dependent, product-and-order-dependent aka POD, and smoothness-driven product-and-order-dependent aka SPOD) weighted spaces, and (b) generate points given the generating vector or matrices. Armed with this software you will be able to attack your own problems in no time.

This talk is based upon the results of collaborations with many colleagues.


Monday August 15. 115–215, Berg BC

A Variational Perspective on Accelerated Methods in Optimization

Michael I. Jordan
University of California, Berkeley

Accelerated gradient methods play a central role in optimization, achieving optimal rates in many settings. While many generalizations and extensions of Nesterov's original acceleration method have been proposed, it is not yet clear what is the natural scope of the acceleration concept. In this paper, we study accelerated methods from a continuous-time perspective. We show that there is a Lagrangian functional that we call the Bregman Lagrangian which generates a large class of accelerated methods in continuous time, including (but not limited to) accelerated gradient descent, its non-Euclidean extension, and accelerated higher-order gradient methods. We show that the continuous-time limit of all of these methods corresponds to traveling the same curve in spacetime at different speeds. From this perspective, Nesterov's technique and many of its generalizations can be viewed as a systematic way to go from the continuous-time curves generated by the Bregman Lagrangian to a family of discrete-time accelerated algorithms.
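One concrete member of the family of accelerated methods discussed above is Nesterov's accelerated gradient descent. A minimal sketch on a convex quadratic, using the standard textbook step size and momentum schedule (not taken from the talk):

```python
import numpy as np

def nesterov_agd(grad, x0, step, iters):
    """Nesterov's accelerated gradient descent with the classical t_k momentum schedule."""
    x, y, t = x0.astype(float), x0.astype(float), 1.0
    for _ in range(iters):
        x_new = y - step * grad(y)                      # gradient step at the look-ahead point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Minimize f(x) = 0.5 * x^T A x; step size 0.09 < 1/L with L = 10 (largest eigenvalue).
A = np.array([[2.0, 0.0], [0.0, 10.0]])
x_min = nesterov_agd(lambda x: A @ x, np.array([5.0, 5.0]), step=0.09, iters=200)
```

The discrete momentum schedule here is exactly the kind of scheme that the continuous-time Bregman Lagrangian viewpoint seeks to explain.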

Joint work with Andre Wibisono and Ashia Wilson.


Tuesday August 16. 900–1000, Berg BC

Dependence Sampling and Quasi-Random Numbers

Christiane Lemieux
University of Waterloo, Canada

In this talk we will discuss two topics related to the combination of dependence sampling with quasi-random numbers. In the first part of the talk, we will present some recent results where quasi-random numbers are used to sample from copula models, which are a popular way to model dependence in several application areas including risk management and insurance. Both theoretical and numerical results will be presented, and ongoing work on the topic of successfully combining this approach with importance sampling will be discussed as well. In the second part of the talk, we will present a new approach to study randomized quasi-Monte Carlo methods that makes use of dependence concepts to analyze the variance of the corresponding estimators. A generalization of Hoeffding's lemma will be presented, along with a covariance decomposition result that makes use of this lemma, and allows for the derivation of conditions under which this variance is guaranteed to be smaller than for the Monte Carlo method. The specific case of scrambled nets and their variance will also be discussed.
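One simple way to combine the two ingredients discussed in the first part is to drive a copula's conditional-inverse sampler with a low-discrepancy sequence. The Clayton copula and Halton sequence below are stand-in choices for illustration; the talk's own models and constructions may differ.

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.empty(n)
    for i in range(1, n + 1):
        f, k, x = 1.0, i, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i - 1] = x
    return seq

def clayton_sample(n, theta):
    """Clayton copula samples by conditional inversion of a 2-D Halton sequence."""
    u = halton(n, 2)   # first coordinate, uniform on (0,1)
    w = halton(n, 3)   # auxiliary uniforms driving the conditional inverse
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

u, v = clayton_sample(512, theta=2.0)
```

Both marginals stay uniform while the low-discrepancy structure of the driving sequence carries over to the dependent pairs (u, v).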

Joint work with Mathieu Cambou (Ecole Polytechnique Fédérale de Lausanne, Switzerland), Marius Hofert (U. Waterloo, Canada) and Yoshihiro Taniguchi (U. Waterloo, Canada).


Tuesday August 16. 115–215, Berg BC

Sequential Quasi-Monte Carlo in Infinite Dimension

Nicolas Chopin
ENSAE, France

The objective of this talk is twofold. First, I would like to present SQMC (Sequential quasi-Monte Carlo), a class of algorithms that merges particle filtering and QMC. Contrary to previous presentations, I will not assume prior knowledge from the audience of particle filtering, state-space modelling and Feynman-Kac representations. I will thus take the time to introduce these notions and the motivation to perform particle filtering.

Second, I would like to discuss some recent extensions and applications of SQMC, in particular to partly observed diffusion models, which are infinite-dimensional. QMC techniques, and particularly SQMC, tend to suffer from a curse of dimensionality: their performance gain relative to Monte Carlo tends to vanish for large-dimensional problems. However, by exploiting well-known properties of partly observed diffusion models, we are able to implement SQMC so that it significantly outperforms standard particle filtering.

Joint work with Mathieu Gerber (Harvard University).
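For readers new to particle filtering, the starting point of the talk, here is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model of our own choosing, using plain Monte Carlo resampling rather than the QMC machinery of SQMC:

```python
import numpy as np

def bootstrap_filter(ys, n_particles, rng):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + N(0,1)."""
    x = rng.normal(size=n_particles)                      # particles from the prior
    means = []
    for y in ys:
        x = 0.9 * x + rng.normal(size=n_particles)        # propagate through the dynamics
        logw = -0.5 * (y - x) ** 2                        # Gaussian observation log-weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))                # filtering mean estimate
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # multinomial resampling
    return means

rng = np.random.default_rng(0)
est = bootstrap_filter([1.0, 1.2, 0.8, 1.1], n_particles=2000, rng=rng)
```

SQMC replaces the iid propagation and resampling draws with carefully ordered quasi-random points, which is where its accuracy gains come from.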


Wednesday August 17. 900–1000, Hewlett 200

Efficient Monte Carlo Methods for Spatial Extremes

Jose H. Blanchet
Columbia University, USA

The main result in univariate EVT says that if the maximum, Z_n, of n independent and identically distributed (iid) observations can be approximated by a linear model of the form a_n M + b_n (where a_n, b_n are deterministic sequences), then M must be a max-stable distribution (having a simple parametric density). The term max-stable is justified by the fact that if M_1, …, M_n are iid copies of M, then, in distribution, M = (max_{i=1,…,n} M_i − a_n)/b_n.

The linear model of extrapolation postulated by EVT turns out to be applicable to virtually all of the parametric univariate distributions which arise in applied statistics (e.g. Gaussian, exponential, gamma, Pareto, log-normal, etc.). Consequently, EVT provides a flexible and, yet, parsimonious way of extrapolating univariate extreme statistics.

Now, many applications, including modeling extreme weather events, naturally call for extrapolation techniques of extremes with spatial dependence. A natural starting point is to build models with max-stable marginal distributions. Therefore it is natural to consider random fields, {M(t) : t ∈ T} (T may represent a geographical region), satisfying the max-stable property by definition. That is, if {M_i(·)}_{i=1}^n are iid copies of M(·) then there is a sequence of functions a_n(t), b_n(t) satisfying the equality in distribution M(·) = max_{i=1,…,n} (M_i(·) − a_n(·))/b_n(·). It turns out that if M(·) is a max-stable random field then (after applying a simple transformation) M(·) must admit a representation of the form

M(t) = max_{n≥1} { −log(A_n) + X_n(t) },

where A_n is the n-th arrival of a Poisson process and X_n(·) is an iid sequence of random fields (typically Gaussian) independent of the A_n's. The goal of this talk is to discuss efficient Monte Carlo techniques for max-stable random fields. In particular, we study the following questions:

a) Is it possible to simulate M(t_1), …, M(t_d) exactly (i.e. without bias)?
b) Can exact simulation be performed optimally as d grows to infinity?
c) Can conditional simulation also be done efficiently and without bias?

We share surprisingly good news in response to these questions using ideas borrowed from recent exact simulation algorithms, rare-event simulation, and multilevel Monte Carlo.

Joint work with Zhipeng Liu (Columbia University), Ton Dieker (Columbia University) and Thomas Mikosch (University of Copenhagen).
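The spectral representation above can be illustrated by a naive truncated simulation. Unlike the exact algorithms of the talk, truncating at finitely many Poisson arrivals introduces bias, and the Gaussian choice for X_n below is assumed for illustration only:

```python
import numpy as np

def truncated_max_stable(t_grid, n_terms=500, rng=None):
    """Naive truncation of M(t) = max_n {-log(A_n) + X_n(t)} at n_terms terms."""
    rng = np.random.default_rng() if rng is None else rng
    arrivals = np.cumsum(rng.exponential(size=n_terms))   # Poisson arrivals A_1 < A_2 < ...
    # Illustrative choice: X_n centered Gaussian with exponential covariance on the grid.
    d = np.abs(t_grid[:, None] - t_grid[None, :])
    X = rng.multivariate_normal(np.zeros(len(t_grid)), np.exp(-d), size=n_terms)
    return np.max(-np.log(arrivals)[:, None] + X, axis=0)

M = truncated_max_stable(np.linspace(0.0, 1.0, 5), rng=np.random.default_rng(0))
```

Because −log(A_n) decreases with n, late terms rarely attain the maximum, which is the intuition that exact (unbiased) algorithms exploit rigorously.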


Wednesday August 17. 115–215, Berg BC

Hot New Directions for QMC Research in Step with Applications

Frances Y. Kuo
UNSW Australia

High dimensional computation is a new frontier in scientific computing, with applications ranging from financial mathematics such as option pricing or risk management, to groundwater flow, heat transport, and wave propagation. This talk will begin with a contemporary review of quasi-Monte Carlo (QMC) methods for approximating high dimensional integrals. I will highlight some recent developments on “lattice rules” and “higher order digital nets”. One key element is the “fast component-by-component construction” which yields QMC methods with a prescribed rate of convergence for sufficiently smooth functions. Another key element is the careful selection of parameters called “weights” to ensure that the worst case errors in an appropriately weighted function space are bounded independently of the dimension.

I will outline recent research efforts on the application of QMC methods to PDEs with random coefficients. (For background motivation see the abstract for my Sunday tutorial.) I will then showcase some current collaborations in progress, including the analysis and implementation of “multi-level methods” and “multivariate decomposition methods” for PDE applications; the development of QMC theory for PDE eigenvalue problems; the fast QMC matrix-vector multiplication method; a preintegration smoothing technique to tackle high dimensional integration of kinks and jumps; and circulant embedding and H-matrix techniques for generating lognormal random fields using QMC. The defining characteristic of these projects is that the theory and methods for high dimensional computation are developed not in isolation, but in close connection with applications.
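To make the “lattice rules” ingredient concrete, here is a minimal sketch of a randomly shifted rank-1 lattice rule in two dimensions. The Fibonacci generating vector and the test integrand are illustrative stand-ins, not the CBC-constructed vectors for weighted spaces that the talk refers to.

```python
import numpy as np

rng = np.random.default_rng(1)

def shifted_lattice(z, n, shift):
    """Randomly shifted rank-1 lattice: x_i = frac(i*z/n + shift), i = 0..n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n + shift) % 1.0

# Fibonacci lattice in 2D (n = 987, z = (1, 610)); real applications use
# generating vectors from a fast component-by-component (CBC) construction
# tailored to a weighted function space.
n, z = 987, [1, 610]
pts = shifted_lattice(z, n, rng.random(2))
est = np.mean(pts[:, 0] * pts[:, 1])  # integrand x1*x2; exact integral is 1/4
print(est)
```

The random shift makes the estimator unbiased while preserving the lattice structure that gives fast convergence for smooth integrands.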


Thursday August 18. 9:00–10:00, Berg BC

One Hundred Years of Uniform Distribution Theory: Hermann Weyl’s Seminal Paper of 1916

Christoph Aistleitner
TU Graz, Austria
[email protected]

Hermann Weyl’s seminal paper “Über die Gleichverteilung von Zahlen mod. Eins” appeared in the Mathematische Annalen one hundred years ago, in 1916. This paper established the theory of uniform distribution modulo one as a proper mathematical discipline and connected it with various other mathematical topics, including Fourier analysis, number theory, numerical analysis and probability theory. This paper formed the cornerstone for the later development of uniform distribution theory, discrepancy theory and the theory of Quasi–Monte Carlo integration, and it is amazing to see to which extent later developments were already anticipated in Weyl’s paper and how far he went beyond previous work.

In this talk, we narrate the evolution of Weyl’s paper and present its content. We show how it initiated subsequent developments, and we discuss some of the most important results in the field as well as some of the most important open problems.


Thursday August 18. 1:15–2:15, Berg BC

Hierarchical Bayesian Methods for the Solution of Inverse Problems

Andrew M. [email protected]

The last decade has seen the development of a theory of infinite dimensional Bayesian inversion, leading to new MCMC algorithms for statistical inversion which converge at rates independent of the dimension of the state-space; for problems arising in the physical sciences, for example, the state-space may be infinite dimensional, and in practice of very high dimension when represented on a computer. At the same time, the machine learning community has made enormous advances in the development of predictive algorithms which learn from data and do not use mechanistic models. A key idea to emerge from this work in machine learning is that hierarchical modelling is a very effective and flexible way of representing complex prior knowledge. This talk is concerned with the use of hierarchical Bayesian methods for the solution of inverse problems, and in particular the interaction of hierarchical methodologies with the construction of effective MCMC algorithms for problems in high or infinite dimensional state spaces.


Friday August 19. 9:00–10:00, Berg BC

On Pseudo-Marginal Methods for Bayesian Inference in Latent Variable Models

Arnaud Doucet
University of Oxford, UK
[email protected]

The pseudo-marginal algorithm is a popular variant of the Metropolis–Hastings scheme which allows us to sample asymptotically from a target probability density when we are only able to estimate unbiasedly an un-normalized version of it. It has found numerous applications in Bayesian statistics, as there are many latent variable models where the likelihood function is intractable but can be estimated unbiasedly using Monte Carlo samples. In this talk, we will first review the pseudo-marginal algorithm and show that its computational cost at each iteration is, for many common applications, quadratic in the number of observations. We will then present a simple modification of this methodology which can very substantially reduce this cost. A large sample analysis of this novel pseudo-marginal scheme will be presented.
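A minimal sketch of the pseudo-marginal idea on a toy scalar model (the model, names, and the noise mechanism for the likelihood estimate are all illustrative assumptions): the exact Gaussian likelihood is multiplied by mean-one lognormal noise to mimic an unbiased Monte Carlo estimate, and the Metropolis–Hastings accept/reject step sees only these noisy estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

def pseudo_marginal_mh(log_prior, unbiased_lik, theta0, n_iter=5000, step=0.5):
    """Pseudo-marginal Metropolis-Hastings with a random-walk proposal:
    the intractable likelihood is replaced by a non-negative unbiased
    estimate, yet the chain still targets the exact posterior."""
    theta, lik = theta0, unbiased_lik(theta0)
    chain = np.empty(n_iter)
    for k in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lik_prop = unbiased_lik(prop)
        log_ratio = (np.log(lik_prop) + log_prior(prop)
                     - np.log(lik) - log_prior(theta))
        if np.log(rng.random()) < log_ratio:
            theta, lik = prop, lik_prop  # keep the estimate with the state
        chain[k] = theta
    return chain

# toy model: one observation y ~ N(theta, 1) with y = 1, prior N(0, 100);
# the "estimator" is the exact likelihood times mean-one lognormal noise
y = 1.0
def unbiased_lik(theta, noise=0.3):
    exact = np.exp(-0.5 * (y - theta) ** 2) / np.sqrt(2.0 * np.pi)
    w = np.exp(noise * rng.standard_normal() - 0.5 * noise ** 2)  # E[w] = 1
    return exact * w

chain = pseudo_marginal_mh(lambda t: -0.5 * t ** 2 / 100.0, unbiased_lik, 0.0)
print(chain.mean())  # near the exact posterior mean 100/101
```

Storing the likelihood estimate together with the current state is the key design point: re-estimating it at the current state would break exactness.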


Friday August 19. 1:15–2:15, Berg BC

Bayesian Optimization in the Tech Sector: Experiences at Uber, Yelp, and SigOpt

Peter Frazier
Cornell University, USA
[email protected]

We discuss Bayesian optimization, a class of methods for optimization of expensive functions and optimization via simulation, in which Monte Carlo simulation is playing an increasingly critical role. We describe optimization as a decision-making task (“where should I sample next?”), in which we use decision theory supported by Monte Carlo methods to reduce the number of expensive function evaluations required to find approximate optima. We discuss these methods in the context of applications from the tech sector: optimizing e-commerce systems, real-time economic markets, mobile apps, and hyperparameters in machine learning algorithms. We conclude with examples of the impact of these methods, from the speaker’s experiences working with Yelp, Uber, and the Bayesian Optimization startup company, SigOpt.
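As one concrete instance of “decision theory supported by Monte Carlo methods”, the sketch below estimates the expected-improvement acquisition at a single candidate point by plain Monte Carlo. The Gaussian surrogate posterior is an assumed toy setup; production Bayesian optimization libraries typically use the closed form or quasi-Monte Carlo instead.

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_expected_improvement(mu, sigma, best, n_mc=100000):
    """Monte Carlo estimate of EI(x) = E[max(best - Y, 0)] for minimization,
    where Y ~ N(mu, sigma^2) is the surrogate posterior at candidate x.
    (A sketch of the acquisition computation only.)"""
    y = mu + sigma * rng.standard_normal(n_mc)
    return np.maximum(best - y, 0.0).mean()

# candidate with posterior N(0, 1) when the incumbent best value is 0;
# the closed form gives EI = 1/sqrt(2*pi), approximately 0.3989
est = mc_expected_improvement(mu=0.0, sigma=1.0, best=0.0)
print(est)
```

The point to sample next is then the candidate x maximizing this acquisition value, which is what turns the surrogate into a sampling decision.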


Mini-symposia

Mini-symposia are collections of related talks spanning two or more special sessions. We have six of them:

• MC and QMC Methods for Bayesian Inverse Problems

• MCQMC in Graphics and Computational Transport

• Multilevel Monte Carlo

• Nonlinear Monte Carlo Methods

• Quadrature on the Sphere and Other Manifolds

• Stochastic Computation and Complexity

They are described on the next pages.


MC and QMC Methods for Bayesian Inverse Problems

Organizers: Christoph Schwab, SAM ETH Zurich and Andrew Stuart, University of Warwick.

Inverse problems are of enormous importance in the numerous applications in the sciences and engineering where there is a combination of both a sophisticated mathematical model and access to large data. The Bayesian formulation of such problems opens substantial research challenges at the intersection of the fields “data science” and “uncertainty quantification”. For problems with “distributed uncertain input” parameters, Bayesian inverse problems can be reduced, under rather general hypotheses, to essentially computing infinite-dimensional integrals in which the data and model are combined to define the integrand. Widely used methodologies to deal with these problems are MCMC, Ensemble Kalman Filtering (EnKF) methods, and their variants. Being based on Monte Carlo techniques, these methods feature convergence rates of order 1/2 in terms of the number of “forward” solves of the underlying model. Application of QMC to Bayesian inverse problems opens the possibility of methods which improve upon this convergence rate, offering, for problems with costly forward solves, the perspective of massive performance gains. The session will present new theoretical advances on MC and QMC in this broad and dynamic field.

Bayes Inverse, Part I: Thursday August 18. 10:25–11:55, Berg B

Multilevel Higher-Order Quasi-Monte Carlo for Bayesian Inverse Problems

Robert N. Gantner
ETH Zürich, Switzerland
[email protected]

We consider Bayesian inversion of partial differential equations with distributed uncertain inputs, which reduces to the computation of high-dimensional integrals. Under certain sparsity conditions on the parametric input, the resulting integrands fulfill the same conditions, implying applicability of recently developed higher-order QMC methods. We focus in this talk on computational aspects of this problem, including fast CBC construction of interlaced polynomial lattice rules achieving higher order convergence rates, as well as single and multilevel formulations of the discretized Bayesian inverse problem. Numerical results showing improvements of the time-to-solution compared to traditional, Monte Carlo-based approaches are presented. We also comment briefly on a repository of generating vectors for such problems, and an open-source software package to aid in the application of such methods.

This work was supported by CPU time from the Swiss National Supercomputing Centre (CSCS) under project ID d41, and by the Swiss National Science Foundation (SNF) under Grant No. SNF149819.

Joint work with Christoph Schwab (ETH Zurich, Switzerland), Josef Dick (UNSW, Australia) and Quoc Thong Le Gia (UNSW, Australia).

[1] J. Dick, R. N. Gantner, Q. T. Le Gia, and Ch. Schwab. Higher Order Quasi-Monte Carlo integration for Bayesian Estimation. Technical Report 2016-13, Seminar for Applied Mathematics, ETH Zürich, Switzerland, 2016.

[2] J. Dick, R. N. Gantner, Q. T. Le Gia, and Ch. Schwab. Multilevel Higher Order Quasi-Monte Carlo integration for Bayesian Estimation. (in preparation).

[3] R. N. Gantner and Ch. Schwab. Computational Higher Order Quasi-Monte Carlo Integration. In


Ronald Cools and Dirk Nuyens, editors, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, pages 271–288. Springer International Publishing, Cham, 2016.

Multilevel Markov Chain Monte Carlo Methods for Bayesian Inversion of Partial Differential Equations

Viet Ha Hoang
Nanyang Technological University, Singapore
[email protected]

The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map, and the sampling of the probability space under the posterior distribution is essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We consider both the case where the PDE is uniformly elliptic with respect to all the realizations, and the case where uniform ellipticity does not hold, i.e. the coefficient can get arbitrarily close to 0 and arbitrarily large, as in the log-normal model. We provide complexity analysis of Markov chain Monte Carlo (MCMC) methods for numerical evaluation of expectations under the Bayesian posterior distribution given data, in particular bounds on the overall work required to achieve a prescribed error level. We first bound the computational complexity of plain MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDEs. The work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. We then present a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. The strategy achieves an essentially optimal complexity level, essentially equal to that of performing only one step of the plain MCMC. The essentially optimal accuracy and complexity of the method are mathematically rigorously proven. Numerical results confirm our analysis.

This is joint work with Jia Hao Quek (NTU, Singapore), Christoph Schwab (ETH, Switzerland) and Andrew Stuart (Warwick, England).

Numerical Analysis of Posterior Concentration in Multilevel QMC Estimators for Bayesian Inverse Problems

Christoph Schwab
SAM ETH Zürich
[email protected]

For Bayesian inversion of partial differential equations with high-dimensional parametric, or distributed, uncertain inputs, the posterior density may concentrate for small observation noise covariance.

We present an analysis of the posterior concentration, from [3], shedding light on the scaling and geometry of posterior concentration for BIP with formally infinite-dimensional, parametric input.

We address the question of numerical stability of high order Quasi Monte-Carlo quadratures in the numerical computation of high-dimensional integrals in the case of small observation noise covariance.

Several strategies for building numerically stable, deterministic estimators which are free from the curse of dimension will be discussed: a reparametrization of the posterior density using 2nd order information at the MAP point, and a more general strategy using nonlinear reparametrizations. We prove that, under certain nondegeneracy assumptions, these strategies will facilitate deterministic Bayesian estimates with dimension-independent, possibly higher order, convergence rates, with error bounds that are independent of the observation noise covariance as well.

Joint work with Robert Gantner (SAM, ETH Zürich), Youssef Marzouk and Alessio Spantini (MIT).

Work supported by CPU time from the Swiss National Supercomputing Centre (CSCS) under project ID d41, and by the Swiss National Science Foundation (SNF) under Grant No. SNF149819.

[1] J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab, Multilevel Higher Order Quasi-Monte Carlo integration for Bayesian Estimation. (in preparation)


[2] J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab, Higher Order Quasi-Monte Carlo integration for Bayesian Estimation. Report 2016-13, Seminar for Applied Mathematics, ETH Zürich, Switzerland (in review).

[3] Claudia Schillings and Christoph Schwab, Scaling Limits in Computational Bayesian Inversion, Tech. Report 2014-26, Seminar for Applied Mathematics, ETH Zürich (to appear in M2AN 2016).

[4] Robert N. Gantner and Christoph Schwab, Computational Higher Order Quasi-Monte Carlo Integration, in: Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, Springer International Publishing, Cham, 271–288 (2016).

Bayes Inverse, Part II: Friday August 19. 10:25–11:55, Berg A

Particle-based Approximate Monte Carlo Approaches for Large-Scale Bayesian Inverse Problems

Tan Bui-Thanh
The University of Texas at Austin
[email protected]

Bayesian inverse problems/UQ are ubiquitous in the sciences and engineering. The solution of a Bayesian inverse problem is the posterior on a high (possibly infinite) dimensional parameter space. One of the most effective tools to explore the posterior is the Markov chain Monte Carlo (MCMC) framework. However, MCMC is often too computationally expensive, and alternatives are necessary for large-scale problems. In this talk, we present several particle methods to compute (approximate) posterior samples without the need for a Markov chain. We present theory, computational experiments, and comparisons among the methods.

Accelerating Metropolis Algorithms via Measure Transport

Youssef [email protected]

We present a new approach for efficiently characterizing complex probability distributions, using a combination of measure transport and Metropolis corrections. We use continuous transportation to transform typical Metropolis proposal mechanisms into non-Gaussian proposal distributions. Our approach adaptively constructs a Knothe-Rosenblatt rearrangement using information from previous MCMC states, via the solution of convex and separable optimization problems. We then discuss key issues underlying the construction of transport maps in high dimensions, and present ways of exploiting sparsity and certain kinds of near-identity structure.

Joint work with: Matthew Parno (MIT), Alessio Spantini (MIT).

Analysis of the Ensemble and Polynomial Chaos Kalman Filters in Bayesian Inverse Problems

Oliver Ernst
TU Chemnitz, Germany
[email protected]

We analyze the ensemble and polynomial chaos Kalman filters applied to nonlinear stationary Bayesian inverse problems. In a sequential data assimilation setting, such stationary problems arise in each step of either filter. We give a new interpretation of the approximations produced by these two popular filters in the Bayesian context and prove that, in the limit of large ensemble size or high polynomial degree, both methods yield approximations which converge to a well-defined random variable termed the analysis random variable. We then show that this analysis variable is more closely related to a specific linear Bayes estimator than to the solution of the associated Bayesian inverse problem given by the posterior measure. This suggests limited, or at least guarded, use of these generalized Kalman filter methods for the purpose of uncertainty quantification.
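For readers unfamiliar with the analysis step being studied, here is a minimal perturbed-observation ensemble Kalman update for a scalar, linear-Gaussian toy problem (an illustrative sketch; the names and toy setup are ours). In this linear case the large-ensemble limit does coincide with the exact posterior, which makes it a convenient sanity check; the talk's point is that for nonlinear forward maps the limit is a linear Bayes estimator rather than the posterior.

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(ens, obs, h, obs_var):
    """Ensemble Kalman filter analysis step (perturbed-observation variant)
    for scalar state and observation: each member is shifted by a
    Kalman-type gain built from ensemble covariances."""
    hx = h(ens)
    c_xy = np.cov(ens, hx)[0, 1]      # ensemble cross-covariance cov(x, h(x))
    c_yy = hx.var(ddof=1) + obs_var   # innovation variance
    gain = c_xy / c_yy
    perturbed = obs + np.sqrt(obs_var) * rng.standard_normal(ens.size)
    return ens + gain * (perturbed - hx)

# linear Gaussian check: prior N(0, 1), observation y = x + noise with
# noise variance 1 and y = 1; the exact posterior is N(0.5, 0.5)
ens = rng.standard_normal(200000)
post = enkf_update(ens, 1.0, lambda x: x, 1.0)
print(post.mean(), post.var())
```

Replacing `lambda x: x` with a nonlinear forward map is precisely the regime where, per the abstract, the limiting analysis variable and the Bayesian posterior part ways.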


MCQMC in Graphics and Computational Transport

Organizers: Alexander Keller, NVIDIA and Christophe Hery, Pixar Animation Studios and Wenzel Jakob, EPFL Switzerland.

Graphics Part I: Monday August 15. 2:25–3:25, Berg C

On MCQMC Issues in Production Rendering

Christophe Hery & Ryusuke Villemin
Pixar Animation Studios
chery,[email protected]

Use of MC integration has become popular in the VFX industry, but since it is usually not possible to find one importance sampling that will directly match the whole rendering equation, the common solution is to use MIS. Unfortunately, render times increase with the number of techniques to MIS, and sometimes arbitrary (artistic) signals don’t have a good sampling method to begin with. This is why we are looking into MCMC with the hope that it will solve some of the problems. In this talk, we will also show some of the common shortcomings of QMC, in particular in conjunction with the Alias method; and we will explore the biased approaches we still resort to in graphics (such as denoising, photon mapping and mollification), mainly to overcome the “slow” quadratic convergence of pure MC or MCMC.

Data-Driven Light Scattering Models: An Application to Rendering Hair

Leonid Pekelis
Stanford University, USA
[email protected]

We present an implementation of Adaptive Importance Sampling [1], specialized to fit easily sampled distributions to complicated BCSDFs, the light transport equations employed for realistic graphics rendering. Innovations include alternating objective minimization and penalization designed to produce good fit, automatically across a diverse range of cases.

We apply our algorithm to the canonical model [2] for light scattering from hair and fur. The result is novel among importance sampling implementations in that it includes features such as eccentricity for elliptical hair fiber cross-sections, azimuthal roughness control, and fiber torsion. It is also fully energy preserving. We compare with two leading implementations in the literature as well as a ground truth, physical model which directly evaluates light-cylinder interactions. Ours well approximates the ground truth, and drastically reduces both computation and artist time compared to the others.

Examples from a recent Pixar animated film, as well as a brief discussion of the general convergence of the fitting algorithm, are included.

Joint work with Christophe Hery (Pixar Animation Studios), Ryusuke Villemin (Pixar Animation Studios), and Junyi Ling (Pixar Animation Studios).

[1] Oh, Man-Suk and Berger, James O. Adaptive importance sampling in Monte Carlo integration. Journal of Statistical Computation and Simulation, 41(3-4), 143–168, 1992.

[2] Marschner, Stephen R. and Jensen, Henrik Wann and Cammarano, Mike and Worley, Steve and Hanrahan, Pat. Light scattering from human hair fibers. ACM Transactions on Graphics (TOG), vol. 22, ACM, 780–791, 2003.


Graphics Part II: Tuesday August 16. 2:25–3:25, Berg C

Monte Carlo for Light Transport Including Linear Polarization

Peter [email protected]

Monte Carlo Ray Tracing (MCRT) is often used for general light transport problems. A variety of open source and commercial packages are available that do this. Few of these packages support light polarization.

A very simple approach to polarized light transport is presented that uses adjoint light particles, each tagged with a linear polarization. This results in a very simple algorithm. Because all MC adjoint particles are independent, parallelization is straightforward, but this also implies the limitation that light phase is not supported.

A series of validation experiments is presented, and the debugging challenges added by polarization are discussed.

Towards Real Time Monte Carlo Simulations for Biomedicine

Jerome Spanier
U.C. Irvine, USA
[email protected]

Monte Carlo methods provide the “gold standard” computational technique for solving biomedical problems, but their widespread use is hindered by the slow convergence of the sample means. An exponential increase in the convergence rate can be obtained by adaptively modifying the sampling and weighting strategy employed. However, if the radiance is represented by a truncated expansion of basis functions, a bias is introduced by the truncation, and the geometric acceleration of the convergence rate is partly or entirely offset by the sheer number of expansion coefficients to be estimated in as many as 6 independent variables and in every homogeneous tissue subregion. As well, the (unknown amount of) bias is unacceptable for a gold standard numerical method. Conventional density function estimation methods are widely used in other light transport applications. They avoid the need to represent the radiance in a functional expansion, but the need for smoothing parameters also introduces a bias in the density estimator. This precludes convergence to the exact solution and is unacceptable in a gold standard method. We introduce a density estimation methodology that eliminates the bias by constraining the radiance to obey the transport equation. We provide numerical evidence of the superiority of this Transport-Constrained Unbiased Radiance Estimator (T-CURE) in simple problems and indicate its promise for general, heterogeneous problems.

Joint work with Rong Kong (Hyundai Capital America) and Shuang Zhao (UCI).


Graphics Part III: Wednesday August 17. 10:25–11:55, Berg A

Learning Light Transport the Reinforced Way

Alexander [email protected]

We show that the equations for reinforcement learning and light transport simulation are related integral equations. Based on this correspondence, a scheme to learn importance during sampling of path space is derived. The new approach is demonstrated in a consistent light transport simulation algorithm that uses reinforcement learning to progressively learn probability density functions for importance sampling. As the number of paths with non-zero contribution can be dramatically increased, much less noisy images can now be rendered within the same time budget.

This is joint work with Ken Dahm, NVIDIA.

Hessian-Hamiltonian Monte Carlo Rendering

Tzu-Mao Li
MIT CSAIL
[email protected]

The simulation of light transport in the presence of multi-bounce glossy effects and motion is challenging because the integrand is high dimensional and areas of high contribution tend to be narrow and hard to sample. We present a Markov chain Monte Carlo (MCMC) rendering algorithm that extends Metropolis Light Transport by automatically and explicitly adapting to the local shape of the integrand, thereby increasing the acceptance rate. Our algorithm characterizes the local behavior of throughput in path space using its gradient as well as its Hessian. In particular, the Hessian is able to capture the strong anisotropy of the integrand. We obtain the derivatives using automatic differentiation, which makes our solution general and easy to extend to additional sampling dimensions such as time.

However, the resulting second order Taylor expansion is not a proper distribution and cannot be used directly for importance sampling. Instead, we use ideas from Hamiltonian Monte Carlo and simulate the Hamiltonian dynamics in a flipped version of the Taylor expansion where gravity pulls particles towards the high-contribution region. Whereas such methods usually require numerical integration, we show that our quadratic landscape leads to a closed-form anisotropic Gaussian distribution for the final particle positions, and it results in a standard Metropolis-Hastings algorithm. Our method excels at rendering glossy-to-glossy reflections on small and highly curved surfaces. It is also the first MCMC rendering algorithm that is able to resolve the anisotropy in the time dimension and render difficult moving caustics.

Joint work with Jaakko Lehtinen (Aalto University; NVIDIA), Ravi Ramamoorthi (University of California, San Diego), Wenzel Jakob (ETH Zürich), and Frédo Durand (MIT CSAIL).

Choosing the Path Integral Domain for MCMC Integration

Anton [email protected]

The path integral formulation of light transport is the basis for (Markov chain) Monte Carlo global illumination methods. In this talk we present Half vector Space Light Transport (HSLT), a novel approach to sampling and integrating light transport paths on surfaces. The key is the transformation of the path space into another domain, where a path is represented by its start and end point constraints and a sequence of halfway vectors. We show that this domain has several benefits. It enables importance sampling of all interactions along paths. Another important characteristic is that the Fourier-domain properties of the path integral can be easily estimated. These can be used to achieve optimal correlation of the MCMC samples due to well-chosen step sizes, leading to more efficient exploration of light transport features.


Graphics Part IV: Thursday August 18. 2:25–3:25, Berg C

Rise of the Machines in Monte Carlo Integration for Rendering

Toshiya Hachisuka
The University of Tokyo
[email protected]

Following the recent success of deep learning, machine learning is being used to solve various problems in many different fields. While deep learning is often hyped as an emulation of a human brain via a computer, one can see it as a mere parametric model for approximating a (usually hidden) functional relationship between input data and output data. It should then be possible to use machine learning to make general MC/MCMC computation adaptive by learning a hidden, but useful, structure of a target problem. This talk summarises our recent attempts to accelerate practical computation via MC/MCMC in computer graphics based on machine learning. Topics covered include an automatic mixture of MC samplers and adaptive MCMC. While the applications are specific to computer graphics, the developed algorithms are general and potentially applicable to other applications.

Perturbation Monte Carlo-Based Methods for Characterization of Micro-Architectural Changes in Tissue

Jennifer Nguyen
University of California, Irvine
[email protected]

We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant size distributions in cervical tissue. We compare our method’s reflectance estimates for perturbations across wavelength and/or scattering model parameters with conventional Monte Carlo’s reflectance estimates. We then use these comparisons to draw conclusions about the performance of pMC in terms of accuracy and computational efficiency gains. Finally, we examine the utility of differential Monte Carlo (dMC) in estimating sensitivities with respect to the parameters of our discrete particle scattering model and with respect to the optimization of parameters of interest. To our knowledge, ours is the first study that takes account of scattering phase function changes, in addition to changes in the parameters of the particle size distributions used for the scattering model. Time permitting, we will illustrate the improvement in the solution of inverse problems engendered by our pMC/dMC methods in identifying microarchitectural changes in cervical tissue.

Joint work with Carole K. Hayakawa (University of California, Irvine), Judith R. Mourant (Los Alamos National Laboratory), Vasan Venugopalan (University of California, Irvine), and Jerome Spanier (University of California, Irvine).


Multilevel Monte Carlo

Organizers: Michael Giles, University of Oxford and Abdul-Lateef Haji-Ali, KAUST, Saudi Arabia and Raul Tempone, KAUST, Saudi Arabia.

MLMC Part I: Monday August 15. 10:25–11:55, Berg C

Adaptive Timestepping for SDEs with Non-Globally Lipschitz Drift

Wei Fang
University of Oxford
[email protected]

This talk looks at the use of the standard Euler-Maruyama discretisation with adaptive timesteps for SDEs with a drift which is not globally Lipschitz. It is proven that under certain conditions all moments of the numerical solution remain bounded for any finite time interval, and the strong order of convergence is the same as usual, when viewed as accuracy versus expected cost per path. Furthermore, for SDEs which satisfy a certain contraction property, similar results hold over the infinite time interval. The strong convergence results are relevant to the use of this discretisation for Multilevel Monte Carlo simulation.

This is joint work with Mike Giles.
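A minimal sketch of the kind of scheme being analyzed (the step-size rule and the test SDE dX = −X³ dt + dW are our illustrative assumptions, not the conditions from the talk): the explicit Euler-Maruyama step is shrunk wherever the superlinearly growing drift is large, which is what keeps the moments of the numerical solution from blowing up.

```python
import numpy as np

rng = np.random.default_rng(6)

def adaptive_em(x0, T, dt_max=0.01):
    """Euler-Maruyama with an adaptive timestep for dX = -X^3 dt + dW,
    whose drift is not globally Lipschitz.  The step is shrunk in
    proportion to the drift magnitude (one plausible rule); with a fixed
    step, large noise excursions can make the explicit scheme explode."""
    x, t = x0, 0.0
    while t < T:
        drift = -x ** 3
        dt = min(dt_max / max(1.0, abs(drift)), T - t)  # adaptive step
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x

samples = np.array([adaptive_em(2.0, 1.0) for _ in range(200)])
print(samples.min(), samples.max())  # trajectories stay bounded
```

Since the expected number of steps varies per path, the relevant cost measure is expected work per path, which is exactly how the abstract phrases the convergence order.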

Multilevel MC for Multi-Dimensional Reflected Brownian Diffusions

Michael Giles
University of Oxford
[email protected]

This talk will discuss the application of the multilevel Monte Carlo method to the numerical approximation of multi-dimensional reflected Brownian diffusions. By using adaptive timesteps, with smaller timesteps near the boundaries where the largest numerical approximation errors are incurred, conditions are derived under which a root-mean-square accuracy of ε can be achieved for the expected value of an output quantity of interest at a computational cost which is O(ε^-2). This analysis is supported by numerical experiments.

Joint work with Kavita Ramanan (Brown Uni-versity).

Mean Square Error Adaptive Euler-Maruyama Method with Applications in Multilevel Monte Carlo

Juho Häppölä
King Abdullah University of Science and Technology, Saudi Arabia (KAUST)
[email protected]

A formal mean square error (MSE) expansion is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise a posteriori adaptive time stepping Euler-Maruyama method for numerical solutions of SDE, and the resulting method is incorporated into a multilevel Monte Carlo (MLMC) method for weak approximations of SDE. This gives an efficient MSE adaptive MLMC method for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC method is shown to outperform the uniform time stepping MLMC method by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate O(TOL^-2 log(TOL)^4).
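The MLMC structure that these talks build on can be sketched as the telescoping estimator E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}], with coarse and fine paths coupled through shared Brownian increments. The geometric Brownian motion test problem and the fixed per-level sample sizes below are illustrative simplifications: practical MLMC optimizes the sample allocation and, in the adaptive variants, the timesteps.

```python
import numpy as np

rng = np.random.default_rng(4)

def mlmc_level(T, l, n_paths, mu=0.05, sig=0.2, x0=1.0):
    """One MLMC term for E[X_T] under dX = mu X dt + sig X dW (Euler scheme).

    Level l uses 2^l uniform steps; for l >= 1 the coarse path reuses the
    fine path's Brownian increments (pairwise sums), which is what makes
    the level corrections small."""
    nf = 2 ** l
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, nf))
    xf = np.full(n_paths, x0)
    for k in range(nf):
        xf = xf + mu * xf * dt + sig * xf * dW[:, k]
    if l == 0:
        return xf  # level 0: plain estimator, no coarse partner
    xc = np.full(n_paths, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]  # coarse increments = paired fine ones
    for k in range(nf // 2):
        xc = xc + mu * xc * (2 * dt) + sig * xc * dWc[:, k]
    return xf - xc  # the level-l correction P_l - P_{l-1}

# telescoping sum E[P_4] = E[P_0] + sum_{l=1}^{4} E[P_l - P_{l-1}]
est = sum(mlmc_level(1.0, l, 20000).mean() for l in range(5))
print(est)  # close to E[X_1] = exp(0.05)
```

The variance of each coupled correction decays with the level, so most samples can be spent on the cheap coarse levels; that is the source of the O(ε^-2)-type cost rates quoted in the abstracts.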


MLMC Part II: Monday August 15, 2:25–3:25, Berg A

The Multivariate Decomposition Method

Alexander Gilbert
The University of New South Wales, Australia
[email protected]

The Multivariate Decomposition Method (MDM) is an algorithm for approximating the integral of a function f which depends on infinitely many variables y_1, y_2, …. It begins by decomposing f into a countable sum of functions which depend only on finitely many variables: f(y_1, y_2, …) = ∑_{|u|<∞} f_u(y_j : j ∈ u), where u ⊂ ℕ denotes a subset of variables and each f_u only depends on the variables y_j for j ∈ u. The general idea of the MDM algorithm is as follows. First, choose the “active set”, which is a collection of subsets of variables u corresponding to terms f_u which, loosely speaking, contribute most to the total integral. The next step is to apply separate quadrature rules to each decomposition term f_u for u in the active set. The final approximation of the integral is then the sum of all the quadrature approximations of the terms in the active set.

One such problem where infinite-variate integrands arise is a PDE with a random coefficient, where the coefficient can be parametrised by infinitely many stochastic variables. Here the quantity of interest is the expected value, with respect to the stochastic parameters, of some linear functional of the PDE solution.

In this talk I will discuss applying the MDM when the integrand is a linear functional of the solution of a PDE with a random coefficient. I will briefly introduce the PDE problem and the MDM algorithm, then focus on the two aspects of the implementation: constructing the active set and efficiently constructing the quadrature approximations.

Quasi and Multilevel Monte Carlo Methods for Bayesian Inverse Problems

Aretha Teckentrup
University of Warwick, UK
[email protected]

The parameters in mathematical models for many physical processes are often impossible to determine fully or accurately, and are hence subject to uncertainty. By modelling the input parameters as stochastic processes, it is possible to quantify the induced uncertainty in the model outputs. Based on available information, a prior distribution is assigned to the input parameters. If in addition some dynamic data (or observations) related to the model outputs are available, a better representation of the parameters can be obtained by conditioning the prior distribution on these data, leading to the posterior distribution in the Bayesian framework.

In most situations, the posterior distribution is intractable in the sense that the normalising constant is unknown and exact sampling is unavailable. Using Bayes' Theorem, we show that posterior expectations of quantities of interest can be written as the ratio of two prior expectations involving the quantity of interest and the likelihood of the observed data. These prior expectations can then be computed using quasi-Monte Carlo and multilevel Monte Carlo methods.

In this talk, we give a convergence and complexity analysis of the resulting ratio estimators, and demonstrate their effectiveness on a typical model problem in uncertainty quantification.

This is joint work with Rob Scheichl (University of Bath, UK) and Andrew Stuart (California Institute of Technology, USA).


MLMC Part III: Tuesday August 16, 2:25–3:25, Berg A

QMC for Parametric, Elliptic PDEs

Lukas Herrmann
ETH Zurich, Switzerland
[email protected]

Stochastic and parametric PDEs are considered, with particular focus on mean field approximations and/or approximations of the expectation of quantities of interest, cp. [3, 2]. We analyze convergence rates of first order quasi-Monte Carlo (QMC) integration with randomly shifted lattice rules and, for higher order, interlaced polynomial lattice rules, for a class of countably parametric integrands that result from linear functionals of solutions of linear, elliptic diffusion equations with an uncertain coefficient function represented in a system (ψ_j)_{j≥1} in a bounded domain D, cp. [2].

Whereas in [4, 1] the functions (ψ_j)_{j≥1} stemmed from global basis functions, we allow (ψ_j)_{j≥1} to be wavelet systems and investigate consequences for the (multilevel) QMC quadrature as considered in [4, 1]. The QMC weights facilitate work bounds for the fast, component-by-component constructions of [5].

This research is supported in part by the Swiss National Science Foundation (SNSF) under grants SNF 200021_159940/1 and SNF 200021_149819, the European Research Council under ERC AdG 247277, the Knut and Alice Wallenberg Foundation, and the Swedish Research Council under Reg. No. 621-2014-3995.

This is joint work with Robert N. Gantner, Christoph Schwab (ETH Zurich, Switzerland), and Annika Lang (Chalmers University of Technology & University of Gothenburg, Sweden).

[1] Josef Dick, Frances Y. Kuo, Quoc T. Le Gia, Dirk Nuyens, and Christoph Schwab. Higher order QMC Petrov–Galerkin discretization for affine parametric operator equations with random field inputs. SIAM J. Numer. Anal., 52(6):2676–2702, 2014.

[2] Robert N. Gantner, Lukas Herrmann, and Christoph Schwab. Quasi-Monte Carlo integration for affine-parametric, elliptic PDEs. Technical report, Seminar for Applied Mathematics, ETH Zürich, Switzerland, 2016. (In preparation.)

[3] Lukas Herrmann, Annika Lang, and Christoph Schwab. Numerical analysis of lognormal diffusions on the sphere. Technical Report 2016-02, Seminar for Applied Mathematics, ETH Zürich, Switzerland, 2016.

[4] Frances Y. Kuo, Christoph Schwab, and Ian H. Sloan. Quasi-Monte Carlo finite element methods for a class of elliptic partial differential equations with random coefficients. SIAM J. Numer. Anal., 50(6):3351–3374, 2012.

[5] Dirk Nuyens and Ronald Cools. Fast algorithms for component-by-component construction of rank-1 lattice rules in shift-invariant reproducing kernel Hilbert spaces. Math. Comp., 75(254):903–920 (electronic), 2006.

A Multi-Index Quasi-Monte Carlo Algorithm for Lognormal Diffusion Problems

Pieterjan Robbe
KU Leuven, Belgium
[email protected]

We present a Multi-Index Quasi-Monte Carlo method for the solution of elliptic partial differential equations with random coefficients and inputs. By combining the multi-index sampling idea with randomly shifted rank-1 lattice rules, the algorithm constructs an estimator for the expected value of some functional of the solution. The efficiency of this new method is illustrated on a three-dimensional subsurface flow problem with lognormal diffusion coefficient with underlying Matérn covariance function. This example is particularly challenging because of the small correlation length considered, and thus the large number of uncertainties that must be included. We present strong numerical evidence that it is possible to achieve a cost inversely proportional to the requested tolerance on the root-mean-square error.


MLMC Part IV: Wednesday August 17, 3:50–4:50, Berg B

Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic Reaction Networks

Chiheb Ben Hammouda
King Abdullah University of Science and Technology
[email protected]

This talk focuses on providing efficient and accurate estimates of the expected value E[g(X(T))], where g : ℝ^d → ℝ is a given smooth scalar observable of the process X, a continuous-time Markov chain called a stochastic reaction network (SRN). SRNs are employed to describe the time evolution of biochemical reactions, epidemic processes, transcription and translation in genomics, etc.

In biochemically reactive systems with small copy numbers of one or more reactant molecules, the dynamics is dominated by stochastic effects. To approximate those systems, discrete state-space and stochastic simulation approaches have been shown to be more relevant than continuous state-space and deterministic ones. In systems characterized by having simultaneously fast and slow timescales, existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap (Explicit-TL) method, can be very slow. Implicit approximations have been developed to improve numerical stability and provide efficient simulation algorithms for those systems. Here, we propose an efficient Multilevel Monte Carlo (MLMC) method in the spirit of the work by Anderson and Higham (2012). This method uses the split-step implicit tau-leap (SSI-TL) scheme at levels where the Explicit-TL method is not applicable due to numerical stability issues. We present numerical examples that illustrate the performance of the proposed method.

This is joint work with Alvaro Moraes and Raul Tempone (KAUST, Saudi Arabia).

Multilevel Monte Carlo for SDEs with Random Bits

Klaus Ritter
TU Kaiserslautern, Germany
[email protected]

Let X be the solution to a stochastic differential equation (SDE), and let ϕ be a real-valued functional on the path space. We study the approximation of the expectation of ϕ(X) by means of randomized algorithms that may only use random bits.

We provide upper and lower bounds on the complexity of the problem. Moreover, we present a new multilevel algorithm with two dimensions of discretization: time and number of bits.

The talk is based on joint work with M. Hefter, L. Mayer, S. Omland (all from Kaiserslautern), and M. Giles (Oxford).

MLMC Part V: Thursday August 18, 10:25–11:55, Berg C

Unbiased MLMC for Rare Event Simulation of Equilibrium Distributions of Stochastic Recurrence Equations

Chang-Han Rhee
Centrum Wiskunde & Informatica, The Netherlands
[email protected]

In this talk, we discuss applications of multilevel ideas to the rare event simulation of stochastic recurrence equations of the form Z_{n+1} = A_n Z_n + 1, where (A_n)_{n∈ℕ} is an iid sequence of positive random variables and log A_n is heavy-tailed with negative mean. The equilibrium distribution of such a stochastic recursion arises in the analysis of probabilistic algorithms and in financial mathematics, where the limit random variable Z_∞ represents a stochastic perpetuity. If one interprets −log A_n as the interest rate at time n, then Z_∞ is the present value of a bond that generates a unit of money at each time point n. Taking advantage of the close connection between the maximum of random walks and the associated stochastic perpetuities, we construct a strongly efficient simulation estimator for the rare event probability P(Z_∞ > x). The new estimator, however, involves an infinite number of random variables and hence cannot be computed exactly. We apply the unbiased multilevel technique developed in [1] to construct a computable unbiased estimator, and show that the resulting estimator maintains strong efficiency.


Joint work with Bohan Chen (CWI Amsterdam, The Netherlands) and Bert Zwart (CWI Amsterdam, The Netherlands).

[1] C.-H. Rhee and P. W. Glynn. Unbiased estimation with square root convergence for SDE models. Operations Research, 63(5):1026–1043, 2015.
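The perpetuity itself is easy to sample approximately by truncating Z_∞ = 1 + A_1 + A_1 A_2 + … once the running product is negligible. The sketch below is only the crude Monte Carlo baseline, with an illustrative heavy-tailed choice of log A_n, not the strongly efficient estimator of the talk:

```python
import math
import random

def perpetuity_sample(rng, tol=1e-12, max_terms=100000):
    """Approximate draw of Z_inf = 1 + A_1 + A_1*A_2 + ... .
    Illustrative choice (not the talk's setting): log A_n = P_n - 3
    with P_n ~ Pareto(2), so log A_n is heavy-tailed with mean -1."""
    z, prod = 1.0, 1.0
    for _ in range(max_terms):
        prod *= math.exp(rng.paretovariate(2.0) - 3.0)
        z += prod
        if prod < tol:           # remaining terms are negligible
            break
    return z

def crude_mc_tail(x, n, seed=0):
    """Naive Monte Carlo baseline for P(Z_inf > x); the strongly
    efficient estimator in the talk replaces exactly this step."""
    rng = random.Random(seed)
    return sum(perpetuity_sample(rng) > x for _ in range(n)) / n

p = crude_mc_tail(10.0, 5000)
print(p)
```

The relative error of this baseline blows up as x grows, which is precisely what the strongly efficient estimator discussed above avoids.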

Multi-index Monte Carlo

Raul Tempone
King Abdullah University of Science and Technology, Saudi Arabia (KAUST)
[email protected]

We describe and analyze the Multi-Index Monte Carlo (MIMC) method for computing statistics of the solution of a PDE with random data. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. These mixed differences yield new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence. We finally show the effectiveness of MIMC in some computational tests, including PDEs with random coefficients and stochastic particle systems.

Reference: “Multi Index Monte Carlo: When Sparsity Meets Sampling”, by A.-L. Haji-Ali, F. Nobile, and R. Tempone. Numerische Mathematik, 132(4):767–806, 2016.

Unbiased Estimators and Multilevel Monte Carlo

Matti Vihola
University of Jyväskylä, Finland
[email protected]

Multilevel Monte Carlo (MLMC) [1, 2] and recently proposed debiasing schemes [3, 4] are closely related methods for scenarios where exact simulation methods are difficult to implement, but biased estimators are available. The talk is based on a recent paper [5] about a new general class of unbiased estimators which admits the earlier debiasing schemes of Rhee and Glynn [4] as special cases and, more importantly, allows new lower-variance estimators based on stratification. The stratified schemes behave asymptotically like MLMC, both in terms of variance and cost, under general conditions, essentially those that guarantee the canonical square root error rate for MLMC. This suggests that the MLMC bias can often be eliminated entirely with a negligible additional cost. The new schemes also admit optimisation criteria which are easy to implement in practice.

[1] Michael B. Giles. Multilevel Monte Carlo path simulation. Oper. Res., 56(3):607–617, 2008.

[2] Stefan Heinrich. Multilevel Monte Carlo methods. In Large-Scale Scientific Computing, pages 58–67. Springer, 2001.

[3] Don McLeish. A general method for debiasing a Monte Carlo estimator. Monte Carlo Methods Appl., 17(4):301–315, 2011.

[4] Chang-Han Rhee and Peter W. Glynn. Unbiased estimation with square root convergence for SDE models. Oper. Res., 63(5):1026–1043, 2015.

[5] Matti Vihola. Unbiased estimators and multilevel Monte Carlo. Preprint arXiv:1512.01022, 2015.
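The single-term debiasing scheme of Rhee and Glynn [4] can be sketched in a few lines: draw a random level N and return the level difference divided by its selection probability. The approximation sequence y_n → e and the geometric level distribution below are illustrative choices:

```python
import random
import math

def single_term_estimator(delta, probs, rng):
    """Rhee-Glynn single-term debiasing: draw a random level N with
    P(N = n) = probs[n] and return delta(N) / probs[N].  If
    sum_n delta(n) = y, the estimator is unbiased for y."""
    u, n, cdf = rng.random(), 0, probs[0]
    while u > cdf:
        n += 1
        cdf += probs[n]
    return delta(n) / probs[n]

# Toy target: y = e = lim_n y_n with y_n = (1 + 1/2**n)**(2**n),
# so delta(0) = y_0 and delta(n) = y_n - y_{n-1} for n >= 1.
def y(n):
    return (1.0 + 0.5 ** n) ** (2 ** n)

def delta(n):
    return y(0) if n == 0 else y(n) - y(n - 1)

probs = [2.0 ** -(n + 1) for n in range(30)]
probs[-1] *= 2      # make the truncated level distribution sum to 1
rng = random.Random(7)
est = sum(single_term_estimator(delta, probs, rng)
          for _ in range(20000)) / 20000
print(est, math.e)
```

Because the level differences here decay geometrically while the level probabilities do too, each returned value stays bounded: exactly the variance-versus-cost balance that the stratified schemes above optimise.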


MLMC Part VI: Thursday August 18, 2:25–3:25, Berg A

The Population Dynamics Algorithm

Mariana Olvera-Cravioto
Columbia University
[email protected]

A variety of problems in science and engineering, ranging from population and statistical physics models to the study of queueing systems, computer networks and the internet, lead to the analysis of branching distributional equations. The solutions to these equations are not in general analytically tractable, and hence need to be computed numerically. This talk discusses a simulation algorithm known as “Population Dynamics”, which is designed to produce a pool of identically distributed observations having approximately the same law as the endogenous solution in a wide class of branching distributional equations.

The Population Dynamics algorithm repeatedly uses the bootstrap to move from one iteration of the branching distributional equation to the next, which dramatically reduces the exponential complexity of the naïve Monte Carlo approach. We present new results guaranteeing the convergence of the Wasserstein distance between the distribution of the pool generated by the algorithm and that of the true endogenous solution, including results on its rate of convergence.
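A minimal sketch of the bootstrap iteration, for a toy linear branching equation with a known mean (the fixed-point equation and all sizes are illustrative, not from the talk):

```python
import random

def population_dynamics(pool_size, iters, rng):
    """Population Dynamics for the toy branching equation
        R =d Q + 0.5*R1 + 0.25*R2,
    with Q ~ Exp(1) and R1, R2 iid copies of R (illustrative example;
    here E[R] = E[Q] / (1 - 0.75) = 4).  Each iteration bootstraps
    the previous pool instead of expanding the full recursion tree,
    so the cost per iteration is linear in the pool size."""
    pool = [0.0] * pool_size           # start the recursion from R = 0
    for _ in range(iters):
        pool = [rng.expovariate(1.0)
                + 0.5 * rng.choice(pool)
                + 0.25 * rng.choice(pool)
                for _ in range(pool_size)]
    return pool

rng = random.Random(3)
pool = population_dynamics(20000, 25, rng)
mean = sum(pool) / len(pool)
print(mean)
```

The naive alternative would sample the full binary recursion tree, whose size grows as 2^iters; the bootstrap keeps the cost at pool_size × iters, which is the complexity reduction described above.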

Multilevel Sequential Monte Carlo for Normalizing Constants

Kody J. H. Law
Oak Ridge National Laboratory
[email protected]

The age of data is upon us, and the importance of its incorporation into estimation schemes cannot be overstated. Bayesian inference provides a principled and well-defined approach to the integration of data into an a priori known distribution. The posterior distribution, however, is known only pointwise (possibly with an intractable likelihood) and up to a normalizing constant. Monte Carlo methods have been designed to deal with these cases, such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) samplers. Recently, the multilevel Monte Carlo framework has been extended to some of these cases. This talk will concern the recent development of multilevel SMC (MLSMC) samplers and the resulting estimators for standard quantities of interest [1] as well as normalizing constants [2].

Joint work with Alex Beskos (UCL), Pierre Del Moral (INRIA Bordeaux), Ajay Jasra (NUS), Raul Tempone (KAUST), and Yan Zhou (NUS).

[1] Alexandros Beskos, Ajay Jasra, Kody Law, Raul Tempone, and Yan Zhou. Multilevel sequential Monte Carlo samplers. arXiv preprint arXiv:1503.07259, 2015.

[2] Pierre Del Moral, Ajay Jasra, Kody Law, and Yan Zhou. Multilevel sequential Monte Carlo samplers for normalizing constants. arXiv preprint arXiv:1603.01136, 2016.


Nonlinear Monte Carlo Methods

Organizers: Lukasz Szpruch, University of Edinburgh, and Denis Belomestny, Universität Duisburg-Essen.

In recent years great progress has been made in expanding the applicability of Monte Carlo techniques to nonlinear problems arising in physics, biology and finance. In particular, the language of backward stochastic differential equations (BSDEs) and stochastic particle systems has allowed probabilistic interpretations of semi-linear and fully nonlinear PDEs. In this session we aim to overview the most recent results in the field.

The main topics are: stochastic particle systems, simulation-based techniques for BSDEs, and stochastic control and optimal stopping problems.

Nonlinear MC Part I: Monday August 15, 3:50–5:50, Berg B

Multilevel Monte Carlo for McKean-Vlasov SDEs

Lukasz Szpruch
University of Edinburgh
[email protected]

Stochastic interacting particle systems (SIPS) and their limiting McKean-Vlasov stochastic equations offer a very rich and versatile modelling framework. On the one hand, interactions allow us to capture complex dependence structures; on the other, they pose a great challenge for Monte Carlo simulations. The nonlinear dependence of the approximation bias on the statistical error makes classical variance reduction techniques fail in this setting. In this talk, we will devise a strategy relying on a fixed-point argument that allows constructing a novel SIPS with superior computational properties. In particular, we will establish a multilevel Monte Carlo estimator for SIPS that significantly improves on classical Monte Carlo estimators built on propagation of chaos results.
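For context, the plain particle approximation that such estimators build on replaces the law of X_t by the empirical measure of n coupled particles. A sketch for a toy mean-field SDE with a known stationary variance (the model is illustrative, not the one from the talk):

```python
import random
import math

def mv_particles(n, steps, T, sigma, x0, rng):
    """Euler scheme for the particle approximation of the toy
    McKean-Vlasov SDE  dX_t = -(X_t - E[X_t]) dt + sigma dW_t,
    where E[X_t] is replaced by the empirical mean of n interacting
    particles.  For this model E[X_t] stays at x0 and the variance
    relaxes to sigma**2 / 2."""
    h = T / steps
    x = [x0] * n
    for _ in range(steps):
        m = sum(x) / n                   # empirical mean-field term
        x = [xi - (xi - m) * h + sigma * rng.gauss(0.0, math.sqrt(h))
             for xi in x]
    return x

rng = random.Random(11)
x = mv_particles(5000, 200, T=5.0, sigma=1.0, x0=2.0, rng=rng)
mean = sum(x) / len(x)
var = sum((xi - mean) ** 2 for xi in x) / len(x)
print(mean, var)
```

The statistical error of the empirical mean feeds back into the drift of every particle, which is the nonlinear bias/variance interaction mentioned in the abstract.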

Monte Carlo Method for Optimal Management in Energy Markets

Plamen Turkedjiev
King's College London, UK
[email protected]

This paper is about the management of a power grid. The aim of power grid managers is to optimize their expected future profit. The methods of risk management are non-standard because the underlying asset, electricity, cannot be traded; this naturally leads to an incomplete financial market. Additionally, changing the production strategy incurs costs which one must consider in the risk analysis. We approach this optimal management problem by means of risk measure minimization. It involves a system of reflected backward stochastic differential equations, whose solution determines the optimal management strategy. We present a novel numerical method based on least-squares Monte Carlo in order to determine the management strategy in high dimensions. Unlike other approaches, our system of equations avoids quadratic growth, and therefore leads to a more stable system of equations for numerical methods.

This is joint work with Luciano Campi and Clemence Allasseur.


A Primal-Dual Algorithm for Backward SDEs

Christian Bender
Saarland University, Germany
[email protected]

We generalize the primal-dual methodology, which is popular in the pricing of early-exercise options, to a backward dynamic programming equation associated with time discretization schemes of backward stochastic differential equations (BSDEs). Taking as an input some approximate solution of the backward dynamic program, which was pre-computed, e.g., by least-squares Monte Carlo, this methodology enables us to construct a confidence interval for the unknown true solution of the time discretized BSDE. We numerically demonstrate the practical applicability of our method for some multidimensional nonlinear pricing problems where tight price bounds were previously unavailable.

The talk is based on joint work with N. Schweizer, J. Zhuo, and C. Gärtner.

Particle Methods for the Smile Calibration of Cross-Dependent Volatility Models

Julien Guyon
Bloomberg L.P.
[email protected]

In quantitative finance, pricing and hedging multi-asset derivatives requires using multi-asset models and calibrating them to liquid market smiles (e.g., the smiles of stocks and indices). The local volatilities in multi-asset models typically have no cross-asset dependency. In this talk, we propose a general framework for pricing and hedging derivatives in cross-dependent volatility (CDV) models, i.e., multi-asset models in which the volatility of each asset is a function of not only its current or past levels, but also those of the other assets. For instance, CDV models can capture that stock volatilities are driven by an index level, or recent index returns. We explain how to build all the CDV models that are calibrated to all the asset smiles, solving in particular the longstanding smiles calibration problem for the “cross-aware” multidimensional local volatility model. CDV models are rich enough to be simultaneously calibrated to other instruments, such as basket smiles, and we show that the model can fit a basket smile either by means of a correlation skew, like in the classical “cross-blind” multi-asset local volatility model, or using only the cross-dependency of volatilities itself, in a correlation-skew-free model, thus proving that steep basket skews are not necessarily a sign of correlation skew. The calibrated models follow nonlinear stochastic differential equations in the sense of McKean. All the calibration procedures use the particle method, a very robust and efficient Monte Carlo method for calibrating models to given marginal distributions. The calibration of the implied “local in basket” CDV uses a novel fixed point-compound particle method. Numerical results in the case of the FX smile triangle problem illustrate the efficiency of particle methods and the capabilities of CDV models.


Nonlinear MC Part II: Wednesday August 17, 10:25–11:55, LK 205/6

Small Size Data-Driven Regression Monte Carlo

Emmanuel Gobet
Ecole Polytechnique
[email protected]

Our goal is to solve certain dynamic programming equations associated to a given Markov chain X, using a regression-based Monte Carlo algorithm. This type of equation arises when computing the price of American options or solving nonlinear pricing rules. More specifically, we assume that the model for X is not known in full detail and only a root sample of size M of such a process is available. We are investigating a new method that bypasses the calibration step. By a stratification of the space and a suitable choice of a probability measure ν, we design a new resampling scheme that allows us to compute local regressions (on basis functions) in each stratum. The combination of the stratification and the resampling allows us to compute the solution to the dynamic programming equation (possibly in large dimensions) using only a relatively small set of root paths. To assess the accuracy of the algorithm, we establish non-asymptotic error estimates in L2(ν). Our numerical experiments illustrate the good performance, even with M = 20–40 root paths.

Variance Reduction Technique for Nonlinear Monte Carlo Problems

Denis Belomestny
Duisburg-Essen University
[email protected]

In this talk I present several variance reduction approaches which are applicable to a variety of challenging nonlinear Monte Carlo problems, like simulation-based optimal stopping or simulation of McKean-Vlasov-type equations. A distinctive feature of these approaches is that they allow for a significant reduction of the computational complexity not only up to a constant but also in order. The numerical performance of these approaches will also be illustrated.


Quadrature on the Sphere and Other Manifolds

Organizers: Johann Brauchart, Graz University of Technology, and Josef Dick, University of New South Wales.

Sphere plus, Part I: Wednesday August 17, 2:25–3:25, Berg A

Spherical Lattices: Explicit Construction of Low-Discrepancy Point Sets on the 2-Sphere

Johann S. Brauchart
Graz University of Technology, Austria
[email protected]

The aim of this talk is two-fold: firstly, to provide an introduction to the main theme of this minisymposium, 'quadrature on the sphere and other manifolds', and, secondly, to discuss recent progress in the construction of explicit spherical node sets derived from lattices in the plane.

It can be shown that an n-point node set with maximal sum of mutual distances has the smallest possible spherical cap L2-discrepancy among all n-point sets on the sphere, and is also the best possible choice for a QMC method for functions from a certain Sobolev space over the sphere (Brauchart and Dick, 2013). From a practical point of view, it is not feasible to solve the underlying highly nonlinear optimization problem for large n. Indeed, a central unresolved question is to give explicit constructions of node sets on the sphere that have provably good properties regarding numerical integration. One approach considered here is to lift to the sphere well-studied lattice constructions in the square.

Configurations of Points with Respect to Discrepancy and Uniform Distribution

Wöden Kusner
Graz University of Technology
[email protected]

We would like to find point sets on spheres with low discrepancy, and also of known discrepancy. There are some good algorithms to compute the discrepancy of point sets, and a number of candidate point sets to work with. For the purposes of applications in low dimensions, we can compare sets of several thousand points using the most naive algorithms in a reasonable amount of time.

We’ll describe some algorithms for explicitly computing discrepancy, analyze some problems in optimizing the quality of point sets, and look at some experimental data related to conjectured “good” point sets coming from a brute-force implementation.


Sphere plus, Part II: Thursday August 18, 3:50–5:50, Berg C

QMC Designs and Determinantal Point Processes

Masatake Hirao
Aichi Prefectural University, Japan
[email protected]

The concept of a quasi-Monte Carlo (QMC) design sequence was introduced by Brauchart et al. [1]. In this talk, we consider probabilistic generation of QMC design sequences by using determinantal point processes (DPPs), which are used in a fermion model in quantum mechanics and are also studied in probability theory. We show that the spherical ensemble on the 2-dimensional unit sphere S^2, a well-studied DPP (e.g., [2]), generates on average QMC design sequences for the Sobolev space H^s(S^2) with 1 < s < 2. We also show that DPPs defined by reproducing kernels for polynomial spaces over the d-dimensional unit sphere S^d generate on average a better convergent sequence for H^s(S^d) with d/2 + 1/2 < s < d/2 + 1. Moreover, we deal with DPPs defined on other manifolds and related results.

[1] J. Brauchart, E. Saff, I. Sloan, and R. Womersley. QMC designs: optimal order quasi-Monte Carlo integration schemes on the sphere. Math. Comp., 83(290):2821–2851, April 2014.

[2] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Zeros of Gaussian Analytic Functions and Determinantal Point Processes. American Mathematical Society, Providence, RI, 2009.
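The spherical ensemble mentioned above can be sampled directly: it consists of the eigenvalues of B⁻¹A, for independent complex Ginibre matrices A and B, mapped to the sphere by inverse stereographic projection (see [2]). A sketch assuming NumPy; the sample size is illustrative:

```python
import numpy as np

def spherical_ensemble(n, rng):
    """Sample n points of the spherical ensemble: the eigenvalues of
    B^{-1} A, with A, B iid n x n complex Ginibre matrices, mapped to
    the unit sphere S^2 by inverse stereographic projection.  The
    resulting point process is determinantal and rotation-invariant."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lam = np.linalg.eigvals(np.linalg.solve(B, A))  # eigenvalues of B^{-1}A
    x, y = lam.real, lam.imag
    r2 = x ** 2 + y ** 2
    # inverse stereographic projection from the plane to the unit sphere
    return np.column_stack([2 * x, 2 * y, r2 - 1]) / (r2 + 1)[:, None]

rng = np.random.default_rng(0)
pts = spherical_ensemble(200, rng)
print(pts.shape)
```

The eigenvalue repulsion of the determinantal structure is what spreads the points more evenly than iid sampling, which is the mechanism behind the average-case QMC design property.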

Fast Framelet Transforms on Manifolds

Yu Guang Wang
City University of Hong Kong, Hong Kong
[email protected]

Data with some structure in practical applications can be viewed as sampled from a manifold, for instance, data on graphs and in astrophysics. Important examples are smooth and compact Riemannian manifolds M, including spheres, tori, cubes and graphs. In this work, we construct a type of tight framelets using quadrature rules on M to represent the data (or a function) and exploit the derived framelets to process the data (for instance, inpainting an image on M).

One critical computation for framelets is to compute, from the framelet coefficients of the input data (which are assumed given at the highest level), the framelet coefficients at lower levels, and also to evaluate the function values at new nodes using the framelet representation. We design an efficient computational strategy, called the fast framelet transform (FMT), to compute the framelet coefficients and to recover the function. Assuming a fast Fourier transform (FFT) on the manifold M, the FMT has the same computational complexity as the FFT. Numerical examples illustrate the efficiency and accuracy of the algorithm for the framelets.

This is joint work with Xiaosheng Zhuang (City University of Hong Kong, Hong Kong).


Diameter Bounded Equal Measure Partitions of Compact Manifolds

Giacomo Gigante
University of Bergamo, Italy
[email protected]

The algorithm devised by Feige and Schechtman [5] for partitioning higher dimensional spheres into regions of equal measure and small diameter is combined with David and Christ's construction of dyadic cubes [3, 4] to yield a partition algorithm suitable for any connected Ahlfors regular metric measure space of finite measure. As a particular case, we show the following result.

Theorem 1. Let (X, ρ, µ) be a smooth compact connected d-dimensional Riemannian manifold without boundary. Then there exist positive constants c_1 and c_2 such that for every sufficiently large N, there is a partition of X into N regions of measure µ(X)/N, each contained in a ball of radius c_1 N^{-1/d} and containing a ball of radius c_2 N^{-1/d}.

Several recent results on good quadrature rules on manifolds [1, 2] are based precisely on the possibility of obtaining one such partition.

This is joint work with Paul Leopardi (Australian Bureau of Meteorology).

[1] L. Brandolini, W. W. L. Chen, L. Colzani, G. Gigante, G. Travaglini, Discrepancy and numerical integration in Sobolev spaces on metric measure spaces, arXiv:1308.6775v2.

[2] L. Brandolini, C. Choirat, L. Colzani, G. Gigante, R. Seri, G. Travaglini, Quadrature rules and distribution of points on manifolds, Ann. Sc. Norm. Super. Pisa Cl. Sci., 13 (2014), 889–923.

[3] M. Christ, A T(b) theorem with remarks on analytic capacity and the Cauchy integral, Colloq. Math., 60/61 (1990), 601–628.

[4] G. David, Morceaux de graphes lipschitziens et intégrales singulières sur une surface, Rev. Mat. Iberoamericana, 4 (1988), 73–114.

[5] U. Feige and G. Schechtman, On the optimality of the random hyperplane rounding technique for MAX CUT, Random Struct. Alg., 20 (2002), 403–440. Special Issue: Probabilistic Methods in Combinatorial Optimization.

Covering Radii of Various Point Configurations Distributed over the Unit Sphere

Alexander Reznikov
Vanderbilt University
[email protected]

For a fixed positive integer N we can place N independent random points on the unit sphere, distributed with respect to its surface measure. Another way is to place N points that minimize a certain discrete energy, or maximize a certain discrete polarization (or Chebyshev constant). For these methods we compare how well the obtained N points cover the sphere. It turns out that, even though some deterministic configurations cover the sphere better than random ones, the difference is not too big.

This is joint work with E. Saff and A. Volberg.
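The comparison can be illustrated numerically: estimate the covering radius (the largest geodesic distance from a point of the sphere to the configuration) for iid uniform points and for a deterministic configuration, against a fine probe grid. The Fibonacci spiral and all sizes below are illustrative choices, not the energy-minimizing configurations analysed in the talk:

```python
import math
import random

def random_sphere_points(n, rng):
    """n iid uniform points on S^2 via normalised Gaussian vectors."""
    pts = []
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        r = math.sqrt(sum(c * c for c in v))
        pts.append(tuple(c / r for c in v))
    return pts

def fibonacci_sphere_points(n):
    """A standard deterministic spiral configuration on S^2."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0
    pts = []
    for k in range(n):
        z = 1.0 - (2.0 * k + 1.0) / n
        theta = 2.0 * math.pi * k / phi
        r = math.sqrt(max(0.0, 1.0 - z * z))
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

def covering_radius(pts, probe):
    """Largest geodesic distance from a probe direction to the set."""
    worst = 0.0
    for q in probe:
        nearest = max(sum(a * b for a, b in zip(p, q)) for p in pts)
        worst = max(worst, math.acos(max(-1.0, min(1.0, nearest))))
    return worst

rng = random.Random(5)
probe = fibonacci_sphere_points(2000)   # fine grid of test directions
r_rand = covering_radius(random_sphere_points(200, rng), probe)
r_fib = covering_radius(fibonacci_sphere_points(200), probe)
print(r_rand, r_fib)
```

Consistent with the result described above, the deterministic configuration covers better, but the random one is worse only by a modest factor, not by an order of magnitude.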


Stochastic Computation and Complexity

Organizers: Stefan Heinrich, University of Kaiserslautern, Germany; Thomas Mueller-Gronbach, University of Passau, Germany; and Paweł Przybyłowicz, AGH University of Science and Technology, Krakow, Poland.

SC&C Part I: Monday August 15, 10:25–11:55, Berg B

Complexity of Parametric Stochastic Integration in Hölder Classes

Stefan Heinrich
University of Kaiserslautern, Germany
[email protected]

We study the complexity of pathwise approximation of the stochastic Itô integral of functions depending on a parameter. We consider functions belonging to Hölder classes, that is, functions that are r-times differentiable and whose highest derivatives satisfy some Hölder condition. This work continues previous investigations on function classes of mixed smoothness. We present multilevel algorithms and analyze their convergence rate, using the interplay between Banach space valued and parametric problems.

The classes considered require more subtle lower bound methods. We discuss this aspect in some detail (along with a brief introduction to lower bound concepts of information-based complexity theory). The results show the order optimality of the proposed algorithm and settle the complexity of the problem.

A Coercivity-Type Condition for SPDEs with Additive White Noise

Martin Hutzenthaler
University of Duisburg-Essen
[email protected]

In this talk we explain a new explicit, easily implementable, and fully discrete accelerated exponential Euler-type approximation scheme for additive space-time white noise driven stochastic partial differential equations (SPDEs) with possibly non-globally monotone nonlinearities, such as stochastic Kuramoto-Sivashinsky equations. The main result proves that the proposed approximation scheme converges strongly and numerically weakly to the solution process of such an SPDE. Key ingredients in the proof of our convergence result are a suitable generalized coercivity-type condition, the specific design of the accelerated exponential Euler-type approximation scheme, and an application of Fernique's theorem.

The talk is based on joint work with Arnulf Jentzen and Diyora Salimova.


Adaptive Approximation of the Minimum of Brownian Motion

André Herzwurm
TU Kaiserslautern, Germany
[email protected]

We study the L_p-error in approximating the minimum of a Brownian motion on the unit interval based on finitely many point evaluations. We construct an algorithm that adaptively chooses the points at which to evaluate the Brownian path. In contrast to the 1/2 convergence rate of optimal nonadaptive algorithms, the proposed adaptive algorithm converges at an arbitrarily high polynomial rate. The problem of approximating the minimum of a stochastic process naturally arises in the context of numerics for reflected stochastic differential equations. In particular, the proposed algorithm can be applied to the squared Bessel process of dimension 1, which is an instance of a Cox-Ingersoll-Ross process where the boundary point zero is accessible.

This is joint work with James M. Calvin (New Jersey Institute of Technology, USA) and Mario Hefter (TU Kaiserslautern, Germany).
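As a point of reference for the 1/2 rate mentioned above, here is a minimal sketch (an illustration, not the adaptive algorithm of the talk) of the nonadaptive baseline: evaluate the Brownian path at n equidistant points and take the minimum of the observed values.

```python
import numpy as np

def brownian_grid_minimum(n, rng):
    """Simulate a Brownian path at n equidistant points of [0, 1] and return
    the minimum of the observed values; this nonadaptive estimate of the
    path minimum has L_p error of order n^(-1/2)."""
    increments = rng.standard_normal(n) * np.sqrt(1.0 / n)
    path = np.concatenate(([0.0], np.cumsum(increments)))
    return float(path.min())

m = brownian_grid_minimum(1000, np.random.default_rng(0))
print(m)  # nonpositive, since B(0) = 0 is among the observed values
```

The adaptive method of the talk improves on this by concentrating further evaluations near the current running minimum.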

SC&C Part II: Monday, August 15, 2:25–3:25, Berg B

The Order Barrier for Strong Approximation of Rough Volatility Models

Andreas Neuenkirch
University of Mannheim, Germany
[email protected]

We study the strong approximation of a rough volatility model, in which the log-volatility is given by a fractional Ornstein-Uhlenbeck process with Hurst parameter H < 1/2. Our methods are based on an equidistant discretization of the volatility process and of the driving Brownian motions, respectively. For the root mean-square error at a single point, the optimal rate of convergence that can be achieved by such methods is n^{-H}, where n denotes the number of subintervals of the discretization. This rate is in particular obtained by the Euler method and an Euler-trapezoidal type scheme.

This is joint work with Taras Shalaiko (University of Mannheim).

Strong Convergence Rates for Cox-Ingersoll-Ross Processes: Full Parameter Range

Mario Hefter
TU Kaiserslautern, Germany
[email protected]

In recent years, strong (pathwise) approximation of stochastic differential equations (SDEs) has been intensively studied for SDEs of the form

dX_t = (a − bX_t) dt + σ √(X_t) dW_t,

with a scalar Brownian motion W and parameters a, σ > 0, b ∈ R. These SDEs are used to describe the volatility in the Heston model and the interest rate in the Cox-Ingersoll-Ross model. In the particular case b = 0 and σ = 2, the solution is a squared Bessel process of dimension a.

We propose a tamed Milstein scheme Y^N, which uses N ∈ N values of the driving Brownian motion W, and prove positive polynomial convergence rates for all parameter constellations. More precisely, we show that for every 1 ≤ p < ∞ and every ε > 0 there exists a constant C > 0 such that

sup_{0≤t≤1} (E|X_t − Y_t^N|^p)^{1/p} ≤ C · 1/N^{min(1,δ)/(2p) − ε}

for all N ∈ N, where δ = 4a/σ².

This is joint work with André Herzwurm (TU Kaiserslautern, Germany).
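For readers who want to experiment, the following sketch simulates the SDE above with a classical full-truncation Euler scheme (a simpler method than the tamed Milstein scheme of the talk; the scheme choice and the parameter values are illustrative assumptions):

```python
import numpy as np

def cir_euler_paths(x0, a, b, sigma, n_steps, n_paths, rng):
    """Full-truncation Euler scheme for dX = (a - bX)dt + sigma*sqrt(X)dW
    on [0, 1]: negative excursions are truncated inside the square root."""
    dt = 1.0 / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        x = x + (a - b * x) * dt + sigma * np.sqrt(np.maximum(x, 0.0)) * dw
    return x

rng = np.random.default_rng(42)
x1 = cir_euler_paths(1.0, 0.5, 1.0, 0.3, 500, 2000, rng)
# The exact mean is E[X_1] = a/b + (x0 - a/b) e^{-b} ~ 0.684 here.
print(x1.mean())
```

Near the boundary (small δ = 4a/σ²) such simple schemes degrade, which is exactly the regime the order barrier and the tamed scheme of the talk address.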


SC&C Part III: Tuesday, August 16, 2:25–3:25, Berg B

A New Generation of Explicit Numerical Schemes for SDEs with Superlinear Coefficients

Sotirios Sabanis
The University of Edinburgh, UK
[email protected]

Convergence results for a new class of explicit, high-order schemes, which approximate stochastic differential equations (SDEs) with superlinear coefficients, will be presented. The theoretical results will be illustrated through the implementation of these algorithms on known nonlinear models. Some relevant references are below.

Joint work with Chaman Kumar (Edinburgh U., U.K.).

[1] K. Dareiotis, C. Kumar and S. Sabanis. On tamed Euler approximations of SDEs driven by Lévy noise with applications to delay equations. SIAM Journal on Numerical Analysis, 54(3):1840–1872, 2016.

[2] I. Gyongy, S. Sabanis and D. Siska. Convergence of tamed Euler schemes for a class of stochastic evolution equations. Stochastic Partial Differential Equations: Analysis and Computations, 4(2):225–245, 2016.

[3] S. Sabanis. Euler approximations with varying coefficients: the case of superlinearly growing diffusion coefficients. To appear, Annals of Applied Probability, 2016.

On Non-Polynomial Lower Error Bounds for Adaptive Strong Approximation of SDEs with Bounded C∞-Coefficients

Larisa Yaroslavtseva
University of Passau, Germany
[email protected]

Recently it has been shown in [2] that there exist SDEs with bounded C∞-coefficients such that no approximation method based on finitely many observations of the driving Brownian motion at fixed time points converges in absolute mean to the solution with a polynomial rate. This result does not cover methods that may choose the number as well as the location of the evaluation nodes of the driving Brownian motion in a path-dependent way, e.g. methods with a sequential step-size control. For SDEs with globally Lipschitz continuous coefficients it is well known that using a path-dependent step-size control typically does not lead to a better rate of convergence compared to what is best possible for non-adaptive algorithms, see e.g. [3]. However, in the case of the one-dimensional squared Bessel process, which is the solution of an SDE with a non-globally Lipschitz continuous diffusion coefficient, an adaptive method has recently been constructed in [1] which converges to the solution even at a superpolynomial rate, while non-adaptive algorithms achieve rate 1/2 at best. In this talk we show that a similar positive result does not hold for the SDEs from [2]. In fact, for these SDEs a polynomial rate of convergence cannot be achieved by any kind of adaptive algorithm.

[1] M. Hefter, A. Herzwurm. Optimal strong approximation of the one-dimensional squared Bessel process. arXiv:1601.01455.

[2] A. Jentzen, T. Müller-Gronbach, L. Yaroslavtseva. On stochastic differential equations with arbitrarily slow convergence rates for strong approximation. To appear in Communications in Mathematical Sciences.

[3] T. Müller-Gronbach. Optimal pointwise approximation of SDEs based on Brownian motion at discrete points. Ann. Appl. Probab. 14:1605–1642, 2004.


SC&C Part IV: Wednesday, August 17, 2:25–3:25, Berg B

A Derivative-Free Milstein Scheme for SPDEs with Commutative Noise

Andreas Rößler
Universität zu Lübeck, Germany
[email protected]

We consider the problem of approximating mild solutions of stochastic partial differential equations (SPDEs) in the mean-square sense. To this end, an infinite-dimensional version of a derivative-free Milstein scheme for the time discretization is proposed. The introduced scheme can be applied to a certain class of semilinear SPDEs in the case of commutative noise. In this case, only increments of the driving Q-Wiener process have to be approximated, which makes the numerical scheme easy to implement. As a novelty, the derivative-free Milstein scheme also turns out to be efficiently applicable in the general case of non-multiplicative diffusion operators. Moreover, the order of convergence and the efficiency of the new scheme will be discussed.

This is joint work with Claudine Leonhard.

Minimal Asymptotic Errors for Strong Global Approximation of SDEs Driven by Poisson and Wiener Processes

Paweł Przybyłowicz
AGH University of Science and Technology, Krakow, Poland
[email protected]

We report on our recent results on the strong global approximation of SDEs driven by two independent processes: a nonhomogeneous Poisson process and a Wiener process. We assume that the jump and diffusion coefficients of the underlying SDE satisfy a jump commutativity condition. We establish the exact convergence rate of minimal errors that can be achieved by arbitrary algorithms based on a finite number of observations of the Poisson and Wiener processes. We consider classes of methods that use equidistant or nonequidistant sampling of the Poisson and Wiener processes. We provide a construction of optimal methods, based on the classical Milstein scheme, which asymptotically attain the established minimal errors. The analysis implies that methods based on a nonequidistant mesh are more efficient than those based on the equidistant mesh. We also discuss methods based on adaptive stepsize control.


SC&C Part V: Thursday, August 18, 2:25–3:25, Berg B

Optimal Pointwise Solution of Stochastic Differential Equations in the Presence of Informational Noise

Paweł Morkisz
AGH University of Science and Technology, Krakow, Poland
[email protected]

We study a pointwise approximation of solutions of systems of stochastic differential equations. We assume that an approximation method can use values of the drift and diffusion coefficients which are perturbed by some deterministic noise. Let K1, K2 ≥ 0 be the precision levels for the drift and the diffusion coefficients, respectively. We give a construction of the randomized Euler scheme and we prove that its error is O(n^{−min(ϱ,1/2)} + K1 + K2) as n → +∞ and K1, K2 → 0, where n is the number of discretization points and ϱ is the Hölder exponent of the diffusion coefficient. We also investigate lower bounds on the error of an arbitrary algorithm and establish optimality of the defined randomized Euler algorithm. Finally, we report some numerical results.

We generalize the results of [1] and [2]. Those papers concern problems for ODEs and SDEs, respectively, when exact information is available.

Joint work with Paweł Przybyłowicz.

[1] A. Jentzen, A. Neuenkirch, A random Euler scheme for Carathéodory differential equations, J. Comput. Appl. Math. 224 (2009), 346–359.

[2] P. Przybyłowicz, P. Morkisz, Strong approximation of solutions of stochastic differential equations with time-irregular coefficients via randomized Euler algorithm, Appl. Numer. Math. 78 (2014), 80–94.
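A minimal sketch of the randomized Euler idea in the simplest setting (an illustration under strong simplifying assumptions: exact information, no diffusion term; not the algorithm analyzed in the talk): in each subinterval the right-hand side is evaluated at a uniformly drawn random node rather than the left endpoint, which is what helps for time-irregular coefficients.

```python
import numpy as np

def randomized_euler(f, x0, n, rng):
    """Randomized Euler scheme for x'(t) = f(t, x(t)) on [0, 1]: in each
    subinterval the right-hand side is evaluated at a uniformly drawn
    random node, which helps for time-irregular right-hand sides."""
    dt = 1.0 / n
    x = x0
    for k in range(n):
        t = (k + rng.random()) * dt  # random node in [k*dt, (k+1)*dt]
        x = x + dt * f(t, x)
    return x

rng = np.random.default_rng(0)
approx = randomized_euler(lambda t, x: x, 1.0, 10_000, rng)  # x' = x, x(0) = 1
print(approx)  # approximately e = 2.71828...
```

The scheme of the talk additionally draws the Brownian increments and works with noise-perturbed coefficient values.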

General Multilevel Adaptations for Stochastic Approximation Algorithms

Thomas Mueller-Gronbach
University of Passau, Germany
[email protected]

We present and analyse multilevel adaptations of stochastic approximation algorithms. In contrast to the classical application of multilevel Monte Carlo algorithms, we are now dealing with a parameterised family of expectations and we want to find parameters with zero expectation. Our analysis is carried out under mild assumptions with respect to the family of expectations and nearly classical assumptions with respect to the multilevel approach. In particular, we prove upper bounds for the p-th mean error of our methods in terms of their computational cost, which coincide with the bounds known for the classical problem of approximating a single expectation with the multilevel approach.

Joint work with Steffen Dereich (University of Münster, Germany).


Special sessions

Special sessions consist of 3 or 4 related talks put together by one or more organizers. These special sessions are described on the following pages:

• Advances in Importance Sampling

• Approximate Bayesian Computation

• Bayesian Computation with Intractable Normalizing Constants

• Combinatorial Discrepancy

• Construction and Error Analysis for Quadrature Rules

• Constructions of Quadrature Rules

• Exact Sampling of Markov Processes

• Experimental Design for Simulation Experiments

• Function Space Embeddings for High Dimensional Problems

• Hamiltonian Monte Carlo in Theory and Practice

• High Dimensional Numerical Integration

• Higher Order Quadrature Rules

• Monte Carlo for Financial Sensitivity Analysis

• Monte Carlo Sampling from Multi-Modal Distributions

• Preasymptotics for Multivariate Approximation

• Probabilistic Numerics

• Quantum Computing and Monte Carlo Simulations

• Stochastic Maps and Transport Approaches for Monte Carlo

• Systematic Risk and Financial Networks


Advances in Importance Sampling

Organizer: Ingmar Schuster, Université de Paris, Dauphine.

Recently, there has been renewed interest in Importance Sampling, with results that go far beyond the state of the art of the early nineties, when research focus shifted to MCMC. These results include theoretical advances in the analysis of convergence conditions and convergence assessment on one side. On the other, an overarching Multiple Importance Sampling framework has been proposed, as well as IS based on piecewise deterministic processes, which allows, amongst other things, data subsampling and incorporating gradient information.

Wednesday, August 17, 10:25–11:55, Berg B

Generalized Multiple Importance Sampling

Víctor Elvira
University Carlos III of Madrid
[email protected]

Importance sampling methods are broadly used to approximate posterior distributions or some of their moments. In the standard approach, samples are drawn from a single proposal distribution and weighted properly. However, since the performance depends on the mismatch between the target and the proposal distributions, several proposal densities are often employed for the generation of samples. Under this Multiple Importance Sampling (MIS) scenario, many works have addressed the selection or adaptation of the proposal distributions, interpreting the sampling and the weighting steps in different ways. In this paper, we establish a novel general framework for sampling and weighting procedures when more than one proposal is available. The most relevant MIS schemes in the literature are encompassed within the new framework and, moreover, novel valid schemes appear naturally. All the MIS schemes are compared and ranked in terms of the variance of the associated estimators. Finally, we provide illustrative examples which reveal that, even with a good choice of the proposal densities, a careful interpretation of the sampling and weighting procedures can make a significant difference in the performance of the method.

Joint work with L. Martino (University of Valencia), D. Luengo (Universidad Politecnica de Madrid) and M. F. Bugallo (Stony Brook University of New York).
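One concrete scheme covered by such frameworks is the deterministic-mixture ("balance heuristic") weighting. The following sketch (an illustration, not code from the paper; the target, proposals and estimand are made up) uses two Gaussian proposals to estimate a second moment under a standard normal target:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target: standard normal. We estimate E[X^2] = 1 by importance sampling
# with two (deliberately mismatched) Gaussian proposals.
target = stats.norm(0.0, 1.0)
proposals = [stats.norm(-1.0, 2.0), stats.norm(1.0, 2.0)]

n_per = 5000  # number of samples drawn from each proposal
samples = np.concatenate([p.rvs(size=n_per, random_state=rng) for p in proposals])

# Balance-heuristic (deterministic mixture) weights: target density divided
# by the equal-weight mixture of all proposal densities.
mixture = sum(p.pdf(samples) for p in proposals) / len(proposals)
weights = target.pdf(samples) / mixture
estimate = float(np.mean(weights * samples**2))
print(estimate)  # close to E[X^2] = 1
```

Replacing the mixture denominator by each proposal's own density gives the "standard MIS" weighting, one of the alternative interpretations the framework compares.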

The Sample Size Required in Importance Sampling

Sourav Chatterjee
Stanford University
[email protected]

I will talk about a recent result, obtained in joint work with Persi Diaconis, about the sample size required for importance sampling. If an i.i.d. sample from a probability measure P is used to estimate expectations with respect to a probability measure Q using the importance sampling technology, the result says that the required sample size is exp(K), where K is the Kullback-Leibler divergence of P from Q. If the sample size is smaller than this threshold, the importance sampling estimates may be far from the truth, while if the sample size is larger, the estimates are guaranteed to be close to the truth.
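The exp(K) threshold can be made concrete for a pair of unit-variance Gaussians, where the Kullback-Leibler divergence is μ²/2 in closed form (a toy illustration, not the speakers' experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0                 # target Q = N(mu, 1); we sample from P = N(0, 1)
kl = mu**2 / 2.0         # KL divergence between these unit-variance Gaussians
threshold = np.exp(kl)   # the exp(K) sample-size threshold, about 7.4 here

def is_estimate(n):
    """Estimate E_Q[X] = mu from n i.i.d. N(0,1) draws via importance sampling."""
    x = rng.standard_normal(n)
    weights = np.exp(mu * x - mu**2 / 2.0)  # likelihood ratio dQ/dP at x
    return float(np.sum(weights * x) / n)

est = is_estimate(100_000)  # n far above exp(K)
print(threshold, est)       # with this many samples the estimate is close to mu
```

Increasing mu makes exp(K) grow rapidly, and sample sizes below it yield estimates dominated by a few huge weights.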


Continuous Time Importance Sampling for Jump Diffusions with Application to Maximum Likelihood Estimation

Krzysztof Łatuszyński
University of Warwick
[email protected]

In the talk I will present a novel algorithm for sampling multidimensional irreducible jump diffusions that, unlike methods based on time discretisation, is unbiased. The algorithm can be used as a building block for maximum likelihood parameter estimation. The approach is illustrated by numerical examples of financial models like the Merton or double-jump model, where its efficiency is compared to methods based on the standard Euler discretisation and Multilevel Monte Carlo.

Joint work with Sylvain Le Corff, Paul Fearnhead, Gareth O. Roberts, and Giorgios Sermaidis.


Approximate Bayesian Computation

Organizer: Scott Sisson, University of New South Wales.

Many scientific problems feature intractable likelihoods that cannot be evaluated, complicating the task of Bayesian inference. Approximate Bayesian Computation (ABC) is a computationally intensive alternative that is available when one can simulate from the distribution underlying the likelihood.

Tuesday, August 16, 10:25–11:55, Berg A

Bayesian Synthetic Likelihood

Christopher Drovandi
Queensland University of Technology, Australia
[email protected]

Having the ability to work with complex models can be highly beneficial. However, complex models often have intractable likelihoods, so methods that involve evaluation of the likelihood function are infeasible. In these situations, the benefits of working with likelihood-free methods become apparent. One of these methods is the synthetic likelihood (SL), which uses a multivariate normal approximation of the distribution of a set of summary statistics. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison to a competitor known as approximate Bayesian computation (ABC), as well as its sensitivity to its tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose to use an alternative SL that employs an unbiased estimator of the SL when the summary statistics have a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings.

Joint work with Leah Price (Queensland University of Technology, Australia), David Nott (National University of Singapore), and Anthony Lee (University of Warwick, United Kingdom).
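The plug-in construction is short to write down. The following toy sketch (an illustration with assumed summaries and a made-up model, not the authors' code) computes a synthetic log-likelihood for a N(θ, 1) model summarized by the sample mean and log sample variance:

```python
import numpy as np
from scipy import stats

def summaries(data):
    """Summary statistics: sample mean and log sample variance."""
    return np.array([data.mean(), np.log(data.var())])

def synthetic_loglik(theta, s_obs, m=200, n=100, seed=2):
    """Synthetic log-likelihood for a toy N(theta, 1) model: fit a
    multivariate normal to the summaries of m simulated datasets of
    size n, then evaluate the observed summaries under that fit."""
    rng = np.random.default_rng(seed)
    sims = np.array([summaries(rng.normal(theta, 1.0, size=n)) for _ in range(m)])
    mvn = stats.multivariate_normal(sims.mean(axis=0), np.cov(sims.T))
    return float(mvn.logpdf(s_obs))

s_obs = summaries(np.random.default_rng(1).normal(0.5, 1.0, size=100))
ll = {t: synthetic_loglik(t, s_obs) for t in (0.0, 0.5, 2.0)}
print(ll)  # the synthetic log-likelihood should peak near the true theta = 0.5
```

Plugging such an evaluation into an MCMC loop gives the Bayesian synthetic likelihood sampler the abstract compares with ABC.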

Synthetic Likelihood Variational Bayes

Minh-Ngoc Tran
University of Sydney Business School
[email protected]

Synthetic likelihood is an attractive approach to likelihood-free inference when an approximately Gaussian summary statistic for the data, informative for inference about the parameters, is available. The synthetic likelihood method uses a plug-in normal density estimate for the summary statistic, with the plug-in mean and covariance matrix obtained by simulation at each parameter value. A pseudo-marginal implementation of Bayesian synthetic likelihood has been introduced recently, where an unbiased estimate of the likelihood is employed. In this setting, if the summary statistic is Gaussian, the distribution targeted by the MCMC scheme does not depend on the number of Monte Carlo samples used to estimate the mean and covariance matrix. However, to obtain satisfactory mixing of the pseudo-marginal algorithm it is important that the likelihood estimate is not too variable, which means that a large number of simulations is still required for each synthetic likelihood evaluation, particularly when the summary statistic is high-dimensional. Here we consider implementing stochastic gradient variational Bayes methods for posterior approximation with the synthetic likelihood, employing unbiased estimates of the log-likelihood. The main interest of this idea is that the stochastic gradient optimization is much more tolerant of noise in the log-likelihood estimate, so that fewer simulations can be used in each synthetic log-likelihood evaluation compared to the number required in pseudo-marginal Bayesian synthetic likelihood. We compare the pseudo-marginal implementation with the synthetic likelihood variational Bayes (SLVB) approach and an existing variational approach to likelihood-free inference from the literature in a number of examples.

Joint work with V. Ong, D. Nott, S. Sisson and C. Drovandi.

Inference via Bayesian Synthetic Likelihoods for a Mixed-Effects SDE Model of Tumor Growth and Response to Treatment

Umberto Picchini
Centre for Mathematical Sciences, Lund University, Sweden
[email protected]

We consider parameter inference for a state-space model for repeated measurements obtained on several subjects. We construct a nonlinear mixed-effects model where individual dynamics are expressed by stochastic differential equations (SDEs). Nonlinear SDE state-space models are notoriously difficult to fit; however, advancements in parameter estimation using sequential Monte Carlo methods have made the task in principle approachable, if not always straightforward. While each individual (subject-specific) contribution to the overall likelihood function for the "population of subjects" can be approximated using sequential Monte Carlo, here we propose a global approximation performed directly on the entire likelihood function. We use summary statistics to encode information pertaining to between-subjects variation, as well as individual variation, and resort to a synthetic likelihood approximation [1]. In particular, we take advantage of the Bayesian (pseudo-marginal) synthetic likelihood approach in [2].

We consider longitudinal data from measurements of tumor volumes in mice. Mice are divided into five groups and administered different treatments. We postulate a two-compartment model to represent the fraction of volume pertaining to the compartment of tumor cells killed by the treatment and the fraction of cells still alive. The dataset is challenging because of sparse observations. Inference via synthetic likelihoods is then compared with exact Bayesian inference from a standard pseudo-marginal approach. Consistently for both approaches, our model is able to identify a specific treatment as more effective in reducing tumor growth.

Joint work with Julie Lyng Forman (Dept. Biostatistics, University of Copenhagen, Denmark).

[1] S. Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature 466(7310):1102–1104, 2010.

[2] L. Price, C. Drovandi, A. Lee and D. Nott. Bayesian synthetic likelihood, http://eprints.qut.edu.au/92795/, 2016.


Bayesian Computation with Intractable Normalizing Constants

Organizer: Faming Liang, University of Florida.

Friday, August 19, 10:25–11:55, Berg B

An MCMC Approach to Empirical Bayes Inference and Bayesian Sensitivity Analysis via Empirical Processes

Hani Doss
University of Florida
[email protected]

Consider a Bayesian situation in which we observe Y ∼ p_θ, where θ ∈ Θ, and we have a family {ν_h, h ∈ H} of potential prior distributions on Θ. Let g be a real-valued function of θ, and let I_g(h) be the posterior expectation of g(θ) when the prior is ν_h. We are interested in two problems: (i) selecting a particular value of h, and (ii) estimating the family of posterior expectations {I_g(h), h ∈ H}. Let m_y(h) be the marginal likelihood of the hyperparameter h: m_y(h) = ∫ p_θ(y) ν_h(dθ). The empirical Bayes estimate of h is, by definition, the value of h that maximizes m_y(h). It turns out that it is typically possible to use Markov chain Monte Carlo to form point estimates for m_y(h) and I_g(h) for each individual h in a continuum, and also confidence intervals for m_y(h) and I_g(h) that are valid pointwise. However, we are interested in forming estimates, with confidence statements, of the entire families of integrals {m_y(h), h ∈ H} and {I_g(h), h ∈ H}: we need estimates of the first family in order to carry out empirical Bayes inference, and we need estimates of the second family in order to do Bayesian sensitivity analysis. We establish strong consistency and functional central limit theorems for estimates of these families by using tools from empirical process theory. As an application, we consider Latent Dirichlet Allocation, which is heavily used in topic modelling.

The two-dimensional hyperparameter that governs this model has a large impact on inference. We show how our methodology can be used to select it. We also show how to form globally valid "two-dimensional confidence bands" for certain posterior probabilities of interest, and give an illustration on a real data set.

Joint work with Yeonhee Park (MD Anderson Cancer Center).

A Bayesian Hierarchical Spatial Model for Dental Caries Assessment Using Non-Gaussian Markov Random Fields

Ick Hoon Jin
University of Notre Dame
[email protected]

Research in dental caries generates data with two levels of hierarchy: that of a tooth overall and that of the different surfaces of the tooth. The outcomes often exhibit spatial referencing among neighboring teeth and surfaces, i.e., the disease status of a tooth or surface might be influenced by the status of a set of proximal teeth/surfaces. Assessments of dental caries (tooth decay) at the tooth level yield binary outcomes indicating the presence/absence of teeth, and trinary outcomes at the surface level indicating healthy, decayed, or filled surfaces. The presence of these mixed discrete responses complicates the data analysis under a unified framework. To mitigate complications, we develop a Bayesian two-level hierarchical model under suitable (spatial) Markov random field assumptions that accommodates the natural hierarchy within the mixed responses. At the first level, we utilize an autologistic model to accommodate the spatial dependence of the tooth-level binary outcomes. At the second level, conditioned on a tooth being non-missing, we utilize a Potts model to accommodate the spatial referencing of the surface-level trinary outcomes. The regression models at both levels are controlled for plausible covariates (risk factors) of caries and remain connected through shared parameters. To tackle the computational challenges in our Bayesian estimation scheme caused by the doubly-intractable normalizing constant, we employ a double Metropolis-Hastings sampler. We compare and contrast our model's performance to the standard non-spatial (naive) model using a small simulation study, and illustrate via an application to a clinical dataset on dental caries.

A Latent Potts Model for Cancer Pathological Data Analysis

Qianyun Li
University of Florida
[email protected]

Nowadays, much biological data is acquired via images. In this project, we study pathological images scanned from 205 lung cancer patients, with the goal of finding the relationship between the distribution of different types of cells, including lymphocytes, stroma and tumor cells, and the survival time.

Toward this goal, we model the distribution of different types of cells using a modified Potts model whose parameters represent interactions between different types of cells, and estimate the parameters of the Potts model using the double Metropolis-Hastings algorithm. The double Metropolis-Hastings algorithm allows us to simulate samples approximately from a distribution with an intractable normalizing constant. Our numerical results indicate that the distribution of different types of cells is significantly related to the patient's survival time.


Combinatorial Discrepancy

Organizers: Aleksandar Nikolov, University of Toronto, and Kunal Talwar, Google Inc.

Combinatorial discrepancy is a quantity associated with finite set systems, and is a discrete analogue of geometric discrepancy. Going back to work of Beck, it is well known that upper bounds on combinatorial discrepancy can be used to construct point sets with small geometric discrepancy. This connection has been used (most notably by Matoušek) to give tight bounds on the geometric discrepancy of several natural families of sets, for example halfspaces and Cartesian products of discs. The constructions based on combinatorial discrepancy have the appealing property that they provide upper bounds on geometric discrepancy with respect to essentially any measure, rather than just Lebesgue measure. We propose a special session to cover the basic connection mentioned above, as well as several new developments in combinatorial discrepancy. These include:

• A near-resolution of Tusnady's problem, which is the combinatorial analogue of the great open problem of determining the star discrepancy in higher dimensions.

• The discovery of efficient algorithms to compute low discrepancy colorings.

• New upper bounds on random bounded-degree set systems, which are related to the Beck-Fiala conjecture.

Tuesday, August 16, 10:25–11:55, Berg B

Combinatorial Discrepancy – Introduction

Aleksandar Nikolov
University of Toronto
[email protected]

In this talk I will describe the basics of combinatorial discrepancy theory, some of the important historical results, and connections to geometric discrepancy. I will highlight some methods of proving upper and lower estimates on combinatorial discrepancy. Finally, I will describe the connection between hereditary discrepancy and the γ2 factorization norm, and how it helps in giving estimates on discrepancy.

Joint work with Kunal Talwar, Google (USA), and Jiří Matoušek (deceased).

Minimizing Discrepancy by Walking on the Edges

Shachar Lovett
UC San Diego
[email protected]

Minimizing the discrepancy of a set system is a fundamental problem in combinatorics. One of the cornerstones in this area is the celebrated six standard deviations result of Spencer (AMS 1985): in any system of n sets over a universe of size n, there always exists a coloring which achieves discrepancy 6√n. The original proof of Spencer was existential in nature and did not give an efficient algorithm to find such a coloring. Bansal (FOCS 2010) gave the first efficient algorithm which finds such a coloring. His algorithm is based on an SDP relaxation of the discrepancy problem and a clever rounding procedure. In a follow-up work (FOCS 2012) we give a simple randomized algorithm to find a coloring as in Spencer's result, based on a restricted random walk. Our algorithm and its analysis use only basic linear algebra and are "truly" constructive in that they do not appeal to the existential arguments, giving a new proof of Spencer's theorem and the partial coloring lemma.

This is joint work with Raghu Meka (UCLA).
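For a numerical sense of scale (an illustration, not part of the talk): a uniformly random ±1 coloring of a random set system already achieves discrepancy on the order of √(n log n), while Spencer's theorem guarantees a coloring of discrepancy at most 6√n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # n sets over a universe of n elements, as in Spencer's setting

# Random set system: each element joins each set independently with prob. 1/2.
incidence = rng.integers(0, 2, size=(n, n))  # rows are sets, columns elements

coloring = rng.choice([-1, 1], size=n)       # uniformly random +/-1 coloring
disc = int(np.abs(incidence @ coloring).max())

# A random coloring is typically around sqrt(n log n) ~ 49 here, while
# Spencer's bound 6*sqrt(n) = 120 is achieved by a carefully chosen coloring.
print(disc, 6 * np.sqrt(n))
```

The algorithmic results of the session are about finding the better-than-random coloring efficiently, which the random walk of this talk accomplishes.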

An Algorithm for the Komlós Conjecture Matching Banaszczyk's Bound

Shashwat Garg
TU Eindhoven
[email protected]

The Komlós conjecture and its special case, the Beck-Fiala conjecture, are challenging open problems in discrepancy theory and convex geometry. The best known bounds on both are due to Banaszczyk from 1998, using a non-constructive convex geometric argument. While there has been a lot of progress in the past few years on algorithms for discrepancy theory, matching Banaszczyk's bounds algorithmically seemed challenging. I will talk about our recent work, which gives an algorithm achieving the same bounds as Banaszczyk. I will also talk about the more general theorem proved by Banaszczyk and some of its other applications. Our result uses ingredients from semidefinite programming duality, martingales and convex geometry.

This is based on joint work with Nikhil Bansal and Daniel Dadush.


Construction and Error Analysis for Quadrature Rules

Organizers: Christian Irrgeher, RICAM Linz, and Takehito Yoshiki, University of New South Wales.

The analysis of multivariate integration for various classes of functions is of high interest, because for many applications in different areas, like finance, biology or physics, it is necessary to compute high-dimensional integrals in a fast way. There is active research on constructing and studying integration rules which solve the multivariate integration problem efficiently. The aim of this special session is to discuss current progress on the theoretical analysis of efficient quadrature rules for multivariate integration.

Thursday August 18, 3:50–5:50, Berg A

Optimal Order Quasi-Monte Carlo Integration in Sobolev Spaces of Arbitrary Smoothness

Kosuke Suzuki
UNSW, Australia
[email protected]

We investigate quasi-Monte Carlo integration using higher order digital nets in Sobolev spaces of arbitrary fixed smoothness α ∈ N, α ≥ 2, defined over the s-dimensional unit cube. We prove that randomly digitally shifted order β digital nets can achieve the convergence of the root mean square worst-case error of order N^{−α} (log N)^{(s−1)/2} when β ≥ 2α. The exponent of the logarithmic term, i.e., (s − 1)/2, is improved compared to the known result by Baldeaux and Dick, in which the exponent is sα/2. Our result implies the existence of a digitally shifted order β digital net achieving the convergence of the worst-case error of order N^{−α} (log N)^{(s−1)/2}, which matches a lower bound on the convergence rate of the worst-case error for any cubature rule using N function evaluations and thus is best possible. Further, we discuss how we can obtain explicit constructions of point sets which achieve this best possible rate of convergence.

Construction of Lattice Rules: from K-Optimal Lattices to Extremal Lattices

Ronald Cools
KU Leuven, Belgium
[email protected]

The concept of K-optimal lattices was introduced by Cools and Lyness (2001) to set up searches for lattice rules for multivariate integration with a low number of points for a given trigonometric degree of precision. For more than 3 dimensions no so-called critical lattices are known, but we are happy with lattices that are “nearly” critical. The original searches were restricted to 3 and 4 dimensions. Refinements were later made to make the searches possible in 5 and 6 dimensions.

The idea behind K-optimal lattices was later extended in Russia using extremal lattices. One could say that an extremal lattice is a locally critical lattice.

In this talk both approaches to lattice rule construction will be described and a survey of the obtained results will be presented.


Rank 1 Lattice Rules for Integrands with Permutation Invariant Inputs

Gowri Suryanarayana
VITO, Belgium
[email protected]

We study multivariate integration of functions that are invariant under the permutation (of a subset) of their arguments. We show an upper bound for the “n-th” minimal worst case error for such problems, with the highlight that under certain conditions this upper bound only weakly depends on the number of dimensions. We present the use of rank-1 lattice rules in such a setting, and show that “randomly shifted” rank-1 rules can achieve such a bound. We then present a component-by-component algorithm to find the generating vector for a shifted rank-1 lattice rule that obtains a rate of convergence arbitrarily close to the optimal rate of O(n^{−α}), where α > 1/2 denotes the smoothness of our function space. We further show numerical experiments to compare and contrast the error convergence in these permutation-invariant spaces and in spaces without this special property.
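As background, a randomly shifted rank-1 lattice rule is only a few lines of code. The sketch below uses the classical 2-d Fibonacci generating vector purely for illustration, not the CBC-constructed vectors of the talk:

```python
import math
import random

def shifted_lattice_rule(f, z, n, shift):
    """n-point rank-1 lattice rule with generating vector z and random shift:
    Q(f) = (1/n) * sum_k f({k*z/n + shift}), braces denoting the fractional part."""
    d = len(z)
    return sum(
        f([(k * z[j] / n + shift[j]) % 1.0 for j in range(d)])
        for k in range(n)
    ) / n

rng = random.Random(7)
z, n = (1, 377), 610                 # Fibonacci lattice: n = F_15, z = (1, F_14)
shift = [rng.random(), rng.random()]

# Integrate f(x, y) = x*y over the unit square; the exact value is 1/4.
est = shifted_lattice_rule(lambda x: x[0] * x[1], z, n, shift)
```

The random shift makes the estimator unbiased, which is what allows the root-mean-square error analysis mentioned in the abstract.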

A Universal Algorithm for Multivariate Integration

David Krieg
Jena University, Germany
[email protected]

I will present a randomized version of Frolov’s algorithm for multivariate integration over cubes. The resulting method is unbiased and has optimal order of convergence (in the randomized sense as well as in the worst case setting) for many function spaces, including Sobolev Hilbert spaces of dominating mixed smoothness r ∈ N or isotropic smoothness s ∈ N for s > d/2. The algorithm is a randomization and enhancement of Frolov’s quadrature rule from 1976.

My talk is based on a joint work with Erich Novak that appeared under the same name in Foundations of Computational Mathematics and on a recent paper by Mario Ullrich, which is yet to be published.

This is joint work with Erich Novak (Jena University, Germany) and Mario Ullrich (Linz University, Austria).


Constructions of Quadrature Rules

Organizers: Dirk Nuyens, KU Leuven, and Kosuke Suzuki, University of New South Wales.

The construction of good quadrature rules has been of high interest in numerical integration. Recently, different kinds of quadrature rules, including Frolov rules, (higher order) digital nets, and lattice rules, have been intensively discussed. The aim of this session is to bring experts of these research areas together to discuss recent theoretical and numerical results for the construction of such quadrature rules.

Tuesday August 16, 10:25–11:55, Berg C

Lp-Discrepancy and Beyond of Higher Order Digital Sequences

Aicke Hinrichs
Johannes Kepler University Linz, Austria
[email protected]

The Lp-discrepancy is a quantitative measure for the irregularity of distribution modulo one of infinite sequences. In 1986 Proinov proved for all p > 1 a lower bound for the Lp-discrepancy of general infinite sequences in the d-dimensional unit cube, but until recently it remained an open question whether this lower bound is best possible in the order of magnitude. In 2014 Dick and Pillichshammer gave a first construction of an infinite sequence whose order of L2-discrepancy matches the lower bound of Proinov. Here we give a complete solution to this problem for all finite p > 1. We consider so-called order 2 digital (t, d)-sequences over the finite field with two elements and show that such sequences achieve the optimal order of Lp-discrepancy simultaneously for all p ∈ (1,∞).

Beyond this result, we estimate the norm of the discrepancy function of those sequences also in the space of bounded mean oscillation, exponential Orlicz spaces, and Besov and Triebel-Lizorkin spaces, and give some corresponding lower bounds which show that the obtained upper bounds are optimal in the order of magnitude.

The talk is based on joint work with Josef Dick, Lev Markhasin, and Friedrich Pillichshammer.

Antithetic Sampling for Quasi-Monte Carlo Integration using Digital Nets

Takashi Goda
The University of Tokyo
[email protected]

Antithetic sampling is one of the well-known variance reduction techniques for Monte Carlo integration. In this talk we discuss its application to deterministic quasi-Monte Carlo (QMC) integration using digital nets. Numerical experiments based on Sobol’ point sets up to dimension 100 indicate that the rate of convergence can be improved for smooth integrands by using the antithetic sampling technique. Although such a nice convergence behavior is beyond the reach of our theoretical result obtained so far, we can prove the existence of good polynomial lattice point sets in base 2 with antithetics in weighted Sobolev spaces of smoothness of second order. The result can be extended to the case of an arbitrary prime base by generalizing the notion of antithetic sampling itself.
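The antithetic idea is easy to state in code: pair each QMC point x with its reflection 1 − x and average. The toy below uses a 2-d Hammersley set built from the base-2 radical inverse as a stand-in for the Sobol’ sets of the experiments; the setup is illustrative only:

```python
import math

def radical_inverse_2(k):
    """Van der Corput radical inverse of the integer k in base 2."""
    x, f = 0.0, 0.5
    while k:
        x += f * (k & 1)
        k >>= 1
        f *= 0.5
    return x

def qmc_estimate(f, n, antithetic=False):
    """2-d Hammersley QMC estimate of the integral of f over [0,1]^2,
    optionally averaging f over each point and its reflection 1 - x."""
    pts = [(k / n, radical_inverse_2(k)) for k in range(n)]
    if antithetic:
        return sum(0.5 * (f(x, y) + f(1 - x, 1 - y)) for x, y in pts) / n
    return sum(f(x, y) for x, y in pts) / n

f = lambda x, y: math.exp(x + y)       # exact integral over [0,1]^2: (e - 1)^2
plain = qmc_estimate(f, 1024)
anti = qmc_estimate(f, 1024, antithetic=True)
```

The antithetic average cancels the odd part of the integrand exactly, which is the mechanism behind the improved rates observed for smooth integrands.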


Successive Coordinate Search and Component-by-Component Construction of Rank-1 Lattice Rules in Weighted RKHS

Adrian Ebert
KU Leuven, Belgium
[email protected]

The (fast) component-by-component (CBC) algorithm is an efficient tool for the construction of generating vectors for quasi-Monte Carlo rank-one lattice rules in weighted reproducing kernel Hilbert spaces. Even though it was proven in [1] that the CBC algorithm achieves the optimal rate of convergence in the respective function spaces, the generating vector z produced by the CBC method may not yield the smallest possible worst-case error e_{n,d}(z). Therefore, we investigate a generalization of the component-by-component construction that allows for a general successive coordinate search based on an initial generating vector z0. The proposed method admits the same type of worst-case error upper bounds as the CBC algorithm, independent of the choice of the initial vector z0. Moreover, a fast version of our method, based on the fast CBC algorithm as in [2], is available, reducing the computational cost of the algorithm to O(dn log(n)) operations. A practical combination with the classical CBC construction can be performed so that the final results are never worse than the CBC outputs and sometimes significantly better, depending on the weights, with a minor overhead in the run-time. We will report on different test examples and analyze variations of the new method.

Joint work with Hernan Leövey (Humboldt-University Berlin).

[1] F. Y. Kuo. Component-by-component constructions achieve the optimal rate of convergence for multivariate integration in weighted Korobov and Sobolev spaces. Journal of Complexity, 19(3):301–320, 2003.

[2] D. Nuyens and R. Cools. Fast algorithms for component-by-component construction of rank-1 lattice rules in shift-invariant reproducing kernel Hilbert spaces. Math. Comp., 75:903–920, 2006.
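For concreteness, the plain O(dn²) CBC loop that the successive coordinate search generalizes can be sketched as follows. The error formula is the standard shift-averaged worst-case error in a weighted Korobov space with smoothness α = 2 (an illustrative textbook setting, not the authors’ code); the fast CBC algorithm accelerates the inner sums with FFT-based circulant products:

```python
import math

def cbc(n, d, gamma):
    """Plain component-by-component construction of a generating vector z.
    Greedily minimizes the shift-averaged squared worst-case error
        e2(z) = -1 + (1/n) * sum_k prod_j (1 + gamma[j] * omega({k z_j / n})),
    with omega(x) = 2 pi^2 (x^2 - x + 1/6) (weighted Korobov space, alpha = 2).
    Cost is O(d n^2); fast CBC reduces this to O(d n log n)."""
    omega = lambda x: 2.0 * math.pi ** 2 * (x * x - x + 1.0 / 6.0)
    prod = [1.0] * n               # running product over the coordinates chosen so far
    z = []
    for j in range(d):
        best = None
        for cand in range(1, n):
            if math.gcd(cand, n) != 1:     # keep components coprime with n
                continue
            e2 = -1.0 + sum(
                prod[k] * (1.0 + gamma[j] * omega(k * cand % n / n))
                for k in range(n)) / n
            if best is None or e2 < best[1]:
                best = (cand, e2)
        zj, e2 = best
        z.append(zj)
        prod = [prod[k] * (1.0 + gamma[j] * omega(k * zj % n / n)) for k in range(n)]
    return z, e2

z, e2 = cbc(n=64, d=3, gamma=[1.0, 0.5, 0.25])
```

The successive coordinate search of the talk replaces the empty starting vector by an arbitrary initial z0 and re-optimizes one coordinate at a time in the same fashion.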


Exact Sampling of Markov Processes

Organizer: Sandeep Juneja, Tata Institute of Fundamental Research.

Thursday August 18, 10:25–11:55, LK 101

Randomized MLMC for Markov Chains

Peter Glynn
Stanford University
[email protected]

Multi-level Monte Carlo (MLMC) algorithms have been extensively applied in recent years to obtain schemes that often converge at faster rates than corresponding traditional Monte Carlo methods. In this talk, we shall introduce a randomized stratified variant of MLMC, and discuss applications of MLMC to equilibrium computations for Markov chains, computing relative value functions, spectral densities, and sensitivity estimates.

This work is joint with Zeyu Zheng.
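One standard ingredient behind such randomized multilevel schemes is the unbiased single-term estimator of Rhee and Glynn: sample a random level L and reweight by its probability. The sketch below substitutes a deterministic series (summing to e) for the coupled level differences that would arise in a Markov chain computation, so it is illustrative only:

```python
import math
import random

def single_term_mlmc(delta, r=0.5, reps=100_000, seed=1):
    """Average of iid copies of delta(L) / P(L = l), with L geometric:
    P(L = l) = (1 - r) * r**l.  Each copy is an unbiased estimator of
    sum_{l >= 0} delta(l), with no truncation bias at any finite level."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        level = 0
        while rng.random() < r:          # geometric level sampling
            level += 1
        p = (1.0 - r) * r ** level
        total += delta(level) / p
    return total / reps

# Toy "level differences": delta(l) = 1/l!, so the target is sum_l 1/l! = e.
est = single_term_mlmc(lambda l: 1.0 / math.factorial(l))
```

In the Markov chain applications of the talk, delta(l) would itself be random, a coupled difference of approximations at levels l and l − 1, and the level distribution would be tuned to the decay of its variance.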

“Omnithermal” Perfect Simulation for M/G/c Queues

Stephen Connor
University of York, UK
[email protected]

The last few years have seen much progress in the area of perfect simulation algorithms for multi-server queueing systems. Sigman [3] first showed how to build a dominated Coupling from the Past (domCFTP) algorithm for the super-stable M/G/c queue operating under First Come First Served discipline; the dominating process used is an M/G/1 queue under Processor Sharing. This was extended to the stable case by Connor and Kendall [2], who used the M/G/c system under random assignment as an upper bound. More recently, Blanchet, Pei and Sigman [1] have shown how to extend this method to GI/GI/c queues.

In [2] the question was posed of whether it is possible to carry out domCFTP simultaneously for a suitable range of c (the number of servers); what might be described as “omnithermal dominated CFTP”. In this talk I will demonstrate how the domCFTP algorithm of [2] may be adjusted to accomplish this in the M/G/c setting, and will provide an indication of the additional computational complexity involved.

[1] Jose Blanchet, Yinan Pei, and Karl Sigman. Exact sampling for some multi-dimensional queueing models with renewal input. ArXiv e-prints, 2015.

[2] Stephen B. Connor and Wilfrid S. Kendall. Perfect simulation of M/G/c queues. Adv. in Appl. Probab., 47(4):1039–1063, 2015.

[3] Karl Sigman. Exact simulation of the stationary distribution of the FIFO M/G/c queue. J. Appl. Probab., 48A:209–213, 2011.


Exact Sampling and Large Deviations for Spatial Birth and Death Processes

Sandeep Juneja
Tata Institute of Fundamental Research
[email protected]

We propose a simple algorithm for exact sampling from the stationary distributions of spatial birth-and-death processes whose invariant measures are absolutely continuous with respect to some Poisson point process. Specifically, we consider systems where Poisson births are marked points on a vector space (such as spheres on a Euclidean space). Each birth is accepted with a probability that depends upon the system state upon its arrival. Every accepted birth stays for an exponentially distributed time. Examples include loss systems, area interaction processes, Strauss processes and Ising models. In contrast to the existing literature, the proposed method does not require construction of auxiliary birth-and-death processes or the development of coupling and/or thinning methodologies. We analyze and compare the performance of the proposed algorithm with the existing ones based upon dominated coupling from the past in an asymptotic regime where marked points are spheres in a compact Euclidean space. Here the arrival rate increases to infinity while the sphere diameter reduces to zero at varying rates. We also conduct a large deviations analysis of the number of spheres in the system in steady state. This is useful to our comparative analysis and may also be of independent interest.


Experimental Design for Simulation Experiments

Organizer: Peter Qian, University of Wisconsin-Madison.

Monday August 15, 10:25–11:55, LK 120

Multi-Level Monte Carlo for Metamodeling of Stochastic Simulations with Discontinuous Output

Jeremy Staum
Northwestern University
[email protected]

The Multi-Level Monte Carlo (MLMC) method for parametric integration has lower computational complexity than standard Monte Carlo, given sufficient conditions. This MLMC method is applicable to metamodeling of stochastic simulations. However, the sufficient conditions established so far require that the integrand/simulation output be continuous in the parameters of the simulation model. We prove that the MLMC method for parametric integration improves computational complexity under novel sets of assumptions on the integrand/simulation output, allowing for discontinuities in it. We also present simulation experiment results showing that the MLMC method can reduce the computational cost of simulation metamodeling by a large factor even when simulation output is discontinuous.

Simulating Probability Distributions Using Minimum Energy Designs

Roshan Joseph Vengazhiyil
Georgia Tech
[email protected]

In this talk, I will explain an experimental design-based method known as minimum energy design [1] for simulating complex probability distributions. The method requires fewer evaluations of the probability distribution and therefore has an advantage over the usual Monte Carlo or Markov chain Monte Carlo methods when the distribution is expensive to evaluate. I will explain the application of the proposed method in Bayesian computational problems, optimization, and uncertainty quantification.

Joint work with Tirthankar Dasgupta (Harvard University), Rui Tuo (Chinese Academy of Sciences), and C. F. Jeff Wu (Georgia Tech).

[1] V. R. Joseph, T. Dasgupta, R. Tuo, and C. F. J. Wu. Sequential exploration of complex surfaces using minimum energy designs. Technometrics, 57:64–74, 2015.
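A rough greedy sketch of the minimum-energy idea in one dimension is given below. This is an illustrative variant written for this booklet, not the algorithm of [1]; in particular the charge function q(x) = π(x)^{-1/2} is an assumed form. Points repel each other through a charge-weighted potential, so they spread out yet concentrate where the target density is large:

```python
import math

def greedy_med(target, candidates, m):
    """Greedy minimum-energy-style design (illustrative, assumed form):
    points carry charge q(x) = target(x)**-0.5; each new point minimizes
    the largest pairwise 'potential' q(x)q(y)/|x - y| against the points
    chosen so far."""
    q = lambda x: target(x) ** -0.5
    pts = [max(candidates, key=target)]       # start at the (discrete) mode
    for _ in range(m - 1):
        def worst_potential(c):
            return max(q(c) * q(p) / abs(c - p) for p in pts)
        pool = [c for c in candidates if c not in pts]
        pts.append(min(pool, key=worst_potential))
    return pts

pi = lambda x: math.exp(-8.0 * (x - 0.5) ** 2)   # unnormalized 1-d target
cands = [(i + 0.5) / 200 for i in range(200)]
design = greedy_med(pi, cands, 20)
```

Each new point costs only one batch of target evaluations over the candidate pool, which is the source of the efficiency advantage when π is expensive.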


Samurai Sudoku-Based Space-Filling Designs for Data Pooling

Peter Qian
University of Wisconsin-Madison
[email protected]

Pooling data from multiple sources plays an increasingly vital role in today’s world. Building on the popular Sudoku game, we propose a new type of design, called a Samurai Sudoku-based space-filling design, to address this issue. Such a design is an orthogonal array based Latin hypercube design with the following attractive properties: (1) the complete design achieves uniformity in both univariate and bivariate margins; (2) it can be divided into groups of subdesigns with overlaps such that each subdesign achieves uniformity in both univariate and bivariate margins; (3) each of the overlaps achieves uniformity in both univariate and bivariate margins. Examples are given to illustrate the properties of the proposed design, and to demonstrate the advantages of using the proposed design for pooling data from multiple sources.


Function Space Embeddings for High Dimensional Problems

Organizers: Michael Gnewuch, Christian-Albrechts-Universität, Kiel, and Aicke Hinrichs, JKU Linz.

Results on function space embeddings are valuable tools in complexity studies of numerical problems. For an illustration, consider the following situation: we want to approximate the solution of an s-variate problem with the help of numerical algorithms. On the space Fs of input functions we have two norms; one norm is well suited for the application we are interested in and one norm is well suited for the analysis of the algorithms we want to apply. Assume that the two norms on our function space are equivalent up to a constant cs, i.e., Fs endowed with one norm can be embedded into Fs endowed with the other norm and vice versa, and cs is an upper bound for the respective embedding constants. Then estimates for errors of algorithms measured with respect to one norm can increase only by a factor cs if measured with respect to the other norm. This example is a “classical” application of function space embeddings to numerical analysis. In tractability studies of high-dimensional problems the situation gets more involved; instead of dealing with a fixed space Fs of input functions we usually deal with a whole scale (Fs), s ∈ N, of function spaces. A typical situation would be that Fs consists of real-valued functions defined on a domain D^s, where D is some fixed set. If we have on each of the spaces Fs two equivalent norms, we get two scales of normed spaces. To transfer tractability results from one scale to the other it is essential to keep track of the dependence of the constants cs on s. If sup_{s∈N} cs is finite, then tractability results transfer directly from one scale to the other. If the supremum is not finite, one may still be able to transfer tractability results from one scale to the other; this can be done by using invariance properties, manipulating the norms and function spaces of one scale (e.g., by changing weights that moderate the importance of groups of variables), proving only suitable embedding results in one direction (and not equivalence), and then doing the same thing the other way around. This kind of application of function space embeddings to tractability studies is a very recent development. In our special session we want to focus on applications of function space embeddings to high-dimensional problems with an emphasis on tractability studies of approximation and integration problems.

Friday August 19, 10:25–11:55, Berg C

Sparse Approximation with Respect to the Faber-Schauder System

Tino Ullrich
University Bonn, Germany
[email protected]

We consider approximations of multivariate functions using m terms from their tensorized Faber-Schauder expansion. The univariate Faber-Schauder system on [0,1] is given by dyadic dilations and translations (in the wavelet sense) of the L∞-normalized simple hat function with support [0,1]. We obtain a hierarchical basis which will be tensorized over all levels (hyperbolic) to get the dictionary F. The worst-case error with respect to a class of functions F is measured by the usual best m-term widths denoted by σm(F, F). As a first step we establish the Faber-Schauder characterization for classes of functions with bounded mixed derivative (Sobolev spaces with dominating mixed smoothness) and more general Triebel-Lizorkin spaces, which gives us a sequence space isomorphism. Based on results by Hansen and Sickel (2009) for the sequence space setting, we are able to compute upper bounds for σm. In particular we prove

σ_m(H^r_{mix}, L_∞) ≲ m^{−r} (log m)^{(d−1)(r+1/2)}.

We emphasize that the coefficients in the best m-term expansion can be computed from a finite set of function values. In addition, we emphasize that, compared to linear widths in this situation, the main rate improves by 1/2. Let us also stress the fact that sparse approximation with respect to the Faber-Schauder system behaves significantly better than sparse approximation with respect to the trigonometric system as dictionary.

This is joint work with my student G. Byrenheid (Bonn).

High-Dimensional Problems: On the Universality of Weighted Anchored and ANOVA Spaces

Michael Gnewuch
Christian-Albrechts-Universität Kiel
[email protected]

Analyzing high-dimensional approximation problems, one often faces the following situation: in some settings (depending on the considered class of input functions, the norm used to measure the approximation error, the admitted class of approximation algorithms, and the way the computational cost is accounted for) one can derive good or even optimal error bounds directly. But if one changes the norm, a corresponding direct error analysis can be much more involved and complicated, or even seem impossible. The focus of the talk is to discuss new results on function space embeddings of weighted spaces of tensor product structure which allow for an easy transfer of error bounds.

To demonstrate the usefulness of these (rather abstract, functional analytic) embedding theorems, we discuss as example problems multivariate and infinite-dimensional integration problems. In particular, we present new upper and lower error bounds for high- and infinite-dimensional integration. Moreover, we discuss an interesting transfer principle which holds for weighted reproducing kernel Hilbert spaces of tensor product structure: if you want to establish tractability results on such a space, then it suffices to prove those results for a corresponding space whose norm is induced by an underlying function space decomposition, either of anchored or of ANOVA type. In the talk we formulate this transfer principle for the integration problems we present, but we want to emphasize that it can easily be formulated for other approximation problems as, e.g., function recovery.

Joint work with Mario Hefter (TU Kaiserslautern, Germany), Aicke Hinrichs (JKU Linz, Austria), and Klaus Ritter (TU Kaiserslautern, Germany).

[1] M. Gnewuch, M. Hefter, A. Hinrichs, and K. Ritter. Embeddings of weighted Hilbert spaces and applications to multivariate and infinite-dimensional integration. Preprint, 2016 (submitted).

Integration in Weighted Anchored andANOVA Spaces

Peter Kritzer
Johann Radon Institute for Computational and Applied Mathematics, Austria
[email protected]

We consider the problem of numerical integration (and also function approximation) for weighted anchored and ANOVA Sobolev spaces of s-variate functions, where s is large. Under the assumption of sufficiently fast decaying weights, we show in a constructive way that integrals can be approximated by quadratures for functions fk with only k variables, where k = k(ε) depends solely on the error demand ε. A crucial tool in the derivation of our results are bounds on the norms of embeddings between weighted anchored and ANOVA spaces.

This talk is based on joint work with F. Pillichshammer (JKU Linz) and G. W. Wasilkowski (University of Kentucky).


Hamiltonian Monte Carlo in Theory and Practice

Organizer: Michael Betancourt, University of Warwick.

Many of the most important problems in modern statistical practice are characterized by complex, high-dimensional models that are extremely challenging to fit. Hamiltonian Monte Carlo, however, has proven a tremendous empirical success in this domain, and aided by computational tools such as probabilistic programming and automatic differentiation, the algorithm has become a powerful tool for advancing the frontiers of applied statistics. This session will consider multiple aspects of Hamiltonian Monte Carlo, from the underlying theory to challenges in robust implementations and innovative scientific applications.

Monday August 15, 10:25–11:55, Berg A

Scalable Bayesian Inference with Hamiltonian Monte Carlo

Michael Betancourt
University of Warwick
[email protected]

Despite the promise of big data, inferences are often limited not by sample size but rather by systematic effects. Only by carefully modeling these effects can we take full advantage of the data – big data must be complemented with big models and the algorithms that can fit them. One such algorithm is Hamiltonian Monte Carlo, which exploits the inherent geometry of the posterior distribution to admit full Bayesian inference that scales to the complex models of practical interest. In this talk I will discuss the theoretical foundations of Hamiltonian Monte Carlo, elucidating the geometric nature of its scalable performance and stressing the properties critical to a robust implementation.
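For readers new to the method, one HMC transition is only a few lines: resample a momentum, simulate Hamiltonian dynamics with the leapfrog integrator, and apply a Metropolis correction. A minimal 1-d textbook sketch (the step size and trajectory length below are illustrative choices, not tuned values):

```python
import math
import random

def hmc_step(q, U, grad_U, eps, n_leap, rng):
    """One HMC transition targeting the density exp(-U(q)) in one dimension:
    leapfrog-integrate Hamilton's equations, then accept/reject."""
    p = rng.gauss(0.0, 1.0)                     # resample momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)          # initial momentum half step
    for _ in range(n_leap - 1):
        q_new += eps * p_new                    # full position step
        p_new -= eps * grad_U(q_new)            # full momentum step
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)          # final momentum half step
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    # accept with probability min(1, exp(-dH)); tiny offset guards log(0)
    return q_new if math.log(rng.random() + 1e-300) < -dH else q

# Sample a standard normal target: U(q) = q^2 / 2.
rng = random.Random(0)
q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q, U=lambda x: 0.5 * x * x, grad_U=lambda x: x,
                 eps=0.1, n_leap=20, rng=rng)
    samples.append(q)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because the leapfrog integrator is volume-preserving and reversible, the accept/reject step needs only the change in the Hamiltonian, which is what lets the method scale to the high-dimensional posteriors discussed in the talk.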

Efficient Computation of Uncertainty in Spatio-Temporal Forest Composition Inferred from Fossil Pollen Records

Andria Dawson
University of [email protected]

Understanding past compositional changes in vegetation provides insight about ecosystem dynamics in response to changing environments. Past vegetation reconstructions rely predominantly on fossil pollen data from sedimentary lake cores, which act as a proxy record for the surrounding vegetation. Stratigraphic changes in these pollen records allow us to infer changes in composition and species distributions. Pollen records collected from a network of sites allow us to make inference about the spatio-temporal changes in vegetation over thousands of years. Using a Bayesian hierarchical spatio-temporal model, we first calibrate the pollen-vegetation relationship using settlement era data (see below). We then use fossil pollen networks to estimate latent species distributions and relative abundances. To avoid estimating the latent vegetation process for all spatial locations in the domain, we use a predictive process framework which requires that this process be estimated only for a selection of knots. In both the calibration and prediction phases of the model, we use a modified version of Stan, which provides an efficient implementation of the No-U-Turn Sampler (NUTS). Modifications included the specification of user-defined gradients and the ability to take advantage of parallel computation. This work highlights the importance of efficient statistical and computational methods in cases such as this, where the model is sufficiently complex and the domain of inference is large.

Joint work with C. J. Paciorek (University of California, Berkeley, USA), J. S. McLachlan (University of Notre Dame, USA), S. Goring (University of Wisconsin - Madison, USA), J. W. Williams (University of Wisconsin - Madison, USA), and S. T. Jackson (United States Geological Survey and University of Arizona, USA).

Dawson, A., Paciorek, C. J., McLachlan, J. S., Goring, S., Williams, J. W., and Jackson, S. T. Quantifying pollen-vegetation relationships to reconstruct ancient forests using 19th-century forest composition and pollen data. Quaternary Science Reviews, 137:156–175, 2016.

Confronting Supernova Cosmology’s Statistical and Systematic Uncertainties with Hamiltonian Monte Carlo

David Rubin
Space Telescope Science Institute
[email protected]

Cosmology with standard candle (type Ia) supernovae provides our best constraints on the dynamics of the expansion of the universe. I discuss the limitations of current supernova cosmological analyses in treating outliers, selection effects, the nonlinear correlations used for standardization, unexplained dispersion, and heterogeneous observations. I present a new Bayesian hierarchical model, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that is a significant step towards a full generative model of the data. UNITY is implemented using the code Stan, which efficiently samples over the required thousands of parameters. I apply the framework to real supernova observations and demonstrate smaller statistical and systematic uncertainties.

Rubin et al. (2015). “UNITY: Confronting Supernova Cosmology’s Statistical and Systematic Uncertainties in a Unified Bayesian Framework”. arXiv:1507.01602.


High Dimensional Numerical Integration

Organizer: Markus Weimar, University of Siegen, Germany.

In recent years the enormous increase in computer power paved the way for the development of more and more complicated models of multi-parameter real-world phenomena which have to be analyzed and simulated. Thus, motivated by applications from natural sciences, engineering, and finance, the efficient numerical treatment of these models has become an area of increasing importance. Often, approximate integration of multivariate or even infinite-dimensional functions is a crucial task in this context. This has led to the development of advanced numerical schemes such as, e.g., (generalized) quasi-Monte Carlo methods.

The talks in this special session aim to address recent progress in the field of cubature rules, as well as applications of related techniques in the context of numerical analysis. Besides theoretical error bounds on spaces of high-dimensional functions, complexity estimates are also discussed.

Monday August 15, 3:50–5:50, Berg A

Quasi-Monte Carlo Methods and PDEs with Random Coefficients

Josef [email protected]

In this talk we consider quasi-Monte Carlo rules which are based on (higher order) digital nets and their use in approximating expected values of solutions of partial differential equations (PDEs) with random coefficients. In particular, so-called interlaced polynomial lattice rules have attractive properties when approximating such integrals. These are applied for PDEs with uniform random coefficients, but can also be used in Bayesian inversion. In this talk we give an overview of recent results in this area.

Joint work with Frances Y. Kuo (UNSW, Australia), Quoc T. Le Gia (UNSW, Australia), and Christoph Schwab (ETH Zurich, Switzerland).

A Reduced Fast Component-by-Component Construction of Lattice Point Sets with Small Weighted Star Discrepancy

Ralph Kritzinger
Johannes Kepler University Linz, Austria
[email protected]

The weighted star discrepancy is a quantitative measure for the performance of point sets in quasi-Monte Carlo algorithms for numerical integration. In this talk we consider lattice point sets, whose generating vectors can be obtained by a component-by-component construction to ensure a small weighted star discrepancy.

Our aim is to reduce the construction cost of such generating vectors significantly by restricting the set of integers from which we select the components of the vectors.

The research was supported by the Austrian Science Fund (FWF): Project F5509-N26, which is a part of the Special Research Program “Quasi-Monte Carlo Methods: Theory and Applications”.

This talk is based on joint work with HeleneLaimer.


High Dimensional Integration of Kinks and Jumps: Smoothing by Preintegration

Ian H. Sloan
University of New South Wales, UNSW Australia
[email protected]

In many applications, including option pricing, integrals of d-variate functions with “kinks” or “jumps” are encountered. (Kinks describe simple discontinuities in the derivative; jumps describe simple discontinuities in function values.) The standard analyses of sparse grid or quasi-Monte Carlo methods fail completely in the presence of kinks or jumps not aligned to the axes, yet the observed performance of these methods can remain reasonable.

In recent joint papers with Michael Griebel and Frances Kuo we sought an explanation by showing that many terms of the ANOVA expansion of a function with kinks can be smooth, because of the smoothing effect of integration. The problem settings have included both the unit cube and d-dimensional Euclidean space. The underlying idea is that integration of a non-smooth function with respect to a well chosen variable, say xj, can result in a smooth function of d − 1 variables.

In the present work we have extended the theoretical results from kinks to jumps, and have turned “preintegration” into a practical method for evaluating integrals of non-smooth functions over d-dimensional Euclidean space. In this talk I will explain both the method and the ideas behind “smoothing by preintegration”.
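As a toy illustration of the preintegration idea (my own example, not from the talk): for f(x1, x2) = max(x1 + x2 − 1, 0) on the unit square, integrating out x1 in closed form leaves the smooth function g(x2) = x2^2/2, which the remaining (d−1)-dimensional rule then handles easily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Integrand with a kink along the line x1 + x2 = 1 (exact integral: 1/6).
f = lambda x1, x2: np.maximum(x1 + x2 - 1.0, 0.0)

# Preintegration over x1, done in closed form here:
#   g(x2) = int_0^1 max(x1 + x2 - 1, 0) dx1 = x2**2 / 2,
# a smooth function of the single remaining variable.
g = lambda x2: 0.5 * x2**2

n = 10_000
est_direct = f(rng.random(n), rng.random(n)).mean()  # MC on the kinked f
est_preint = g(rng.random(n)).mean()                 # MC on the smoothed g
exact = 1.0 / 6.0
```

Both estimators are unbiased here; the point of preintegration is that g is smooth, so high-order methods (sparse grids, QMC) regain their fast convergence on the remaining variables.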

This is joint work with Andreas Griewank (Yachay Tech, Ecuador), Frances Kuo (UNSW Australia), and Hernan Leovey (Humboldt University, Germany).

On the Optimal Order of Integration in Hermite Spaces with Finite Smoothness

Christian Irrgeher
RICAM - Austrian Academy of Sciences, Austria
[email protected]

In many different areas, like finance, physics or biology, it is necessary to efficiently approximate expected values of functionals depending on s independent and normally distributed random variables. For that, we study multivariate integration over R^s with respect to the Gaussian measure in Hermite spaces of finite smoothness. These spaces are reproducing kernel Hilbert spaces of functions with polynomially decaying Hermite coefficients.

In this talk we discuss the multivariate integration problem in the worst-case setting and give upper and lower bounds on the worst-case error. Furthermore, explicit quadrature rules are presented which are of optimal order. The integration nodes of these algorithms are based on higher order nets, and the integration weights are chosen according to the Gaussian measure.

Joint work with Gunther Leobacher (JKU Linz, Austria), Friedrich Pillichshammer (JKU Linz, Austria) and Josef Dick (UNSW, Australia).


Higher Order Quadrature Rules

Organizers: Josef Dick, University of New South Wales and Takashi Goda, University of Tokyo

Recently there has been significant interest in the design of quadrature rules which achieve a higher rate of convergence for sufficiently smooth integrands. This session brings together several approaches to obtaining such methods, from Frolov rules, to higher order digital nets, weighted integration rules and lattice rules. Recent theoretical advances and numerical results will be discussed.

Tuesday August 16, 3:50–5:50, Berg A

Optimal Order Quadrature Error Bounds for Higher Order Digital Sequences

Takehito Yoshiki
The University of New South Wales, Australia
[email protected]

In this talk we consider construction methods of Quasi-Monte Carlo (QMC) quadrature rules which achieve the optimal rate of convergence of the worst-case error in Sobolev spaces.

QMC rules using higher order digital nets and sequences have been shown to achieve the almost optimal rate of convergence of the worst-case error in Sobolev spaces of arbitrary fixed smoothness α ∈ N, α ≥ 2. In a recent work by the authors, it was proved that randomly-digitally-shifted order 2α digital nets can achieve the best possible rate of convergence of the root mean square worst-case error of order N^{−α} (log N)^{(s−1)/2}, where N and s denote the number of points and the dimension, respectively, which implies the existence of an optimal order QMC rule. More recently, the authors provided an explicit construction of such an optimal order QMC rule by using Chen-Skriganov’s digital nets in conjunction with Dick’s digit interlacing composition.

In this talk we give a more general result on an explicit construction of optimal order QMC rules; that is, we prove that any s-dimensional order 2α+1 digital sequence can achieve the best possible rate of convergence of the worst-case error of order N^{−α} (log N)^{(s−1)/2} when N is of the form b^m for any m with a fixed prime base b. The explicit construction presented in this talk is not only easy to implement but also extensible in N.

This is joint work with Takashi Goda (The University of Tokyo, Japan) and Kosuke Suzuki (The University of New South Wales, Australia).

On Higher Order Monte Carlo Integration

Jens Oettershagen
University of Bonn
[email protected]

Classically, plain Monte Carlo (MC) for integration on the unit cube uses the average over N random point evaluations of the integrand. For square-integrable functions this gives a convergence rate of N^{−1/2}. But even if higher-order smoothness is present, the MC rate of convergence does not improve.
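The N^{−1/2} behaviour is easy to check numerically; a minimal sketch (my own illustration, not from the talk) with a very smooth integrand:

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: np.exp(x)      # a very smooth integrand on [0, 1]
exact = np.e - 1.0

def mc_estimate(n):
    """Plain Monte Carlo: average of n random point evaluations."""
    return f(rng.random(n)).mean()

def rms_error(n, reps=200):
    """Root-mean-square error over independent repetitions."""
    return np.sqrt(np.mean([(mc_estimate(n) - exact) ** 2 for _ in range(reps)]))

# Quadrupling n roughly halves the RMS error, despite the smoothness of f.
e1, e4 = rms_error(1_000), rms_error(4_000)
```

The error halves when the sample size is quadrupled, exactly the N^{−1/2} rate, and no amount of extra smoothness in f changes that for equal-weight random averaging.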

In this talk we give examples of quadrature rules that also rely on random samples of the integrand but, by the choice of certain non-uniform quadrature weights, actually ‘see’ higher-order regularity and can achieve convergence rates of type N^{−r} log(N)^q, if the integrand possesses dominating mixed smoothness of order r.

This talk is based on joint work with Michael Griebel, Aicke Hinrichs and Mario Ullrich.


Orthogonality of the Chebyshev-Frolov Lattice and its Implementation

Christopher Kacwin
Bonn University, Germany
[email protected]

We deal with lattices that are generated by the Vandermonde matrices associated to the roots of Chebyshev polynomials. If the dimension d of the lattice is a power of two, i.e. d = 2^m, m ∈ N, the resulting lattice is an admissible lattice in the sense of Skriganov (1994). Those are related to the Frolov cubature formulas, which recently drew attention due to their optimal convergence rates in a broad range of Besov-Lizorkin-Triebel spaces. We prove that the resulting lattices are orthogonal and possess a lattice representation matrix with entries not larger than 2 (in modulus). This allows for an efficient and stable enumeration of the Frolov cubature nodes in the d-cube [−1/2, 1/2]^d up to dimension d = 16. Using these results, we are able to compare the Frolov cubature formula to other up-to-date schemes for high-dimensional integration, e.g. the sparse grid method.

This is joint work with Jens Oettershagen and Tino Ullrich.

Integration of Functions in Weighted Korobov Spaces with (Super-)Exponential Rate of Convergence

Dong T. P. Nguyen
KU Leuven, Belgium
[email protected]

We discuss multivariate integration for a weighted Korobov space of functions for which the Fourier coefficients decay (super-)exponentially fast. We construct cubature rules which achieve (super-)exponential rates of convergence, namely O(n^{−c n^p}) and O(exp(−c n^p)), for two function spaces which are extensions of the function space of analytic functions introduced in [1, 2]. We also study the dependence of the convergence on the number of dimensions and show some tractability results.

This is joint work with Dirk Nuyens and Ronald Cools (KU Leuven, Belgium).

[1] J. Dick, G. Larcher, F. Pillichshammer, and H. Woźniakowski. Exponential convergence and tractability of multivariate integration for Korobov spaces. Math. Comp., 80(274):905–930, 2011.

[2] P. Kritzer, F. Pillichshammer, and H. Woźniakowski. Multivariate integration of infinitely many times differentiable functions in weighted Korobov spaces. Math. Comp., 83(287):1189–1206, 2014.


Monte Carlo for Financial Sensitivity Analysis

Organizer: Nan Chen, The Chinese University of Hong Kong.

Sensitivity analysis plays an important role in financial risk management. For instance, derivative price sensitivities, known as Greeks in market jargon, provide key inputs for a variety of hedging schemes. A second example is quantile-based risk measures, on which minimum capital requirements are based. Their sensitivities shed light on how to change a portfolio allocation to improve its return-risk characteristics in response to certain market environment changes. The first speaker, Prof. Michael Fu, will discuss estimating quantile sensitivities via Monte Carlo simulation. His presentation reviews several key techniques on this topic, including both pathwise estimators such as infinitesimal perturbation analysis and measure-valued procedures such as the likelihood ratio method. He will then discuss recent research developed to handle various discontinuities that frequently arise in financial applications, such as in the payoff function (e.g., a digital option) and in sample paths (e.g., jump processes).

The second talk focuses primarily on the sensitivities of diffusion models that are widely used in financial applications. Taking advantage of the Beskos-Roberts exact simulation method and applying the Poisson-kernel method, the talk proposes unbiased Monte Carlo estimators of Greeks. The third presentation is about Monte Carlo applications in stochastic dynamic programming, an essential tool in financial decision-making. Combined with information-relaxation-based duality, the authors develop a Monte Carlo primal-dual method to solve SDP problems. Optimal trading execution and investment-consumption problems are discussed. Information about optimal policy sensitivities is also important for developing robust controls.

Wednesday August 17, 10:25–11:55, LK 203/4

Estimating Quantile Sensitivities

Michael Fu
University of Maryland
[email protected]

This talk will discuss estimating quantile sensitivities via Monte Carlo simulation. It will review several key techniques on this topic, including both pathwise estimators such as infinitesimal perturbation analysis and measure-valued procedures such as the likelihood ratio method. It will also discuss recent research developed to handle various discontinuities that frequently arise in financial applications, such as in the payoff function (e.g., a digital option) and in sample paths (e.g., jump processes).
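For a standard Black-Scholes call (my own worked example, not from the talk), the two estimator families mentioned above look as follows: the pathwise (IPA) estimator differentiates the discounted payoff along the simulated path, while the likelihood-ratio estimator differentiates the sampling density instead, which is why it also applies to discontinuous payoffs such as digital options.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 200_000

# Terminal stock price under geometric Brownian motion.
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
disc = math.exp(-r * T)

# Pathwise (IPA) estimator of delta: differentiate the payoff along the path.
delta_pw = np.mean(disc * (ST > K) * ST / S0)

# Likelihood-ratio estimator of delta: differentiate the lognormal density.
delta_lr = np.mean(disc * np.maximum(ST - K, 0.0) * Z / (S0 * sigma * math.sqrt(T)))

# Closed-form Black-Scholes delta for comparison.
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
delta_exact = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))
```

Both estimators are unbiased here; for a digital option the pathwise derivative of the payoff vanishes almost everywhere and the likelihood-ratio form is the one that still works.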

Unbiased Estimators of the Greeks for General Diffusion Processes

Jong Mun Lee
KAIST, South Korea
[email protected]

Derivative price sensitivities are useful in financial engineering, and it is important to compute these quantities accurately and efficiently. When using general diffusion models for the underlying asset price, discretization methods such as the Euler scheme are conventionally used because the probability distributions associated with general diffusions are not known explicitly. These methods are easily applicable, but they unavoidably introduce discretization bias. Taking advantage of the Beskos-Roberts method, which is an exact simulation algorithm for one-dimensional SDEs, and applying the Poisson-kernel method, we propose unbiased Monte Carlo estimators of Greeks. The suggested estimators can be evaluated with only finitely many points of the Brownian motion in any model, so our algorithms are quite easy to implement. We detail the ideas and the algorithms.

This is joint work with Wanmo Kang.

Monte Carlo Applications in Stochastic Dynamic Programming

Nan Chen
The Chinese University of Hong Kong
[email protected]

This talk is about Monte Carlo applications in stochastic dynamic programming, an essential tool in financial decision-making. Combined with information-relaxation-based duality, we develop a Monte Carlo primal-dual method to solve SDP problems. Optimal trading execution and investment-consumption problems are discussed. Information about optimal policy sensitivities is also important for developing robust controls.


Monte Carlo Sampling from Multi-Modal Distributions

Organizers: David van Dyk, Imperial College London and Xiao-Li Meng, Harvard University.

Tuesday August 16, 3:50–5:50, Berg C

Love or Hate: a RAM for Exploring Multi-Modality

Hyungsuk Tak
Harvard University
[email protected]

Although the Metropolis algorithm is simple to implement, it often has difficulties exploring a multimodal distribution. We propose a repelling-attracting Metropolis (RAM) algorithm that maintains the simple-to-implement characteristic of a Metropolis algorithm, but is better able to jump between modes. The RAM algorithm is a Metropolis-Hastings algorithm with a proposal that consists of a downhill move in density that aims to make local modes repelling, followed by an uphill move in density that aims to make local modes attracting. The downhill move is achieved via a reciprocal Metropolis ratio, so that the algorithm prefers downward movement. The uphill move does the opposite, using the standard Metropolis ratio, which prefers upward movement. This down-up movement in density increases the probability of a proposed move to a different mode. Because the acceptance probability of the proposal involves a ratio of intractable integrals, we introduce an auxiliary variable which creates a term in the acceptance probability that cancels with the intractable ratio. Using several examples, we demonstrate the potential of the RAM algorithm to explore a multimodal distribution more efficiently than a Metropolis algorithm and with less tuning than is commonly required by tempering-based methods.

This is joint work with Xiao-Li Meng (Harvard University) and David A. van Dyk (Imperial College London).

Wormhole Hamiltonian Monte Carlo

Babak Shahbaba
UC Irvine
[email protected]

Statistical inference involving multimodal distributions tends to be quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.


Exploring with Wang-Landau Algorithms and Sequential Monte Carlo

Pierre Jacob
Harvard University
[email protected]

Among MCMC methods designed to tackle multimodality, Wang-Landau algorithms (and variants thereof) produce non-homogeneous Markov chains that explore the state space by adaptively avoiding regions that have already been visited. The regions can be defined implicitly by the values of the target density function. Thus, these algorithms scale well with the dimension of the state space, despite the use of a state space partition. The exploratory behavior of Wang-Landau is obtained by recursively estimating penalty terms associated with each region. We show how this estimation can benefit from being performed by multiple interacting chains, leading to a novel algorithm belonging to the class of Sequential Monte Carlo methods. We describe the stability properties of the proposed algorithm, the choice of its tuning parameters and its performance compared to related methods, such as standard SMC samplers, Parallel Adaptive Wang-Landau and Free Energy Sequential Monte Carlo.

This is joint work with Luke Bornn, Pierre Del Moral and Mathieu Gerber.

Folding Markov Chains

Christian Robert
Ceremade, Université de Paris, France
[email protected]

In this talk, we consider the implications of “folding” a Markov chain Monte Carlo algorithm, namely restricting the simulation of a given target distribution to a convex subset of the support of the target via a projection onto this subset. We argue that this modification should be implemented in most cases where an MCMC algorithm is considered. In particular, we demonstrate improvements in the acceptance rate and in the ergodicity of the resulting Markov chain. We illustrate those improvements on several examples, but stress that they are universally applicable at a negligible computing cost, independently of the dimension of the problem.
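A one-dimensional caricature of the folding idea (my own sketch, not the authors' construction): for a target that is symmetric about 0, simulation can be restricted to the convex subset [0, ∞) by projecting proposals with the absolute value and targeting the folded density π(x) + π(−x); samples are then unfolded with a random sign. The folded chain never has to cross the low-density valley between the two modes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric bimodal target: 0.5*N(-2,1) + 0.5*N(2,1) (up to a constant).
log_pi = lambda x: np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

# Folded target on [0, inf): h(x) = pi(x) + pi(-x) = 2*pi(x) by symmetry,
# so log_pi can be reused (constants cancel in the Metropolis ratio).
log_h = log_pi

x, chain = 0.0, []
for _ in range(20_000):
    y = abs(x + rng.normal(0.0, 1.0))      # propose, project onto [0, inf)
    if np.log(rng.random()) < log_h(y) - log_h(x):
        x = y
    chain.append(x)
folded = np.array(chain[2_000:])

# Unfold: by symmetry each folded sample maps back to +/-x with prob 1/2.
samples = folded * rng.choice([-1.0, 1.0], size=folded.size)
```

The projected random-walk proposal remains symmetric on the half-line, so plain Metropolis acceptance is valid; the unfolded samples recover both modes without the chain ever having to tunnel between them.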

Joint work with Randal Douc (CITI, Telecom SudParis, France) and Gareth Roberts (University of Warwick, UK).


Preasymptotics for Multivariate Approximation

Organizers: Thomas Kühn, University of Leipzig and Tino Ullrich, Hausdorff Center for Mathematics, Bonn

In the investigation of high-dimensional function approximation or recovery problems, gaining control over the associated worst-case errors is central in many respects. Such estimates are needed, for instance, in the search for optimal algorithms, for complexity-theoretical considerations (tractability analysis), or, practically, to obtain reliable a priori error estimates. Deriving worst-case error bounds requires model assumptions. These narrow down the set of admissible functions, which potentially fit the given data, to a function class F_d. The function class could be the unit ball of a smoothness space (e.g., Sobolev, Gevrey, or a class of analytic functions), which is a naturally appearing model assumption in the context of Galerkin methods for PDEs. Another type of model assumption is structured dependencies. The idea of allowing only specific functional dependencies is very prominent in statistics and machine learning to mitigate the burden of high dimensionality. Classically, people were interested in the correct asymptotic behavior of worst-case errors. Those investigations are, of course, important for determining optimal algorithms. However, the obtained asymptotic error bounds are often useless for practical implementations, since they are only relevant for very large numbers n which are, relative to the amount of data an algorithm uses, beyond the range a computer is able to process. Moreover, asymptotic error bounds often hide dependencies on the ambient dimension d in the constants, which makes a rigorous tractability analysis impossible. Controlling the dependence of the underlying constants is a first step to improve on the classical error estimates. But this does not automatically provide preasymptotic error guarantees, that is, error guarantees for algorithms using smaller amounts of data.

The goal of this minisymposium is to present recent progress in the direction of finding reliable preasymptotic error estimates and corresponding algorithms. It turns out that this problem is mathematically rather challenging and connected to deep problems in Banach space geometry. Some recent findings highlight and elaborate a connection to metric entropy and present a generic tool for obtaining preasymptotics for several classes of functions F_d when dealing with general linear information. This rather new viewpoint opens several research directions, so that we are already faced with a huge list of open problems.

Monday August 15, 3:50–5:50, Berg C

Preasymptotic Estimates for Approximation in Periodic Function Spaces

Thomas Kühn
Universität Leipzig, Germany
[email protected]

In this talk I will give a survey of some recent results on preasymptotic estimates for approximation in multivariate periodic function spaces, including a motivation for this type of problem.

For embeddings of classical Sobolev spaces the exact rate of approximation numbers has been known for many years. However, almost all asymptotic estimates in the literature involve unspecified constants which depend on the parameters of the spaces, in particular on the dimension d of the underlying domain, and on the chosen norm. Often the asymptotic rate can only be seen after “waiting exponentially long”.

In high-dimensional numerical problems, as they appear e.g. in financial mathematics, this might be beyond the capacity of computational methods. Therefore one needs additional information on the d-dependence of these constants, and also on the behaviour in the so-called preasymptotic range, i.e. before the asymptotic rate becomes visible.

I will discuss these problems for the classical isotropic Sobolev spaces, Sobolev spaces of dominating mixed smoothness, and for periodic Gevrey spaces. The latter have a long history; they consist of C^∞ functions and have been used in numerous problems related to partial differential equations.

Preasymptotics via Entropy

Sebastian Mayer
University of Bonn
[email protected]

I will present an abstract characterization result which allows one to readily derive preasymptotic error bounds for approximation numbers of certain Sobolev-type spaces from the well-known entropy numbers of the embedding I : ℓ_p → ℓ_∞.

Point Distributions on the Sphere, Almost Isometric Embeddings, One-Bit Sensing, and Energy Optimization

Dmitriy Bilyk
University of Minnesota
[email protected]

We shall consider various versions and applications of questions of equidistribution of points on the sphere. First we address the question of uniform tessellation of the sphere S^d (or its subsets) by hyperplanes, or equivalently, almost isometric embeddings of subsets of the sphere into the Hamming cube. We find that in the general case this problem is equivalent to the study of discrepancy with respect to certain spherical “wedges”, which behaves very similarly to the well-studied situation of the spherical cap discrepancy. In particular, asymptotically the so-called “jittered sampling” yields better results than random selection of points.

However, in one-bit compressed sensing (which is closely related to this problem), one is interested in the preasymptotic case, where the number of points (measurements) is small relative to the dimension. We address this issue and obtain a series of results, which may be roughly summarized as: theoretically, one-bit compressed sensing is just as good as standard compressed sensing. In particular, we prove direct one-bit analogs of several classical compressed sensing results (including the famous Johnson-Lindenstrauss Lemma).

Finally, we obtain several different analogs of the Stolarsky principle, which states that minimizing the L2 spherical cap discrepancy is equivalent to maximizing the sum of pairwise Euclidean distances between the points. These results bring up some interesting connections with frame theory and energy optimization. In particular, we show that geodesic energy integrals yield surprisingly different behavior from their Euclidean counterparts.

Parts of this work have been done in collaboration with Michael Lacey and Feng Dai.

Non-Optimality of Rank-1 Lattice Sampling in Spaces of Mixed Smoothness

Glenn Byrenheid
Hausdorff Center for Mathematics, University of Bonn
[email protected]

We consider the approximation of functions with mixed smoothness by algorithms using m nodes on a rank-1 lattice

Λ(z, m) := { (k/m) z : k = 0, ..., m−1 },   z ∈ Z^d.
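The point set Λ(z, m) is trivial to generate; the following sketch (my own illustrative example, not from the talk) builds a small two-dimensional lattice and applies it as an equal-weight QMC rule to a trigonometric polynomial chosen so that the rule is exact.

```python
import numpy as np

def rank1_lattice(z, m):
    """Points {frac(k/m * z) : k = 0, ..., m-1} of a rank-1 lattice."""
    k = np.arange(m)[:, None]
    return (k * np.asarray(z) % m) / m

# A small 2D example: n = 55 points with generating vector z = (1, 21).
pts = rank1_lattice([1, 21], 55)

# Equal-weight QMC rule applied to a trigonometric polynomial whose exact
# integral over [0,1]^2 is 1; no nonzero frequency (h1, h2) with |h_j| <= 1
# satisfies h1 + 21*h2 = 0 mod 55, so the lattice rule is exact here.
f = lambda x: np.prod(1.0 + np.sin(2.0 * np.pi * x), axis=1)
qmc_est = f(pts).mean()
```

This exactness for trigonometric polynomials whose frequencies avoid the dual lattice is precisely what drives the convergence analysis of lattice rules for functions of mixed smoothness.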

We prove upper and lower bounds for the sampling rates with respect to the number of lattice points in various situations and achieve improvements in the main error rate compared to earlier contributions to the subject. This main rate (without logarithmic factors) is half the optimal main rate coming, for instance, from sparse grid sampling, and surprisingly turns out to be the best possible among all algorithms taking samples on rank-1 lattices.

This is joint work with Lutz Kämmerer (TU Chemnitz, Germany), Tino Ullrich (HCM Bonn, Germany) and Toni Volkmer (TU Chemnitz, Germany).


Probabilistic Numerics

Organizer: Mark Girolami, University of Warwick.

Probabilistic Numerics (PN) is an emerging theme of research that straddles the disciplines of mathematical analysis, statistical science and numerical computation. This nascent field takes its inspiration from early work in the mid-1980s by Diaconis and O’Hagan, who posited that numerical computation should be viewed as a problem of statistical inference. Progress in the intervening years has focused on algorithm design for the probabilistic solution of ordinary differential equations and numerical quadrature, suggesting empirically that enhanced performance of, e.g., numerical quadrature routines can be achieved via an appropriate statistical formulation. However, there has been little theoretical analysis to understand the mathematical principles underlying PN that deliver these improvements.

This minisymposium seeks to bring the area of PN to the MCQMC research community, and to present recent mathematical analysis which elucidates the relationship between PN schemes, functional regression and QMC.

Thursday August 18, 3:50–5:50, Berg B

Probabilistic Numerical Computation: A New Concept?

Mark Girolami
University of Warwick
[email protected]

The vast amounts of data, in many different forms, becoming available to politicians, policy makers, technologists, and scientists of every hue present tantalising opportunities for making advances never before considered feasible.

Yet with these apparent opportunities has come an increase in the complexity of the mathematics required to exploit this data. These sophisticated mathematical representations are much more challenging to analyse, and more and more computationally expensive to evaluate. This is a particularly acute problem for many tasks of interest, such as making predictions, since these will require the extensive use of numerical solvers for linear algebra, optimization, integration or differential equations. These methods will tend to be slow, due to the complexity of the models, and this will potentially lead to solutions with high levels of uncertainty.

This talk will introduce our contributions to an emerging area of research defining a nexus of applied mathematics, statistical science and computer science, called “probabilistic numerics”. The aim is to consider numerical problems from a statistical viewpoint, and as such provide numerical methods for which numerical error can be quantified and controlled in a probabilistic manner. This philosophy will be illustrated on problems ranging from predictive policing via crime modelling to computer vision, where probabilistic numerical methods provide a rich and essential quantification of the uncertainty associated with such models and their computation.

Probabilistic Integration: A Role for Statisticians in Numerical Analysis?

Francois-Xavier Briol
University of Warwick
[email protected]

One of the current active branches of the field of probabilistic numerics focuses on numerical integration. We will study a probabilistic integration method called Bayesian quadrature, which provides solutions in the form of probability distributions (rather than point estimates) whose structure can provide additional insight into the uncertainty emanating from the finite number of evaluations of the integrand [1]. Strong mathematical guarantees will then be provided in the form of convergence and contraction rates in the number of function evaluations. Finally, we will compare Bayesian quadrature with non-probabilistic methods including Monte Carlo and Quasi-Monte Carlo methods, and illustrate its performance on applications ranging from computer graphics to oil field simulation.

[1] F.-X. Briol, C. J. Oates, M. Girolami, M. A. Osborne and D. Sejdinovic. Probabilistic Integration: A Role for Statisticians in Numerical Analysis? arXiv:1512.00933, 2015.
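In its simplest form, Bayesian quadrature places a Gaussian process prior on the integrand and reports the posterior distribution of the integral. A minimal one-dimensional sketch (my own, not from the talk), using a Brownian-motion kernel because its kernel means have simple closed forms:

```python
import numpy as np

def bayesian_quadrature(f, X):
    """Bayesian quadrature for I = int_0^1 f(x) dx under a zero-mean GP
    prior with Brownian-motion kernel k(x, y) = min(x, y).  Returns the
    posterior mean and posterior variance of the integral I."""
    X = np.asarray(X, dtype=float)
    K = np.minimum.outer(X, X)          # Gram matrix k(X_i, X_j)
    z = X - 0.5 * X**2                  # z_i = int_0^1 k(x, X_i) dx
    w = np.linalg.solve(K, z)           # BQ quadrature weights
    mean = w @ f(X)                     # posterior mean of the integral
    var = 1.0 / 3.0 - z @ w             # int_0^1 int_0^1 k - z' K^{-1} z
    return mean, var

f = np.sin
mean, var = bayesian_quadrature(f, np.linspace(0.1, 1.0, 10))
exact = 1.0 - np.cos(1.0)
```

The posterior mean is a quadrature rule with non-uniform weights w = K^{-1}z, while the posterior variance quantifies the uncertainty from having only finitely many evaluations, which is the distinguishing feature of the probabilistic approach.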

Probabilistic Integration and Intractable Distributions

Chris Oates
Newcastle University, UK
[email protected]

This talk will build on the theme of Probabilistic Integration, with particular focus on distributions that are intractable, such as posterior distributions in Bayesian statistics. In this context we will discuss a novel approach to estimate posterior expectations using Markov chains [1,2]. This is shown, under regularity conditions, to produce Monte Carlo quadrature rules that converge as

| Σ_{i=1}^n w_i f(x_i) − ∫ f(x) p(dx) | = O(n^{−1/2 − (a∧b)/2 + ε})

at a cost that is cubic in n, where p (the posterior density) has 2a+1 derivatives, the integrand f has a∧b derivatives, and ε > 0 can be arbitrarily small.

[1] C. J. Oates, J. Cockayne, F.-X. Briol, and M. Girolami. Convergence Rates for a Class of Estimators Based on Stein’s Identity. arXiv:1603.03220, 2016.

[2] C. J. Oates, M. Girolami, and N. Chopin. Control Functionals for Monte Carlo Integration. JRSS B, 2017.

Probabilistic Meshless Methods for Partial Differential Equations and Bayesian Inverse Problems

Jon Cockayne
University of Warwick, UK
[email protected]

Partial differential equations (PDEs) are a challenging class of problems which rarely admit closed-form solutions, forcing us to resort to numerical methods which discretise the problem to construct an approximate solution. We seek to phrase the solution of PDEs as a statistical inference problem, and construct probability measures which quantify the epistemic uncertainty in the solution resulting from the discretisation [1]. We explore the construction of probability measures for the strong formulation of elliptic PDEs, and the connection between this and “meshless” methods.

We seek to apply these probability measures in Bayesian inverse problems, parameter inference problems whose dynamics are often constrained by a system of PDEs. Sampling from parameter posteriors in such problems often involves replacing an exact likelihood involving the unknown solution of the system with an approximate one, in which a numerical approximation is used. Such approximations have been shown to produce biased and overconfident posteriors when error in the forward solver is not tightly controlled. We show how the uncertainty from a probabilistic forward solver can be propagated into the parameter posteriors, thus permitting the use of coarser discretisations which still produce valid statistical inferences.

Joint work with Chris Oates (University of Technology Sydney, Australia), Tim Sullivan (Free University of Berlin, Germany), and Mark Girolami (University of Warwick, UK and The Alan Turing Institute for Data Science, UK).

[1] Jon Cockayne, Chris Oates, Tim Sullivan, and Mark Girolami. Probabilistic meshless methods for partial differential equations and Bayesian inverse problems, 2016.


Quantum Computing and Monte Carlo Simulations

Organizer: Yazhen Wang, University of Wisconsin-Madison.

Wednesday August 17, 10:25–11:55, Berg C

Quantum Computing Based on Quantum Annealing and Monte Carlo Simulations

Yazhen Wang
University of Wisconsin-Madison
[email protected]

This talk introduces quantum annealing for solving optimization problems and describes D-Wave computing devices that implement quantum annealing. We illustrate implementations of quantum annealing using Markov chain Monte Carlo (MCMC) simulations carried out by classical computers. Computing experiments have been conducted to generate data and compare quantum annealing with classical annealing. We propose statistical methodologies to analyze computing experimental data from a D-Wave device and simulated data from the MCMC-based annealing methods, and provide Monte Carlo based statistical analysis to explain possible sources of the data patterns.

Quantum Supersampling

Eric R. Johnston
Solid Angle / University of Bristol Centre for Quantum Photonics
[email protected]

With a focus on practical use of near-future quantum logic devices in computer graphics, we implement a qubit-based supersampling technique with advantages over Monte Carlo estimation, designed for physical fabrication as an integrated silicon photonic device.

Intended for an audience new to quantum computation, this talk provides a live demonstration and a visual walk-through of the development and implementation of a qubit-based image supersampling technique, with a comparison of output images. Though still experimental, the surprising noise redistribution properties of this technique have potential applications in ray tracing, as well as other types of computer imaging.

During this talk, an online quantum computation simulator (QCEngine) is used to provide an intuitive visual demonstration of the QSS method.


Beyond simulation, the increasing sophistication of silicon-based quantum photonic devices allows a very limited version of QSS to be run on actual hardware, using circuit designs which allow it to scale up as better hardware becomes available. Recent developments of this ongoing work will be shown, in addition to results from implementation and testing on a 5-qubit superconducting quantum computation device.

An Imputation-Consistency Algorithm for High-Dimensional Missing Data Problems

Faming Liang
University of Florida
[email protected]

Missing data problems are frequently encountered in high-dimensional data analysis, but they are usually difficult to solve using the standard algorithms, such as the EM and Monte Carlo EM algorithms, as the E-step involves high-dimensional integrations. We propose an attractive solution to this problem, which iterates between an imputation step and a consistency step. At the imputation step, the missing data are imputed given the observed data and a current estimate of the parameters; at the consistency step, a consistent estimate of the parameters is computed based on the pseudo-complete data. The consistency of the averaged estimator over iterations can be established under mild conditions. The proposed algorithm is illustrated with high-dimensional Gaussian graphical models.
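The imputation-consistency iteration described above can be sketched on a deliberately simple model. This is not the speaker's implementation; it is a one-dimensional Gaussian toy with values missing completely at random, where the I-step draws missing values from the current fit and the C-step re-estimates the parameters:

```python
import random
import statistics

def imputation_consistency(y, n_iter=100, seed=0):
    """Toy imputation-consistency (IC) iteration for i.i.d. Gaussian data
    with missing entries marked as None.
    I-step: impute missing values from the current N(mu, sigma^2) fit.
    C-step: re-estimate (mu, sigma) from the pseudo-complete data.
    Returns the average of mu over iterations (the averaged estimator)
    and the final sigma estimate."""
    rng = random.Random(seed)
    observed = [v for v in y if v is not None]
    mu, sigma = statistics.fmean(observed), statistics.stdev(observed)
    mu_trace = []
    for _ in range(n_iter):
        # I-step: draw each missing value from the current model
        filled = [v if v is not None else rng.gauss(mu, sigma) for v in y]
        # C-step: consistent (here: maximum-likelihood-style) estimate
        mu, sigma = statistics.fmean(filled), statistics.stdev(filled)
        mu_trace.append(mu)
    return statistics.fmean(mu_trace), sigma
```

In the high-dimensional graphical-model setting of the talk, the C-step would instead be a regularized, consistent estimator; the loop structure is the same.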


Stochastic Maps and Transport Approaches for Monte Carlo

Organizers: Youssef Marzouk, MIT and Xiao-Li Meng, Harvard University.

Transformations of random variables have broad utility in Monte Carlo simulation. They can be used to improve the estimation of normalizing constants, to increase the efficiency of importance sampling or MCMC, to approximate the conditioning step in nonlinear filtering, and even as a mechanism for density estimation from samples. Specific examples include bridge sampling and its generalizations from deterministic to stochastic maps; ensemble transform methods in data assimilation; and the numerical approximation of various canonical couplings (e.g., the Knothe-Rosenblatt rearrangement) for stochastic simulation. Many of these approaches can be understood and analyzed using ideas from optimal transportation. This session will present talks that describe how to construct useful maps/transports between probability distributions, how to understand the structure of these transports in high dimensions, and how to use them to tackle a variety of challenging sampling problems.

Thursday August 18. 1025–1155, Berg A

Warp Bridge Sampling: the Next Generation

Xiao-Li Meng
Harvard University
[email protected]

Bridge sampling is an effective Monte Carlo method for estimating the ratio of normalizing constants of two probability densities, a routine computational problem in likelihood and Bayesian inferences, as well as in physics, chemistry, etc. The Monte Carlo error of the bridge sampling estimator is determined by the amount of overlap between the two densities. Complementing and generalizing the Warp-I, II, and III transformations (Meng and Schilling, 2002, JCGS), which are most effective for increasing the overlap between two (essentially) uni-modal densities, we introduce Warp-U transformations that aim to transform multi-modal densities into Uni-modal ones without altering their normalizing constants. The construction of a Warp-U transformation starts with a Gaussian (or other convenient) mixture distribution φmix that has a reasonable overlap with a target density p underlying the unknown normalizing constant. The stochastic transformation that maps φmix back to its generating distribution N(0, 1) is then applied to p, resulting in its Warp-U transformation p̃. The overlap between p̃ and N(0, 1) is theoretically guaranteed to be no less than the overlap between p and φmix, as measured by any f-divergence, leading to statistically more efficient bridge sampling estimators. We propose a computationally efficient method to find an appropriate φmix, and use simulations to explore various estimation strategies and the choices of tuning parameters, with the aim of achieving statistical efficiency without unduly losing computational efficiency. We illustrate our findings using 10–50 dimensional highly irregular multi-modal densities.

This is joint work with Lazhi Wang.
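For context, the bridge sampling estimator that warp transformations feed into is the familiar Meng-Wong iteration for the ratio of normalizing constants. The sketch below is a generic version with no warping applied; the function names and the test densities in the usage are invented for illustration:

```python
import math
import random

def bridge_sampling_ratio(q1, q2, x1, x2, n_iter=50):
    """Iterative bridge sampling estimate of r = c1/c2, where q1 and q2
    are unnormalized densities with normalizing constants c1, c2, and
    x1 ~ p1 = q1/c1, x2 ~ p2 = q2/c2 are draws from each density.
    Uses the (asymptotically optimal) geometric bridge iteration."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
    # precompute density evaluations once
    q1x1 = [q1(x) for x in x1]
    q2x1 = [q2(x) for x in x1]
    q1x2 = [q1(x) for x in x2]
    q2x2 = [q2(x) for x in x2]
    r = 1.0
    for _ in range(n_iter):
        num = sum(a / (s1 * a + s2 * r * b) for a, b in zip(q1x2, q2x2)) / n2
        den = sum(b / (s1 * a + s2 * r * b) for a, b in zip(q1x1, q2x1)) / n1
        r = num / den
    return r
```

A Warp transformation would be applied to the draws (and densities) before this iteration to increase their overlap and hence reduce the Monte Carlo error.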


Measure Transport, Variational Inference, and Low-Dimensional Maps

Alessio Spantini
Massachusetts Institute of Technology
[email protected]

Measure transport is a useful tool for characterizing difficult and non-Gaussian probability measures, such as those arising in Bayesian inference. In this setting, we seek a deterministic transport map between an easy-to-sample reference distribution (e.g., a Gaussian) and the desired target distribution (e.g., the posterior), via optimization. As a result, one can easily obtain samples from the target distribution by pushing forward reference samples through the map. In the same spirit, one can also leverage quadrature rules for the reference measure to induce a new set of quadrature rules for the target distribution.

In this talk, we address the computation of the transport map in high dimensions. In particular, we show how the Markov structure of the target measure induces low-dimensional parameterizations of the transport map in terms of sparsity and decomposability. The results of our analysis will not only enable the characterization of a large class of high-dimensional and intractable distributions, but will also suggest new approaches to classical statistical problems such as the modeling of non-Gaussian random fields, as well as Bayesian filtering, smoothing, and joint parameter and state estimation.

Joint work with Daniele Bigoni (MIT, USA) and Youssef Marzouk (MIT, USA).

What the Collapse of the Ensemble Kalman Filter Tells us About Localization of Particle Filters

Matthias Morzfeld
University of Arizona
[email protected]

Suppose you have a mathematical model for the weather and that you want to use it to make a forecast. If the model calls for rain but you wake up to sunshine, then you should recalibrate your model to this observation before you make a prediction for the next day. This is an example of Bayesian inference, also called "data assimilation" in geophysics. The ensemble Kalman filter (EnKF) is used routinely and daily in numerical weather prediction (NWP) for solving such a data assimilation problem. Particle filters (PF) are sequential Monte Carlo methods and are in principle applicable to this problem as well. However, PF are never used in operational NWP because a linear analysis shows that the computational requirements of PF scale exponentially with the dimension of the problem. In this talk I will show that a similar analysis applies to EnKF, i.e., "in theory" EnKF's computational requirements also scale poorly with the dimension. Thus, there is a dramatic contradiction between theory and practice. I will explain that this contradiction arises from different assessments of errors (local vs. global). I will also offer an explanation of why EnKF can work reliably with a very small ensemble (typically 50), and what these findings imply for the applicability of PF to high-dimensional estimation problems.

Joint work with Daniel Hodyss (Naval Research Laboratory), and Chris Snyder (National Center for Atmospheric Research).


Systemic Risk and Financial Networks

Organizer: Paul Glasserman, Columbia University.

This session will address simulation problems arising in the modeling and analysis of systemic risk, financial networks, and credit risk.

Tuesday August 16. 1025–1155, LK 120

Importance Sampling for Default Timing Models

Alex Shkolnik
UC Berkeley
[email protected]

In credit risk management and other applications, one is often interested in estimating the likelihood of large losses in a portfolio of positions exposed to default risk. A serious challenge to the application of importance sampling (IS) methods in this setting is the computation of the transform of the loss on the portfolio. This transform is intractable for many models of interest, limiting the scope of standard IS algorithms. We develop, analyze and numerically test an IS estimator of large-loss probabilities that does not require a transform to be computed. The resulting IS algorithm is implementable for any reduced form model of name-by-name default timing that is amenable to simulation.

We state conditions for asymptotic optimality of the estimator, and characterize its performance when these conditions are not met. In the process, we develop a novel technique for analyzing the optimality properties of importance sampling estimators. Our methods are based on the weak convergence approach to large deviations theory. In particular, it is observed that the efficiency of an estimator may be analyzed only along convergent subsequences of a sufficiently large compact set. Characterizing the behavior of the estimator on each such subsequence is simpler than using the traditional methods in the literature. We sketch the potential of this approach for applications outside of credit risk, including queuing, insurance and system reliability.

Joint work with Kay Giesecke, Stanford.
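As background only (the point of the talk is precisely that transform-based tilting is often unavailable), here is a plain importance-sampling sketch for a large-loss probability in a toy portfolio of independent obligors; all names, default probabilities and the choice of sampling measure are illustrative, not the estimator of the talk:

```python
import random

def is_large_loss_prob(p, m, q, n_samples=20000, seed=0):
    """Importance-sampling estimate of P(number of defaults >= m) for a
    portfolio of independent obligors with default probabilities p.
    Defaults are sampled under inflated probabilities q and reweighted
    by the likelihood ratio
        prod_k (p_k/q_k)^{D_k} * ((1-p_k)/(1-q_k))^{1-D_k}."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        lr, defaults = 1.0, 0
        for pk, qk in zip(p, q):
            if rng.random() < qk:     # obligor defaults under the IS measure
                defaults += 1
                lr *= pk / qk
            else:
                lr *= (1 - pk) / (1 - qk)
        if defaults >= m:
            total += lr
    return total / n_samples
```

For dependent, dynamically defaulting names the tilted measure is much harder to construct, which motivates the transform-free estimator of the talk.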

Revisiting Eisenberg-Noe: A Dual Perspective

Kyoung-Kuk [email protected]

We consider the Eisenberg-Noe framework for systemic risk with random shocks. Using duality, we characterize the amount of shock amplification due to the network structure and find the region for the shock vector that makes a specific bank default. These results enable us to improve our understanding of the default probability of a specific bank. More importantly, we propose efficient simulation schemes for systemic risk measurement based on this characterization, whose effectiveness is illustrated via several examples.


How Informative is Eigenvector Centrality?

Wanmo [email protected]

Financial networks have attracted attention as models of systemic risk. In practice, public data on interconnections between financial firms is limited to aggregate obligations by firm, with specific obligations between firms remaining confidential. Reconstructing the network becomes a problem of inferring the entries of a matrix from its row and column sums, a problem that has been studied through random sampling in other contexts. We investigate the additional information provided by eigenvector centrality (a standard network measure) in reducing the number or volume of feasible networks given firm-level information. When row sums are normalized to one, the problem becomes one of estimating the volume of Markov chains with a given stationary distribution. We develop sampling procedures and find that centrality can be surprisingly informative. For example, numerical results suggest that most Markov chains have rare stationary distributions.

Joint work with Paul Glasserman.


Contributed talks

We received numerous contributed abstracts. The ones listed here were accepted. In most cases we were able to find enough similarities among pairs or triples of contributed talks to form them into sessions with a common theme. NB: each talk may contain more than just that theme.

Non-cubical QMC
Monday August 15. 1025–1155, LK 101

L^p Results for the Discrepancy Associated to a Large Convex Body

Giancarlo Travaglini
Università di Milano-Bicocca, Italy
[email protected]

Let Ω ⊂ R^d (d > 1) be a convex body with measure |Ω|, and let R ≥ 2. The problem of estimating the discrepancy

D_{RΩ} := card(Z^d ∩ RΩ) − R^d |Ω|

carries a long history, particularly when Ω is the ball B_d := {t ∈ R^d : |t| ≤ 1} (the case of B_2 is Gauss' circle problem) or a polyhedron.

L^2 means of D have been studied for a long time. G. Hardy proved that

(R^{−1} ∫_0^R D_{rB_2}^2 dr)^{1/2} ≤ c R^{1/2+ε}

and conjectured that |D_{RB_2}| ≤ c R^{1/2+ε}.

D. Kendall, A. Podkorytov et al. proved that, for every convex body Ω ⊂ R^d,

‖D_{RΩ}‖_{L^2(SO(d)×T^d)} := (∫_{SO(d)} ∫_{T^d} |D_{σ(RΩ)+x}|^2 dx dσ)^{1/2} ≤ c R^{(d−1)/2}.

We present several L^p estimates which depend on the geometry of Ω and on the dimension d.

Joint work with L. Brandolini, L. Colzani, B. Gariboldi, G. Gigante, and M. Tupputi.

[1] L. Brandolini, L. Colzani, B. Gariboldi, G. Gigante, G. Travaglini. L^p and mixed L^p(L^2) norms of the lattice point discrepancy. Preprint.

[2] L. Brandolini, L. Colzani, G. Gigante, G. Travaglini. L^p and Weak-L^p estimates for the number of integer points in translated domains. Math. Proc. Cambridge Philos. Soc., 159: 471–480 (2015).

[3] G. Travaglini, M.R. Tupputi. A characterization theorem for the L^2-discrepancy of integer points in dilated polygons. J. Fourier Anal. Appl., 22: 675–693 (2016).


The Variance Lower Bound and Asymptotic Normality of the Scrambled Geometric Net Estimator

Kinjal Basu
Stanford University
[email protected]

In a very recent work, Basu and Owen [1] propose the use of scrambled geometric nets in numerical integration when the domain is a product of s arbitrary spaces of dimension d having a certain partitioning constraint. It was shown that for a class of smooth functions, the integral estimate has variance O(n^{−1−2/d}(log n)^{s−1}) for scrambled geometric nets, compared to O(n^{−1}) for ordinary Monte Carlo.

In this talk, we shall show a matching lower bound for the variance and, building on the work of Loh [2], show that the scrambled geometric net estimate has an asymptotically normal distribution for certain smooth functions defined on products of suitable subsets of R^d.

Joint work with Rajarshi Mukherjee (Stanford University, USA).

[1] K. Basu and A. Owen. Scrambled geometric net integration over general product spaces. Foundations of Computational Mathematics, 1–30, to appear, 2015.

[2] W. L. Loh. On the asymptotic distribution of scrambled net quadrature. Annals of Statistics, 31, 1282–1324, 2003.

Generating Mixture Quasirandom Points with Geometric Consideration

Hozumi Morohosi
National Graduate Institute for Policy Studies
[email protected]

This work proposes a quasi-Monte Carlo implementation to generate a point set which follows a mixture distribution. While the Monte Carlo method, i.e. using pseudorandom numbers, is commonly used for generating a sample from a mixture distribution, our method makes use of quasirandom numbers in the hope of generating points as faithful to a given mixture distribution as possible. Special attention is paid to mimicking the adjacency of points in the Euclidean space where the sample points are scattered. To this end, our method uses a computational geometric tool: we build the Delaunay diagram of the sample points to find their adjacency, and partition the space of mixing probability while preserving that adjacency. One feature of the method is that it is fully implemented with quasirandom numbers. We divide the coordinates of each quasirandom point into two parts; the first several coordinates are employed for the mixing probability, and the remainder are used to generate a point from the chosen probability distribution.

While theoretical investigation of the error estimation of our method is still in progress, we have tested its performance on some application problems. Results of computational experiments in particle filtering and stochastic optimization are reported, with comparisons to existing QMC methods as well as traditional MC methods. Although our method still has a drawback in computational time, it shows good performance in the computational experiments.
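The coordinate-splitting idea in the abstract can be sketched as follows. This is a minimal interpretation with assumed ingredients (base-2 and base-3 van der Corput coordinates, a one-dimensional Gaussian mixture); it does not reproduce the Delaunay-based geometric partitioning of the talk:

```python
from statistics import NormalDist

def van_der_corput(n, base):
    """First n points of the radical-inverse (van der Corput) sequence."""
    seq = []
    for i in range(1, n + 1):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, rem = divmod(i, base)
            x += rem / denom
        seq.append(x)
    return seq

def mixture_qmc_points(weights, components, n):
    """Quasirandom draws from a 1-D Gaussian mixture: the first QMC
    coordinate picks the component through the inverse CDF of the
    mixing weights; the second coordinate is pushed through that
    component's inverse Gaussian CDF.  components = [(mu, sigma), ...]."""
    u1 = van_der_corput(n, 2)   # coordinate for the mixing probability
    u2 = van_der_corput(n, 3)   # coordinate for the chosen component
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    points = []
    for a, b in zip(u1, u2):
        k = next(i for i, c in enumerate(cum) if a < c)
        mu, sigma = components[k]
        points.append(NormalDist(mu, sigma).inv_cdf(b))
    return points
```

The component proportions come out very close to the mixing weights because the selection coordinate is a low-discrepancy sequence rather than pseudorandom draws.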


Gibbs Sampling
Monday August 15. 225–325, LK 120

Fast Bayesian Non-Parametric Inference, Prediction and Decision-Making at Big-Data Scale

David Draper
University of California, Santa Cruz
[email protected]

Bayesian non-parametric (BNP) methods provide an approach to what may be termed "optimal Bayesian model specification", by which I mean a specification approach that (a) makes full use of all available contextual and data information, and (b) does not make use of "information" not implied by problem context. However, traditional (MCMC) BNP computational methods do not scale well with sample size or dimensionality of the vector of unknowns. In this talk, I will present a case study from data science in eCommerce, on the topic of "A/B testing" (the data science term for randomized controlled trials), in which optimal BNP inference can be performed in a highly accurate manner, on a laptop computer in minutes or seconds, with sample sizes in each of the A (treatment) and B (control) groups on the order of tens of millions. I will conclude the talk with a method, based on optimal BNP inference, with which information from a number of control groups, judged similar to the B group in the actual A/B test, may be combined to increase accuracy of the A/B comparison; this method permits the analysis of O(100,000,000) observations on a laptop computer in minutes or seconds.

Joint work with Alex Terenin (University of California, Santa Cruz, and eBay, Inc.).

Adapting the Gibbs Sampler

Cyril Chimisov
University of Warwick, UK
[email protected]

The popularity of Adaptive MCMC has been fueled on the one hand by its success in applications, and on the other hand by mathematically appealing and computationally straightforward optimisation criteria for the Metropolis algorithm acceptance rate (and, equivalently, proposal scale). Similarly principled and operational criteria for optimising the selection probabilities of the Random Scan Gibbs Sampler have not been devised to date.

In the present work we close this gap and develop a general purpose Adaptive Random Scan Gibbs Sampler that adapts the selection probabilities. The adaptation is guided by optimising the L2-spectral gap for the target's Gaussian analogue [1,3], gradually, as the target's global covariance is learned by the sampler. The additional computational cost of the adaptation represents a small fraction of the total simulation effort.

We present a number of moderately- and high-dimensional examples, including Truncated Normals, Bayesian Hierarchical Models and Hidden Markov Models, where significant computational gains are empirically observed for both the Adaptive Gibbs and the Adaptive Metropolis within Adaptive Gibbs versions of the algorithm, and where formal convergence is guaranteed by [2]. We argue that Adaptive Random Scan Gibbs Samplers can be routinely implemented and substantial computational gains will be observed across many typical Gibbs sampling problems.

Joint work with Krys Latuszynski and Gareth O. Roberts (Department of Statistics, University of Warwick, UK).

[1] Amit, Y. Convergence properties of the Gibbs sampler for perturbations of Gaussians. The Annals of Statistics, 1996.

[2] Latuszynski, K., Roberts, G. O., and Rosenthal, J. S. Adaptive Gibbs samplers and related MCMC methods. The Annals of Applied Probability, 2013.

[3] Roberts, G. O., and Sahu, S. K. Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society, 1997.


Nuclear Science
Monday August 15. 225–325, LK 101

Progress in Simulation of Stochastic Neutron Dynamics

Ying Yang-Jun
Institute of Applied Physics and Computational Mathematics of Beijing
[email protected]

Stochastic behavior of neutron dynamics is very important in nuclear power plant design and nuclear reactor safety. Based on the stochastic character of neutron reactions, we introduce a simulation method for stochastic neutron dynamics, proposed by one of the authors and formulated as a generalized semi-Markov process. The multiplicities of fission neutrons are one of the main sources of reactor noise. Based on the equation describing the neutron fluctuation and its solution, a stochastic theory and simulation code for zero-power and power reactor noise has been developed. This is a distinct advance in simulating stochastic neutron kinetics, revealing the inherent law of neutron initiation experiments in pulse reactors. The simulation results agree very well with experiments on the probability distribution of burst waiting time, so the method can be used as a tool for quantitative analysis of problems related to stochastic neutron dynamics.

Joint work with Xiao Gang.

Detailed Comparison of Uniform Tally Density Algorithm and Uniform Fission Site Algorithm

DanHua ShangGuan
Institute of Applied Physics and Computational Mathematics of Beijing
[email protected]

Because of the non-uniform power distribution in a reactor core, relative uncertainties of local tallies tend to be large in low-power regions and small in high-power regions, so reducing the largest uncertainties to an acceptable level simply by running a large number of neutron histories is often prohibitively expensive when doing global tallying in Monte Carlo criticality calculations. The uniform tally density algorithm and the uniform fission site algorithm are two different methods to increase the efficiency of global tallying. They have been compared carefully using the global volume-averaged cell flux and energy deposition tallies of the Dayawan reactor pin-by-pin model. However, because it is unfair to use the same parameters of the survival biasing method for these two algorithms, this popular method for reducing the fluctuation of particle weight was not used in former works. In this paper, the same strategy for producing the parameters of the survival biasing method has been combined with each of the two algorithms, and their efficiencies have been compared carefully, again using the two global tallies of the Dayawan pin-by-pin model mentioned above.

Joint work with Gang Li, BaoYin Zhang, Li Deng of the Institute of Applied Physics and Computational Mathematics of Beijing, and Rui Li and YuanGuan Fu of the CAEP Software Center of High Performance Numerical Simulation.


Hamiltonian MCMC
Monday August 15. 350–450, LK 120

A Markov Jump Process for More Efficient Hamiltonian Monte Carlo

Andrew Berger
University of California Berkeley, USA
[email protected]

Discrete-time Markov Chain Monte Carlo sampling methods are limited by the fact that transition rates between states must be less than or equal to one. The popular Metropolis-Hastings acceptance rule works well because it maximizes the transition rate between a pair of states subject to this constraint. Yet it discards information about the distribution in the sense that the acceptance rate is the same for all states of higher probability relative to the current state.

On the other hand, the transition rates of Markov Jump processes, which make transitions between discrete sets of states in continuous time, are only constrained to be non-negative. Our primary contribution is a Hamiltonian Monte Carlo sampling algorithm that is a continuous-time Markov Jump process. One advantage of using Hamiltonian Monte Carlo is that only a small number of transitions need to be considered from each state; our algorithm takes advantage of Markov Jump process dynamics while retaining approximately the same computational cost per sample as standard Hamiltonian Monte Carlo.

In this talk we present our algorithm, Markov Jump Hamiltonian Monte Carlo (MJHMC), and some experimental results. Exact comparison of the spectral gaps of the Markov operators corresponding to MJHMC and standard HMC on a toy problem, and autocorrelation analysis on a challenging high-dimensional distribution, suggest that MJHMC mixes faster than standard HMC on a wide range of distributions.

Additionally, we provide some intuition for the improved performance of the method through a connection to importance sampling. It is the duality between a Markov Jump process and its embedded Markov Chain (the discretely indexed sequence of transitions) that provides this connection.

Joint work with Mayur Mudigonda (UC Berkeley, USA), Michael DeWeese (UC Berkeley, USA), and Jascha Sohl-Dickstein (Google Research, USA).

Hamiltonian Monte Carlo without Detailed Balance

Mayur Mudigonda
UC Berkeley
[email protected]

We present a method for performing Hamiltonian Monte Carlo that largely eliminates sample rejection for typical hyperparameters. In situations that would normally lead to rejection, instead a longer trajectory is computed until a new state is reached that can be accepted. This is achieved using Markov chain transitions that satisfy the fixed point equation, but do not satisfy detailed balance. The resulting algorithm significantly suppresses the random walk behavior and wasted function evaluations that are typically the consequence of update rejection. We demonstrate a greater than factor of two improvement in mixing time on three test problems. We release the source code as Python and MATLAB packages.

Joint work with Jascha Sohl-Dickstein (Google Brain) and Michael R. DeWeese (UC Berkeley).

Sohl-Dickstein, Mudigonda, DeWeese. Hamiltonian Monte Carlo Without Detailed Balance. International Conference on Machine Learning, 2014.


Finance
Monday August 15. 350–450, LK 101

Strong Law of Large Numbers for Interval-Valued Gradual Random Variables

Tomáš Tichý
VSB-TUO, Czech Republic
[email protected]

Financial problems can be analyzed in several ways. One of them is based on simulation of scenarios that might happen. In the standard case, we assume that a given financial quantity in question is a stochastic (or random) variable, i.e. it follows some probability distribution and its possible states can be assigned particular probabilities. The analysis is basically related to the Law of Large Numbers, which allows one to show that as the number of scenarios approaches infinity, the modelled results should approach reality. However, in real-world modelling it can happen that a precise estimation of the parameters of such models (e.g. the volatility of financial returns) is not feasible. By contrast, if one decides to apply Monte Carlo simulation to analyze a given problem with values expressed by imprecisely defined numbers, it is important to show that such variables with imprecisely defined numbers satisfy the strong law of large numbers as well; otherwise such an approach would make no sense. In this research we extend our previous results in the field to interval-valued gradual variables. The strong law of large numbers for fuzzy random variables is standardly proved with the help of alpha-cut decomposition of fuzzy numbers (or more general fuzzy relations), where each alpha-cut is a compact set (see [1], [2]). During the last few years, an interesting generalisation of fuzzy numbers has appeared: intervals of gradual numbers are considered instead of fuzzy numbers, where a gradual number is a function from a scale (usually a finite subset of the unit interval) into the set of real numbers (see [3], [4]). In the presentation, we design random variables in such a way that their values are interpreted by means of intervals of gradual numbers (special functions) and discuss the strong law of large numbers for such random variables.

[1] R. Kruse, The strong law of large numbers for fuzzy random variables, Inform. Sci. 28 (1982), 233–241.

[2] V. Krätschmer, Limit theorems for fuzzy-random variables, Fuzzy Sets and Systems 126 (2002), 253–263.

[3] D. Dubois, H. Prade, Gradual elements in a fuzzy set, Soft Comput. 12 (2008), 165–175.

[4] J. Fortin, D. Dubois, H. Fargier, Gradual numbers and their applications to fuzzy interval analysis, IEEE Trans. Fuzzy Syst. 16 (2) (2008), 388–402.

Green Simulation for Repeated Experiments

Mingbin Feng
Northwestern University
[email protected]

In many financial, actuarial, and industrial applications, simulation models are often run repeatedly to perform the same task with different inputs due to changing market conditions or system states. We introduce a new paradigm in simulation experiment design and analysis, called "green simulation", that aims to increase computational efficiency in the setting of repeated experiments. Green simulation means recycling and reusing outputs from previous experiments to answer the question being asked of the current simulation model.

As a first method for green simulation, we propose estimators that reuse outputs from previous experiments by weighting them with likelihood ratios, when parameters of distributions in the simulation model differ across experiments. We analyze convergence of these green simulation estimators as more experiments are repeated, while a stochastic process changes the parameters used in each experiment.

We find that green simulation reduces variance by more than an order of magnitude in examples involving catastrophe bond pricing and credit risk evaluation.

Joint work with Jeremy Staum.
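The likelihood-ratio reuse idea can be sketched as follows, assuming a toy simulation whose input distribution is N(theta, 1); the storage format and function names are invented for illustration and are not from the talk:

```python
import math

def normal_pdf(y, mu):
    """Density of N(mu, 1) at y."""
    return math.exp(-0.5 * (y - mu) ** 2) / math.sqrt(2.0 * math.pi)

def green_estimate(stored, theta_new, f):
    """Green-simulation style estimator of E_{theta_new}[f(Y)]:
    reuse outputs from earlier experiments, each stored as a pair
    (y, theta_old), by reweighting with the likelihood ratio
    phi_{theta_new}(y) / phi_{theta_old}(y)."""
    total = 0.0
    for y, theta_old in stored:
        total += f(y) * normal_pdf(y, theta_new) / normal_pdf(y, theta_old)
    return total / len(stored)
```

No new simulation runs are needed when the parameter changes; the stored outputs are simply reweighted, which is where the computational savings of the paradigm come from (at the cost of likelihood-ratio variance when the parameters drift far apart).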


Diffusions
Tuesday August 16. 1025–1155, LK 203/4

Second Order Langevin Markov Chain Monte Carlo

Thomas Catanach
California Institute of Technology
[email protected]

Dynamical systems approaches to Markov Chain Monte Carlo (MCMC) have become very popular as the dimension and complexity of Bayesian inference problems have increased. Approaches like Hamiltonian Monte Carlo (HMC) and Stochastic Gradient MCMC (SGMCMC) allow samplers to achieve much higher acceptance rates and effective sample sizes compared to standard MCMC methods because they are able to better exploit the structure of the posterior distribution to explore areas of high probability. In this work, we present a stochastic dynamical system based upon the damped second-order Langevin stochastic differential equation (SDE) whose stationary distribution is an arbitrary posterior. We can simulate this SDE to effectively generate samples from the posterior by using an integrator derived from the Metropolis Adjusted Geometric Langevin Algorithm (MAGLA). In the limit of no damping this sampler is deterministic and samples constant likelihood contours as in HMC. When it has large damping, it samples like random walk Metropolis-Hastings. Further, like HMC and SGMCMC, this SDE can be adapted using a Riemannian metric to develop a sampler that better exploits the local structure of the posterior.

Since this method is based upon an auxiliary dynamical system, we can apply ideas from dynamics and control theory to optimize the performance of our sampler. When the system is linear, as is the case for a Gaussian posterior, we can derive the optimal structure for the damping matrix to speed up convergence to the stationary distribution. We then discuss a control strategy to minimize a cost that weighs both the sample correlation and the energy correlation. This is related to balancing the effective sample size for the mean and covariance estimates of the posterior. Finally, we apply this method to several challenging Bayesian inference problems and compare its performance to other methods.

Joint work with James L. Beck (California Institute of Technology).
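A minimal unadjusted sketch of the damped second-order Langevin dynamics underlying the talk, using plain Euler-Maruyama with no Metropolis adjustment or Riemannian metric as in MAGLA; all parameter choices here are illustrative:

```python
import math
import random

def underdamped_langevin(grad_log_p, n_steps, dt=0.01, gamma=1.0, seed=0):
    """Euler-Maruyama discretization of the damped second-order
    Langevin SDE
        dq = p dt
        dp = (grad_log_p(q) - gamma * p) dt + sqrt(2 * gamma) dW,
    whose stationary distribution has q-marginal proportional to
    exp(log p(q)).  Scalar state, unadjusted (biased by O(dt))."""
    rng = random.Random(seed)
    q, p = 0.0, 0.0
    samples = []
    noise_scale = math.sqrt(2.0 * gamma * dt)
    for _ in range(n_steps):
        p += (grad_log_p(q) - gamma * p) * dt + noise_scale * rng.gauss(0, 1)
        q += p * dt
        samples.append(q)
    return samples
```

With gamma near zero the trajectory is nearly deterministic and Hamiltonian-like; with large gamma the momentum decorrelates quickly and the q-dynamics resemble a random walk, matching the two limits described in the abstract.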

MCMC for Estimation of Incompletely Discretely Observed Diffusions

Frank van der Meulen
TU Delft
[email protected]

Suppose X is a multidimensional diffusion taking values in R^d with time-dependent drift b and time-dependent dispersion coefficient σ governed by the stochastic differential equation (SDE)

dX_t = b_θ(t, X_t) dt + σ_θ(t, X_t) dW_t.   (1)

The process W is a vector-valued process consisting of independent Brownian motions. Denote observation times by 0 = t_0 < t_1 < · · · < t_n. Assume observations

V_i = L_i X_{t_i} + η_i,   i = 0, . . . , n,

where L_i is an m_i × d matrix. The random variables η_1, . . . , η_n are independent random variables, independent of the diffusion process X. We equip the parameter θ with a prior and are interested in drawing from the posterior (cf. [5]).
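The observation scheme above, SDE (1) together with V_i = L_i X_{t_i} + η_i, can be simulated in the scalar case with a simple Euler-Maruyama sketch. This is illustrative only (assumed scalar state and scalar L), to show what the data in this model look like:

```python
import random

def simulate_partially_observed_sde(b, sigma, x0, obs_times, L, eta_sd,
                                    dt=0.01, seed=0):
    """Euler-Maruyama simulation of the scalar SDE
        dX_t = b(t, X_t) dt + sigma(t, X_t) dW_t,
    recording noisy linear observations V_i = L * X_{t_i} + eta_i,
    eta_i ~ N(0, eta_sd^2), at the given (increasing) observation times."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    obs = []
    for t_next in obs_times:
        # integrate the SDE forward to the next observation time
        while t < t_next - 1e-12:
            h = min(dt, t_next - t)
            x += b(t, x) * h + sigma(t, x) * (h ** 0.5) * rng.gauss(0, 1)
            t += h
        obs.append(L * x + rng.gauss(0, eta_sd))
    return obs
```

Inference in the talk goes in the opposite direction: given only the V_i, the latent bridges between observation times are imputed by MCMC.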

In the case of full observations (L_i = I for all i), it is well known that this can be done using a data-augmentation algorithm, in which the latent paths of the diffusion in between observations are treated as missing data. One then iteratively samples diffusion bridges on all segments and next, conditional on the obtained “full” diffusion path, samples from the posterior. A direct implementation of this approach will fail if the parameter θ is present in the diffusion coefficient (cf. [1]). We present a non-centred parametrisation and prove that it yields an irreducible Markov chain that samples from the posterior. We assume that diffusion bridges are simulated using a Metropolis-Hastings algorithm with diffusion bridge proposals generated from the SDE

dX∘_t = b∘_θ(t, X∘_t) dt + σ_θ(t, X∘_t) dW_t.  (2)

Here the drift b∘_θ is chosen to ensure the process X∘ ends up in the right endpoint of the bridge. Specific constructions of b∘_θ are detailed in [2] and [3]. It turns out that this framework can be directly extended to the case of incomplete discrete observations.
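A minimal one-dimensional Euler sketch of such a guided bridge proposal, using a pulling drift of the Delyon-Hu form (v − x)/(T − t) added to the original drift; the function is illustrative only, not the constructions of [2] or [3] in full generality.

```python
import numpy as np

def guided_bridge(b, sigma, x0, v, T=1.0, n=200, seed=0):
    """Euler scheme for a guided diffusion bridge proposal: the extra
    drift (v - x)/(T - t) pulls the path toward the endpoint v."""
    rng = np.random.default_rng(seed)
    h = T / n
    x = float(x0)
    path = np.empty(n + 1)
    path[0] = x
    for k in range(n - 1):
        t = k * h
        # original drift plus the guiding (pulling) term
        x += (b(t, x) + (v - x) / (T - t)) * h \
             + sigma(t, x) * np.sqrt(h) * rng.standard_normal()
        path[k + 1] = x
    path[n] = v  # the bridge is pinned at its endpoint
    return path
```

In the data-augmentation scheme, such proposals would be accepted or rejected with a Metropolis-Hastings correction; the sketch only shows the path generation.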

Joint work with Moritz Schauer (Leiden University, The Netherlands).

[1] G.O. Roberts and O. Stramer. On inference for partially observed nonlinear diffusion. Biometrika 88(3), 603–621, 2001.

[2] B. Delyon and Y. Hu. Simulation of conditioned diffusion and application to parameter estimation. Stochastic Processes and their Applications 116(11), 1660–1675, 2006.

[3] M.R. Schauer, F.H. van der Meulen and J.H. van Zanten. Guided proposals for simulating multidimensional diffusion bridges. arXiv:1311.3606, accepted for publication in Bernoulli, 2016.

[4] F.H. van der Meulen and M.R. Schauer. Bayesian estimation of discretely observed multidimensional diffusion processes using guided proposals. arXiv:1406.4704, 2015.

[5] F.H. van der Meulen and M.R. Schauer. Bayesian estimation of incompletely observed diffusions. arXiv:, 2016.

Optimal Diffusion for Stochastic Bayesian Particle Flow

Fred [email protected]

We derive the optimal covariance matrix of the diffusion in stochastic particle flow for nonlinear filters, Bayesian decisions, Bayesian learning and transport. Particle filters generally suffer from the curse of dimensionality owing to particle degeneracy. The purpose of particle flow is to mitigate the problem of particle degeneracy in particle filters. Particle flow moves the particles to the correct region in state space corresponding to Bayes' rule. The flow is from the prior probability density to the posterior probability density. Particle flow is similar to Monge-Kantorovich transport, but it is much simpler and much faster to compute in real time, because we avoid solving a variational problem. Our transport of particles is not optimal in any sense, but rather it is designed to provide good estimation accuracy and good UQ for a given computational complexity. In contrast to almost all papers on transport theory, we use non-zero diffusion in our particle flow corresponding to Bayes' rule. This talk shows how to compute the best covariance matrix for such diffusion, using several optimization criteria (most stable flow, least stiff flow, robust UQ, best estimation accuracy, etc.). We derive a simple explicit formula for the optimal diffusion matrix. We show numerical experiments that compare our stochastic particle flow with state-of-the-art MCMC algorithms (Hamiltonian Monte Carlo, MALA, auxiliary particle filters, regularized particle filters, etc.); our stochastic particle flow beats such MCMC algorithms by orders of magnitude in computation time for the same accuracy on difficult nonlinear, non-Gaussian, high-dimensional problems. The computational complexity of particle filters for optimal estimation accuracy generally depends on the dimension of the state vector, coupling between components of the state vector, stability of the plant model, initial uncertainty in the state vector, measurement accuracy, geometry of the probability densities (e.g., log-concave), ill-conditioning of the Fisher information matrix in a given coordinate system, stiffness of the flow in a given coordinate system, Lipschitz constants of the log-densities, etc. Our flow is designed by solving a linear, highly underdetermined PDE similar to the Gauss law in electrodynamics. We have several dozen distinct solutions of this PDE using different methods to pick a unique solution. The best flows avoid explicit normalization of the posterior probability density. It turns out that particle flows are extremely sensitive to errors in computing the normalization, and hence it is best to avoid this completely. This phenomenon is known in other applications of transport; e.g., in adaptive integration for numerical weather prediction the normalization must be computed to machine precision!


Quadrature at Non-QMC points
Tuesday August 16, 2:25–3:25, LK 120

High Accuracy Algorithms for Integrating Multivariate Functions Defined at Scattered Data Points

James M. Hyman
Tulane University, USA
[email protected]

We will describe a class of accurate and efficient algorithms for estimating the integral of multidimensional functions defined at a small set of very sparse samples in high dimensions. These situations arise when the function values are computationally expensive to evaluate, experimentally difficult to obtain, or simply unavailable. The methods are based on deriving a weighted quadrature rule, specially for a given set of sample points, that is exact for a class of sparse multivariate orthogonal polynomials. We will compare the convergence rate of the method on smooth and discontinuous functions, both for low-discrepancy quasi-Monte Carlo sample distributions and for sample distributions that are far from uniform. We observe that although the approach does not change the error convergence rate, the constant multiplicative factor in the error is often reduced by a factor of 100 or more.
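A one-dimensional sketch of the underlying idea, assuming integration over [0, 1]: choose weights for the given scattered nodes as the minimum-norm solution of the polynomial moment conditions, so that the rule is exact for all polynomials up to a chosen degree. The function name and the monomial basis are illustrative; the talk uses sparse multivariate orthogonal polynomials.

```python
import numpy as np

def scattered_weights(x, degree):
    """Quadrature weights for scattered nodes x in [0, 1], chosen as the
    minimum-norm solution of the moment conditions
        sum_i w_i * x_i**j = 1 / (j + 1),  j = 0, ..., degree,
    so the rule integrates monomials up to `degree` exactly."""
    V = np.vander(x, degree + 1, increasing=True).T   # row j holds x**j
    moments = 1.0 / np.arange(1, degree + 2)          # integrals of x**j on [0,1]
    w, *_ = np.linalg.lstsq(V, moments, rcond=None)   # min-norm exact solution
    return w
```

With more nodes than moment conditions, `lstsq` returns the minimum-norm weights satisfying all conditions exactly (provided the nodes are distinct).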

Joint with Jeremy Dewar (Tulane U, USA) and Mu Tian (Stony Brook U, USA).

Support points

Simon Mak
Georgia Institute of Technology
[email protected]

In this talk, we present a new method for compacting any continuous distribution F into a set of optimal representative points called support points. Support points can have many useful applications in statistics, engineering and operations research. Two applications will be highlighted here: (1) as a way to optimally allocate experimental runs for stochastic simulations, and (2) as an improved alternative to MCMC thinning in Bayesian computation. For numerical integration, we will also show the improved performance of support points over Monte Carlo (MC) and quasi-Monte Carlo (qMC) methods using several simulation studies.

On a technical level, support points differ from existing qMC methods on two fronts. The latter often assumes integration on the unit hypercube to obtain a closed-form expression for discrepancy, and employs number-theoretic methods to generate sampling points (see, e.g., Dick-Kuo-Sloan, 2013). Support points, on the other hand, are obtained from the energy goodness-of-fit statistic (Székely and Rizzo, 2013), which can be efficiently optimized using subsampling methods, despite having no closed-form expression. As a result, these points can be generated for any distribution from which a random sample can be obtained. We will present a Koksma-Hlawka-like bound connecting integration error to the energy statistic, and establish an existence result for the error convergence rate using support points. A quick blockwise coordinate-descent algorithm is then proposed for generating support points, with time complexity scaling linearly in both sample space dimension and number of desired points.
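A crude one-dimensional sketch of the optimization target: descend the Monte Carlo energy goodness-of-fit statistic against a large reference sample from F. This toy subgradient descent is illustrative only and is not the blockwise coordinate-descent algorithm of the talk.

```python
import numpy as np

def energy_stat(x, y):
    """Monte Carlo energy goodness-of-fit statistic (1-d), up to an
    additive constant: 2 * mean|x_i - y_m| - mean|x_i - x_j|."""
    return (2 * np.abs(x[:, None] - y[None, :]).mean()
            - np.abs(x[:, None] - x[None, :]).mean())

def support_points(y, n=20, steps=400, lr=0.05, seed=0):
    """Toy subgradient descent on the energy statistic: the n points x
    drift toward an optimal representative set for the sample y."""
    rng = np.random.default_rng(seed)
    x = rng.choice(y, size=n, replace=False).astype(float)
    for _ in range(steps):
        # scaled subgradient of energy_stat with respect to each x_i
        g = (2 * np.sign(x[:, None] - y[None, :]).mean(axis=1)
             - 2 * np.sign(x[:, None] - x[None, :]).mean(axis=1))
        x -= lr * g
    return np.sort(x)
```

For a standard normal reference sample the resulting points settle near evenly spaced quantiles, which is the qualitative behavior support points are designed to have.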

Joint work with V. Roshan Joseph (Georgia Institute of Technology, USA).


MCMC output analysis
Tuesday August 16, 2:25–3:25, LK 203/4

Measuring Sample Quality with Stein's Method

Lester Mackey
Stanford University, USA
[email protected]

To improve the efficiency of Monte Carlo estimation, practitioners are turning to biased Markov chain Monte Carlo procedures that trade off asymptotic exactness for computational speed. The reasoning is sound: a reduction in variance due to more rapid sampling can outweigh the bias introduced. However, the inexactness creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic bias. To address these challenges, we introduce a new computable quality measure based on Stein's method that quantifies the maximum discrepancy between sample and target expectations over a large class of test functions. We use our tool to compare exact, biased, and deterministic sample sequences and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance tradeoffs in posterior inference.

Joint work with Jackson Gorham (Stanford University, USA).

Multivariate Output Analysis for Markov Chain Monte Carlo

Dootika Vats
University of Minnesota, USA
[email protected]

Markov chain Monte Carlo (MCMC) is a method of producing a correlated sample in order to estimate expectations with respect to a target distribution. A fundamental question is: when should sampling stop so that we have good estimates of the desired quantities? The key to answering this question lies in assessing the Monte Carlo error through a multivariate Markov chain central limit theorem. However, the multivariate nature of this Monte Carlo error has been ignored in the MCMC literature. I will give conditions for consistently estimating the asymptotic covariance matrix. Based on these theoretical results, I present a relative standard deviation fixed-volume sequential stopping rule for deciding when to terminate the simulation. This stopping rule is then connected to the notion of effective sample size, giving an intuitive and theoretically justified approach to implementing the proposed method. The finite sample properties of the proposed method are then demonstrated in examples.
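A sketch of the ingredients, assuming a batch means estimator of the asymptotic covariance matrix in the Markov chain CLT and a determinant-based multivariate effective sample size; the batch size and the exact estimator below are illustrative choices, not the talk's precise construction.

```python
import numpy as np

def batch_means_cov(chain, batch_size):
    """Multivariate batch means estimate of the asymptotic covariance
    matrix in the Markov chain CLT (chain: n x p array)."""
    n, p = chain.shape
    a = n // batch_size                                  # number of batches
    means = chain[: a * batch_size].reshape(a, batch_size, p).mean(axis=1)
    centered = means - chain.mean(axis=0)
    return batch_size * (centered.T @ centered) / (a - 1)

def multivariate_ess(chain, batch_size):
    """mESS = n * (det(Lambda) / det(Sigma))**(1/p), with Lambda the sample
    covariance and Sigma the estimated asymptotic covariance."""
    n, p = chain.shape
    lam = np.cov(chain.T)
    sig = batch_means_cov(chain, batch_size)
    return n * (np.linalg.det(lam) / np.linalg.det(sig)) ** (1.0 / p)
```

For an i.i.d. "chain" the asymptotic covariance equals the stationary covariance, so the multivariate effective sample size is close to the nominal sample size; autocorrelation shrinks it.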

Joint work with James Flegal (UC Riverside) and Galin Jones (U of Minnesota).


Probability
Tuesday August 16, 3:50–5:50, Berg B

Estimating Orthant Probabilities of High Dimensional Gaussian Vectors with an Application to Set Estimation

Dario Azzimonti
University of Bern, Switzerland
[email protected]

The computation of Gaussian orthant probabilities has been extensively studied for low dimensional vectors. Here we focus on the high dimensional case and we present a two-step procedure relying on both deterministic and stochastic techniques.

The proposed estimator indeed relies on splitting the probability into a low dimensional term and a remainder. While the low dimensional probability can be estimated by fast and accurate quadrature, see e.g. [1], the remainder requires Monte Carlo sampling. We show that an estimator obtained with this technique has higher efficiency than standard Monte Carlo methods. We further refine the estimation by using a novel asymmetric nested Monte Carlo algorithm for the remainder, and we highlight cases where this approximation brings substantial efficiency gains.

Finally, this method is applied to derive conservative estimates of excursion sets of expensive-to-evaluate deterministic functions under a Gaussian random field prior, without requiring a Markov assumption.

This is joint work with David Ginsbourger (Idiap Research Institute, Switzerland and University of Bern, Switzerland).

[1] A. Genz and F. Bretz. Computation of Multivariate Normal and t Probabilities. Springer-Verlag, 2009.

Monte Carlo with User-Specified Relative Error

Mark Huber
Claremont McKenna College
[email protected]

Monte Carlo data has properties that are very different from data obtained from other sources. In particular, the number of data points is typically not fixed, but can vary randomly as needed by the user.

Compared to classical estimators, which usually use a fixed number of data points, this allows extra flexibility in estimating parameters derived from Monte Carlo data. In this talk, I will describe two new estimators for the means of Bernoulli and Poisson distributed output, designed with Monte Carlo output in mind. These estimators have the remarkable property that the relative error in the distribution is determined by the user ahead of time: it does not depend on the quantity being estimated in any way.
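One way such an estimator can work for a Bernoulli mean p is the following hedged sketch in the spirit of a Gamma Bernoulli approximation scheme: embed the Bernoulli draws in a rate-1 Poisson process, so that the time T of the k-th success is Gamma(k, rate = p) and the distribution of the relative error of (k − 1)/T is free of p. The function name and parameterization are illustrative, not the talk's exact estimators.

```python
import numpy as np

def relative_error_mean(bern, k, rng):
    """Estimate a Bernoulli mean p with user-controlled relative error:
    each draw advances a rate-1 exponential clock, and we stop at the
    k-th success.  The stopping time T is Gamma(k, rate=p), so the ratio
    ((k - 1) / T) / p has a fixed distribution independent of p."""
    t, successes = 0.0, 0
    while successes < k:
        t += rng.exponential(1.0)   # rate-1 Poisson process clock
        if bern():
            successes += 1
    return (k - 1) / t              # unbiased for p
```

Choosing k fixes the relative error distribution ahead of time (roughly, relative standard deviation 1/sqrt(k − 2)), regardless of how small p is.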

Applications of these estimators include the construction of randomized approximation schemes (also known as exact confidence intervals) for normalizing constants of posterior distributions, and for approximating the value of other high-dimensional integrals.

Tail Probabilities of Infinite Sums of Random Variables – Exact and Efficient Simulation

Karthyek Murthy
Columbia University, New York
[email protected]

In addition to arising naturally in the study of linear processes and recurrence equations, infinite sums of random variables form a main building block in various stochastic risk models. In this talk, we present an exact algorithm for estimating tail probabilities of such infinite sums with uniformly bounded computational effort.

The problem considered is challenging because any algorithm that stops after generating only finitely many summands is likely to introduce bias. Apart from the task of eliminating bias, we face the additional challenge of estimating rare tail probabilities within a bounded computational budget. Assuming that the individual summands in the infinite series possess regularly varying tails, we use multi-level randomization techniques and heavy-tailed heuristics to arrive at a simulation algorithm that efficiently estimates such tail probabilities without any bias.


Joint work with Henrik Hult (KTH, Stockholm) and Sandeep Juneja (TIFR, Mumbai).

Faster Monte Carlo with Determinantal Point Processes

Rémi Bardenet
CNRS and Univ. Lille, France
[email protected]

In this talk, we show that repulsive point processes can yield Monte Carlo methods that converge faster than the vanilla Monte Carlo rate [1]. More precisely, we build estimators of integrals whose variance decreases as N^{−1−1/d}, where N is the number of integrand evaluations and d is the ambient dimension. We rely on stochastic numerical quadratures involving determinantal point processes (DPPs; [2]) associated to multivariate orthogonal polynomials.

The proposed method can be seen as a stochastic version of Gaussian quadrature, where samples from a determinantal point process replace zeros of orthogonal polynomials. Furthermore, integration with DPPs is close in spirit to randomized quasi-Monte Carlo methods such as scrambled nets [3], with the advantage of letting the user specify a non-uniform target measure and, most importantly, of tailoring repulsiveness to the integration problem at hand. This results in a central limit theorem with an easy-to-interpret asymptotic variance.

DPPs are the kernel machines of point processes, and as such they come with computational limitations. In particular, sampling is Θ(N³) in most cases, and further requires rejection sampling. Targeted applications so far are thus integration tasks where the bottleneck is the cost of evaluating the integrand.

Joint work with Adrien Hardy (Univ. Lille, France).

[1] R. Bardenet and A. Hardy. Monte Carlo with determinantal point processes. arXiv preprint arXiv:1605.00361, 2016.

[2] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 2006.

[3] A. Owen. Scrambled net variance for integrals of smooth functions. The Annals of Statistics, 25(4):1541–1562, 1997.


Uncertainty Quantification
Tuesday August 16, 3:50–4:50, LK 120

Sobol’ Indices for Problems Defined in Non-Rectangular Domains

Sergei Kucherenko
Imperial College London
[email protected]

Global sensitivity analysis (GSA) is used to identify key parameters whose uncertainty most affects the output. This information can be used to rank variables, fix or eliminate unessential variables and thus decrease problem dimensionality. Among different approaches to GSA, variance-based Sobol' sensitivity indices (SI) are most frequently used in practice owing to their efficiency and ease of interpretation [1]. Most existing techniques for GSA were designed under the hypothesis that model inputs are independent. However, in many cases there are dependences among inputs, which may have a significant impact on the results. Such dependences, in the form of correlations, have been considered in the generalised Sobol' GSA framework developed by Kucherenko et al. [2].

There is an even wider class of models involving inequality constraints (which naturally leads to the term 'constrained GSA' or cGSA) imposing structural dependences between model variables. It implies that the parameter space may no longer be considered an n-dimensional hypercube, as in existing GSA methods, but may assume any shape. This class of problems encompasses a wide range of situations encountered in engineering, economics and finance, where model variables are subject to certain limitations imposed, e.g., by conservation laws, geometry, costs, quality constraints, etc.

The development of efficient computational methods for cGSA requires the development of special Monte Carlo or quasi-Monte Carlo sampling techniques and methods for computing sensitivity indices. Consider a model f(x_1, ..., x_n) defined in a non-rectangular domain Ω^n, an arbitrary subset of variables y = (x_{i_1}, ..., x_{i_s}), 1 ≤ s < n, and a complementary subset z = (x_{i_{s+1}}, ..., x_{i_n}), so that x = (x_{i_1}, ..., x_{i_n}) = (y, z). Then the formulas for the main effect and total Sobol' SI have the following form:

S_y = (1/D) ∫_{Ω^n} f(y′, z′) p_Ω(y′, z′) ( ∫_{Ω^{n−s}} f(y′, z) [p_Ω(y′, z) / p_Ω(y′)] dz − ∫_{Ω^n} f(y, z) p_Ω(y, z) dy dz ) dy′ dz′,

S^T_y = (1/(2D)) ∫_{Ω^n} ∫_{Ω^s} ( f(y, z) − f(y′, z) )² [p_Ω(y, z) p_Ω(y′, z) / p_Ω(z)] dy dy′ dz.

Here p_Ω(y, z) is the joint probability distribution function and p_Ω(y) is a marginal distribution; both distributions are defined on Ω^n. We propose two methods for the estimation of Sobol' SI: 1) a quadrature integration method, and 2) MC/QMC estimates based on the acceptance-rejection sampling method. Several model test functions with constraints, for which we found analytical solutions, were considered. These solutions were used as benchmarks for verifying the developed integration methods. The method is shown to be general and efficient.
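The acceptance-rejection building block of method 2) can be sketched as follows: generate uniform points on the constrained subset of the unit hypercube by rejecting draws that violate the constraint. The constraint used in the example is an illustrative triangle, not one of the authors' test functions.

```python
import numpy as np

def sample_constrained(n, constraint, d=2, seed=0):
    """Acceptance-rejection sampling of uniform points on a constrained
    subset of the unit hypercube [0, 1]^d.  `constraint` maps a batch of
    points (m x d) to a boolean mask of admissible rows."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n:
        x = rng.random((n, d))          # candidate batch in the hypercube
        accepted.extend(x[constraint(x)])
    return np.asarray(accepted[:n])
```

The resulting points can be used to form MC estimates of the integrals over Ω^n above; a QMC variant would feed a low-discrepancy sequence through the same rejection step.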

Joint work with Oleksiy V. Klymenko and Nilay Shah (Imperial College London, UK).

[1] I. Sobol'. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math. Comput. Simul., 55(1–3):271–280, 2001.

[2] S. Kucherenko, S. Tarantola, P. Annoni. Estimation of global sensitivity indices for models with dependent variables. Comput. Phys. Commun., 183:937–946, 2012.


Uncertainty Quantification in Multi-Phase Cloud Cavitation Collapse using Multi-Level Monte Carlo

Jonas Šukys
ETH Zurich, Switzerland
[email protected]

Cloud cavitation collapse pertains to the erosion of liquid-fuel injectors, hydropower turbines and ship propellers, and is harnessed in shock wave lithotripsy. To assess the development of extreme pressure spots, we performed numerical two-phase flow simulations of the collapse of clouds containing 50'000 bubbles, with up to 1 trillion cells, using the petascale high-performance code CUBISM-MPCF on up to half a million cores.

To quantify the uncertainties in the collapsing clouds under random initial conditions, we investigate a novel non-intrusive multi-level Monte Carlo methodology, which accelerates plain Monte Carlo sampling. The highly complex cloud cavitation collapse phenomenon was observed to possess reduced statistical correlations across multiple discretization levels, severely limiting the variance reduction potential of the standard multi-level methodology.

Newly proposed optimal control variate coefficients for each resolution level of the multi-level Monte Carlo estimator overcome this limitation and deliver significant variance reduction improvements, leading to an order of magnitude of computational speedup. The numerical experiments conducted reveal that uncertain locations of equally sized cavities in vapor clouds lead to significant uncertainties in the location and magnitude of the peak pressure pulse, emphasizing the relevance of uncertainty quantification in cavitating flows.
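The idea of per-level control variate coefficients can be sketched as follows, with β_l = Cov(f_l, f_{l−1}) / Var(f_{l−1}) chosen to reduce the variance of each level correction. This is a hedged toy version on a scalar problem, not the authors' estimator for the flow solver; the telescoping identity keeps the estimator unbiased for any choice of the β_l.

```python
import numpy as np

def mlmc_cv(sample_pair, levels, n_per_level, seed=0):
    """Multi-level Monte Carlo with per-level control variate coefficients.

    sample_pair(l, n, rng) -> (f_l, f_{l-1}) arrays of coupled samples on
    level l; for l = 0 it returns (f_0, zeros).  Uses the telescoping
        E[f_L] = E[f_L - beta_L f_{L-1}] + beta_L E[f_{L-1}] = ...
    with beta_l estimated from the level-l samples."""
    rng = np.random.default_rng(seed)
    estimate, scale = 0.0, 1.0        # scale = product of betas above level l
    for l in range(levels, 0, -1):
        fl, flm1 = sample_pair(l, n_per_level, rng)
        c = np.cov(fl, flm1)
        beta = c[0, 1] / c[1, 1]      # variance-minimizing coefficient
        estimate += scale * np.mean(fl - beta * flm1)
        scale *= beta
    f0, _ = sample_pair(0, n_per_level, rng)
    return estimate + scale * np.mean(f0)
```

Setting every β_l = 1 recovers the standard multi-level telescoping sum; when cross-level correlations are weak, the fitted β_l can reduce the per-level variance substantially.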

This is joint work with Panagiotis Hadjidoukas, Diego Rossinelli, Ursula Rasthofer, Fabian Wermelinger, and Petros Koumoutsakos (all of ETH Zurich, Switzerland).


Faster Gibbs
Tuesday August 16, 3:50–4:50, LK 203/4

Using Low Discrepancy Sequences in Grid-based Bayesian Methods

Chaitanya Joshi
University of Waikato, New Zealand
[email protected]

Computational Bayesian methods are dominated by Markov Chain Monte Carlo (MCMC) based methods. While versatile, for many complex problems these methods can become computationally very expensive. For certain types of models, grid-based alternatives have been proposed. However, the use of grids limits their accuracy as well as their computational efficiency.

We view this as a numerical integration problem and make three unique contributions. First, we explore the space using low discrepancy point sets instead of a grid. This allows for accurate estimation of marginals of any shape at a much lower computational cost than a grid-based approach, and thus makes it possible to extend the computational benefit to a parameter space with higher dimensionality (10 or more). Second, we propose a new, quick and easy method to estimate the marginal using a least squares polynomial, and prove the conditions under which this polynomial will converge to the true marginal. Our results are valid for a wide range of point sets including grids, random points and low discrepancy points. Third, we show that further accuracy and efficiency can be gained by taking into consideration the functional decomposition of the integrand, and illustrate how this can be done using anchored f-ANOVA on weighted spaces.
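A sketch of the first contribution, assuming a Halton point set as the low-discrepancy design and simple self-normalized weighting of an unnormalized log-posterior (the least-squares polynomial step for the marginal is omitted); function names and the toy target are illustrative.

```python
import numpy as np

def halton(n, dim):
    """First n points of the Halton low-discrepancy sequence
    (van der Corput sequences in the first `dim` prime bases)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
    pts = np.empty((n, dim))
    for j, b in enumerate(primes):
        for i in range(1, n + 1):
            f, r, k = 1.0, 0.0, i
            while k > 0:            # radical-inverse of i in base b
                f /= b
                r += f * (k % b)
                k //= b
            pts[i - 1, j] = r
    return pts

def posterior_mean_lds(log_post, lo, hi, n=4096, dim=2):
    """Evaluate an unnormalized log-posterior on a Halton point set scaled
    to the box [lo, hi]^dim and return a self-normalized posterior mean."""
    pts = lo + halton(n, dim) * (hi - lo)
    w = np.exp(log_post(pts))
    w /= w.sum()                    # self-normalized weights
    return w @ pts
```

The same weighted point set supports estimates of marginals of any shape, which is where the grid-based alternatives lose accuracy.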

Joint work with Paul T. Brown and Stephen Joe (University of Waikato, New Zealand).

Asynchronous Gibbs Sampling

Alexander Terenin
University of California, Santa Cruz
[email protected]

Gibbs sampling is a widely used Markov Chain Monte Carlo method for numerically approximating integrals of interest in Bayesian statistics and other mathematical sciences. It is widely believed that MCMC methods do not extend easily to parallel implementations, as their inherently sequential nature incurs a large synchronization cost.

We present a novel scheme - Asynchronous Gibbs sampling - that allows us to parallelize Gibbs sampling in a way that involves no synchronization or locking, avoiding typical performance bottlenecks of parallel algorithms. We present two variants: an exact algorithm that converges to the correct target distribution, and an approximate algorithm that makes the same moves with probability close to 1 and possesses better scaling properties.

Asynchronous Gibbs sampling is well suited for cluster computing, and for latent variable models where problem dimensionality grows with data size. We illustrate the algorithm on a number of applied examples.
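For reference, the sequential baseline being parallelized is ordinary systematic-scan Gibbs sampling; a minimal sketch for a bivariate normal target (the asynchronous scheme itself requires a cluster setup and is not reproduced here):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n, seed=0):
    """Systematic-scan Gibbs sampler for N(0, [[1, rho], [rho, 1]]):
    alternate draws from the full conditionals x | y and y | x,
    each of which is N(rho * other, 1 - rho**2)."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    s = np.sqrt(1.0 - rho**2)       # conditional standard deviation
    out = np.empty((n, 2))
    for i in range(n):
        x = rho * y + s * rng.standard_normal()   # draw x | y
        y = rho * x + s * rng.standard_normal()   # draw y | x
        out[i] = x, y
    return out
```

The synchronization cost of parallelizing this loop comes from each conditional draw depending on the latest value of the other coordinate, which is exactly what the asynchronous variants relax.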

Joint work with Dan Simpson (University of Bath, UK) and David Draper (University of California, Santa Cruz).


Statistical
Wednesday August 17, 2:25–3:25, Berg C

Density Estimation via Discrepancy

Kun Yang
Google Inc.
[email protected]

Given i.i.d. samples from some unknown continuous density on the hyper-rectangle [0, 1]^d, we attempt to learn a piecewise constant function that approximates this underlying density nonparametrically. Our density estimate is defined on a binary split of [0, 1]^d and built up sequentially according to discrepancy criteria; the key ingredient is to control the discrepancy adaptively in each sub-rectangle to achieve an overall bound. We prove that the estimate, simple as it appears, preserves most of the estimation power. By exploiting its structure, it can be directly applied to some important pattern recognition tasks such as mode seeking and density landscape exploration. We demonstrate its applicability through simulations and examples: (1) mode seeking in flow cytometry analysis; (2) level-set based multivariate data visualization; (3) improving k-means and neural network training and accuracy.
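A crude one-dimensional sketch of a sequentially built piecewise-constant estimate: split an interval while it holds enough points and its two halves look unbalanced. The balance test below is an illustrative stand-in for the paper's discrepancy criterion, and all thresholds are arbitrary choices.

```python
import numpy as np

def split_histogram(x, lo=0.0, hi=1.0, min_pts=50, depth=0, max_depth=8):
    """Recursive binary split of [lo, hi] driven by the data in x.
    Returns a list of leaves (lo, hi, count); the density estimate on a
    leaf is count / (N * (hi - lo)).  A leaf is kept when the interval is
    small, sparsely populated, or its two halves are statistically
    balanced (the stand-in for the discrepancy check)."""
    n = len(x)
    mid = 0.5 * (lo + hi)
    left = x[x < mid]
    balanced = abs(len(left) - n / 2.0) < 2.0 * np.sqrt(max(n, 1))
    if depth >= max_depth or n < min_pts or balanced:
        return [(lo, hi, n)]
    return (split_histogram(left, lo, mid, min_pts, depth + 1, max_depth)
            + split_histogram(x[x >= mid], mid, hi, min_pts, depth + 1, max_depth))
```

The leaves partition the domain, so the piecewise-constant estimate integrates to one by construction; the real method applies an adaptive discrepancy bound per sub-rectangle in d dimensions.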

Joint work with Dagna Li and Wing H. Wong ofStanford University.

Multivariate Concordance Measures and Copulas via Tensor Approximation of Generalized Correlated Diffusions

Antonio Dalessandro
University College London
[email protected]

There is now an increasingly large number of proposed concordance measures available to capture, measure and quantify different notions of dependence in stochastic processes. These include concepts such as multivariate upper negative (positive) dependence, lower negative (positive) dependence, negative (positive) dependence; multivariate negative (positive) quadrant dependence; multivariate association, comonotonicity, stochastic ordering; negative (positive) regression dependence and extreme dependence, asymptotic tail dependence and intermediate tail dependence; as well as other concepts such as directional dependence. In addition, there is an increasing number of copula models aimed at modelling dependence; however, in many cases there is no clear and intuitive correspondence between the magnitude of a copula's parameters and the dependence structure they create. Moreover, evaluation of these concordance measures for different copula models can be very difficult. Therefore, we propose a class of new methods to overcome these two challenges, by developing a highly accurate and computationally efficient procedure to evaluate concordance measures for a given copula. This then allows us to reconstruct maps of concordance measures locally in all regions of the state space for any range of copula parameters. We believe this technique will be highly important for modellers and practitioners.

To achieve this, we developed a new class of techniques that takes a copula function and quantifies the dependence properties through a localized coefficient of dependence. Effectively, we develop a numerical procedure to map any copula function to a generalized Gaussian copula function. This allows us to visualize a copula's characteristics. The copula mapping is entirely based on tensor algebra and is part of a new modelling framework we also propose, consisting of the tensor approximation of the infinitesimal generator associated to multidimensional correlated diffusions. This new result provides the representation in a tensor space of both correlated diffusive transition densities and generalized copula densities. We demonstrate the simplicity and intuition behind the copula mapping through illustrative examples, and all the copula function mapping results are exact up to the tensor space local discretization error.


G. Leobacher & M. Szölgyenyi
Wednesday August 17, 2:25–3:25, LK 203/4

QMC-Simulation of Piecewise Deterministic Markov Processes

Gunther Leobacher
JKU Linz
[email protected]

A piecewise deterministic Markov process (PDMP) is a Markov process whose paths follow ordinary differential equations between random jumps. Applications arise, among other areas, in insurance mathematics and financial mathematics.

We propose a PDMP model for catastrophe indices and show how it can be simulated efficiently using low discrepancy sequences. We investigate the efficiency of pricing CAT derivatives and of estimating the profit-loss distribution.

Joint work with A. Eichler (JKU Linz), M. Szölgyenyi (WU Vienna).

A Numerical Method for Multidimensional SDEs with Discontinuous Drift and Degenerate Diffusion

Michaela Szölgyenyi
Vienna University of Economics and Business
[email protected]

When solving certain stochastic optimization problems, e.g., in mathematical finance, the optimal control policy is of threshold type, meaning that it depends on the controlled process in a discontinuous way. The stochastic differential equations (SDEs) modeling the underlying process then typically have a discontinuous drift and a degenerate diffusion parameter. This motivates the study of a more general class of such SDEs.

We prove an existence and uniqueness result, based on a certain transformation of the state space by which the drift is “made continuous”.

As a consequence, the transform becomes useful for the construction of a numerical method. The resulting scheme is proven to converge with strong order 1/2. This is the first result of this kind for such a general class of SDEs.

We will first present the one-dimensional case and subsequently show how the ideas can be generalized to higher dimensions. We discuss the necessity of the geometric conditions we pose, and finally present an example for which the drift of the SDE is discontinuous on the unit circle.

Joint work with G. Leobacher (JKU Linz).


Discrepancy
Wednesday August 17, 3:50–4:50, Berg A

Probabilistic Star Discrepancy Bounds for Digital Kronecker-Sequences

Mario Neumüller
Johannes Kepler Universität, Austria
[email protected]

By a well-known result of Heinrich, Novak, Wasilkowski and Woźniakowski, for all positive integers N and d there exists a set of N points P ⊂ [0, 1)^d such that the star discrepancy of P satisfies

D*_N(P) ≤ c √(d/N)

for some constant c > 0. It is not known how to construct such a point set in general.

In this talk we consider certain subsequences of digital Kronecker sequences S(f) = (x_n)_{n≥1} in the unit cube [0, 1)^d whose construction is based on the field of formal Laurent series over the finite field F_q of prime order q. In more detail, for f = (f_1, . . . , f_d) ∈ (F_q((t^{−1})))^d we define x_n = ({t^{n−1} f_1}|_{t=q}, . . . , {t^{n−1} f_d}|_{t=q}). Then we prove that, for a uniformly distributed f,

D*_N(S(f)) ≤ C(ε) √((d log d)/N)

with probability 1 − ε, where C(ε) is independent of N and d and is of order log ε^{−1}. Similar results for classical Kronecker sequences have recently been shown by Th. Löbbe.
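In one dimension the star discrepancy appearing in such bounds can be computed exactly from the sorted points via Niederreiter's formula, e.g.:

```python
import numpy as np

def star_discrepancy_1d(points):
    """Exact star discrepancy of a point set in [0, 1):
    D*_N = max_i max(|i/N - x_(i)|, |(i-1)/N - x_(i)|)
    over the sorted points x_(1) <= ... <= x_(N)."""
    x = np.sort(np.asarray(points, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return np.max(np.maximum(np.abs(i / n - x), np.abs((i - 1) / n - x)))
```

The centered regular grid x_(i) = (2i − 1)/(2N) attains the one-dimensional optimum D*_N = 1/(2N); in higher dimensions no exact formula of this simplicity exists, which is one reason probabilistic bounds like the one above matter.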

This is joint work with Friedrich Pillichshammer.

Sharp General and Metric Bounds for the Star Discrepancy of Perturbed Halton-Kronecker-Sequences

Florian Puchhammer
Johannes Kepler University Linz, Austria
[email protected]

We consider distribution properties of two-dimensional hybrid sequences (z_k)_k of the form z_k = ({kα}, y_k), where α ∈ (0, 1) is irrational and (y_k)_k denotes a digital Niederreiter sequence. By definition, the construction of the sequence (y_k)_k relies on an infinite matrix C over Z_2. Two special cases of such matrices were studied by G. Larcher in 2013 and by C. Aistleitner, R. Hofer, and G. Larcher in 2016. On the one hand, taking the identity in place of C (the so-called Halton-Kronecker sequence) yields an optimal metric discrepancy estimate. On the other hand, taking C as the identity again, but filling its first row entirely with an infinite sequence of 1's (related to the so-called Evil-Kronecker sequence), worsens the metrical behavior of (z_k)_k significantly. We are interested in what happens in between these two cases. In particular, we consider Halton-Kronecker sequences where the first row of C is exchanged by a periodic perturbation of blocks of length n of the form (1, 0, . . . , 0). In this case, we witness the following phenomenon: while, from a metrical point of view, the star discrepancy becomes essentially optimal in the limiting case n → ∞, it surprisingly worsens for more general α as the density of 1's decreases.

Joint work with Roswitha Hofer (Johannes Kepler University Linz, Austria).

Page 121: Welcome to MCQMC 2016 at Stanford University (MCQMC 2016, Stanford)

Processes in Finance
Wednesday August 17. 350–450, Berg C

Forward and Backward Simulation of Correlated Poisson Processes

Alex Kreinin
[email protected]

Backward simulation of multivariate Poisson processes (BS) was recently proposed in [1, 2]. The idea of BS is to utilize the conditional distribution property of the arrival moments: conditional on the number of arrivals at the end of the time horizon, the arrival moments form a uniformly distributed random set.
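The conditional property above translates directly into a sampler. The sketch below is a minimal single-process illustration, not the multivariate machinery of [1, 2]: draw the terminal count, then place the arrivals as sorted uniforms; `lam` and `horizon` are arbitrary illustrative parameters.

```python
import numpy as np

def backward_poisson_arrivals(lam, horizon, rng):
    """Backward simulation of one Poisson path on [0, horizon]:
    first draw the terminal count N(T) ~ Poisson(lam * T), then place the
    arrival times as the order statistics of N(T) i.i.d. uniforms on [0, T]."""
    count = rng.poisson(lam * horizon)
    return np.sort(rng.uniform(0.0, horizon, size=count))

rng = np.random.default_rng(0)
path = backward_poisson_arrivals(lam=2.0, horizon=5.0, rng=rng)
print(len(path), path[:3])
```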

We propose a purely probabilistic method for the computation of the joint probabilities describing the extreme measures, based on the Strong Law of Large Numbers. Thus, we can describe all extreme measures given the marginal distributions of the multivariate Poisson process. Taking a convex combination of these measures, one can construct an admissible joint distribution of the multivariate process.

We also compare the backward simulation approach with forward simulation. We demonstrate that the range of correlations attainable under the BS approach is different from that obtained using forward simulation.

The BS approach allows us to consider both Gaussian and Poisson processes in a general framework. In particular, the correlation boundaries can be computed in closed form for the process (W_t, N_t) having Wiener and Poisson components.

This is joint work with Ken Jackson and Michael Chiu.

[1] K. Duch, A. Kreinin, and Y. Jiang. New approaches to operational risk modelling. IBM Journal of Research and Development, 3:31–45, 2014.

[2] A. Kreinin. Correlated Poisson processes and their applications in financial modeling. In Financial Signal Processing and Machine Learning, John Wiley & Sons, Ltd., 191–232, 2016.

Efficient Simulation for Diffusion Processes Using Integrated Gaussian Bridges

Adam Kolkiewicz
University of Waterloo, Canada
[email protected]

In this talk we present an extension of a recently proposed method of sampling Brownian paths based on integrated bridges [1]. By combining the Brownian bridge construction with conditioning on integrals along paths of the process, the method turns out to be a very effective way of capturing important dimensions in problems that involve integral functionals of Brownian motions.

In this paper we first show that by conditioning on more general integrated Gaussian bridges, combined with a proper change of measure, we can reduce the dimension of integral functionals that depend on squared Brownian motion. We then demonstrate how this result can be used to construct efficient integration methods for problems that involve functionals of diffusion processes. We illustrate the method using some examples from finance.

[1] A. W. Kolkiewicz. Efficient Monte Carlo simulation for integral functionals of Brownian motion. J. Complexity, 30:255–278, 2014.


A. Zaremba & H. Haramoto
Wednesday August 17. 350–450, LK 203/4

Non-Parametric Gaussian Process Vector-Valued Time Series Models and Testing for Causality

Anna Zaremba
University College London, UK
[email protected]

In this work we develop new classes of multivariate time-series models based on Gaussian-process functional non-parametric relationships between the current vector of observations and the lagged history of each marginal time series. Within these new classes of non-parametric time-series models we then consider several families of causality tests that can be used to identify the conditional dependence relationships that may arise in the non-parametric time-series structures. This can include structural causality relationships in the mean function, covariance function and higher-order information.

Our main goal is to study the ability to detect causality in such time-series models, whilst allowing for very general non-linear causal relationships not often considered in classical parametric time-series models.

To perform the testing for such features we develop compound tests that first require estimation/calibration of the mean and covariance functions parameterizing the non-parametric vector-valued time series.

We then provide a generic framework that can be applied to a variety of different problem classes and produce a number of examples to illustrate the ideas developed.

Joint work with Gareth W. Peters.

A Method to Compute an Appropriate Sample Size of the Two-Level Test of the NIST Test Suite for the Frequency and Binary Matrix Rank Tests

Hiroshi Haramoto
Ehime University, Japan
[email protected]

The National Institute of Standards and Technology (NIST) has implemented a test suite for random number generators (RNGs) and pseudorandom number generators (PRNGs). The NIST test suite consists of 15 statistical tests, each of which measures uniform randomness and returns p-values (called the one-level tests). The NIST test suite also offers a two-level test, which measures the discrepancy between the uniform distribution U(0,1) and the empirical distribution of the p-values provided by a one-level test, using a χ² goodness-of-fit test.

Two-level tests are widely used; however, they sometimes yield a false rejection because the p-value of a one-level test is often obtained by an approximate computation, and these approximation errors may degrade the accuracy of the two-level test. Therefore we need to measure the effect of the numerical error in one-level tests.

In this talk, we introduce a method to compute an appropriate sample size of the two-level test when we adopt the Frequency test and the Binary Matrix Rank test as the one-level test. This method is based on the δ-discrepancy

δ = Σ_{i=0}^{ν} (q_i − p_i)² / p_i,

which measures the difference between two probability distributions (p_i) and (q_i) [2].

For example, when we use the Frequency test with sample size 10^6, the appropriate sample size of the two-level test is approximately 125,000, and the dangerous sample size (at which the average p-value becomes 0.9999) is 1,300,000. Experiments show that these estimates of the two-level sample size are plausible.
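A minimal version of the general two-level scheme can be sketched as follows, assuming the standard monobit p-value formula from SP 800-22 and a 10-bin χ² statistic; this illustrates the procedure being analyzed, not the sample-size method of the talk.

```python
import math
import numpy as np

def frequency_test_pvalue(bits):
    """One-level NIST Frequency (monobit) test p-value for a 0/1 bit array."""
    n = len(bits)
    s = abs(int(np.sum(2 * bits - 1)))      # partial sum of +/-1 steps
    return math.erfc(s / math.sqrt(2.0 * n))

def two_level_chi2(pvalues, bins=10):
    """Two-level statistic: chi-square goodness of fit of the one-level
    p-values against U(0,1), using equal-width bins."""
    counts, _ = np.histogram(pvalues, bins=bins, range=(0.0, 1.0))
    expected = len(pvalues) / bins
    return float(np.sum((counts - expected) ** 2 / expected))

rng = np.random.default_rng(1)
pvals = [frequency_test_pvalue(rng.integers(0, 2, size=10**4)) for _ in range(200)]
chi2 = two_level_chi2(pvals)
print(chi2)  # compare with the 95% critical value 16.92 for 9 degrees of freedom
```

A generator passes this two-level check when the statistic stays below the χ² critical value; the talk's point is that approximation error in the one-level p-values biases this statistic as the second-level sample size grows.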

Joint work with Makoto Matsumoto (Hiroshima University, Japan).

[1] L. Bassham III et al. SP 800-22 Rev. 1a: A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications. National Institute of Standards & Technology, 2010.

[2] M. Matsumoto and T. Nishimura. A non-empirical test on the weight of pseudorandom number generators. In Monte Carlo and Quasi-Monte Carlo Methods 2000, pp. 381–395.


Markov chain Monte Carlo
Thursday August 18. 1025–1155, LK 102

Approximations of Markov Chains and Bayesian Inference

James Johndrow
Duke University
[email protected]

The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. We propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

This is joint work with Jonathan C. Mattingly, Sayan Mukherjee, and David B. Dunson.

Function Mixing Times and Concentration away from Equilibrium

Aaditya Ramdas
University of California, Berkeley
[email protected]

Slow mixing is the central hurdle when working with Markov chains, especially those used for Monte Carlo approximations (MCMC). In many applications, it is only of interest to estimate the stationary expectations of a small set of functions, and so the usual definition of mixing based on total variation convergence may be too conservative. Accordingly, we introduce function-specific analogs of mixing times and spectral gaps, and use them to prove Hoeffding-like function-specific concentration inequalities. These results show that it is possible for empirical expectations of functions to concentrate long before the underlying chain has mixed in the classical sense. We use our techniques to derive confidence intervals that are sharper than those implied by both classical Markov chain Hoeffding bounds and Berry-Esseen-corrected CLT bounds. For applications that require testing, rather than point estimation, we show similar improvements over recent sequential testing results for MCMC. We conclude by applying our framework to real-data examples of MCMC, providing evidence that our theory is both accurate and relevant to practice.

Joint work with Maxim Rabinovich, Michael I. Jordan and Martin J. Wainwright of UC Berkeley.

BAIS+L: An Adaptive MCMC Approach to Sampling from Probability Distributions with Many Modes

Christian Davey
Monash University, Australia
[email protected]

Probability distributions with multiple modes play an important role in many applications. For example, the "rough energy landscapes" of spin glass and protein folding configurations can be modelled by multimodal probability distributions. Optimisation problems also require the exploration of such rough functions in order to find maxima or minima. Unfortunately, we often do not know the positions, sizes or number of the modes in these distributions, so sampling from them becomes difficult.

Here we introduce a Metropolis-Hastings-like Markov chain Monte Carlo sampler designed to address these issues. The new sampler, which we call the Bayesian Adaptive Independence Sampler with Latent variables (BAIS+L), is an extension of an earlier sampler by Keith, Kroese and Sofronov [1], called the Bayesian Adaptive Independence Sampler (BAIS). BAIS+L extends BAIS by replacing its single-component Gaussian proposal distribution with a mixture of Gaussian distributions. This produces a multimodal proposal distribution, which should allow a better approximation to a multimodal target distribution, for more efficient sampling.


BAIS+L involves an acceptance ratio that is impractical to calculate, so we use an approximate acceptance ratio instead. We show, using a criterion from Meyn and Tweedie [2] and certain conditions on the proposal and target, that, even with this approximation, BAIS+L is still uniformly ergodic. We close this introduction to BAIS+L with a brief illustration of its implementation.
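A plain, non-adaptive independence Metropolis-Hastings sampler with a Gaussian-mixture proposal, the basic ingredient that BAIS+L builds on (not BAIS+L itself), can be sketched as follows; the bimodal target and the roughly matched proposal are arbitrary illustrative choices.

```python
import numpy as np

def normal_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

def target(x):        # bimodal target: equal mixture of N(-3,1) and N(3,1)
    return 0.5 * normal_pdf(x, -3, 1) + 0.5 * normal_pdf(x, 3, 1)

def proposal_pdf(x):  # multimodal independence proposal, deliberately wider
    return 0.5 * normal_pdf(x, -3, 1.5) + 0.5 * normal_pdf(x, 3, 1.5)

def proposal_draw(rng):
    mu = -3.0 if rng.random() < 0.5 else 3.0
    return rng.normal(mu, 1.5)

def independence_mh(n, rng):
    x = 0.0
    out = np.empty(n)
    for i in range(n):
        y = proposal_draw(rng)
        # independence-sampler acceptance ratio pi(y) q(x) / (pi(x) q(y))
        if rng.random() < target(y) * proposal_pdf(x) / (target(x) * proposal_pdf(y)):
            x = y
        out[i] = x
    return out

rng = np.random.default_rng(2)
chain = independence_mh(20000, rng)
print(chain.mean(), (chain > 0).mean())
```

Because the proposal covers both modes, the chain jumps between them freely, which is exactly what a single-component Gaussian proposal fails to do.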

[1] J. M. Keith, D. P. Kroese, and G. Y. Sofronov. Adaptive independence samplers. Statistics and Computing, 18(4):409–420, 2008.

[2] S. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability (2nd edition). Cambridge University Press, Cambridge, United Kingdom, 2009.

Stochastic Volatility
Thursday August 18. 225–325, LK 101

Bayesian Inference for Heston-STAR Models

Osnat Stramer
University of Iowa
[email protected]

The Heston-STAR model is a new class of stochastic volatility models defined by generalizing the Heston model to allow the volatility of the volatility process, as well as the correlation between asset log-returns and variance shocks, to change across different regimes via smooth transition autoregressive (STAR) functions. The form of the STAR functions is very flexible, much more so than the functions introduced in Jones (2003), and provides the framework for a wide range of stochastic volatility models. A Bayesian inference approach using data augmentation techniques is used for the parameters of our model. We also explore goodness of fit of our Heston-STAR model. Our analysis of the S&P 500 and VIX index demonstrates that the Heston-STAR model is more capable of dealing with large market fluctuations (such as in 2008) than the standard Heston model.
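For reference, a full-truncation Euler scheme for the standard (single-regime) Heston model, which Heston-STAR generalizes, can be sketched as follows; all parameter values below are arbitrary illustrative choices.

```python
import numpy as np

def heston_paths(s0, v0, kappa, theta, xi, rho, r, T, steps, n_paths, rng):
    """Full-truncation Euler scheme for the standard Heston model:
    dS = r S dt + sqrt(v) S dW1,  dv = kappa (theta - v) dt + xi sqrt(v) dW2,
    with corr(dW1, dW2) = rho.  Returns terminal asset prices."""
    dt = T / steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                      # full truncation of v
        s *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return s

rng = np.random.default_rng(3)
ST = heston_paths(100, 0.04, 2.0, 0.04, 0.3, -0.7, 0.0, 1.0, 200, 20000, rng)
print(ST.mean())  # with r = 0 the price process is a martingale, so roughly 100
```

The STAR extension of the talk replaces the constants ξ and ρ with smooth regime-dependent functions.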

Joint work with Xiaoyu Shen and Matthew Bognar (University of Iowa, USA).

Novel Approach to Calibration of Stochastic Volatility Models Using a Quasi-Evolutionary Algorithm with Local Search Optimization

Jan Pospíšil
University of West Bohemia, Czech Republic
[email protected]

Stochastic volatility (SV) models are widely used in mathematical finance to evaluate derivative securities, such as options. Optimization techniques for calibration of SV models to real market data have been investigated; recently, global optimization based on evolutionary algorithms (EAs) has attracted particular attention. For calibration tasks without good knowledge of the market (e.g. suitable initial model parameters), an approach combining local and global optimizers seems to be crucial. The aim of this talk is to evaluate thoroughly the performance of the calibration process under different EA settings. We propose a novel approach to evolutionary algorithms with local search optimization, together with quasi-random initial population generation. The proposed approach is superior to existing methods, especially in terms of speed.


Medical
Thursday August 18. 350–450, LK 203/4

Adaptive Multi-Level Monte Carlo Method for Stochastic Variational Inequalities

Evgenia Babushkina
Freie Universität Berlin, Germany
[email protected]

In this work we focus on a variation of the MLMC method for elliptic variational inequalities with stochastic input data. The adaptive Multi-Level Monte Carlo Finite Element method combines the ideas of the Multi-Level Monte Carlo Finite Element (MLMC-FE) method and a posteriori error estimation for the adaptive solution of deterministic spatial problems. Whereas the classical MLMC-FE method is based on a hierarchy of uniform meshes, in the adaptive version of the method we use meshes generated by adaptive mesh refinement of an initially given mesh. Generation of meshes in the method is based on a posteriori error estimation. In contrast to the MLMC-FE method, there is no given notion of mesh size anymore, and levels are characterized by a hierarchy of FE-error tolerances. Under suitable assumptions on the problem, convergence of the adaptive MLMC-FE method is shown and upper bounds for its computational cost are obtained. We illustrate our theoretical findings by applying the method to a model stochastic elliptic inequality. Our numerical experiments show a reduction in the computational cost of the adaptive MLMC-FE method compared to the standard MLMC-FE method. In the end we apply the method to a practically relevant stochastic contact problem with an application in biomechanical investigations of human knee implants. Although we focus on elliptic variational inequalities, the method can be used for the solution of other problems containing partial derivatives.
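The non-adaptive MLMC telescoping estimator that the adaptive variant refines can be illustrated on a toy problem; the sketch below prices a European call under geometric Brownian motion with Euler discretization rather than solving a variational inequality, and the level and sample-size choices are arbitrary.

```python
import numpy as np

def mlmc_level(l, n_samples, rng):
    """Coupled fine/coarse Euler payoffs for one MLMC level: level l uses
    2**(l+1) fine steps; coarse increments reuse the same Brownian path."""
    T, sig, s0, K = 1.0, 0.2, 1.0, 1.0
    nf = 2 ** (l + 1)
    dw = rng.standard_normal((n_samples, nf)) * np.sqrt(T / nf)
    sf = np.full(n_samples, s0)
    for k in range(nf):
        sf = sf * (1.0 + sig * dw[:, k])
    pf = np.maximum(sf - K, 0.0)
    if l == 0:
        return pf                        # base level: plain payoff
    sc = np.full(n_samples, s0)
    dwc = dw[:, 0::2] + dw[:, 1::2]      # coarse path: summed increment pairs
    for k in range(nf // 2):
        sc = sc * (1.0 + sig * dwc[:, k])
    return pf - np.maximum(sc - K, 0.0)  # telescoping correction term

rng = np.random.default_rng(4)
estimate = sum(mlmc_level(l, 20000 // 2**l + 1000, rng).mean() for l in range(5))
print(estimate)  # the Black-Scholes value for these parameters is about 0.0797
```

The correction terms have small variance because fine and coarse paths share the same randomness, so most samples can be spent on the cheap coarse levels; the adaptive method of the talk replaces the fixed mesh hierarchy with error-tolerance levels.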

Joint work with Ralf Kornhuber (Freie Universität Berlin, Germany).

Enhancing Morphological Categorization via Monte Carlo Methods

Serdar Cellat
Florida State University
[email protected]

Quantifying shape variation within a group of individuals, identifying morphological contrasts between populations, and categorizing these groups according to morphological similarities and dissimilarities are central problems in developmental evolutionary biology and genetics.

In this talk, we will present an approach to optimal shape categorization through the use of a new family of metrics for shapes represented by a finite collection of landmarks. We develop a technique to identify metrics that optimally differentiate and categorize shapes using Monte Carlo based optimization methods. We discuss the theory and the practice of the methods and apply them to the classification of multiple species of fruit flies based on the shape of their wings.

Joint work with Giray Okten and Washington Mio (Florida State University).


Interest rates
Thursday August 18. 350–450, LK 205/6

Robustness of Short Rate Models

David Mandel
Florida State University
[email protected]

Short rate models are a popular means of pricing interest rate-dependent financial products. These models attempt to capture the dynamics of the short rate, a mathematical concept that is readily transformed into prices. Bond prices obtained from different models are quite similar, yet some models demonstrate instability under perturbations to their parameters.

We introduce a notion of robustness of a model by applying Sobol' global sensitivity analysis, where the input parameters of the sensitivity analysis are the means and variances of the input parameters of the model in question, obtained via calibration, and the output is the Sobol' sensitivity indices. We regard a model as more robust if its sensitivity to parameters is more balanced. We use quasi-Monte Carlo to evaluate the Sobol' sensitivity indices of the short rate models, after the models are calibrated to recent U.S. Treasury yields, and compare them with respect to their robustness.
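First-order Sobol' indices themselves can be estimated with a standard pick-freeze scheme. The sketch below uses a simple linear toy model with known indices, not a calibrated short rate model.

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """Pick-freeze (Sobol'/Saltelli-style) estimate of first-order indices
    S_i = V_i / Var(f) for f with independent U(0,1) inputs."""
    a = rng.random((n, d))
    b = rng.random((n, d))
    fa = f(a)
    var = fa.var()
    s = np.empty(d)
    for i in range(d):
        abi = b.copy()
        abi[:, i] = a[:, i]            # freeze coordinate i, resample the rest
        s[i] = np.mean(fa * (f(abi) - f(b))) / var
    return s

f = lambda x: x[:, 0] + 2.0 * x[:, 1]  # toy model; exact indices (1/5, 4/5)
rng = np.random.default_rng(5)
S = first_order_sobol(f, 2, 100000, rng)
print(S)
```

A "balanced" model in the talk's sense would have indices of comparable size rather than one input dominating, as the second input does here.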

Joint work with Giray Ökten.

Dynamic State Space Panel Regression Models for Yield Curves

Dorota Toczydlowska
Department of Statistical Sciences, University College London, UK
[email protected]

In this study we develop a novel class of dimension-reduction stochastic multi-factor panel-regression-based state-space models for the modelling of dynamic yield curves. This includes consideration of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) in both vector and function space settings. We embed these rank-reduction formulations of multi-curve interest rate dynamics into a stochastic representation for state-space models and compare the results to classical multi-factor dynamic Nelson-Siegel state-space models. We introduce a new class of filters to study these models, and we also introduce new approaches to the extraction of the independent components via generalisations of the classical ICA methods.

This leads to important new representations of yield curve models that can be practically important for addressing questions of financial stress testing and monetary policy interventions. We will illustrate our results on data for multi-curve Libor interest rates in multiple countries.

This is joint work with Gareth W. Peters.


Rare events and importance sampling
Friday August 19. 1025–1155, LK 101

Reliability Estimation for Networks with Minimal Flow Demand and Random Link Capacities

Zdravko Botev
University of New South Wales
[email protected]

We consider a network whose links have random capacities and in which a certain target amount of flow must be carried from a source node to a destination node. The destination node has a fixed demand that must be satisfied and the source node has a given supply. We wish to estimate the unreliability of the network, defined as the probability that the network cannot meet the flow demand. We describe a novel Monte Carlo estimator to estimate this typically small probability and explain how this estimation approach can be applied to estimation problems in finance and Bayesian statistics with similar mathematical structure.
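For a toy series network, where the feasible flow is simply the minimum link capacity, crude Monte Carlo already works and can be checked against a closed form; the talk's estimator targets the regime where this naive approach breaks down, namely very small unreliabilities. All rates below are arbitrary illustrative values.

```python
import math
import numpy as np

def unreliability_mc(rates, demand, n, rng):
    """Crude Monte Carlo estimate of P(network cannot carry `demand`)
    for a toy series network: the feasible flow is the minimum of the
    independent exponentially distributed link capacities."""
    caps = rng.exponential(1.0 / np.asarray(rates), size=(n, len(rates)))
    return float(np.mean(caps.min(axis=1) < demand))

rates, demand = [1.0, 2.0, 0.5], 0.1
exact = 1.0 - math.exp(-sum(rates) * demand)   # closed form for this toy case
rng = np.random.default_rng(6)
est = unreliability_mc(rates, demand, 200000, rng)
print(est, exact)
```

When the unreliability drops to, say, 1e-8, the crude estimator needs on the order of 1e10 samples for a usable relative error, which is what motivates specialized rare-event estimators.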

Joint work with P. L'Ecuyer, R. Shah, and B. Tuffin.

Convex Optimization for Monte Carlo: Stochastic Optimization for Importance Sampling

Ernest K. Ryu
Stanford University
[email protected]

In many applications, variance reduction is essential for a Monte Carlo simulation to converge within a practical amount of computation time. Variance reduction is an optimization problem. However, numerical optimization has been used sparingly for this problem, and convex optimization more so.

In this talk, we explore how to use convex optimization for Monte Carlo simulations and, in particular, how to use stochastic convex optimization for importance sampling. We propose a principled framework for using convex optimization for several variations of importance sampling: adaptive importance sampling, self-normalized importance sampling, and what-if simulations. The resulting algorithms are simple, conceptually and computationally, and have convergence guarantees due to convexity.

Parallel Adaptive Importance Sampling

Paul Russell
University of Manchester
[email protected]

It is frequently of interest to solve data assimilation problems of the form D = G(u) + ε, ε ∼ N(0, Σ), where the data D is a noisy observation of the system G(u). We assume knowledge of the observation operator G and the covariance operator for the noise, Σ, and try to recover the parameter u which best satisfies the equation. Viewing the problem from a Bayesian perspective and assuming prior knowledge u ∼ N(0, T), we can construct a posterior distribution, µ_Y, and use Markov chain Monte Carlo (MCMC) to produce a sample from this distribution. I will demonstrate that commonly used serial MCMC methods do not take full advantage of modern computer architecture, and that designing parallel algorithms can lead to a large reduction in computational costs. We will compare the performance of our novel algorithm with other recent developments in the parallelisation of the Metropolis-Hastings algorithm for low-dimensional inverse problems. I will then discuss extensions to the algorithm which take into account local information about the posterior when building the proposal distribution.

This is joint work with Colin Cotter and SimonCotter of University of Manchester.


Rank one lattices
Friday August 19. 230–330, Berg A

Sparse High-Dimensional FFT Based on Rank-1 Lattice Sampling

Toni Volkmer
Technische Universität Chemnitz, Germany
[email protected]

We consider the (approximate) reconstruction of a high-dimensional (e.g. d = 10) periodic signal from samples using a trigonometric polynomial p_I : T^d ≅ [0,1)^d → C,

p_I(x) := Σ_{k∈I} p̂_k e^{2πi k·x},  p̂_k ∈ C,

where I ⊂ Z^d is a suitable and unknown frequency index set. For this setting, we present a method which adaptively constructs the index set I of frequencies belonging to the non-zero or (approximately) largest Fourier coefficients in a dimension-incremental way. This method computes projected Fourier coefficients from samples along suitable rank-1 lattices and then determines the frequency locations. For the computation, only one-dimensional fast Fourier transforms (FFTs) and simple index transforms are used.

This method can be transferred to the non-periodic case, where we consider the (approximate) reconstruction of a multi-dimensional signal restricted to the domain [−1,1]^d using an algebraic polynomial a_I : [−1,1]^d → R in the Chebyshev basis,

a_I(x) := Σ_{k∈I} â_k T_k(x) = Σ_{k∈I} â_k Π_{t=1}^{d} T_{k_t}(x_t),  â_k ∈ R,

where I ⊂ N_0^d, x := (x_1, ..., x_d)^T ∈ [−1,1]^d, k := (k_1, ..., k_d)^T ∈ N_0^d and T_k(x) := cos(k arccos x) for k ∈ N_0. Here, we sample along suitable rank-1 Chebyshev lattices and again use only one-dimensional FFTs / discrete cosine transforms as well as simple index transforms.

Finally, we compare the dimension-incremental method with an alternative method in the periodic case which uses algorithms from spectral estimation. This method is based on shifted sampling in combination with a divide-and-conquer technique for the ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm.

Joint work with Daniel Potts (Technische Universität Chemnitz, Germany) and Manfred Tasche (University of Rostock, Germany).

[1] D. Potts, M. Tasche, and T. Volkmer. Efficient spectral estimation by MUSIC and ESPRIT with application to sparse FFT. Front. Appl. Math. Stat., 2, 2016.

[2] D. Potts and T. Volkmer. Fast and exact reconstruction of arbitrary multivariate algebraic polynomials in Chebyshev form. In 11th International Conference on Sampling Theory and Applications (SampTA 2015), pages 392–396, 2015.

[3] D. Potts and T. Volkmer. Sparse high-dimensional FFT based on rank-1 lattice sampling. Appl. Comput. Harmon. Anal., in press, 2015.

On Combined Component-by-Component Constructions of Lattice Point Sets

Helene Laimer
RICAM Linz
[email protected]

Lattice point sets are used, for example, as sample points in quasi-Monte Carlo algorithms for numerical integration. Properties of the point sets used can influence the quality of the approximation that is obtained by the algorithm. Thus the goal is to find good lattice point sets.

The standard method for constructing generating vectors for good lattice point sets is the component-by-component construction. Numerical experiments have shown that the generating vectors found by these constructions sometimes tend to have recurring components, which can lead to the problem of having projections with all lattice points lying on the main diagonal.

We combine methods of Dick and Kritzer to avoid this problem with a reduced fast component-by-component construction. That is, we give a variation of the standard component-by-component construction which avoids repeated components and simultaneously results in a considerable speed-up in comparison to the standard construction.
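A textbook component-by-component construction for a weighted Korobov space with smoothness α = 2 (product weight γ chosen arbitrarily) can be sketched as follows; note that nothing in this basic version prevents the repeated components that the talk sets out to avoid.

```python
import numpy as np

def wce2(z, n, gamma):
    """Squared worst-case error of the rank-1 lattice with generating vector z
    in the weighted Korobov space with smoothness alpha = 2:
    e^2 = -1 + (1/n) sum_j prod_t (1 + gamma * 2*pi^2 * B2({j*z_t/n}))."""
    x = np.arange(n) / n
    omega = 2.0 * np.pi**2 * (x**2 - x + 1.0 / 6.0)   # 2*pi^2 * B_2(x)
    prod = np.ones(n)
    for zt in z:
        prod = prod * (1.0 + gamma * omega[np.arange(n) * zt % n])
    return prod.mean() - 1.0

def cbc(n, d, gamma):
    """Greedy component-by-component search: choose each component of z
    to minimize the worst-case error given the components already fixed."""
    z = []
    for _ in range(d):
        errs = [wce2(z + [zt], n, gamma) for zt in range(1, n)]
        z.append(1 + int(np.argmin(errs)))
    return z, min(errs)

z, err2 = cbc(n=101, d=4, gamma=0.5)
print(z, err2, wce2([1, 1, 1, 1], 101, 0.5))
```

The "fast" CBC constructions in the literature reduce the cost of the inner search from O(n^2) per component to O(n log n) via FFTs; the all-ones vector printed for comparison puts every point on the main diagonal and has a far larger error.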


The research was supported by the Austrian Science Fund (FWF): Projects F5506-N26 and F5508-N26, which are part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications".

QMC for finance
Friday August 19. 230–330, Berg B

Adaptive Quasi-Monte Carlo Where Each Integrand Value Depends on All Data Sites: the American Option Case

Lluís Antoni Jiménez Rugama
Illinois Institute of Technology
[email protected]

The theory of quasi-Monte Carlo cubature assumes that one is approximating µ = E(Y), where Y = f(X), X ∼ U[0,1]^d, by the sample mean

µ̂_n = n^{-1} Σ_{i=0}^{n−1} y_i,  y_i = f(x_i),

for an evenly distributed sequence of data sites {x_i}_{i=0}^∞. Recent adaptive integration Sobol' rules [2] and lattice rules [3] choose n automatically to ensure that |µ − µ̂_n| ≤ ε for a user-specified error tolerance ε.

However, in some practical settings, such as valuation of American options via the Longstaff and Schwartz method [4], the quantity y_j depends not only on x_j but on all {x_i}_{i=0}^∞. This dependence arises because the value y is a function of the exercise boundary, which is in turn estimated from the full set of sample paths. We present a modification of [2] and [3] that handles this problem. In addition, we discuss alternative Brownian motion path generation and control variates that improve the integration efficiency.
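The dependence of each payoff on all paths is visible in a basic least-squares Monte Carlo implementation of the Longstaff-Schwartz method, sketched below for a Bermudan-style put with a quadratic regression basis; all parameter values are illustrative.

```python
import numpy as np

def lsm_put(s0, K, r, sig, T, steps, n_paths, rng):
    """Least-squares Monte Carlo (Longstaff-Schwartz) put price: regress
    discounted continuation values on polynomials of the asset price,
    stepping backwards through the exercise dates."""
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - s[:, -1], 0.0)          # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                   # discount one step back
        itm = K - s[:, t] > 0
        if itm.sum() > 10:
            x = s[itm, t]
            A = np.column_stack([np.ones_like(x), x, x**2])
            beta, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
            exercise = np.maximum(K - x, 0.0) > A @ beta
            cash[itm] = np.where(exercise, np.maximum(K - x, 0.0), cash[itm])
    return np.exp(-r * dt) * cash.mean()

rng = np.random.default_rng(9)
price = lsm_put(s0=100, K=100, r=0.05, sig=0.2, T=1.0, steps=50, n_paths=20000, rng=rng)
print(price)
```

The regression coefficients, and hence every y_j, are fitted on the full set of paths; that is exactly the coupling that breaks the standard adaptive QMC error analysis.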

Joint work with Da Li and Fred J. Hickernell of Illinois Institute of Technology.

[1] R. Cools and D. Nuyens, editors. Monte Carlo and Quasi-Monte Carlo Methods 2014. Springer-Verlag, Berlin, 2016+.

[2] F. J. Hickernell and Ll. A. Jiménez Rugama. Reliable adaptive cubature using digital sequences. In Cools and Nuyens [1]. To appear, arXiv:1410.8615 [math.NA].

[3] Ll. A. Jiménez Rugama and F. J. Hickernell. Adaptive multidimensional integration based on rank-1 lattices. In Cools and Nuyens [1]. To appear, arXiv:1411.1966.

[4] F. A. Longstaff and E. S. Schwartz. Valuing American options by simulation: a simple least-squares approach. Review of Financial Studies, 14:113–147, 2001.

A Comparison Study of Sobol' Sequences in Financial Derivatives

Shin Harase
Ritsumeikan University, Japan
[email protected]

Monte Carlo simulation is the primary method for pricing complex financial derivatives. Under certain assumptions, the prices of derivatives can be represented as expected values that correspond to high-dimensional integrals, and it is well known that quasi-Monte Carlo methods are significantly effective. Sobol' sequences are widely used quasi-Monte Carlo sequences in applications.

There are several implementations of Sobol' sequences with distinct parameter sets (so-called direction numbers). Joe and Kuo [2] constructed Sobol' sequences with good two-dimensional projections. Their motivation is due to the observation that integrands for some problems in finance can be well approximated by a sum of functions, each depending on only a small number of variables at a time (i.e., low effective dimension in the superposition sense, in terms of the ANOVA decomposition). To the best of our knowledge, a comprehensive investigation of the superiority of the above implementation in practice still seems to be missing.

In this talk, we show and analyze when the Sobol' sequences of [2] are superior to other Sobol' sequences (e.g., [3, 1]) for various numerical examples in finance, such as option pricing and interest rate derivatives. In addition, we explore the possibility of better choices for the direction numbers of Sobol' sequences.
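A small experiment contrasting a low-discrepancy sequence with plain Monte Carlo is easy to set up. Since Sobol' direction numbers are out of scope here, the sketch uses a Halton sequence as a stand-in quasi-Monte Carlo point set, applied to a smooth product integrand whose integral is exactly 1.

```python
import numpy as np

def halton(n, d):
    """First n points of the (unscrambled) Halton sequence in up to 6 dimensions,
    via the radical-inverse function in the first d prime bases."""
    primes = [2, 3, 5, 7, 11, 13][:d]
    pts = np.empty((n, d))
    for j, p in enumerate(primes):
        for i in range(1, n + 1):
            f, x, k = 1.0, 0.0, i
            while k > 0:
                f /= p
                x += f * (k % p)
                k //= p
            pts[i - 1, j] = x
    return pts

f = lambda u: np.prod(3 * u**2, axis=1)   # test integrand, exact integral 1
qmc_est = f(halton(4096, 3)).mean()
mc_est = f(np.random.default_rng(10).random((4096, 3))).mean()
print(qmc_est, mc_est)
```

With 4096 points the quasi-Monte Carlo estimate is typically one to two orders of magnitude closer to 1 than the plain Monte Carlo estimate; the talk's question is how much further good direction numbers push this for realistic finance integrands.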

Joint work with Tomooki Yuasa (Ritsumeikan University, Japan).

[1] S. Joe and F. Y. Kuo, "Remark on Algorithm 659: Implementing Sobol's quasirandom sequence generator", ACM Trans. Math. Softw. 29, 49–57 (2003).

[2] S. Joe and F. Y. Kuo, "Constructing Sobol' sequences with better two-dimensional projections", SIAM J. Sci. Comput. 30, 2635–2654 (2008).

[3] C. Lemieux, M. Cieslak, and K. Luttmer, "RandQMC User's Guide: A Package for Randomized Quasi-Monte Carlo Methods in C" (2004). https://www.math.uwaterloo.ca/~clemieux/randqmc/

Statistical
Friday August 19. 230–330, Berg C

Permutation p-value Approximation via Generalized Stolarsky Invariance

Art Owen
Stanford University
[email protected]

It is really expensive to get a tiny p-value via permutations. For linear test statistics, the permutation p-value is the fraction of permuted data vectors in a given spherical cap. Stolarsky's invariance principle from quasi-Monte Carlo sampling gives the root mean squared discrepancy between the observed and expected proportions of points in spherical caps. We use and extend this principle to develop approximate p-values with small relative errors. Their accuracy is competitive with saddlepoint approximations, but they come with an error estimate.
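The brute-force permutation p-value that such approximations replace can be sketched directly for a difference-of-means statistic; the two samples below are synthetic. A Monte Carlo p-value can never be smaller than about 1/n_perm, which is exactly why tiny p-values are expensive this way.

```python
import numpy as np

def permutation_pvalue(x, y, n_perm, rng):
    """Two-sided Monte Carlo permutation p-value for the
    difference-of-means statistic."""
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)     # add-one rule avoids p = 0

rng = np.random.default_rng(11)
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(1.5, 1.0, 30)
pval = permutation_pvalue(x, y, 999, rng)
print(pval)
```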

This is joint work with Hera He, Kinjal Basu and Qingyuan Zhao.

Fast Random Field Generation with H-Matrices

Michael Feischl
University of New South Wales
[email protected]

We use the H-matrix technology to compute the approximate square root of a covariance matrix in linear complexity. This allows us to generate normal and log-normal random fields on general finite point sets with optimal complexity. We derive rigorous error estimates, which show that, particularly for short correlation lengths, this new method outperforms the standard method of truncating the Karhunen-Loève expansion. Besides getting rid of the log-factor in the complexity estimate, the method requires only mild assumptions on the covariance function and on the point set. Therefore, it might also be an alternative to circulant embedding, which is well defined only for regular grids and stationary covariance functions.
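The dense O(n³) baseline that the H-matrix square root accelerates uses a Cholesky factor of the covariance matrix. The sketch below samples a Gaussian field with exponential covariance on an irregular point set, adding a small diagonal jitter for numerical stability; correlation length and point count are arbitrary.

```python
import numpy as np

def gaussian_field(points, corr_len, n_samples, rng):
    """Dense-Cholesky sampling of a stationary Gaussian field with
    exponential covariance C(p, q) = exp(-|p - q| / corr_len) on an
    arbitrary point set.  If C = L L^T then L @ (iid normals) has
    covariance C; this O(n^3) factorization is the cost the H-matrix
    square root reduces to (near-)linear complexity."""
    diff = points[:, None, :] - points[None, :, :]
    C = np.exp(-np.linalg.norm(diff, axis=-1) / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(points)))
    return L @ rng.standard_normal((len(points), n_samples)), C, L

rng = np.random.default_rng(12)
pts = rng.random((200, 2))                 # general (irregular) point set
samples, C, L = gaussian_field(pts, 0.3, 1000, rng)
print(np.abs(L @ L.T - C).max())           # factorization check
```

Exponentiating the samples pointwise gives the corresponding log-normal field.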

This is joint work with Frances Kuo and Ian H. Sloan, both of UNSW.


List of Session Chairs

Monday

Time       Room     Chair
900–1000   Berg BC  Ronald Cools
1025–1155  Berg A   Wing Wong
1025–1155  Berg B   T. Mueller-Gronbach
1025–1155  Berg C   Raul Tempone
1025–1155  LK 120   Simon Mak
1025–1155  LK 101   Ronald Cools
115–215    Berg BC  Lester Mackey
225–325    Berg A   Michael Giles
225–325    Berg B   Stefan Heinrich
225–325    Berg C   Alex Keller
225–325    LK 120   Steve Scott
225–325    LK 101   Yazhen Wang
350–550    Berg A   Markus Weimar
350–550    Berg B   Denis Belomestny
350–550    Berg C   Tino Ullrich
350–550    LK 120   Thomas Catanach
350–550    LK 101   Adam Kolkiewicz

Tuesday

Time       Room      Chair
900–1000   Berg BC   Pierre L'Ecuyer
1025–1155  Berg A    Christian Robert
1025–1155  Berg B    Kunal Talwar
1025–1155  Berg C    Kosuke Suzuki
1025–1155  LK 120    Paul Glasserman
1025–1155  LK 203/4  Krzysztof Latuszynski
115–215    Berg BC   Wing Wong
225–325    Berg A    Michael Giles
225–325    Berg B    Pawel Przybylowicz
225–325    Berg C    Alex Keller
225–325    LK 120    Kinjal Basu
225–325    LK 203/4  Michael Betancourt
350–550    Berg A    Takashi Goda
350–550    Berg B    Sourav Chatterjee
350–550    Berg C    Faming Liang
350–550    LK 120    Peter Qian
350–550    LK 203/4  David Draper


Wednesday

Time         Room         Chair

9:00–10:00   Hewlett 200  Peter Glynn

10:25–11:55  Berg A    Christophe Hery
             Berg B    Nicolas Chopin
             Berg C    Christiane Lemieux
             LK 203/4  Kay Giesecke
             LK 205/6  Lukasz Szpruch

1:15–2:15    Berg BC   Fred Hickernell

2:25–3:25    Berg A    Josef Dick
             Berg B    T. Mueller-Gronbach
             Berg C    Mac Hyman
             LK 203/4  Christian Bender

3:50–4:50    Berg A    Dmitriy Bilyk
             Berg B    Raul Tempone
             Berg C    Osnat Stramer
             LK 203/4  Jon Cockayne

Thursday

Time         Room      Chair

9:00–10:00   Berg BC   Josef Dick

10:25–11:55  Berg A    Youssef Marzouk
             Berg B    Christoph Schwab
             Berg C    Mike Giles
             LK 101    Jose Blanchet
             LK 102    Pierre Jacob

1:15–2:15    Berg BC   Frances Kuo

2:25–3:25    Berg A    Michael Giles
             Berg B    Stefan Heinrich
             Berg C    Christophe Hery
             LK 101    Shin Harase

3:50–5:50    Berg A    Takehito Yoshiki
             Berg B    Arnaud Doucet
             Berg C    Johann Brauchart
             LK 203/4  Giray Okten
             LK 205/6  Kay Giesecke

Friday

Time         Room      Chair

9:00–10:00   Berg BC   TBA

10:25–11:55  Berg A    Andrew Stuart
             Berg B    Krzysztof Latuszynski
             Berg C    Aicke Hinrichs
             LK 101    Bruno Tuffin

1:15–2:15    Berg BC   Dawn Woodard

2:30–3:30    Berg A    Dirk Nuyens
             Berg B    Alex Kreinin
             Berg C    Ian Sloan


Index of Speakers and Co-Authors

Aistleitner, C., 35
Allasseur, C., 53
Azzimonti, D., 113

Babushkina, E., 125
Bansal, N., 72
Bardenet, R., 114
Basu, K., 104, 130
Beck, J. L., 109
Belomestny, D., 55
Bender, C., 54
Berger, A., 107
Beskos, A., 52
Betancourt, M., 83
Bigoni, D., 100
Bilyk, D., 94
Blanchet, J. H., 33
Bognar, M., 124
Bornn, L., 92
Botev, Z., 127
Brandolini, L., 103
Brauchart, J. S., 56
Briol, F.-X., 95
Brown, P. T., 117
Bugallo, M. F., 65
Bui-Thanh, T., 42
Byrenheid, G., 82, 94

Calvin, J. M., 60
Cambou, M., 31
Campi, L., 53
Catanach, T., 109
Cellat, S., 125
Chatterjee, Sourav, 65
Chen, B., 51
Chen, N., 90
Chimisov, C., 105
Chiu, M., 121
Chopin, N., 32
Cockayne, J., 96
Colzani, L., 103
Connor, S., 77

Cools, R., 73, 88
Cotter, C., 127
Cotter, S., 127

Dadush, D., 72
Dahm, K., 45
Dai, F., 94
Dalessandro, A., 118
Dasgupta, T., 79
Daum, F., 110
Davey, C., 123
Dawson, A., 83
Del Moral, P., 52, 92
Deng, L., 106
Dereich, S., 63
Dewar, J., 111
DeWeese, M., 107
DeWeese, M. R., 107
Diaconis, P., 65
Dick, J., 27, 40, 75, 85, 86
Dieker, T., 33
Doss, H., 69
Douc, R., 92
Doucet, A., 37
Draper, D., 105, 117
Drovandi, C., 67, 68
Dunson, D. B., 123
Durand, F., 45

Ebert, A., 76
Eichler, A., 119
Elvira, V., 65
Ernst, O., 42

Fang, W., 47
Fearnhead, P., 66
Feischl, M., 130
Feng, M., 108
Flegal, J., 112
Forman, J. L., 68
Frazier, P., 38
Fu, M., 89
Fu, Y. G., 106


Gärtner, C., 54
Gantner, R. N., 40, 41, 49
Garg, S., 72
Gariboldi, B., 103
Gerber, M., 32, 92
Giesecke, K., 101
Gigante, G., 58, 103
Gilbert, A., 48
Giles, M., 47
Ginsbourger, D., 113
Girolami, M., 95, 96
Glasserman, P., 102
Glynn, P. W., 77
Gnewuch, M., 82
Gobet, E., 55
Goda, T., 75, 87
Gorham, J., 112
Graham, I., 27
Griebel, M., 87
Griewank, A., 86
Guyon, J., 54

Häppölä, J., 47
Hachisuka, T., 46
Hadjidoukas, P., 116
Hammouda, C. B., 50
Haramoto, H., 122
Harase, S., 129
Hardy, A., 114
Hayakawa, C. K., 46
He, H., 130
Hefter, M., 50, 60, 82
Heinrich, S., 59
Herrmann, L., 49
Hery, C., 43
Hery, C. & Villemin, R., 43
Herzwurm, A., 60
Hickernell, F. J., 26, 129
Hinrichs, A., 75, 82, 87
Hirao, M., 57
Hoang, V. H., 41
Hodyss, D., 100
Hofer, R., 120
Hofert, M., 31
Huber, M., 113
Hult, H., 114
Hutzenthaler, M., 59
Hyman, J. M., 111

Irrgeher, C., 86

Jackson, K., 121
Jacob, P., 92
Jakob, W., 45
Jasra, A., 52
Jentzen, A., 59
Jiménez Rugama, Ll. A., 129
Jin, I. H., 69
Joe, S., 117
Johndrow, J., 123
Johnston, E. R., 97
Jones, G., 112
Jordan, M. I., 30, 123
Joseph, V. R., 111
Joshi, C., 117
Juneja, S., 78, 114

Kühn, T., 93
Kacwin, C., 88
Kang, W., 90, 102
Kaplanyan, A., 45
Keller, A., 45
Kim, K.-K., 101
Klymenko, O. V., 115
Kolkiewicz, A., 121
Kong, R., 44
Kornhuber, R., 125
Koumoutsakos, P., 116
Kreinin, A., 121
Krieg, D., 74
Kritzer, P., 82
Kritzinger, R., 85
Kucherenko, S., 115
Kumar, C., 61
Kuo, F. Y., 27, 34, 85, 86, 130
Kusner, W., 56
Kämmerer, L., 94

L’Ecuyer, P., 25, 127
Lacey, M., 94
Laimer, H., 85, 128
Lang, A., 49
Latuszynski, K., 66, 105
Law, K. J. H., 52
Le Corff, S., 66
Le Gia, Q. T., 27, 40, 85
Leövey, H., 76
Lee, A., 67
Lee, J. M., 89
Lehtinen, J., 45
Lemieux, C., 31
Leobacher, G., 86, 119


Leonhard, C., 62
Leopardi, P., 58
Leovey, H., 86
Li, D., 118, 129
Li, G., 106
Li, Q., 70
Li, R., 106
Li, T.-M., 45
Liang, F., 98
Ling, J., 43
Liu, Z., 33
Lovett, S., 71
Luengo, D., 65

Mackey, L., 112
Mak, S., 111
Mandel, D., 126
Markhasin, L., 75
Martino, L., 65
Marzouk, Y., 41, 42, 100
Matoušek, J., 71
Matsumoto, M., 122
Mattingly, J. C., 123
Mayer, L., 50
Mayer, S., 94
Meka, R., 72
Meng, X.-L., 91, 99
Mikosch, T., 33
Mio, W., 125
Moraes, A., 50
Morkisz, P., 63
Morohosi, H., 104
Morzfeld, M., 100
Mourant, J. R., 46
Mudigonda, M., 107
Mueller-Gronbach, T., 63
Mukherjee, R., 104
Mukherjee, S., 123
Murthy, K., 113

Neuenkirch, A., 60
Neumüller, M., 120
Nguyen, D. T. P., 88
Nguyen, J., 46
Nichols, J., 27
Nikolov, A., 71
Nott, D., 67, 68
Novak, E., 74
Nuyens, D., 27, 29, 88

Oates, C., 96

Oettershagen, J., 87, 88
Okten, G., 125, 126
Olvera-Cravioto, M., 52
Omland, S., 50
Ong, V., 68
Owen, A. B., 130

Park, Y., 69
Parno, M., 42
Pekelis, L., 43
Peters, G. W., 122, 126
Picchini, U., 68
Pillichshammer, F., 75, 82, 86, 120
Pospisil, J., 124
Potts, D., 128
Price, L., 67
Przybyłowicz, P., 62, 63
Puchhammer, F., 120

Qian, P., 80
Quek, J. H., 41

Rößler, A., 62
Rabinovich, M., 123
Ramamoorthi, R., 45
Ramanan, K., 47
Ramdas, A., 123
Rasthofer, U., 116
Reznikov, A., 58
Rhee, C.-H., 50
Ritter, K., 50, 82
Robbe, P., 49
Robert, C., 92
Roberts, G. O., 66, 92, 105
Rossinelli, D., 116
Rubin, D., 84
Russell, P., 127
Ryu, E. K., 127

Sabanis, S., 61
Saff, E., 58
Salimova, D., 59
Schauer, M., 110
Scheichl, R., 27, 48
Schwab, Ch., 27, 40, 41, 49, 85
Schweizer, N., 54
Sermaidis, G., 66
Shah, N., 115
Shah, R., 127
Shahbaba, B., 91
Shalaiko, T., 60


ShangGuan, D. H., 106
Shen, X., 124
Shirley, P., 44
Shkolnik, A., 101
Simpson, D., 117
Sisson, S., 68
Sloan, I. H., 27, 86, 130
Snyder, C., 100
Sohl-Dickstein, J., 107
Spanier, J., 44, 46
Spantini, A., 41, 42, 100
Staum, J., 79, 108
Stramer, O., 124
Stuart, A. M., 36, 41, 48
Sukys, J., 116
Sullivan, T., 96
Suryanarayana, G., 74
Suzuki, K., 73, 87
Szölgyenyi, M., 119
Szpruch, L., 53

Tak, H., 91
Talwar, K., 71
Taniguchi, Y., 31
Tasche, M., 128
Teckentrup, A., 48
Tempone, R., 50–52
Terenin, A., 105, 117
Tian, M., 111
Tichý, T., 108
Toczydlowska, D., 126
Tran, M.-N., 67
Travaglini, G., 103
Tuffin, B., 127
Tuo, R., 79
Tupputi, M., 103
Turkedjiev, P., 53

Ullmann, E., 27
Ullrich, M., 74, 87
Ullrich, T., 81, 88, 94

van der Meulen, F., 109
van Dyk, D. A., 91
Vats, D., 112
Vengazhiyil, R. J., 79
Venugopalan, V., 46
Vihola, M., 51
Villemin, R., 43
Volberg, A., 58
Volkmer, T., 94, 128

Wainwright, M. J., 123
Wang, L., 99
Wang, Y., 97
Wang, Y. G., 57
Wasilkowski, G. W., 82
Wermelinger, F., 116
Wibisono, A., 30
Wilson, A., 30
Wong, W. H., 118
Wu, C. F. J., 79

Xiao, G., 106

Yang, K., 118
Yang-Jun, Y., 106
Yaroslavtseva, L., 61
Yoshiki, T., 87
Yuasa, T., 129

Zaremba, A., 122
Zhang, B. Y., 106
Zhao, Q., 130
Zhao, S., 44
Zheng, Z., 77
Zhou, Y., 52
Zhuang, X., 57
Zhuo, J., 54
Zwart, B., 51