TRANSCRIPT
What’s new in high-dimensional integration? – designing for applications
Ian H. Sloan
The University of New South Wales – UNSW Australia
ANZIAM NSW/ACT, November, 2015
The theme
High dimensional problems are important, but hard.
Some interesting problems can now be tackled successfully.
What’s new? What’s new is that we now know how to design high
dimensional integration rules that are good for particular
problems.
These are Quasi-Monte Carlo (or QMC) rules.
High-dimensional integration

Consider
$$\int_0^1 \cdots \int_0^1 F(y_1, \ldots, y_{300})\, dy_1 \cdots dy_{300}.$$

Where might integrals with hundreds of dimensions occur?
finance
statistics
flow through a porous medium
other stochastic PDE – e.g. climate change done properly
How?

We cannot use a product rule! Even a 2-point Gauss rule in each of 100 dimensions would need $2^{100}$ function evaluations!

In general we want to approximate (with maybe large s)
$$I_s(F) := \int_0^1 \cdots \int_0^1 F(y_1, \ldots, y_s)\, dy_1 \cdots dy_s = \int_{[0,1]^s} F(y)\, dy.$$

In practice this usually means that some transformation to the unit cube has already been carried out.

To design a good integration rule we need to be guided by applications.
PDE with random coefficients

PDEs with random coefficients are now attracting great interest.

Example: flow through a porous medium

Darcy’s law is $\vec{q}(x) = -a(x)\nabla p(x)$,
where
$p(x)$ is the pressure of the fluid,
$\vec{q}(x)$ is the velocity of the fluid,
$a(x)$ is the “permeability” of the medium.

Incompressibility: $\nabla \cdot \vec{q} = 0$.

Together these give a second-order elliptic PDE:
$$\nabla \cdot (a(x)\nabla p(x)) = 0.$$
Modelling the permeability
Describing all the microscopic pores and channels in a real material
is commonly considered much too hard. So it is common engineering
practice to model the permeability as a random field.
A model problem – the “uniform” case

$$-\nabla \cdot (a(x, y)\nabla u(x, y)) = f(x) \quad \text{in } D,$$
$$u(x, y) = 0 \quad \text{on } \partial D, \qquad y \in U := [0, 1]^{\mathbb{N}},$$
with D a bounded Lipschitz domain in $\mathbb{R}^d$, and
$$a(x, y) = a + \sum_{j=1}^{\infty} (y_j - \tfrac12)\, \psi_j(x), \quad x \in D,\ y \in U,$$
where $y_1, y_2, \ldots$ are independent random variables uniformly distributed on $[0, 1]$; with $a, \psi_j$ such that $\sum_j \|\psi_j\|_\infty < \infty$, and
$$a_{\max} \ge a(x, y) \ge a_{\min} > 0,$$
making the PDE strongly elliptic for every y.
In practice the sum is truncated after s terms.
Example

Take d = 1,
$$a(x, y) = a + \sum_{j=1}^{\infty} (y_j - \tfrac12)\, \psi_j(x), \quad x \in [0, \pi],\ y \in U,$$
and
$$\psi_j(x) := \frac{\sin(jx)}{j^\alpha}, \quad \text{for some } \alpha > 1.$$
The bigger $\alpha$ is, the smoother the field $a(\cdot, y)$.
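As a concrete sketch of this example field, one realisation of the truncated sum $a_s(\cdot, y)$ can be evaluated as follows. The values $a = 1$, $\alpha = 2$, and the truncation level $s = 100$ are illustrative choices, not from the talk:

```python
import numpy as np

def truncated_field(x, y, a0=1.0, alpha=2.0):
    """Evaluate a_s(x, y) = a0 + sum_{j=1}^s (y_j - 1/2) sin(j x) / j^alpha,
    the s-term truncation of the example field, psi_j(x) = sin(j x)/j^alpha."""
    y = np.asarray(y)
    j = np.arange(1, len(y) + 1)
    psi = np.sin(np.outer(j, x)) / j[:, None] ** alpha   # psi_j(x), shape (s, len(x))
    return a0 + (y - 0.5) @ psi

rng = np.random.default_rng(0)
x = np.linspace(0.0, np.pi, 101)
y = rng.uniform(size=100)            # one realisation, truncated at s = 100
a = truncated_field(x, y)
# With a0 = 1 and alpha = 2 the field stays positive: sum_j 0.5/j^2 < 0.823,
# so a(x, y) >= 1 - 0.823 > 0 for every x and y -- the ellipticity condition.
```

Since $\sum_j \|\psi_j\|_\infty \le \sum_j j^{-2} = \pi^2/6$, these parameter choices keep the field uniformly bounded away from zero, as the model problem requires.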
The lognormal case

In the lognormal case
$$a(x, y) = a(x) + \exp\!\left( \sum_{j=1}^{\infty} y_j \sqrt{\mu_j}\, \xi_j(x) \right), \quad x \in D,$$
where
$y_j$ are i.i.d. standard normal random numbers,
$\mu_j, \xi_j$ are the eigenvalues and normalised eigenfunctions of the covariance operator for the Gaussian random field in the exponent.
What might we want to compute?
The mean pressure at a particular point or over a particular small region
The effective permeability
The mean “breakthrough time”
. . .
All are expected values – and expected values are integrals.
If there are many random variables, then the expected values are
high-dimensional integrals.
Suppose the problem is to compute the expected value of
$$F(y) := G(u(\cdot, y))$$
for some linear functional G of the solution u of the PDE.

The expected value is in principle an infinite-dimensional integral, where the meaning is:
$$I[F] := \int_{[0,1]^{\mathbb{N}}} F(y)\, dy := \lim_{s \to \infty} \int_{[0,1]^s} F(y_1, \ldots, y_s, \tfrac12, \tfrac12, \ldots)\, dy_1 \cdots dy_s.$$

Note that replacing $y_{s+1}, y_{s+2}, \ldots$ by $\tfrac12$ is equivalent to replacing $a(x, y)$ by
$$a_s(x, y) := a + \sum_{j=1}^{s} (y_j - \tfrac12)\, \psi_j(x).$$
Many approaches:
polynomial chaos,
generalized polynomial chaos,
stochastic Galerkin,
stochastic collocation
Monte Carlo
multilevel Monte Carlo
Quasi-Monte Carlo (QMC)
multilevel Quasi-Monte Carlo
All methods face serious challenges when the effective
dimensionality is high. And when all else fails, people turn to Monte
Carlo methods. QMC aims to beat Monte Carlo.
Many contributors:
Norbert Wiener, "The Homogeneous Chaos", 1938
Babuska/Nobile/Tempone, Babuska/Tempone/Zouraris, Barth/Schwab/Zollinger,
Charrier, Charrier/Scheichl/Teckentrup, Cliffe/Giles/Scheichl/Teckentrup,
Cliffe/Graham/Scheichl/Stals, Cohen/Chkifa/Schwab, Cohen/De Vore/Schwab,
Graham/Scheichl/Ullmann, Hansen/Schwab, Harbrecht/Peters/Siebenmorgen,
Hoang/Schwab, Karniadakis/Xiu, Kunoth/Schwab, Nobile/Tempone/Webster,
Schwab/Todor, Schillings/Schwab, Teckentrup/Scheichl/Giles/Ullmann, Webster
And for QMC applied to PDE with random coefficients:
Graham/Kuo/Nuyens/Scheichl/Sloan 2011 (lognormal case, no error analysis, circulant
embedding), Kuo/Schwab/Sloan (uniform case with error analysis), Kuo/Schwab/Sloan
(multi-level for the uniform case), Schwab (uniform case, general operator equations),
Le Gia (uniform case for the sphere), Graham/Kuo/Nichols/Scheichl/Schwab/Sloan
(lognormal case, analysis and numerics), Graham/Kuo/Scheichl/Schwab/Sloan/Ullmann
(multi-level lognormal case), Harbrecht/Peters/Siebenmorgen
Monte Carlo (MC)

$$Q^{\mathrm{MC}}_{N,s}(F) := \frac{1}{N} \sum_{k=1}^{N} F(t_k),$$

with $t_1, \ldots, t_N$ chosen randomly and independently from a uniform distribution on $[0, 1]^s$.
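A minimal sketch of the MC rule in code. The integrand $F$ below is an illustrative product function with known integral 1 over the cube, not one from the talk:

```python
import numpy as np

def mc_estimate(F, s, N, rng):
    """Plain Monte Carlo: average F over N i.i.d. uniform points in [0,1]^s."""
    t = rng.uniform(size=(N, s))
    return np.mean([F(tk) for tk in t])

def F(y):
    # Each factor integrates to 1 over [0,1], so the exact s-dim integral is 1.
    j = np.arange(1, len(y) + 1)
    return np.prod(1.0 + (y - 0.5) / j ** 2)

rng = np.random.default_rng(42)
est = mc_estimate(F, s=100, N=10_000, rng=rng)
# |est - 1| is of order N^{-1/2} times the standard deviation of F
```

Note that the cost grows only linearly in $N$ and $s$, which is why MC remains usable at $s = 100$ where a product rule is hopeless.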
Quasi-Monte Carlo

For QMC we take
$$I_s(F) \approx Q_{N,s}(F) := \frac{1}{N} \sum_{k=1}^{N} F(t_k),$$
with $t_1, \ldots, t_N$ deterministic (and cleverly chosen).

How to choose $t_1, \ldots, t_N$?
MC and QMC points for 2 dimensions

The points:
$$I_s(F) \approx \frac{1}{N} \sum_{k=1}^{N} F(t_k), \qquad t_k \in [0, 1]^s$$

[Figure: three 2D point sets of 64 points each – the Monte Carlo method with 64 “random” points; the first 64 points of a 2D Sobol′ sequence; a lattice rule with 64 points.]
We consider only the simplest kind of lattice rule, given by
$$Q_{N,s}(z; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\left(\left\{ \frac{kz}{N} \right\}\right),$$
where $z \in \{1, \ldots, N-1\}^s$, and the braces mean that each component of the s-vector in the braces is to be replaced by its fractional part.
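In code, these points and the rule are straightforward (a minimal sketch; since $kz$ has integer components, the componentwise fractional part is just $kz \bmod N$ divided by $N$):

```python
import numpy as np

def lattice_points(z, N):
    """The N points t_k = {k z / N}, k = 0, ..., N-1, of a rank-1 lattice rule."""
    k = np.arange(N)[:, None]
    return (k * np.asarray(z)) % N / N    # fractional part, since k z is integer

def lattice_rule(F, z, N):
    """Q_{N,s}(z; F) = (1/N) sum_{k=0}^{N-1} F({k z / N})."""
    return np.mean([F(t) for t in lattice_points(z, N)])

# The 2D example from the talk: N = 34, z = (1, 21)
pts = lattice_points((1, 21), 34)
```

With $z_1 = 1$ the first coordinates are $0, 1/N, \ldots, (N-1)/N$, so the $N$ points are all distinct.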
Example of lattice rule
N = 34, z = (1, 21)
[Figure: the 34 lattice points $\{k(1, 21)/34\}$, $k = 0, \ldots, 33$, plotted in the unit square $[0, 1]^2$.]
The QMC approach to PDE with random coefficients

We approximate the s-dimensional integral by an N-point QMC rule:

for each QMC point $y_k$, $k = 1, \ldots, N$, compute the field $a_s(x, y_k)$, and then find an approximate solution of the flow problem by the finite-element method;

then compute any desired physical quantity by averaging the N values so obtained.

What would we like to achieve?

Fast convergence (or at least better than the MC rate $O(N^{-1/2})$);
and integration errors that are independent of s.
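To illustrate the goal, one can compare the two rules on a smooth product integrand with known integral. This is a sketch only: the integrand and the generating vector z below are ad hoc illustrative choices, and z is not a CBC-constructed generator:

```python
import numpy as np

def F(y):
    # Smooth product integrand with exact integral 1 over [0,1]^s
    j = np.arange(1, len(y) + 1)
    return np.prod(1.0 + (y - 0.5) / j ** 2)

s, N = 8, 1009                              # N prime
rng = np.random.default_rng(1)

mc_err = abs(np.mean([F(t) for t in rng.uniform(size=(N, s))]) - 1.0)

z = np.array([1, 433, 229, 101, 359, 523, 47, 641])   # ad hoc generator vector
lat = (np.arange(N)[:, None] * z) % N / N             # lattice points {k z / N}
lat_err = abs(np.mean([F(t) for t in lat]) - 1.0)

print(f"MC error: {mc_err:.2e}, lattice error: {lat_err:.2e}")
```

On smooth integrands like this one, even an untuned lattice rule tends to beat the $O(N^{-1/2})$ MC rate; a CBC-constructed z would do better still.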
If we are to use a lattice rule, how to choose z?

Recall: the lattice rule for the integral over $[0, 1]^s$ is
$$Q_{N,s}(z; F) = \frac{1}{N} \sum_{k=0}^{N-1} F\left(\left\{ \frac{kz}{N} \right\}\right).$$

So we need to choose z. But there is no known formula for a “good” z beyond s = 2.

Can we construct a good z? Yes, it’s possible!
The idea
The idea: Assume that F belongs to a convenient function space H ,
one that depends on lots of parameters. Specifically, . . .
And choose those weight parameters to minimise a certain bound on
the error for the s-dimensional integral. (What bound? Later!)
And choose z to minimise the “worst-case error” in H.
Worst-case error

Definition: the worst-case error in the space H of a QMC rule $Q_{N,s}(P; \cdot)$ using the point set $P = \{t_0, t_1, \ldots, t_{N-1}\}$ is
$$e_{N,s}(P; H) := \sup_{\|F\|_H \le 1} |I_s(F) - Q_{N,s}(P; F)|,$$
i.e. it is the largest error of $Q_{N,s}(P; F)$ for F in the unit ball of H.

The following Hilbert space of functions has the big advantage that the worst-case error is computable.
A good choice of H

A good choice for the norm squared of F in H is
$$\|F\|_{s,\boldsymbol{\gamma}}^2 := \sum_{u \subseteq \{1,\ldots,s\}} \frac{1}{\gamma_u} \int_{[0,1]^{|u|}} \left| \frac{\partial^{|u|} F}{\partial y_u}\big(y_u; \boldsymbol{\tfrac12}\big) \right|^2 dy_u,$$
where
$$\big(y_u; \boldsymbol{\tfrac12}\big)_j = \begin{cases} y_j & \text{if } j \in u, \\ \tfrac12 & \text{if } j \notin u. \end{cases}$$

The norm squared is the SUM OVER ALL SUBSETS u OF $\{1, \ldots, s\}$.
For example, for $u = \{1, 3\}$ the corresponding term is
$$\frac{1}{\gamma_{\{1,3\}}} \int_0^1 \int_0^1 \left| \frac{\partial^2 F}{\partial y_1 \partial y_3}\big(y_1, \tfrac12, y_3, \tfrac12, \tfrac12, \ldots\big) \right|^2 dy_1\, dy_3.$$
Weights

The norm squared is the SUM OVER ALL SUBSETS u OF $\{1, \ldots, s\}$.
The term for subset $u$ is divided by the weight $\gamma_u$. So there are $2^s$ weights!

The weight $\gamma_u$ is a positive number that measures the importance of the subset $u$. A small weight forces the corresponding derivative to be small.

We denote by $H = H_{s,\boldsymbol{\gamma}}$ the space with weights $\{\gamma_u\}$.
What’s so good about this H?

It’s a reproducing kernel Hilbert space with a very simple kernel:
$$K(y, y') = \sum_{u \subseteq \{1,\ldots,s\}} \gamma_u \prod_{j \in u} \eta(y_j, y'_j), \quad \text{with}$$
$$\eta(y, y') = \begin{cases} \min(y, y') - \tfrac12 & \text{if } y, y' > \tfrac12, \\ \tfrac12 - \max(y, y') & \text{if } y, y' < \tfrac12, \\ 0 & \text{otherwise.} \end{cases}$$

What does this mean? It means
$$K(y, \cdot) \in H, \quad \text{and} \quad \langle f(\cdot), K(y, \cdot) \rangle_H = f(y) \quad \forall y \in [0, 1]^s,\ \forall f \in H.$$

The great thing about an RKHS is that there is a simple formula for the worst-case error:
The worst case error in a RKHS

For the QMC rule with points $P = \{t_0, \ldots, t_{N-1}\}$ the squared WCE is:
$$e_{N,s}^2(P; H) = \int_{[0,1]^s} \int_{[0,1]^s} K(y, y')\, dy\, dy' - \frac{2}{N} \sum_{i=0}^{N-1} \int_{[0,1]^s} K(t_i, y)\, dy + \frac{1}{N^2} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} K(t_i, t_k).$$

Thus in a RKHS with a known kernel the WCE can be computed – except for the little fact that $2^s$ terms are needed for each $K(y, y')$!
The case of product weights

Originally we considered only product weights (IHS and H Wozniakowski, 98),
\[
\gamma_u = \prod_{j\in u} \alpha_j .
\]
In this case the WCE is easily computed:
\[
\left( \frac{1}{N}\sum_{k=0}^{N-1} \prod_{j=1}^{s}
\Bigl( 1 + \alpha_j \Bigl[ B_2\Bigl(\Bigl\{\frac{k z_j}{N}\Bigr\}\Bigr) + \frac{1}{12} \Bigr] \Bigr)
- \prod_{j=1}^{s} \Bigl( 1 + \frac{\alpha_j}{12} \Bigr) \right)^{1/2},
\]
where B_2(x) := x^2 - x + 1/6.

Actually, this is the worst case error in an appropriate root-mean-square sense for a randomly shifted lattice rule.
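The closed form above is easy to evaluate directly for a given generating vector z. A minimal sketch in Python (the helper names `rms_wce` and `B2` are mine, not from the talk):

```python
import numpy as np

def B2(x):
    """Bernoulli polynomial of degree 2: B2(x) = x^2 - x + 1/6."""
    return x * x - x + 1.0 / 6.0

def rms_wce(z, N, alpha):
    """Root-mean-square (shift-averaged) worst case error of the N-point
    rank-1 lattice rule with generating vector z, for product weights
    alpha_j, using the closed form quoted above."""
    z = np.asarray(z)
    alpha = np.asarray(alpha, dtype=float)
    k = np.arange(N)[:, None]
    frac = (k * z[None, :] % N) / N          # the fractional parts {k z_j / N}
    terms = np.prod(1.0 + alpha * (B2(frac) + 1.0 / 12.0), axis=1)
    e2 = terms.mean() - np.prod(1.0 + alpha / 12.0)
    return float(np.sqrt(e2))
```

As a hand check, for s = 1, z = (1), N = 4 and alpha_1 = 1 the squared WCE works out to 1/96.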
How to minimise the worst case error?

Since z is an integer vector of length s, with components between 1 and N - 1, in principle we could evaluate the WCE for all (N-1)^s possible values of z.

But that would be exponentially costly.

Fortunately, we can provably get close to the best WCE by the component-by-component (CBC) algorithm:
CBC

The CBC algorithm: Korobov, IHS, Reztsov, Kuo, Joe

With CBC, a good generator z = (z_1, ..., z_s) is constructed one component at a time:

choose z_1 = 1;
choose z_2 to minimise the WCE for s = 2, then
choose z_3 to minimise the WCE for s = 3, then
...

so that at each step there are only (at most) N - 1 choices.

A naive implementation costs O(s^2 N^2) operations.
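The greedy construction can be sketched directly for the product-weight criterion above. This is a slow reference version, not the fast FFT algorithm; caching the running products already brings the cost down to O(s N^2) (names are mine, not from the talk):

```python
from math import gcd

def B2(x):
    """Bernoulli polynomial of degree 2: B2(x) = x^2 - x + 1/6."""
    return x * x - x + 1.0 / 6.0

def cbc(N, s, alpha):
    """Greedy component-by-component choice of a generating vector
    z = (z_1, ..., z_s) for an N-point rank-1 lattice rule, minimising
    the shift-averaged WCE with product weights alpha_j."""
    z = []
    prod = [1.0] * N   # running products over the components fixed so far
    for d in range(s):
        best_z, best_err = None, None
        for cand in range(1, N):
            if gcd(cand, N) != 1:      # keep each component coprime to N
                continue
            # mean over the lattice points; the subtracted constant term
            # of the WCE formula is independent of cand, so it is dropped
            err = sum(prod[k] * (1.0 + alpha[d] * (B2(k * cand % N / N) + 1.0 / 12.0))
                      for k in range(N)) / N
            if best_err is None or err < best_err:
                best_z, best_err = cand, err
        z.append(best_z)
        for k in range(N):             # lock in component d
            prod[k] *= 1.0 + alpha[d] * (B2(k * best_z % N / N) + 1.0 / 12.0)
    return z
```

For the first component all coprime candidates give the same error (they permute the lattice), so the sketch reproduces the choice z_1 = 1.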
Fast CBC

In 2006 Cools and Nuyens developed fast CBC – it uses FFT ideas, and requires a time of order only O(s N log N).

The Nuyens and Cools implementation allows the CBC algorithm for product weights to be run with s in the thousands and N in the millions.
Now to fix the weights. This is what's new!

Recall the worst case error for integration over [0,1]^s:
\[
e_{N,s,\gamma}(t_1,\ldots,t_N) := \sup_{\|F\|_{s,\gamma}\le 1} |I_s(F) - Q_{s,N}(F)| .
\]
Given F, we can use the resulting error bound
\[
|I_s(F) - Q_{s,N}(F)| \le e_{N,s,\gamma}(t_1,\ldots,t_N) \times \|F\|_{s,\gamma},
\]
and choose weights that minimize the right-hand side, or some upper bound on the right-hand side.
Skipping details, for the PDE problem
\[
\mathrm{Error} \le \frac{C}{N^{1/(2\lambda)}}
\left( \sum_{0<|u|<\infty} \gamma_u^{\lambda} A_u \right)^{1/(2\lambda)}
\times \left( \sum_{|u|<\infty} \frac{B_u}{\gamma_u} \right)^{1/2},
\]
for all λ ∈ (1/2, 1], where A_u = . . . and B_u = . . ..

We cannot take λ = 1/2 because A_u → ∞ as λ → 1/2+.

Minimising the product yields:
\[
\gamma_u = \left( \frac{B_u}{A_u} \right)^{1/(1+\lambda)}
= \bigl( |u|! \bigr)^{2/(1+\lambda)} \prod_{j\in u} \alpha_j , \qquad \alpha_j = \ldots
\]
The γ_u are "POD" (for product and order dependent) weights.
For the PDE application assuming exact PDE solution

Convergence theorem (Kuo/Schwab/IHS 2012). If z is chosen by CBC using the minimising weights, then with F(y) = G(u(y)),
\[
\text{cubature error for approximating } F \;\le\; \frac{C_\delta}{N^{1-\delta}} \quad \text{for all } \delta > 0.
\]
And generalizations ....

The error is independent of s provided \(\sum_{j=1}^{\infty} \|\psi_j\|_\infty^{2/3} < \infty\).

Thus the convergence is provably faster than the Monte Carlo rate N^{-1/2} – and the CURSE OF DIMENSIONALITY has disappeared!
What else is new?

Similar analysis of the lognormal case.

Analysis of a multilevel method for both uniform and lognormal cases [Kuo, Schwab, Sloan (to appear)]:
\[
I(G(u)) \approx Q_L^*(G(u)) = \sum_{\ell=0}^{L} Q_{s_\ell, n_\ell}\Bigl( G\bigl( u^{s_\ell}_{h_\ell} - u^{s_{\ell-1}}_{h_{\ell-1}} \bigr) \Bigr),
\qquad \mathrm{cost} = O\Bigl( \sum_{\ell=0}^{L} s_\ell\, n_\ell\, h_\ell^{-d} \Bigr).
\]
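The telescoping structure of the multilevel estimator can be sketched generically. Here `G_diff` and `point_sets` are placeholders (my names) for the application's level-difference evaluation and the per-level QMC point sets:

```python
def multilevel_estimate(G_diff, point_sets):
    """Telescoping multilevel QMC estimator: level l averages the
    difference G(u_{h_l}^{s_l}) - G(u_{h_{l-1}}^{s_{l-1}}) over its own
    QMC point set.  G_diff(l, y) must return that difference at sample
    point y (by convention the l = 0 term is just G(u_{h_0}^{s_0}));
    point_sets[l] is the level-l point set, typically with fewer points
    on the finer, more expensive levels."""
    total = 0.0
    for l, pts in enumerate(point_sets):
        total += sum(G_diff(l, y) for y in pts) / len(pts)
    return total
```

The cost balance in the formula above comes from choosing the level sizes n_l so that cheap coarse levels absorb most of the sampling work.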
What's newer still?

We know how to construct higher order QMC rules, with e.g. O(N^{-2}) convergence, and even how to apply them to the above PDE problem.
(Dick, Kuo, Le Gia, Nuyens and Schwab 2015 for the uniform case)

[The best convergence rate achievable with a lattice rule is (close to) O(N^{-1}).]

Lattice rules are now replaced by interlaced polynomial lattice rules, and POD weights by SPOD weights (standing for "smoothness-driven product and order-dependent weights").

Now randomisation is not needed, and the rate of convergence is better: the theoretical convergence rate is O(N^{-1/p}), instead of O(N^{-1/p+1/2}) for lattice rules, with 2/3 < p ≤ 1.

But higher order QMC is still very new, and there are few calculations.
Summarising the PDE+QMC story

we truncate the ∞-dimensional problem at a high enough s;

we use a theoretical bound on the norm of the quantity of interest to deduce good (POD) weights γ_u;

using those good weights, we compute a good lattice generating vector z by the fast CBC algorithm;

using the corresponding lattice points y_k, k = 1, ..., N, we compute the field a(x, y_k), then find the approximate solution of the PDE, compute the quantity of interest (QOI), and average the QOI over all N realisations;

and so obtain, rigorously and constructively, a convergence rate better than MC and an error bound independent of s.
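The final averaging step is simple once z is in hand. A sketch using a randomly shifted rank-1 lattice, where `F` stands in for the whole QOI pipeline (build the field, solve the PDE, evaluate the QOI) and the function name is mine:

```python
import numpy as np

def shifted_lattice_estimate(F, z, N, n_shifts=8, rng=None):
    """Estimate the integral of F over [0,1]^s with a randomly shifted
    rank-1 lattice rule.  z is the generating vector from the CBC
    construction; F maps a point y in [0,1]^s to a scalar QOI."""
    rng = np.random.default_rng(rng)
    z = np.asarray(z)
    k = np.arange(N)[:, None]
    base = (k * z[None, :] % N) / N           # the unshifted lattice points
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(z.size)            # one uniform random shift
        pts = (base + shift) % 1.0
        estimates.append(np.mean([F(y) for y in pts]))
    # the mean over shifts is the estimate; their spread gives an error bar
    return float(np.mean(estimates)), float(np.std(estimates) / np.sqrt(n_shifts))
```

Averaging over a handful of independent shifts is what makes the root-mean-square error analysis above applicable, and it yields a practical error estimate for free.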
But some cautions

The constants may be very large.

The numerical evidence is so far not completely convincing: we always do better than MC, but often no better than with an off-the-shelf QMC rule.

For other high-dimensional applications the present theory cannot be applied at all (option pricing), or is way in the future (weather and climate).
Nevertheless, high-dimensional problems will not go away, and we are
perhaps making some progress.
In the world of high dimensions, we live in interesting times!
Some reading

I. H. Sloan, What's new in high dimensional integration? – designing quasi-Monte Carlo for applications, Proceedings of ICIAM 2015, Beijing, China, Higher Education Press, 2015, pp. 365–386.

J. Dick, F. Y. Kuo and I. H. Sloan, High-dimensional integration: the quasi-Monte Carlo way, Acta Numerica, 2013.

F. Y. Kuo and D. Nuyens, Application of quasi-Monte Carlo methods to PDEs with random coefficients – survey of analysis and implementation, in preparation.
Collaborators

Josef Dick, UNSW
Ivan Graham, Bath
Michael Griebel, Bonn
Fred Hickernell, IIT
Stephen Joe, Waikato
Frances Kuo, UNSW
Q Thong Le Gia, UNSW
James Nichols, UNSW
Erich Novak, Jena
Dirk Nuyens, Leuven
Leszek Plaskota, Warsaw
Andrei Reztsov
Rob Scheichl, Bath
Christoph Schwab, ETH
Elisabeth Ullmann, Hamburg
Grzegorz Wasilkowski, Kentucky
Henryk Wozniakowski, Warsaw and Columbia