Polynomial approximation of elliptic PDEs with stochastic coefficients

Lorenzo Tamellini]
Joakim Back[, Fabio Nobile†,], Raul Tempone[

] MOX - Department of Mathematics, Politecnico di Milano, Italy
[ Applied Mathematics and Computational Science, KAUST, Saudi Arabia
† CSQI - MATHICSE, EPFL, Switzerland

Journées Lions-Magenes
14-12-2011
Outline
1 Uncertainty Quantification and PDEs with stochastic coefficients
2 Optimal sparse grids for Stochastic Collocation
3 Numerical examples
4 Conclusions
Differential problem with uncertainty on parameters

L(x, y)[u] = f(x, y)   x ∈ D
B(x, y)[u] = g(x, y)   x ∈ ∂D
L, B, f, g depend on parameters that may be affected by uncertainty (experimental measurements, limited knowledge of system properties). The shape of D may also be uncertain.

y can be modeled as a random vector with N components, over the probability space (Γ, B(Γ), ρ(y)dy). Therefore u is a random function, u(x, y).

Goal: Uncertainty Quantification. Compute statistical quantities for u(x, y), i.e. assess how the uncertainty on the parameters reflects on u, e.g.
E[u](x0)
Var[u](x0)
P(u(x0) > u0)
Some examples on what can be done
Diffusion problem in a medium with random “inclusions” [BNTT10]
[Figure: a realization of a(x, y); mean of u; standard deviation of u]
Some examples on what can be done
Steady Navier-Stokes equations with uncertain Reynolds number and forcing term [TLN–]
[Figure: mean vorticity field (left) and standard deviation of the vorticity field (right)]
Darcy problem with uncertain permeability

−∇·( a(x, y) ∇u ) = f(x)   x ∈ D
+ boundary conditions
goal: oil/water reservoir simulation
a(x, y) is a random field
each realization a(·, y) is a function in L∞(D)
for each physical point a(x, ·) is a random variable
a covariance function describes the interaction between any pair of points, e.g. Cov[x0, x1] = exp( −‖x0 − x1‖² / L_C² )

a(x, y) is represented by a (truncated) Karhunen-Loève or Fourier expansion

a(x, y) ≈ a_N = a_0 + Σ_{n=1}^N b_n(x) y_n,   with the y_n uncorrelated
Karhunen-Loève expansion convergence properties
sup_{x∈D} E[ ( a(x, ·) − a_N(x, ·) )² ] → 0 as N → ∞.
The more regular Cov [·, ·], the faster the convergence
Example - Uniform field

a = a_0 + σ Σ_{n=1}^N b_n(x) y_n,   y_n ∼ U(−√3, √3), E[y_n] = 0, Var[y_n] = 1

Example - Lognormal field

log(a) = a_0 + Σ_{n=1}^N b_n(x) y_n,   y_n ∼ N(0, 1), E[y_n] = 0, Var[y_n] = 1

more realistic, but requires a slightly more complex analysis (equation not coercive w.r.t. y)
Problem: we may need tens to hundreds of random variables to represent the field accurately! How can we handle this efficiently?
This is a realization of a lognormal field:

a(x0, ·) ∼ N(µ, σ), with Gaussian covariance Cov[x1, x2] = σ² exp( −‖x1 − x2‖² / L_C² )

With L_C = 0.3 we need ∼30 variables to take into account 90% of the total variability!
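To make the mode-count issue concrete, here is a minimal sketch (not from the talk) that estimates how many Karhunen-Loève modes are needed to reach a given fraction of the total variability of a Gaussian-covariance field; the unit-square domain and grid resolution are illustrative assumptions.

```python
# Minimal sketch (illustrative): count the Karhunen-Loeve modes needed to capture
# 90% of the total variability of a Gaussian-covariance field on [0,1]^2.
import numpy as np

L_C = 0.3
n = 25                                              # grid points per direction (assumption)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pts = np.column_stack([X.ravel(), Y.ravel()])       # n^2 points of D = [0,1]^2

# covariance matrix Cov[x0, x1] = exp(-||x0 - x1||^2 / L_C^2)
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
C = np.exp(-d2 / L_C**2)

# eigenvalues of the (discretized) covariance operator, largest first
eigvals = np.linalg.eigvalsh(C)[::-1]
fraction = np.cumsum(eigvals) / eigvals.sum()
N90 = int(np.searchsorted(fraction, 0.90)) + 1
print(f"modes needed for 90% of the total variability: {N90}")
```

The count grows quickly as the correlation length decreases, which is exactly the regime where many random variables are needed.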
an intuitive approach
We want to compute statistics for u(x, y) solving

−∇·[ a(x, y) ∇u(x, y) ] = f(x)   x ∈ D
u = 0   x ∈ ∂D

with a(x, y) a uniform random field (for now).
The Monte Carlo method is very simple and its convergence rate is independent of N (no "curse of dimensionality"), but convergence is slow:

E[u](x0) ≈ (1/M) Σ_{i=1}^M u(x0, y_i),   which converges as O(1/√M)
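As a point of reference, a minimal Monte Carlo sketch; the scalar function below is a hypothetical stand-in for the PDE solve, borrowed from the 1/(1 + c1 y1 + c2 y2) example used later in the talk.

```python
# Minimal Monte Carlo sketch: estimate E[u](x0) by averaging M i.i.d. samples.
# A real code would assemble and solve the elliptic problem for each sample y;
# the error of the estimator decays as O(1/sqrt(M)).
import numpy as np

rng = np.random.default_rng(0)

def u_at_x0(y):
    return 1.0 / (1.0 + 0.3 * y[0] + 0.3 * y[1])    # assumed surrogate for u(x0, y)

M = 10_000
samples = rng.uniform(-1.0, 1.0, size=(M, 2))        # y ~ U(-1, 1)^2
estimate = np.mean([u_at_x0(y) for y in samples])    # (1/M) * sum_i u(x0, y_i)
print(estimate)
```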
Can we do better than this?
regularity of u
It is possible to show that the map y → u(x, y) is analytic ([BNTT11, CDS10]).

General strategy

Exploit the regularity of the map y → u(x, y) and build a polynomial surrogate model:

1 Stochastic Galerkin - projection on spectral ρ(y)dy-orthogonal polynomials (modal approach)
   - y are uniform r.v. → Legendre polynomials
   - y are Gaussian r.v. → Hermite polynomials
2 Stochastic Collocation - sum of Lagrangian interpolants over sparse grids (nodal approach)
Assumptions on a
1 P(amin ≤ a(x, ω) ≤ amax , ∀x ∈ D) = 1, amin > 0, amax <∞.
2 a(x, y) is infinitely many times differentiable with respect to y, and there exists r ∈ R^N_+, independent of y, such that

‖ ∂^i a(·, y) / a(·, y) ‖_{L∞(D)} ≤ r^i   ∀y ∈ Γ,

where i is a multi-index in N^N, |i| = Σ_{n=1}^N i_n, r^i = Π_{n=1}^N r_n^{i_n}, and ∂^i a = ∂^{i_1+...+i_N} a / ( ∂y_1^{i_1} ··· ∂y_N^{i_N} ).

The derivatives of u can then be bounded as (see [BNTT11, CDS10])

‖ ∂^i u(y) ‖_V ≤ C_0 |i|! r̃^i   ∀y ∈ Γ,   with r̃ = r / log 2.

Therefore u can be extended analytically to the set

Σ = { y ∈ R^N : ∃ y_0 ∈ Γ s.t. r̃ · abs(y − y_0) < 1 },   abs(v) = (|v_1|, ..., |v_N|).
Stochastic Galerkin

Pros:
1 L² optimality of the projection
2 functional analysis approach

Cons:
1 deterministic code not readily usable (intrusive approach)
2 coupled system for the modes: need for preconditioners

Stochastic Collocation

Pros:
1 reusability of code
2 de-coupled systems

Cons:
1 uses (much) more DoF than Galerkin
2 the Lebesgue constant affects the error estimates

See [BNTT10, ET11] for further comparisons between Galerkin and Collocation methods.
Tensor grid Stochastic Collocation
u_{TG,i}(y): Lagrange interpolant of u(y) over tensorized quadrature points

Choose points according to the probability measure (e.g. Gauss-Legendre or Gauss-Hermite points)

The grid has m(i_n) points in direction n.
[Figure: a tensor grid of collocation points in two dimensions]
Pros: Fully parallelizable, faster than Monte Carlo for small N.
Cons: The number of points grows exponentially fast with the number of random variables N. Clearly infeasible, even for moderate N!
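A small sketch (illustrative values) of how quickly a full tensor grid grows: with only 5 Gauss-Legendre points per direction, the grid already has roughly 10^14 points in 20 dimensions.

```python
# Sketch: full tensor grid of Gauss-Legendre points; the point count m^N grows
# exponentially with the number of random variables N (values are illustrative).
import itertools
import numpy as np

def tensor_grid(m, N):
    """Tensor product of m Gauss-Legendre points per direction, in N directions."""
    nodes_1d, _ = np.polynomial.legendre.leggauss(m)   # nodes in [-1, 1]
    return np.array(list(itertools.product(nodes_1d, repeat=N)))

print(tensor_grid(5, 2).shape)        # (25, 2): fine for N = 2
for N in (5, 10, 20):
    print(N, 5 ** N)                  # 3125, ~9.8e6, ~9.5e13 points: hopeless for moderate N
```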
Sparse grid Stochastic Collocation
take linear combinations of tensor grids, with few points per grid.
u_SG(y) = Σ_{i∈I} c(i) u_{TG,i}(y)
The sparse grid
[Figure: a two-dimensional sparse grid]
is a sum of tensor grids like these
[Figure: three coarse tensor grids that are combined into the sparse grid above]
Hierarchical representation of a sparse grid
U_n^{m(i_n)}[u] is an interpolation operator along y_n over m(i_n) points.

u_{TG,i}(y) = ⊗_{n=1}^N U_n^{m(i_n)}[u](y)

∆_n^{m(i)}[u] = U_n^{m(i)}[u] − U_n^{m(i−1)}[u] is the detail operator

∆^{m(i)}[u] = ⊗_{n=1}^N ∆_n^{m(i_n)}[u] is the hierarchical surplus

u_SG(y) = Σ_{i∈I} ∆^{m(i)}[u](y)

Admissibility condition for I: ∀i ∈ I, i − e_j ∈ I for 1 ≤ j ≤ N with i_j > 1 (see e.g. [GG03]).
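A minimal sketch of this admissibility (downward-closedness) check on a candidate multi-index set; indices start at 1, as above.

```python
# Sketch: check "for every i in I and every direction j with i_j > 1,
# the index i - e_j must also belong to I".
def is_admissible(index_set):
    I = set(map(tuple, index_set))
    for i in I:
        for j in range(len(i)):
            if i[j] > 1:
                lower = list(i)
                lower[j] -= 1
                if tuple(lower) not in I:
                    return False
    return True

print(is_admissible([(1, 1), (2, 1), (1, 2)]))   # True
print(is_admissible([(1, 1), (3, 1)]))           # False: (2, 1) is missing
```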
Question
u_{SG,I}(y) = Σ_{i∈I} ∆^{m(i)}[u](y). Which terms ∆^{m(i)} should be included in the sum?

Standard Sparse Grid [Sm63]

I = { i ∈ N^N : Σ_n (i_n − 1) ≤ w },   w ∈ N

idea: fix the maximum number of points per grid

Anisotropic Sparse Grid [BNTT10]

I = { i ∈ N^N : Σ_n α_n (i_n − 1) ≤ w },   w ∈ N

idea: put more points in the important variables
Knapsack approach (see also [GK09, GG03, BG04])

For each ∆^{m(i)} estimate:

error contribution ∆E(i): how much the error decreases when adding ∆^{m(i)}

work contribution ∆W(i): evaluations required by ∆^{m(i)}

individual profit: Prof(i) = ∆E(i)/∆W(i)

Then build the sparse grid by taking the terms with the largest profit:

I = { i ∈ N^N : ∆E(i)/∆W(i) ≥ ε }
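A toy sketch of this selection rule; the ∆E, ∆W numbers below are made up for illustration, and a practical construction would additionally enforce the admissibility condition above.

```python
# Sketch: keep the hierarchical surpluses whose estimated profit dE(i)/dW(i) exceeds eps.
def select_indices(candidates, dE, dW, eps):
    return [i for i in candidates if dE[i] / dW[i] >= eps]

candidates = [(1, 1), (2, 1), (1, 2), (2, 2)]
dE = {(1, 1): 1.0, (2, 1): 1e-2, (1, 2): 1e-3, (2, 2): 1e-5}   # toy error contributions
dW = {(1, 1): 1.0, (2, 1): 2.0,  (1, 2): 2.0,  (2, 2): 4.0}    # toy work contributions
print(select_indices(candidates, dE, dW, eps=1e-4))             # [(1, 1), (2, 1), (1, 2)]
```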
Two ways to estimate the profits ∆E(i), ∆W(i):
Adaptive/a posteriori [GK09, GG03, BG04]
given I, explore its "neighbourhood", compute an a-posteriori profit estimate, and add to I the most profitable ∆^{m(i)}.
A priori [BNTT11]
Provide a-priori estimates for ∆W(i), ∆E(i) (saves exploration costs).
Let J be any set of indices such that i ∉ J and J ∪ {i} is admissible.
Estimate for ∆W(i)

∆W(i) = | W(u_{SG,J∪{i}}) − W(u_{SG,J}) |

If we use nested points (e.g. Clenshaw-Curtis, Gauss-Patterson), ∆W(i) is independent of J:

∆W(i) = Π_{n=1}^N ( m(i_n) − m(i_n − 1) )
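For nested points this work estimate is trivial to evaluate; a minimal sketch using the doubling rule m(1) = 1, m(i) = 2^{i−1} + 1 adopted later in the talk (the convention m(0) = 0 is an assumption):

```python
# Sketch: work contribution Delta_W(i) = prod_n ( m(i_n) - m(i_n - 1) ) for nested points.
def m(i):
    return 0 if i == 0 else 1 if i == 1 else 2 ** (i - 1) + 1   # m(0) = 0 by convention

def delta_W(multi_index):
    prod = 1
    for i_n in multi_index:
        prod *= m(i_n) - m(i_n - 1)
    return prod

print(delta_W((1, 1)))   # 1
print(delta_W((2, 3)))   # (3 - 1) * (5 - 3) = 4
```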
If we use non-nested points (e.g. Gauss-Legendre), ∆W(i) depends on J. Upper bounds independent of J are usually too pessimistic to be useful.

From here on we focus on nested points. How to obtain sharp estimates for non-nested points?
Estimate for ∆E(i) - pt. 1

∆E(i) = ‖ u_{SG,J∪{i}} − u_{SG,J} ‖_{V ⊗ L²_ρ(Γ)}

∆E(i) is always independent of J:

∆E(i) = ‖ Σ_{j∈J∪{i}} ∆^{m(j)}[u] − Σ_{j∈J} ∆^{m(j)}[u] ‖_{V ⊗ L²_ρ(Γ)} = ‖ ∆^{m(i)}[u] ‖_{V ⊗ L²_ρ(Γ)}
We conjecture

∆E(i) ≲ ‖ u_{m(i−1)} ‖_V  Π_{n=1}^N L_n^{m(i_n)}

where u(x, y) = Σ_{i∈N^N} u_i(x) L_i(y) is the spectral expansion over Legendre polynomials (ρ(y)dy-orthogonal), L_i(y) = Π_{n=1}^N L_{i_n}(y_n), and

L_n^{m(i)} = sup_{v∈C⁰(Γ_n)} ‖ U_n^{m(i)} v ‖_{L∞(Γ_n)} / ‖v‖_{L∞(Γ_n)}

is the Lebesgue constant of U_n^{m(i)}.
Example: Comparison ∆E vs. estimate:
Let y_1, y_2 ∼ U(−1, 1) and

−∇·[ (1 + c_1 y_1 + c_2 y_2) ∇u(x, y_1, y_2) ] = f(x)   x ∈ D
u = 0   x ∈ ∂D

Then u(x, y_1, y_2) = ∆^{−1}f(x) / (1 + c_1 y_1 + c_2 y_2) admits a Legendre expansion.

Nested knots (Clenshaw-Curtis): m(i) = 2^{i+1} − 1, y_k = cos( kπ / m(i) ).
[Figure: ‖∆^{m(i)}[u]‖ compared with the estimate ‖u_{m(i−1)}‖ · Leb(m(i)) and with ‖u_{m(i−1)}‖ alone, for c_1 = c_2 = 0.3 (left) and c_1 = 0.1, c_2 = 0.5 (right)]
Estimate for ∆E(i) - pt. 2

The final step is an estimate for ‖u_i‖_V (the Legendre coefficients of u).

Using
- the assumption on a(x, y): ‖ ∂^i a(·, y) / a(·, y) ‖_{L∞(D)} ≤ r^i
- the result on the derivatives of u: ‖ ∂^i u(y) ‖_V ≤ C_0 |i|! r̃^i

it is possible to show

‖u_i‖_V ≤ C_0 e^{−Σ_n g_n i_n} |i|! / i!

where |i|!/i! is an isotropic coupling term between the random variables y_n.

Remarks
- g_n can be estimated numerically
- such a bound can be used for an optimal construction of the spectral approximation of u
Remarks
‖u_i‖_V ≤ C_0 e^{−Σ_n g_n i_n} |i|! / i!   (1)

where g_n = g_n(r_n) depends only on r_n (see [BNTT11] for the explicit expression).

Corollary of estimate (1) (see [BNTT11]): Σ_n r_n < log 2 ⇒ the Legendre expansion of u converges uniformly to u.

Problem: u is analytic regardless of r → (1) can be improved.

Estimates based on complex analysis do not suffer from this phenomenon (see [1, CDS10b]).

However, (1) with g numerically estimated shows good performance.
Examples: bound for ‖u_i‖

[Figure: Legendre coefficients ‖u_i‖ compared with estimate (1), with and without the factorial correction, for y_i ∼ U(−1, 1) and u = 1/(1 + 0.3 y_1 + 0.3 y_2)]
Estimate of g_n

−∇·[ a(x, y) ∇u(x, y) ] = f(x)   x ∈ D
u = 0   x ∈ ∂D
fix all ym but one (yn) at the mid-point of their supports
Fix i* ∈ N and let U_n^{m(i*)}[u] be a reference solution.

for i = 1, ..., i*:
- compute U_n^{m(i)}[u]
- compute err_{i,n} = ‖ U_n^{m(i)}[u] − U_n^{m(i*)}[u] ‖_{V ⊗ L²_ρ(Γ_n)}
- if the knots used are "good" (Gaussian, Clenshaw-Curtis), then

  err_{i,n} ≈ ‖ u_{(0,0,...,i,0,...,0)} ‖_V ≤ C_0 e^{−g_n i}

  (the factorial term |i|!/i! cancels out)

Use e.g. least squares on err_{i,n} to estimate g_n.
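A minimal sketch of this fitting step, with a 1D rational function standing in for the single-variable PDE solve (cf. the 1/(1 + c_1 y_1 + c_2 y_2) example above); following the fitted curves C_0 e^{−g_n m(i_n)} in the example that follows, the log-errors are regressed against the number of points m(i).

```python
# Sketch: estimate g_n by least squares on the decay of 1D interpolation errors.
# u_1d is an assumed stand-in for u with all variables but y_n frozen at their mid-point.
import numpy as np

def u_1d(y):
    return 1.0 / (1.0 + 0.5 * y)          # surrogate; c_n = 0.5 as in the example above

def interp_error(i, n_eval=2000):
    mi = 2 ** (i - 1) + 1                                      # nested doubling rule
    nodes = np.cos(np.pi * np.arange(mi) / (mi - 1))           # Clenshaw-Curtis-type nodes
    coeffs = np.polynomial.chebyshev.chebfit(nodes, u_1d(nodes), mi - 1)
    y = np.linspace(-1.0, 1.0, n_eval)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(y, coeffs) - u_1d(y)))

levels = np.array([2, 3, 4, 5])                                # keep errors above round-off
ms = np.array([2 ** (i - 1) + 1 for i in levels])
errs = np.array([interp_error(i) for i in levels])
slope, _ = np.polyfit(ms, np.log(errs), 1)                     # log err ~ log C0 - g_n * m(i)
print(f"estimated g_n = {-slope:.2f}")
```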
Examples: computation of gn
Let y_1, y_2 ∼ U(−1, 1) and

−∇·[ (1 + 0.1 y_1 + 0.5 y_2) ∇u(x, y_1, y_2) ] = f(x)   x ∈ D
u = 0   x ∈ ∂D
[Figure: computed errors err_{i,n} and fitted curves C_0 e^{−g_n m(i_n)}; the fits give g_1 ≈ 3.2 and g_2 ≈ 1.5]
Procedure summary
Given a problem
1 choose a nested family of interpolation points, according to the probability distribution of y, and estimate its Lebesgue constant
2 estimate the decay of the spectral coefficients of u and numerically compute the rates g_n (1D problems)
3 compute the profit of each ∆^{m(i)} operator:
   - estimate ∆E(i) combining Lebesgue constant and spectral decay
   - estimate ∆W(i)
4 compute the set of most profitable ∆^{m(i)} (knapsack problem)
5 build the sparse grid
Procedure summary
The sparse grid will be built on the set {i ∈ N^N : P(i) ≥ ε}:

I = { i ∈ N^N_+ :  [ ‖u_{m(i−1)}‖  Π_{n=1}^N L_n^{m(i_n)} ]  /  [ Π_{n=1}^N ( m(i_n) − m(i_n − 1) ) ]  ≥ ε }   (EW - Error Work grids)

where the numerator (spectral expansion coefficient combined with the Lebesgue constant) is the error estimate, and the denominator is the work estimate for nested knots.
Uniform case:
- Clenshaw-Curtis knots, m(1) = 1, m(i) = 2^{i−1} + 1
- u admits a Legendre expansion

I = { i ∈ N^N_+ :  [ C_0 exp( −Σ_{n=1}^N m(i_n − 1) g_n )  ( |m(i−1)|! / m(i−1)! )  Π_{n=1}^N ( (2/π) log(m(i_n) + 1) + 1 ) ]  /  [ Π_{n=1}^N ( m(i_n) − m(i_n − 1) ) ]  ≥ ε }

with the same reading as before: spectral expansion coefficient and Lebesgue constant give the error estimate (numerator), and the denominator is the work estimate for nested knots.
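To illustrate how these ingredients combine, a small sketch (assumed inputs: C_0 = 1 and the rates g_1 ≈ 3.2, g_2 ≈ 1.5 fitted in the example above) that evaluates the profit of a single hierarchical surplus in this uniform / Clenshaw-Curtis setting:

```python
# Sketch: a-priori profit (error estimate / work estimate) of one surplus Delta^{m(i)},
# combining spectral decay, the Clenshaw-Curtis Lebesgue constant and the nested work count.
import math

def m(i):
    return 0 if i == 0 else 1 if i == 1 else 2 ** (i - 1) + 1   # m(0) = 0 by convention

def profit(i, g, C0=1.0):
    mm1 = [m(i_n - 1) for i_n in i]                              # the multi-index m(i - 1)
    spectral = C0 * math.exp(-sum(gn * q for gn, q in zip(g, mm1)))
    spectral *= math.factorial(sum(mm1)) / math.prod(math.factorial(q) for q in mm1)
    lebesgue = math.prod(2.0 / math.pi * math.log(m(i_n) + 1) + 1.0 for i_n in i)
    work = math.prod(m(i_n) - m(i_n - 1) for i_n in i)
    return spectral * lebesgue / work

g = [3.2, 1.5]                       # fitted rates; y2 is the "important" variable
print(profit((2, 1), g))             # refine y1
print(profit((1, 2), g))             # refine y2: larger profit, so it is added first
```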
Equivalently, taking logarithms:

I = { i ∈ N^N_+ :  Σ_{n=1}^N m(i_n − 1) g_n  −  log( |m(i−1)|! / m(i−1)! )  −  Σ_{n=1}^N log[ ( (2/π) log(m(i_n) + 1) + 1 ) / ( m(i_n) − m(i_n − 1) ) ]  ≤ w }
Numerical test 1 - Uniform case

−( a(x, y) u(x, y)′ )′ = 1   x ∈ D = (0, 1)
u(0, y) = u(1, y) = 0

y ∈ Γ = [−1, 1]^N, N = 2, 4, and different choices of the diffusion coefficient a(x, y).

We focus on a linear functional ψ : V → R, ψ(v) = v(1/2); ψ(u) is a scalar random variable.

Convergence: ‖ψ(u_SG) − ψ(u)‖_{L²_ρ(Γ)} vs. number of points in the sparse grid.
We compare:
- standard isotropic Smolyak sparse grid, I = { i ∈ N^N : Σ_{n=1}^N (i_n − 1) ≤ w }
- the knapsack grid derived above (EW)
- "best M terms": knapsack grid with computed profits P(i)
- the dimension-adaptive algorithm [GG03, Kl06], www.ians.uni-stuttgart.de/spinterp
[Figure: convergence of the isotropic Smolyak (iso SM), EW, adaptive and best-M-terms grids for a = 1 + 0.3 y_1 + 0.3 y_2 (left) and a = 1 + 0.1 y_1 + 0.5 y_2 (right)]
[Figure: convergence of the same grids for a(x, y) = 4 + y_1 + 0.2 sin(πx) y_2 + 0.04 sin(2πx) y_3 + 0.008 sin(3πx) y_4 (left) and log a(x, y) = y_1 + 0.2 sin(πx) y_2 + 0.04 sin(2πx) y_3 + 0.008 sin(3πx) y_4 (right)]
Numerical test - 1D lognormal field
L = 1, D = [0, L]².

−∇·( a(x, y) ∇u(x, y) ) = 0
u = 1 on x = 0, u = 0 on x = 1
no flux otherwise

a(x, y) = e^{γ(x,y)},   µ_γ(x) = 0,   Cov_γ[x, x′] = σ² exp( −|x_1 − x′_1|² / L_C² )

We approximate γ as

γ(x, y) ≈ µ(x) + σ a_0 Y_0 + σ Σ_{k=1}^K a_k [ Y_{2k−1} cos(πkx_1/L) + Y_{2k} sin(πkx_1/L) ]

with Y_i ∼ N(0, 1), i.i.d. Given the Fourier series σ² e^{−|z|²/L_C²} = Σ_{k=0}^∞ c_k cos(πkz/L), we set a_k = √c_k.
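A minimal sketch of this construction (one normalization assumption: the cosine coefficients are computed for e^{−z²/L_C²} without the σ² factor, so that the σ in the expansion gives Var[γ(x)] = σ²; the grid, K and the random seed are illustrative):

```python
# Sketch: truncated Fourier representation of the Gaussian field gamma and one
# realization of the lognormal coefficient a = exp(gamma) on a 1D slice of the domain.
import numpy as np

L, L_C, sigma, K = 1.0, 0.2, 0.3, 6
x = np.linspace(0.0, L, 200)

# cosine-series coefficients of exp(-z^2 / L_C^2) on [0, L] (trapezoid rule)
z = np.linspace(0.0, L, 2001)
dz = z[1] - z[0]
def cosine_coeff(k):
    f = np.exp(-z**2 / L_C**2) * np.cos(np.pi * k * z / L)
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dz
    return integral / L if k == 0 else 2.0 * integral / L

a = np.sqrt(np.maximum([cosine_coeff(k) for k in range(K + 1)], 0.0))   # a_k = sqrt(c_k)

rng = np.random.default_rng(0)
Y = rng.standard_normal(2 * K + 1)                      # Y_i ~ N(0, 1), i.i.d.
gamma = sigma * a[0] * Y[0]
for k in range(1, K + 1):
    gamma = gamma + sigma * a[k] * (Y[2 * k - 1] * np.cos(np.pi * k * x / L)
                                    + Y[2 * k] * np.sin(np.pi * k * x / L))
coefficient = np.exp(gamma)                             # one realization of a(x, y)
print(coefficient.min(), coefficient.max())
```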
Well-posedness analysis [Ch11]
1 Let a_min(y) = min_{x∈D} a(x, y)
2 Fernique's theorem: 1/a_min ∈ L^q_ρ(Γ)
3 Lax-Milgram: ‖u(·, y)‖_{H¹(D)} ≤ ( 1 / a_min(y) ) ‖f‖_{H^{−1}(D)} ∈ L^q_ρ(Γ)
Knapsack grid procedure
- Hermite-Gauss-Patterson nested knots, L_n^{m(i_n)} ≃ 1, m(i) tabulated
- u admits a Hermite expansion

I* = { i ∈ N^N :  Σ_{n=1}^N [ g_n m(i_n − 1) + (1/2) log( m(i_n − 1)! ) − log L_n^{m(i_n)} + log( m(i_n) − m(i_n − 1) ) ]  ≤ w }

with the same reading as before: spectral expansion coefficient and Lebesgue constant give the error estimate, and m(i_n) − m(i_n − 1) is the work estimate for nested knots.
Numerical test - 1D lognormal field
Quantity of interest: the flux at the end of the domain,

Φ = ∫_0^L k(·, x) ( ∂u(·, x) / ∂x ) dx

We want to compute its expected value, E[Φ(u)].

Convergence: |E[Φ(u_SG)] − E[Φ(u)]|

We compare:
- a Monte Carlo estimate
- the knapsack grids
Numerical test - 1D lognormal field
Here L_C = 0.2, σ = 0.3.
- K = 6 → N = 13 r.v., 99% of the total variability of e^γ
- K = 10 → N = 21 r.v., 99.99% of the total variability of e^γ
- K = 16 → N = 33 r.v., 100% of the total variability of e^γ
[Figure: error vs. number of collocation points for the sparse grids with N = 13, 21, 33, compared with the reference rates 1/M^0.5, 1/M and 1/M^1.5]
Conclusions
PDEs with stochastic coefficients arise in the context of uncertainty quantification in many engineering areas.

Plain sampling methods require a considerable computational effort.

Sparse grids may be an effective alternative that exploits the possible extra regularity of u w.r.t. y, but care has to be taken in the construction because of the "curse of dimensionality".
A knapsack approach may be useful to handle this effect.

We have developed profit estimates for the hierarchical surpluses of the sparse grid.

The profit estimates combine properties of u itself (decay of the spectral coefficients) and of the type of knots (Lebesgue constant, nestedness).

Numerical results support our analysis.
I. Babuska, R. Tempone, and G. E. Zouraris. Galerkin finite element approximations of stochastic elliptic partial differential equations. SIAM J. Numer. Anal., 42(2):800–825, 2004.

J. Back, F. Nobile, L. Tamellini, and R. Tempone. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison. In J. Hesthaven and E. Ronquist, editors, Spectral and High Order Methods for Partial Differential Equations.

J. Beck, F. Nobile, L. Tamellini, and R. Tempone. On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods. To appear in Math. Models Methods Appl. Sci.

H. Bungartz and M. Griebel. Sparse grids. Acta Numer., 13:147–269, 2004.

A. Cohen, R. DeVore, and C. Schwab. Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs. SAM-Report 2010-03, Seminar für Angewandte Mathematik, ETH, Zürich, 2010.

A. Cohen, R. DeVore, and C. Schwab. Convergence rates of best n-term Galerkin approximations for a class of elliptic sPDEs. Found. Comput. Math., 10:615–646, 2010.

H. C. Elman, C. W. Miller, E. T. Phipps, and R. S. Tuminaro. Assessment of Collocation and Galerkin approaches to linear diffusion equations with random data. International Journal for Uncertainty Quantification, 1(1):19–33, 2011.

T. Gerstner and M. Griebel. Dimension-adaptive tensor-product quadrature. Computing, 71(1):65–87, 2003.

M. Griebel and S. Knapek. Optimized general sparse grid approximation spaces for operator equations. Math. Comp., 78(268):2223–2257, 2009.

A. Klimke. Uncertainty modeling using fuzzy arithmetic and sparse grids. PhD thesis, Universität Stuttgart, Shaker Verlag, Aachen, 2006.

S. Smolyak. Quadrature and interpolation formulas for tensor products of certain classes of functions. Dokl. Akad. Nauk SSSR, 4:240–243, 1963.

L. Tamellini, O. Le Maître, and A. Nouy. Generalized stochastic spectral decomposition for the steady Navier–Stokes equations. In preparation.

J. Charrier. Strong and weak error estimates for the solutions of elliptic partial differential equations with random coefficients. INRIA Rapport de recherche 7300, version 3, 2011.