Multilevel stochastic collocation with dimensionality reduction
Ionut Farcas
TUM, Chair of Scientific Computing in Computer Science (I5)
27.01.2017
Outline
1 Motivation
2 Theoretical background
  Uncertainty modeling
  Sparse grids
  Generalized polynomial chaos and sparse grids
  Multilevel collocation methods
  Stochastic dimensionality reduction
3 Test scenario
4 Discussion
Motivation
- problem: quantification of uncertainty in complex phenomena
  - multiphysics (e.g. fluid-structure interaction)
  - plasma physics
  - ...
- main challenge: the "curse of dimensionality" → a "curse of resources"
- solution 1.1: delay the "curse of dimensionality" → sparse grids
- solution 1.2: try reducing the dimensionality → sensitivity analysis
Uncertainty modeling
- probabilistic modeling: probability space (Ω, F, P)
- θ = (θ_1, θ_2, ..., θ_d) vector of continuous i.i.d. random variables
- supp(θ_i) = Γ_i, supp(θ) = Γ_1 × Γ_2 × ... × Γ_d = Γ
Generalized polynomial chaos approximation
- idea: represent an arbitrary random variable (of interest) as a function of another random variable with a given distribution
- how: use a series expansion of orthogonal polynomials
- let p = (p_1, ..., p_d) ∈ N^d with Σ_{i=1}^d p_i ≤ P
- consider d-variate orthogonal polynomials

    Φ_p(θ) := Φ_{p_1}(θ_1) · · · Φ_{p_d}(θ_d)

- for simplicity, drop the multi-index subscript p and use instead a scalar index n = 1, ..., N, with N = (d+P choose d)
- orthogonality means

    E[Φ_n(θ) Φ_m(θ)] = ∫_Γ Φ_n(θ) Φ_m(θ) ρ(θ) dθ = γ_n δ_{nm},  γ_n ∈ R
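The two facts above, the basis count N = (d+P choose d) and the orthogonality relation, can be checked numerically; a minimal sketch, assuming a uniform input on [-1, 1], for which the Φ_n are Legendre polynomials with γ_n = 1/(2n+1):

```python
import numpy as np
from math import comb
from numpy.polynomial.legendre import legval, leggauss

# Assumed setup: theta ~ U(-1, 1), so rho(t) = 1/2 and Phi_n are Legendre
# polynomials. Check N = C(d+P, d) and E[Phi_n Phi_m] = gamma_n * delta_nm.
d, P = 5, 3
N = comb(d + P, d)  # number of d-variate basis polynomials of total degree <= P

nodes, weights = leggauss(20)  # exact for polynomial integrands up to degree 39

def phi(n, t):
    # n-th Legendre polynomial evaluated at t
    c = np.zeros(n + 1); c[n] = 1.0
    return legval(t, c)

def inner(n, m):
    # E[Phi_n(theta) Phi_m(theta)] for theta ~ U(-1, 1), density rho = 1/2
    return 0.5 * np.sum(weights * phi(n, nodes) * phi(m, nodes))

print(N, inner(2, 3), inner(2, 2))  # 56, ~0 (orthogonality), gamma_2 = 1/5
```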
Generalized polynomial chaos
- let x denote the deterministic inputs, θ the stochastic inputs, f the model
- the gPC approximation with N terms reads

    f(x, θ) ≈ f_N(x, θ) = Σ_{n=0}^{N-1} c_n(x) Φ_n(θ)

- gPC coefficients via projection (assuming a normalized basis, γ_n = 1)

    c_n(x) = ∫_Γ f(x, θ) Φ_n(θ) ρ(θ) dθ = E[f(x, θ) Φ_n(θ)]
Post-processing
- expectation: E[f(x, θ)] = c_0(x)
- variance: Var[f(x, θ)] = Σ_{n=1}^{N-1} c_n^2(x)
- total Sobol' indices

    S^T_i(x) = Var_{A_i}[f(x, θ)] / Var[f(x, θ)] = Σ_{p ∈ A_i} c_p^2(x) / Var[f(x, θ)],
    A_i = {p ∈ N^d : p_i ≠ 0}

- note that Σ_{i=1}^d S^T_i(x) ≥ 1, with equality exactly when there are no interaction terms
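These post-processing formulas are direct sums over the gPC coefficients; a sketch with placeholder coefficient values (a real model would supply its own c_p(x)), again assuming a normalized basis so variances are plain sums of squares:

```python
import numpy as np
from itertools import product

# Placeholder gPC coefficients indexed by multi-indices p with |p|_1 <= P.
d, P = 3, 2
multi_indices = [p for p in product(range(P + 1), repeat=d) if sum(p) <= P]
rng = np.random.default_rng(0)
coeffs = {p: rng.standard_normal() for p in multi_indices}  # placeholder values

mean = coeffs[(0,) * d]                                    # E[f] = c_0
variance = sum(c**2 for p, c in coeffs.items() if any(p))  # sum over p != 0

def total_sobol(i):
    # S^T_i: variance share of all terms in which input i appears (p_i != 0)
    return sum(c**2 for p, c in coeffs.items() if p[i] != 0) / variance

print([round(total_sobol(i), 3) for i in range(d)])
```

Because every interaction term is counted once per involved input, the total indices sum to at least 1.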
Sparse grid idea
- problem: discretize a tensor product space efficiently
- standard approach: full grid → O(N^d) dof, if N dof in one direction → "curse of dimensionality"
- idea: delay the curse of dimensionality
- use sparse grids: weaken the assumed coupling between the input dimensions
- O(N^d) → O(N (log N)^{d-1}) dof
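The dof reduction can be made concrete by counting grid points; a sketch using the hierarchical increment sizes |W_l| = ∏_j 2^(l_j - 1) and the standard sparse grid index set |l|_1 ≤ L + d - 1 introduced later in the talk (boundary points omitted):

```python
from itertools import product

# Degrees of freedom of a full grid vs. a standard sparse grid of level L,
# without boundary points, via the increment sizes |W_l| = prod_j 2^(l_j - 1).
def full_grid_dof(L, d):
    return (2**L - 1) ** d          # (2^L - 1) points per direction

def sparse_grid_dof(L, d):
    total = 0
    for l in product(range(1, L + 1), repeat=d):
        if sum(l) <= L + d - 1:     # standard sparse grid index set
            total += 2 ** (sum(l) - d)
    return total

for d in (1, 2, 5):
    print(d, full_grid_dof(5, d), sparse_grid_dof(5, d))
```

For L = 5 and d = 2 this gives 961 full grid points versus 129 sparse grid points; the gap widens rapidly with d.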
Sparse grid idea
[figure: a full grid ⇒ its sparse grid counterpart]
Hierarchical sparse grids: ingredients
- grid level l = (l_1, ..., l_d) ∈ N^d
- spatial position i = (i_1, ..., i_d) ∈ N^d
- generic grid point u_{l,i} = (u_{l_1,i_1}, ..., u_{l_d,i_d})
- equidistant grid with mesh size h_{l_j} = 2^{-l_j}, j = 1, ..., d
- basis functions ϕ_{l,i} with support [u_{l,i} − h_l, u_{l,i} + h_l]

    ϕ_{l,i}(u) = ϕ((u − i h_l) / h_l)

- in d dimensions,

    ϕ_{l,i}(u) = ∏_{j=1}^d ϕ_{l_j,i_j}(u_j)
Hierarchical sparse grids: preliminaries
- H_l = span{ϕ_{l,i} : 1 ≤ i ≤ 2^l − 1} — nodal space
- W_l = span{ϕ_{l,i} : i ∈ I_l} — hierarchical increment space
- I_l = {i ∈ N^d : 1 ≤ i_k ≤ 2^{l_k} − 1, i_k odd, k = 1, ..., d}
- H_l = ⊕_{k ≤ l} W_k
Hierarchical sparse grids: preliminaries
- given the hierarchical increment spaces W_l and a level L, we can create further spaces V_L

    V_L = ⊕_{l ∈ J} W_l, for some multi-index set J

- if J = {l ∈ N^d : |l|_∞ ≤ L} — full grid space
- if J = {l ∈ N^d : |l|_1 ≤ L + d − 1} — standard sparse grid space
Hierarchical sparse grids: example (L = 5)
Interpolation on hierarchical sparse grids
- consider g : [0,1]^d → R
- the sparse grid interpolant g_I(u) of g(u) is

    g_I(u) = Σ_{l ∈ J, i ∈ I_l} α_{l,i} ϕ_{l,i}(u)   (1)

- the α_{l,i} are the so-called hierarchical surpluses
- assume g ∈ H^2_mix([0,1]^d) = {f : [0,1]^d → R : D^l f ∈ L^2([0,1]^d), |l|_∞ ≤ 2}, where D^l f = ∂^{|l|_1} f / ∂x_1^{l_1} · · · ∂x_d^{l_d}
- if full grid: ||g(u) − g_I(u)||_{L^2} ∈ O(h_L^2)
- if sparse grid: ||g(u) − g_I(u)||_{L^2} ∈ O(h_L^2 L^{d-1})
Piecewise linear basis functions
Piecewise polynomial basis functions
Spatial refinement
due to the hierarchical construction→ local refinement possibleαl,i - good measure of the interpolation error
the absolute value of αl,i - good refinement metricselect the grid points with the largest surpluses valuesadd their hierarchical descendants to Jif not all hierarchical parents exist add them
multiple grid points can be refined in one step
Spatial refinement: Franke’s function

    f(x_1, x_2) = 0.75 exp(−(9x_1 − 2)^2/4 − (9x_2 − 2)^2/4)
                + 0.75 exp(−(9x_1 + 1)^2/49 − (9x_2 + 1)/10)
                + 0.5 exp(−(9x_1 − 7)^2/4 − (9x_2 − 3)^2/4)
                − 0.2 exp(−(9x_1 − 4)^2 − (9x_2 − 7)^2)
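For reference, the Franke function in code (written here in its standard textbook form, which the extracted slide formula is assumed to follow):

```python
import math

# Standard Franke test function on [0, 1]^2, a common benchmark for
# interpolation and refinement experiments.
def franke(x1, x2):
    return (0.75 * math.exp(-(9*x1 - 2)**2 / 4 - (9*x2 - 2)**2 / 4)
          + 0.75 * math.exp(-(9*x1 + 1)**2 / 49 - (9*x2 + 1) / 10)
          + 0.5  * math.exp(-(9*x1 - 7)**2 / 4 - (9*x2 - 3)**2 / 4)
          - 0.2  * math.exp(-(9*x1 - 4)**2 - (9*x2 - 7)**2))

print(franke(0.5, 0.5))
```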
Franke’s function
Franke’s function refinement, part 1: L = 5, refine 20% of the grid points
Franke’s function refinement, part 2: L = 6, refine 20% of the grid points
gPC coefficients computation
- remember:

    c_n(x) = ∫_Γ f(x, θ) Φ_n(θ) ρ(θ) dθ

- how can we use sparse grids?
- let T : [0,1]^d → Γ; then

    c_n(x) = ∫_{[0,1]^d} f(x, T(u)) Φ_n(T(u)) |det J_T(u)| ρ(T(u)) du

- intuition: Φ_n(T(u)) has a tensor structure; if f(x, T(u)) also had a tensor structure ...
gPC coefficients computation
iff (x ,T (u)) ≈ f (x ,T (u)) =
∑l∈J ,i∈Il
αl,iϕl,i (u)
T (u) := (F−11 (u1), . . . ,F−1
d (ud )), Fi cdf of θi
then
cn(x) =
∫[0,1]d
f (x ,T (u))Φn(T (u))du
=
∫[0,1]d
( ∑l∈J ,i∈Il
αl,i(x)ϕl,i(u))Φn(T (u))du
=∑
l∈J ,i∈Il
αl,i(x)d∏
j=1
∫[0,1]
Φj(F−1j (uj))ϕlj ,ij (uj)duj
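The 1D factors in the last line are cheap quadratures; a sketch for a uniform input on [-1, 1] (so F^{-1}(u) = 2u - 1 and the Φ_p are Legendre polynomials), splitting the integral at the hat function's kink so Gauss-Legendre quadrature is exact on each linear piece:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def hat(l, i, u):
    # piecewise-linear hat centered at i * 2^(-l), support width 2^(-l+1)
    h = 2.0 ** (-l)
    return np.maximum(0.0, 1.0 - np.abs(u / h - i))

def factor(p, l, i, n_quad=20):
    # int_0^1 Phi_p(F^-1(u)) phi_{l,i}(u) du for theta ~ U(-1, 1),
    # i.e. F^-1(u) = 2u - 1 and Phi_p the p-th Legendre polynomial
    h = 2.0 ** (-l)
    c = np.zeros(p + 1); c[p] = 1.0
    total = 0.0
    # integrate separately on the two linear pieces of the hat (exact quadrature)
    for a, b in (((i - 1) * h, i * h), (i * h, (i + 1) * h)):
        t, w = leggauss(n_quad)
        u = 0.5 * (b - a) * t + 0.5 * (a + b)  # map nodes to [a, b]
        total += 0.5 * (b - a) * np.sum(w * legval(2*u - 1, c) * hat(l, i, u))
    return total

print(factor(0, 1, 1))  # p = 0: just the area under the level-1 hat, 0.5
```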
Multilevel approaches
- so far: a "monolevel" approach
- can we further reduce the computational cost?
- use multilevel approaches
Multilevel stochastic collocation: no refinement
Multilevel stochastic collocation: with refinement
Multilevel gPC coefficients
- let M_h denote the level of the deterministic domain discretization
- let L_l denote the sparse grid level
- let c_n^{M_h,L_l}(x) denote the gPC coefficient computed using a deterministic grid of level M_h and a sparse grid of level L_l
- then, for K + 1 levels,

    c_n^{M_K,L_K}(x) = c_n^{M_0,L_K}(x)
                     + (c_n^{M_1,L_{K-1}}(x) − c_n^{M_0,L_{K-1}}(x))
                     + ...
                     + (c_n^{M_K,L_0}(x) − c_n^{M_{K-1},L_0}(x))

- if the sparse grids are nested, the evaluations for level L_{l-1} are a subset of those for level L_l and can be reused
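The telescoping sum is mechanical once a solver is available; a sketch with a hypothetical `compute_coeff(M, L)` standing in for the routine that returns c_n^{M,L}(x) (here a toy stand-in with additively separable level errors, for which the multilevel sum reproduces the single fine-level value exactly):

```python
def compute_coeff(M, L):
    # toy stand-in: converges to 1.0 as both levels increase; a real
    # implementation would run the solver at discretization level M and
    # evaluate the gPC coefficient on a sparse grid of level L
    return 1.0 - 0.5**M - 0.5**L

def multilevel_coeff(K, compute=compute_coeff):
    # c^{M_K,L_K} = c^{M_0,L_K} + sum_{k=1}^{K} (c^{M_k,L_{K-k}} - c^{M_{k-1},L_{K-k}})
    total = compute(0, K)
    for k in range(1, K + 1):
        total += compute(k, K - k) - compute(k - 1, K - k)
    return total

print(multilevel_coeff(2))
```

The point of the construction is cost: the finest sparse grid is only ever paired with the coarsest deterministic discretization, and vice versa.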
Stochastic dimensionality reduction
- each uncertain input has a different contribution to the output uncertainty
- some inputs contribute very little → they can be "ignored" (taken as deterministic)
- use sensitivity information to determine each input's contribution
- in the multilevel scheme, given K_c < K and τ ∈ [0,1] (e.g. τ = 5%):
  - if S^T_i(x) ≤ τ, "ignore" input i
  - determine the new stochastic dimensionality
  - "project" the computed result onto the new (sparse) grid
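The selection step itself is a one-liner; here with the total Sobol' indices reported for the test scenario later in the talk (input order c, k, f, y0, y1; the numeric values are taken from those tables for illustration):

```python
# Keep only inputs whose total Sobol' index exceeds the threshold tau;
# the rest are fixed to deterministic values.
total_indices = {"c": 0.041, "k": 0.012, "f": 0.565, "y0": 0.048, "y1": 0.340}
tau = 0.05

kept = [name for name, s in total_indices.items() if s > tau]
print(kept)  # ['f', 'y1'] -> the reduced stochastic dimensionality is 2
```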
Sparse grid projection
if input k ,1 ≤ i ≤ d is “ignored” and uk is the correspondingdeterministic value
f (x ,T (u)) =∑
l∈J ,i∈Il
αl,iϕl,i(u)
=∑
l∈J ,i∈Il
αl,i
d∏j=1
ϕlj ,ij (uj)
=∑
l∈J ,i∈Il
αl,iϕlk ,ik (uk )d−1∏j=1
ϕlj ,ij (uj)
=∑
l∈J ′,i∈I′l
α′l,iϕ′l,i(u)
Test scenario

    d²y/dt²(t) + c dy/dt(t) + k y(t) = f cos(wt)
    y(0) = y_0
    dy/dt(0) = y_1

- t ∈ [0, 20], w = 1.05
- five uncertain inputs:
  - damping coefficient c ~ U(0.08, 0.12)
  - spring constant k ~ U(0.03, 0.04)
  - forcing amplitude f ~ U(0.08, 0.12)
  - initial position y_0 ~ U(0.45, 0.55)
  - initial velocity y_1 ~ U(−0.05, 0.05)
- underdamped regime
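A minimal solver sketch for this ODE (classical RK4 on the equivalent first-order system, with the uncertain inputs fixed at their distribution means; the talk itself uses a finite difference discretization, so this is only an illustrative stand-in):

```python
import math

# Forced damped oscillator: y'' + c y' + k y = f cos(w t),
# y(0) = y0, y'(0) = y1, integrated with classical RK4.
def solve(c, k, f, y0, y1, w=1.05, T=20.0, steps=20000):
    def rhs(t, y, v):
        return v, f * math.cos(w * t) - c * v - k * y
    h = T / steps
    t, y, v = 0.0, y0, y1
    for _ in range(steps):
        k1y, k1v = rhs(t, y, v)
        k2y, k2v = rhs(t + h/2, y + h/2*k1y, v + h/2*k1v)
        k3y, k3v = rhs(t + h/2, y + h/2*k2y, v + h/2*k2v)
        k4y, k4v = rhs(t + h, y + h*k3y, v + h*k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

print(solve(c=0.10, k=0.035, f=0.10, y0=0.5, y1=0.0))  # y(20) at the mean inputs
```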
Tests setup
- sparse grid functionality: SG++ (http://sgpp.sparsegrids.org/)
- finite difference discretization
- uniform inputs → Legendre polynomials
- modified polynomial basis functions of degree 2
- t_interest = 10
- reference results with 32768 Gauss-Legendre nodes
- multilevel approach with K = 2:

    c_n^{M_2,L_2}(x) = c_n^{M_0,L_2}(x) + (c_n^{M_1,L_1}(x) − c_n^{M_0,L_1}(x)) + (c_n^{M_2,L_0}(x) − c_n^{M_1,L_0}(x))

- when using refinement, L_i, i = 1, 2, means L_0 with i refinement steps
Tests setup
M0 = 500,M1 = 2000,M2 = 8000reference results
Eref [y(10)] = −0.155165Varref [y(10)] = 0.0002267
error measurement
err =∣∣∣qoiref − qoi
qoiref
∣∣∣
Test case 1: no dimensionality reduction

    L0   L1   L2    ref %   err exp      err var
    11   71   351   -       4.2012e-06   1.3147e-04
    71   351  1471  -       7.7759e-07   1.5950e-05
    11   31   67    20%     4.0983e-04   7.8192e-04
    71   191  423   20%     1.4499e-06   2.8220e-05
    71   230  655   30%     7.8551e-07   1.4199e-05
Test case 2: dimensionality reduction
- compute Sobol' indices for c_n^{M_1,L_1}(x), i.e. start with K = 1
- τ = 5%
- err expectation ∈ O(10^{-3})

    L0   L1   L2    ref %   S^T_1   S^T_2   S^T_3   S^T_4   S^T_5
    5    71   49    -       4.1%    1.2%    56.5%   4.8%    34.0%
    17   351  129   -       4.1%    1.2%    56.5%   4.8%    34.0%
    5    31   13    20%     4.1%    0.65%   56.7%   4.8%    33.9%
    17   191  45    20%     4.1%    1.2%    56.5%   4.8%    34.0%
    17   230  67    30%     4.0%    1.2%    56.6%   4.8%    34.0%
Test case 3: dimensionality reduction
- compute Sobol' indices for c_n^{M_0,L_2}(x) + (c_n^{M_1,L_1}(x) − c_n^{M_0,L_1}(x))
- τ = 5%
- err expectation ∈ O(10^{-4})

    L0   L1   L2    ref %   S^T_1   S^T_2   S^T_3   S^T_4   S^T_5
    5    71   351   -       4.1%    1.2%    56.5%   4.8%    34.0%
    17   351  1471  -       4.1%    1.2%    56.5%   4.8%    34.0%
    5    31   67    20%     4.1%    1.2%    56.5%   4.8%    34.0%
    17   191  423   20%     4.0%    1.2%    56.6%   4.8%    34.0%
    17   230  655   30%     4.0%    1.2%    56.6%   4.8%    34.0%
Discussion
- from the polynomial chaos coefficients we can compute mean, variance, Sobol' (sensitivity) indices, etc.
- spatially adaptive sparse grids are well suited to delay the curse of dimensionality
- multilevel ideas can further reduce the computational cost
- uncertain inputs that contribute little to the output uncertainty can be ignored
- all of these should be used with care
Thank you for your attention!