TRANSCRIPT
Computational Uncertainty Quantification for Inverse Problems: Part 1, Linear Problems

John Bardsley, University of Montana
SIAM Conference on Imaging Science, June 2018
Outline
Computational Uncertainty Quantification for Inverse Problems
MATLAB codes: https://github.com/bardsleyj/SIAMBookCodes
to be published by SIAM in late-summer/early-fall 2018
• Characteristics of inverse problems.
• Prior modeling using Gaussian Markov random fields.
• Hierarchical Bayesian inverse problems and MCMC.
General Statistical Model
Consider the linear, Gaussian statistical model
y = Ax + ε,
where
• y ∈ R^m is the vector of observations;
• x ∈ R^n is the vector of unknown parameters;
• A : R^n → R^m;
• ε ∼ N(0, λ^{-1} I), i.e., ε is i.i.d. Gaussian with mean 0 and variance λ^{-1}.
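This statistical model is easy to simulate. Below is a minimal NumPy sketch (the talk's companion codes are in MATLAB, so this Python translation, the Gaussian kernel, and the test signal are all illustrative assumptions, not the talk's test problems):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 100, 100
lam = 1.0e4                                 # noise precision; lambda^{-1} is the noise variance

# Hypothetical forward operator: a Gaussian blur kernel a(s_i, s'_j) on [0, 1]
s = (np.arange(n) + 0.5) / n
A = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * 0.05 ** 2)) / n

x = np.sin(2 * np.pi * s)                   # hypothetical true signal
eps = rng.normal(0.0, lam ** -0.5, size=m)  # eps ~ N(0, lambda^{-1} I)
y = A @ x + eps                             # the statistical model y = Ax + eps
```

The draw `eps` has standard deviation λ^{-1/2}, matching the stated variance λ^{-1}.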
Numerical Discretization of a Physical Model
For us, y = [y_1, . . . , y_m]^T, with

y_i = y(s_i)
    = ∫_Ω a(s_i, s′) x(s′) ds′   ( def= [A_m x]_i )
    ≈ Δs′ Σ_{j=1}^{n} a(s_i, s′_j) x(s′_j)   (numerical quadrature)
    = [Ax]_i,   where [A]_{ij} = Δs′ a(s_i, s′_j) and x = [x(s′_1), . . . , x(s′_n)]^T,

and Ω = [0, 1] or [0, 1] × [0, 1]. This defines the equation

y = Ax.
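A midpoint-rule sketch of this discretization (the kernel a and signal x below are illustrative assumptions, not the talk's test problems):

```python
import numpy as np

# Midpoint-rule discretization of [A_m x]_i = \int_0^1 a(s_i, s') x(s') ds':
# [A]_ij = Delta s' * a(s_i, s'_j), with Delta s' = 1/n on Omega = [0, 1].
def discretize(a, m, n):
    s = (np.arange(m) + 0.5) / m           # observation points s_i
    sp = (np.arange(n) + 0.5) / n          # quadrature nodes s'_j
    ds = 1.0 / n                           # Delta s'
    return ds * a(s[:, None], sp[None, :]), s, sp

# Illustrative smooth kernel and signal (assumptions)
a = lambda s, sp: np.exp(-(s - sp) ** 2 / (2 * 0.1 ** 2))
x_fun = lambda t: t * (1.0 - t)

A, s, sp = discretize(a, 80, 200)
y = A @ x_fun(sp)                          # y = Ax approximates the integral [A_m x]_i

# Sanity check: compare one entry against a much finer quadrature
t = (np.arange(20000) + 0.5) / 20000
exact = np.mean(a(s[40], t) * x_fun(t))    # fine midpoint rule, h = 1/20000
err = abs(y[40] - exact)
```

For a smooth kernel the midpoint rule converges at rate O(n^{-2}), so with n = 200 the check `err` is small.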
Synthetic Examples

Data y examples:
Corresponding true images x:
Naive Solutions: x_LS = A†y
Corresponding true images x:
Properties of the model matrix A
Write the SVD of the matrix A as

A = Σ_{i=1}^{r} σ_i u_i v_i^T,   where r = rank(A).

It is typical of inverse problems that:

• the σ_i's decay to 0 as i → r;
• the u_i's and v_i's become increasingly oscillatory as i → n.

The least squares solution can then be written

A†y = A†(Ax + ε)
    = Σ_{i=1}^{r} (v_i^T x) v_i   [portion due to signal]
      + Σ_{i=1}^{r} (u_i^T ε / σ_i) v_i   [portion due to noise].
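Both sums can be computed directly from the SVD. This NumPy sketch (the blur operator, signal, and noise level are illustrative assumptions) shows the noise portion dominating because of the division by small σ_i:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
s = (np.arange(n) + 0.5) / n
# Illustrative smoothing (blur) operator -- an assumption, not the talk's test problem
A = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * 0.03 ** 2)) / n

U, sig, Vt = np.linalg.svd(A)
r = int(np.sum(sig > 1e-12 * sig[0]))            # numerical rank r

x = np.where((s > 0.3) & (s < 0.7), 1.0, 0.0)    # true signal
eps = rng.normal(0.0, 1e-3, n)                   # small noise

# A^dagger y = signal portion + noise portion (amplified by 1/sigma_i)
signal_part = Vt[:r].T @ (Vt[:r] @ x)            # sum_i (v_i^T x) v_i
noise_part = Vt[:r].T @ ((U[:, :r].T @ eps) / sig[:r])
x_ls = signal_part + noise_part
print(np.linalg.norm(signal_part), np.linalg.norm(noise_part))
```

Even with noise of size 10^{-3}, the noise portion has enormous norm, which is exactly why the naive solutions above look like pure noise.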
Least Squares Solutions: A†y = Σ_{i=1}^{r} (v_i^T x) v_i + Σ_{i=1}^{r} (u_i^T ε / σ_i) v_i
Corresponding true images x:
The Fix: Regularization
(Diagram: the ill-posed mapping from the parameter set P to the data set D is stabilized by regularization, i.e., by prior knowledge.)
Bayes' Law:

p(x|y, λ, δ) ∝ p(y|x, λ) p(x|δ),
posterior ∝ likelihood × prior.

For our assumed statistical model,

p(y|x, λ) ∝ exp( −(λ/2) ‖Ax − y‖^2 ).

In this talk, we will assume a Gaussian prior

p(x|δ) ∝ exp( −(δ/2) x^T L x ),

so that

p(x|y, λ, δ) ∝ exp( −(λ/2) ‖Ax − y‖^2 − (δ/2) x^T L x ).
Maximum a Posteriori (MAP) Estimation
The maximizer of the posterior density is

x_MAP = arg min_x { (λ/2) ‖Ax − y‖^2 + (δ/2) x^T L x },

which is the regularized solution x_α with α = δ/λ.

(Figures: reconstructions with α = 2.5 × 10^{-4} and α = 1.05 × 10^{-4}.)
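A dense-matrix sketch of the MAP computation (the 1D deblurring problem, the noise level, and the first-difference prior precision L are assumptions for illustration):

```python
import numpy as np

# MAP estimate: x_alpha = argmin (lam/2)||Ax-y||^2 + (delta/2) x^T L x
#             = (A^T A + alpha L)^{-1} A^T y,  with alpha = delta/lam.
rng = np.random.default_rng(2)
n = 128
s = (np.arange(n) + 0.5) / n
A = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * 0.03 ** 2)) / n  # blur operator
x_true = np.where((s > 0.3) & (s < 0.7), 1.0, 0.0)
y = A @ x_true + rng.normal(0.0, 1e-3, n)

# L = D^T D, with D the 1D forward-difference matrix (a simple GMRF-type precision)
D = np.diff(np.eye(n), axis=0)
L = D.T @ D

alpha = 2.5e-4                      # alpha = delta/lam, one of the slide's values
x_map = np.linalg.solve(A.T @ A + alpha * L, A.T @ y)
```

Unlike the least squares solution, `x_map` stays bounded because αL shifts the small singular values away from zero.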
Modeling the Prior p(x|δ)
Gaussian Markov Random field (GMRF) priors
The neighbor values for x_ij are below (in black):

x_∂ij = { x_{i−1,j}, x_{i,j−1}, x_{i+1,j}, x_{i,j+1} },

i.e., the four-point stencil

            x_{i,j+1}
x_{i−1,j}   x_{ij}   x_{i+1,j}
            x_{i,j−1}

Then we assume

x_ij | x_∂ij ∼ N( x̄_∂ij , δ^{-1} ),

where x̄_∂ij = (1/n_ij) Σ_{(r,s)∈∂ij} x_rs and n_ij = |∂ij|.
Gaussian Markov Random field (GMRF) priors
This leads to the prior

p(x|δ) ∝ δ^n exp( −(δ/2) x^T L x ),

where, if r = (i, j) after column-stacking the 2D arrays,

[L]_rs = 4 if s = r;  −1 if s ∈ ∂r;  0 otherwise.

NOTE: L is the 2D discrete negative Laplacian, unscaled (by 1/h^2).

Recall the MAP estimator

x_α = arg min_x { (1/2) ‖Ax − y‖^2 + (α/2) x^T L x }.
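A small sketch of this L built via Kronecker products (NumPy rather than the talk's MATLAB; the grid size and the simple Dirichlet-style boundary handling, where boundary rows just have fewer −1 entries, are assumptions the slide does not spell out):

```python
import numpy as np

# Build the unscaled 2D discrete negative Laplacian L of the slide:
# [L]_rr = 4 and [L]_rs = -1 for grid neighbors s in ∂r, after column-stacking.
N = 4                                                   # sqrt(n); n = N*N unknowns
I = np.eye(N)
T = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)    # 1D negative Laplacian
L = np.kron(I, T) + np.kron(T, I)                       # 2D operator on vec'd arrays

r = 1 + 1 * N                 # the column-stacked index of the point (i, j) = (1, 1)
print(L[r, r])                # -> 4.0, with -1 at the four neighbor columns
```

The Kronecker identity vec(TX + XTᵀ) = (I ⊗ T + T ⊗ I) vec(X) is what makes this two-line construction work.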
(Figures: reconstructions with α = 2.5 × 10^{-4} and α = 1.05 × 10^{-4}.)
2D GMRF Increment Models

For a 2D signal, suppose

x_{i+1,j} − x_ij ∼ N( 0, (w_ij δ)^{-1} ),  i, j = 1, . . . , √n,
x_{i,j+1} − x_ij ∼ N( 0, (w_ij δ)^{-1} ),  i, j = 1, . . . , √n.

Assuming independence, the density function for x has the form

p(x|δ) ∝ δ^{n/2} exp( −(δ/2) Σ_{i,j=1}^{√n} w_ij (x_{i+1,j} − x_ij)^2 )
         × exp( −(δ/2) Σ_{i,j=1}^{√n} w_ij (x_{i,j+1} − x_ij)^2 )
       = δ^{n/2} exp( −(δ/2) x^T (D_h^T Λ D_h + D_v^T Λ D_v) x ),

where

• D_h = I ⊗ D, D_v = D ⊗ I, with D the 1D difference matrix;
• Λ = diag( vec( {w_ij}_{i,j=1}^{√n} ) ).
2D GMRF Increment Models

The matrix (1/Δs^2) D_h^T Λ D_h + (1/Δt^2) D_v^T Λ D_v is a discretization of

−∂/∂s ( w(s, t) ∂/∂s ) − ∂/∂t ( w(s, t) ∂/∂t ).

Left: w_ij = 1 for all i, j. Right: w_ij = 0.01 for i, j on the circle boundary. (Figures.)
GMRF Edge-Preserving Reconstruction
0. Set Λ = I.

1. Define L = D_h^T Λ D_h + D_v^T Λ D_v.

2. Compute
   x_α = (A^T A + αL)^{-1} A^T y
   using PCG, with α obtained via the L-curve, GCV, etc.

3. Set
   Λ(x_α) = diag( 1 / √( (D_h x_α)^2 + (D_v x_α)^2 + β·1 ) ),
   where 0 < β ≪ 1, and return to Step 1.

NOTE: This is just the lagged-diffusivity iteration.
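A 1D sketch of the iteration (assumptions for brevity: the slide's version is 2D with D_h and D_v, uses PCG, and chooses α by GCV, while here a dense direct solve with a fixed α is used on an illustrative test problem):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128
s = (np.arange(n) + 0.5) / n
A = np.exp(-(s[:, None] - s[None, :]) ** 2 / (2 * 0.03 ** 2)) / n  # blur operator
x_true = np.where((s > 0.3) & (s < 0.7), 1.0, 0.0)                 # piecewise constant
y = A @ x_true + rng.normal(0.0, 1e-3, n)

D = np.diff(np.eye(n), axis=0)          # 1D difference matrix
alpha, beta = 1e-5, 1e-4                # fixed alpha here; the slide picks it adaptively
Lam = np.eye(n - 1)                     # Step 0: Lambda = I
for _ in range(10):
    L = D.T @ Lam @ D                               # Step 1
    x_a = np.linalg.solve(A.T @ A + alpha * L,      # Step 2 (direct solve, not PCG)
                          A.T @ y)
    Lam = np.diag(1.0 / np.sqrt((D @ x_a) ** 2 + beta))  # Step 3: downweight edges
```

Where |Dx_α| is large (an edge), the weight 1/√((Dx_α)² + β) is small, so the smoothness penalty is relaxed there on the next pass.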
Numerical Results
Infinite-Dimensional Limit

Question: When is

p(x|y, λ, δ) def= lim_{n→∞} p(x|y, λ, δ)

well defined?

First,

lim_{n→∞} ‖Ax − y‖^2 = ‖A_m x − y‖^2,

where

[A_m x]_i = ∫_Ω a(s_i, s′) x(s′) ds′,  i = 1, . . . , m.

Note: A_m : C^∞(Ω) → R^m, where Ω = [0, 1] or [0, 1] × [0, 1], and C^∞(Ω) is the space of smooth functions on Ω.
The Infinite-Dimensional Limit

Next,

lim_{n→∞} c(n) x^T L x = ⟨x, 𝓛x⟩ def= ∫_Ω x(s′) 𝓛x(s′) ds′,

where c(n) = n in one dimension and

𝓛 = −d/ds ( w(s) d/ds ),  0 < s < 1;

whereas c(n) = 1 in two dimensions and

𝓛 = −∂/∂s ( w(s, t) ∂/∂s ) − ∂/∂t ( w(s, t) ∂/∂t ),  0 < s, t < 1.
The Infinite-Dimensional Limit

lim_{n→∞} { (λ/2) ‖Ax − y‖^2 + (δ c(n)/2) x^T L x } = (λ/2) ‖A_m x − y‖^2 + (δ/2) ⟨x, 𝓛x⟩,

and hence

p(x|y, λ, δ) ∝ exp( −(λ/2) ‖A_m x − y‖^2 − (δ/2) ⟨x, 𝓛x⟩ ).

Question: When is the resulting probability measure well-defined:

μ_post(A) def= ∫_A p(x|y, λ, δ) dx,  A ⊂ L^2(Ω)?

Answer [Stuart]: When 𝓛^{-1} is a trace-class operator on L^2(Ω), i.e., when Σ_{i=1}^{∞} ⟨φ_i, 𝓛^{-1} φ_i⟩ < ∞ for any orthonormal basis {φ_i} of L^2(Ω).
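For w ≡ 1 with Dirichlet boundary conditions, 𝓛 has eigenvalues (iπ)^2 in 1D and (i^2 + j^2)π^2 in 2D, so the trace condition can be checked numerically (a sketch relying on those standard spectral facts):

```python
import numpy as np

# Partial sums of sum_i <phi_i, L^{-1} phi_i> for w = 1: eigenvalues of the
# Dirichlet Laplacian are (i*pi)^2 in 1D and (i^2 + j^2)*pi^2 in 2D.
def partial_sums(kmax):
    i = np.arange(1, kmax + 1)
    one_d = np.sum(1.0 / (i * np.pi) ** 2)
    ii, jj = np.meshgrid(i, i)
    two_d = np.sum(1.0 / ((ii ** 2 + jj ** 2) * np.pi ** 2))
    return one_d, two_d

s1a, s2a = partial_sums(100)
s1b, s2b = partial_sums(1000)
print(s1b - s1a)   # tiny: the 1D trace converges (to 1/6, since sum 1/i^2 = pi^2/6)
print(s2b - s2a)   # keeps growing: the 2D trace diverges logarithmically
```

This is exactly the 1D-versus-2D trace-class distinction on the next slide.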
The Infinite-Dimensional Limit

In one dimension, if

𝓛 = −d/ds ( w(s) d/ds ),  0 < s < 1,

then 𝓛^{-1} is trace class.

In two dimensions, if

𝓛 = −∂/∂s ( w(s, t) ∂/∂s ) − ∂/∂t ( w(s, t) ∂/∂t ),  0 < s, t < 1,

then 𝓛^{-1} is not trace class.

FIX: in two dimensions, use 𝓛 def= L^2, the square of the second-order operator L above, which is trace class; note that if w ≡ 1, this is called the biharmonic operator.
An Alternative: Second-Order GMRFs

For a 2D signal, suppose

x_{i−1,j} − 2x_ij + x_{i+1,j} ∼ N( 0, (w_ij δ)^{-1} ),  i, j = 1, . . . , √n,
x_{i,j−1} − 2x_ij + x_{i,j+1} ∼ N( 0, (w_ij δ)^{-1} ),  i, j = 1, . . . , √n.

Assuming independence, the density function for x has the form

p(x|δ) ∝ δ^{n/2} exp( −(δ/2) Σ_{i,j=1}^{√n} w_ij (x_{i−1,j} − 2x_ij + x_{i+1,j})^2 )
         × exp( −(δ/2) Σ_{i,j=1}^{√n} w_ij (x_{i,j−1} − 2x_ij + x_{i,j+1})^2 )
       = δ^{n/2} exp( −(δ/2) x^T (L_h^T Λ L_h + L_v^T Λ L_v) x ),

where

• L_v = L ⊗ I, L_h = I ⊗ L, with L the 1D discrete negative Laplacian;
• Λ = diag( vec( {w_ij}_{i,j=1}^{√n} ) ).
The Infinite-Dimensional Limit

Let L = L_h^T Λ_h L_h + L_v^T Λ_v L_v; then

lim_{n→∞} c(n) x^T L x = ⟨x, 𝓛x⟩,

where

𝓛 = −∂²/∂s² ( w(s, t) ∂²/∂s² ) − ∂²/∂t² ( w(s, t) ∂²/∂t² ),  0 < s, t < 1.

NOTE: 𝓛^{-1} is trace class, and hence if

p(x|y, λ, δ) ∝ exp( −(λ/2) ‖A_m x − y‖^2 − (δ/2) ⟨x, 𝓛x⟩ ),

then μ_post(A) = ∫_A p(x|y, λ, δ) dx, for A ⊂ L^2(Ω), is well-defined.
Higher-Order GMRF, Edge-Preserving Reconstruction
0. Set Λ = I.

1. Define L = L_h^T Λ L_h + L_v^T Λ L_v.

2. Compute
   x_α = (A^T A + αL)^{-1} A^T y
   using PCG, with α obtained using GCV.

3. Set
   Λ(x_α) = diag( 1 / √( (L_h x_α)^2 + (L_v x_α)^2 + β·1 ) ),
   where 0 < β ≪ 1, and return to Step 1.
Plot after 10 iterations
Hierarchical Bayes: Assume Hyper-Priors on λ and δ
Uncertainty in λ and δ: λ ∼ p(λ) and δ ∼ p(δ). Then

p(x, λ, δ|y) ∝ p(y|x, λ) p(λ) p(x|δ) p(δ)

is the Bayesian posterior, where

p(y|x, λ) ∝ λ^{m/2} exp( −(λ/2) ‖Ax − y‖^2 ),

and we choose a GMRF prior and Gamma hyper-priors:

p(x|δ) ∝ δ^{n/2} exp( −(δ/2) x^T L x ),
p(λ) ∝ λ^{α_λ − 1} exp( −β_λ λ ),
p(δ) ∝ δ^{α_δ − 1} exp( −β_δ δ ),

where α_λ = α_δ = 1 and β_λ = β_δ = 10^{-4}.
The Full Posterior Distribution: Linear Case
p(x, λ, δ|y) ∝ λ^{m/2 + α_λ − 1} δ^{n/2 + α_δ − 1} exp( −(λ/2) ‖Ax − y‖^2 − (δ/2) x^T L x − β_λ λ − β_δ δ ).
Sampling versus Computing the MAP Estimator
The Full Posterior Distribution
The full conditionals have the form

p(λ|x, δ, y) ∝ λ^{m/2 + α_λ − 1} exp( −( (1/2) ‖Ax − y‖^2 + β_λ ) λ );
p(δ|x, λ, y) ∝ δ^{n/2 + α_δ − 1} exp( −( (1/2) x^T L x + β_δ ) δ );
p(x|y, λ, δ) ∝ exp( −(λ/2) ‖Ax − y‖^2 − (δ/2) x^T L x ).

OBSERVATIONS:

1. x|y, λ, δ is a Gaussian random vector;
2. p(λ, δ|y, x) = p(λ|x, δ, y) p(δ|x, λ, y);
3. we have the natural blocking (x, λ, δ) = (x; (λ, δ)).
Hierarchical Gibbs for sampling from p(x, λ, δ|y)
0. Choose x^0, and set k = 0.

1. Sample from p(λ, δ|y, x^k) via:
   a. λ_{k+1} ∼ Γ( m/2 + α_λ, (1/2) ‖Ax^k − y‖^2 + β_λ );
   b. δ_{k+1} ∼ Γ( n/2 + α_δ, (1/2) (x^k)^T L x^k + β_δ ).

2. Sample from the Gaussian p(x|y, λ_{k+1}, δ_{k+1}) via

   x^{k+1} = ( λ_{k+1} A^T A + δ_{k+1} L )^{-1} ( λ_{k+1} A^T y + η ),

   where η ∼ N( 0, λ_{k+1} A^T A + δ_{k+1} L ).

3. Set k = k + 1 and return to Step 1.

NOTE: Two-stage Gibbs samplers have nice properties, including convergence in distribution: (x^k, λ_k, δ_k) → p(x, λ, δ|y).
A One-Dimensional Example
(Figures: true image, MCMC sample mean, and 95% credibility bounds (left); histograms for λ and δ (right).)
An example from X-ray Radiography (w/ Luttman)
(Figure: lineout, MCMC sample mean, and 95% credibility bounds.)
A Two-Dimensional Example
Step 2: Sample from the Gaussian p(x|y, λ, δ)

The conditional density for x|y, λ, δ is

p(x|y, λ, δ) ∝ exp( −(λ/2) ‖Ax − y‖^2 − (δ/2) x^T L x )
            = exp( −(1/2) ‖ [ λ^{1/2} A ; (δL)^{1/2} ] x − [ λ^{1/2} y ; 0 ] ‖^2 ),

where [ · ; · ] denotes vertical stacking. From here on out, we define

A_{λ,δ} = [ λ^{1/2} A ; (δL)^{1/2} ]  and  y_{λ,δ} = [ λ^{1/2} y ; 0 ],

so that

p(x|y, λ, δ) ∝ exp( −(1/2) ‖ A_{λ,δ} x − y_{λ,δ} ‖^2 ).
Step 2: sample from the Gaussian p(x|y, λ, δ)
For large-scale problems, you can use optimization:

x = arg min_ψ ‖ A_{λ,δ} ψ − (y_{λ,δ} + ε) ‖^2,  ε ∼ N(0, I).

QR rewrite: if A_{λ,δ} = QR, with Q ∈ R^{m×n}, R ∈ R^{n×n}, then

x = ( A_{λ,δ}^T A_{λ,δ} )^{-1} A_{λ,δ}^T ( y_{λ,δ} + ε ),  ε ∼ N(0, I)
  = R^{-1} Q^T ( y_{λ,δ} + ε )   [ def= v ]
  def= F^{-1}(v),  v ∼ N( Q^T y_{λ,δ}, I ),

where F^{-1}(x) = R^{-1} x.
Proof that $x=F^{-1}(v)$ has the right distribution:

What we know:

- $v\sim N(Q^T y_{\lambda,\delta},\mathbf{I})\implies p_v(v)\propto\exp\left(-\frac12\|v-Q^T y_{\lambda,\delta}\|^2\right)$;
- $F(x)=Rx$.

Since $x=F^{-1}(v)$, the change-of-variables formula gives $p(x)=\left|\det\left(\tfrac{d}{dx}F(x)\right)\right|\,p_v(F(x))$, so

$$
\begin{aligned}
p(x)&=(2\pi)^{-n/2}\left|\det(R)\right|\exp\left(-\frac12\|Rx-Q^T y_{\lambda,\delta}\|^2\right)\\
&=(2\pi)^{-n/2}\left|\det\left(A_{\lambda,\delta}^T A_{\lambda,\delta}\right)\right|^{1/2}\exp\left(-\frac12\|A_{\lambda,\delta}x-y_{\lambda,\delta}\|^2\right)\quad\text{(the two norms differ by a constant independent of }x\text{)}\\
&\stackrel{\text{dist}}{=}p(x\mid y,\lambda,\delta).
\end{aligned}
$$
MCMC Chain Diagnostics

Question: when has the $(x,\lambda,\delta)$-chain generated by hierarchical Gibbs converged in distribution to $p(x,\lambda,\delta\mid y)$?

Observations that follow from the two-stage structure:

1. The $(\lambda,\delta)$-chain has stationary distribution
   $$p(\lambda,\delta\mid y)\stackrel{\text{def}}{=}\int_x p(x,\lambda,\delta\mid y)\,dx.$$

2. Since $p(x,\lambda,\delta\mid y)\propto p(x\mid y,\lambda,\delta)\,p(\lambda,\delta\mid y)$, drawing
   2.1 $(\lambda',\delta')\sim p(\lambda,\delta\mid y)$,
   2.2 $x'\sim p(x\mid y,\lambda',\delta')$,
   yields a sample $(x',\lambda',\delta')\sim p(x,\lambda,\delta\mid y)$.

Answer: $(\lambda,\delta)$-chain convergence $\Rightarrow$ $(x,\lambda,\delta)$-chain convergence.
Chain Correlation: How correlated is $\{\delta_i\}_{i=1}^K$?

The autocorrelation function is defined by

$$
\rho(k)=C(k)/C(0),
$$

where

$$
C(k)=\frac{1}{K-|k|}\sum_{i=1}^{K-|k|}(\delta_i-\bar\delta)(\delta_{i+|k|}-\bar\delta),\qquad
\bar\delta=\frac{1}{K}\sum_{i=1}^{K}\delta_i.
$$

The integrated autocorrelation time is defined by

$$
\tau_{\mathrm{int}}=\sum_{k=-M}^{M}\rho(k),
$$

where the window $M$ is the smallest integer such that $M\geq 3\tau_{\mathrm{int}}$, and

$$
\#\text{ independent samples in }\{\delta_i\}_{i=1}^K\approx K/\tau_{\mathrm{int}}.
$$
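These definitions translate directly into code. The AR(1) test chain and the windowing rule below are illustrative choices, not from the slides; for an AR(1) chain with coefficient $\phi=0.8$ the exact integrated autocorrelation time is $(1+\phi)/(1-\phi)=9$:

```python
import numpy as np

def C(chain, k):
    """Empirical autocovariance C(k) with the 1/(K-|k|) normalization."""
    K = len(chain)
    d = chain - chain.mean()
    return (d[:K - k] @ d[k:]) / (K - k)

def rho(chain, k):
    """Autocorrelation rho(k) = C(k)/C(0)."""
    return C(chain, k) / C(chain, 0)

def iact(chain, window_factor=3):
    """Integrated autocorrelation time tau = sum_{|k| <= M} rho(k), where the
    window M is the smallest lag with M >= window_factor * tau."""
    tau = 1.0
    for M in range(1, len(chain) // 2):
        tau = 1.0 + 2.0 * sum(rho(chain, k) for k in range(1, M + 1))
        if M >= window_factor * tau:
            break
    return tau

# Illustrative stand-in for a delta-chain: AR(1) with phi = 0.8,
# whose exact integrated autocorrelation time is (1+phi)/(1-phi) = 9.
rng = np.random.default_rng(1)
phi, K = 0.8, 20000
delta_chain = np.empty(K)
delta_chain[0] = 0.0
for i in range(1, K):
    delta_chain[i] = phi * delta_chain[i - 1] + rng.standard_normal()

tau = iact(delta_chain)
ess = K / tau   # approximate number of independent samples
```

For this chain the estimate should land near $\tau\approx 9$, i.e. an effective sample size of roughly $K/9\approx 2200$.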
As $n\to\infty$, correlation in the $\lambda$-chain disappears while correlation in the $\delta$-chain increases

(Work with S. Agapiou, O. Papaspiliopoulos, and Andrew Stuart)

[Figures: MCMC chain histories and sample autocorrelation plots for $\lambda$ and $\delta$, 5000 samples each. The effective sample sizes (ESS $\approx K/\tau$) were:]

| $n$  | $\lambda$-chain ESS | $\delta$-chain ESS |
|------|---------------------|--------------------|
| 50   | $\approx 1274.8$    | $\approx 1448.8$   |
| 100  | $\approx 2702.2$    | $\approx 560.4$    |
| 500  | $\approx 4244.3$    | $\approx 136.7$    |
| 1000 | $\approx 4453.5$    | $\approx 81.3$     |
To overcome this issue, we use marginalization

First note that

$$
\frac{\lambda}{2}\|Ax-y\|^2+\frac{\delta}{2}x^T L x
=\frac12\big(\underbrace{\lambda\|y\|^2-\mu^T(\lambda A^T A+\delta L)\mu}_{U(\lambda,\delta)}\big)
+\frac12(x-\mu)^T(\lambda A^T A+\delta L)(x-\mu),
$$

where $\mu=(\lambda A^T A+\delta L)^{-1}\lambda A^T y$. Then

$$
p(x,\lambda,\delta\mid y)\propto p(\lambda)p(\delta)\exp\left(-\frac12 U(\lambda,\delta)\right)\times
\exp\left(-\frac12(x-\mu)^T(\lambda A^T A+\delta L)(x-\mu)\right).
$$
To overcome this issue, we use marginalization
$$
\begin{aligned}
p(\lambda,\delta\mid y)&\propto\int_{\mathbb{R}^n}p(x,\lambda,\delta\mid y)\,dx\\
&\propto p(\lambda)p(\delta)\exp\left(-\frac12 U(\lambda,\delta)\right)\times
\underbrace{\int_{\mathbb{R}^n}\exp\left(-\frac12(x-\mu)^T(\lambda A^T A+\delta L)(x-\mu)\right)dx}_{(2\pi)^{n/2}\det(\lambda A^T A+\delta L)^{-1/2}}\\
&\propto p(\lambda)p(\delta)\exp\Big(-\frac12 U(\lambda,\delta)-\frac12\underbrace{\ln\det(\lambda A^T A+\delta L)}_{c(\lambda,\delta)}\Big).
\end{aligned}
$$
To overcome this issue, we use marginalization
Thus we have the marginal density

$$
p(\lambda,\delta\mid y)\propto p(\lambda)p(\delta)\exp\left(-\frac12 U(\lambda,\delta)-\frac12 c(\lambda,\delta)\right),
$$

where

$$
U(\lambda,\delta)=y^T\big(\lambda\mathbf{I}-\lambda^2 A(\lambda A^T A+\delta L)^{-1}A^T\big)y,\qquad
c(\lambda,\delta)=\ln\det(\lambda A^T A+\delta L).
$$
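For small, dense problems the marginal density can be evaluated directly; the code below is a sketch (in Python/NumPy) with hypothetical stand-ins for `A`, `L`, and `y`, and it also checks that the two expressions for $U(\lambda,\delta)$ above agree:

```python
import numpy as np

def log_marginal(lam, delta, A, L, y):
    """log p(lam, delta | y) up to a constant, with the hyperpriors
    p(lam), p(delta) omitted: -(1/2) U(lam, delta) - (1/2) c(lam, delta)."""
    H = lam * (A.T @ A) + delta * L            # posterior precision of x
    mu = np.linalg.solve(H, lam * (A.T @ y))   # x_MAP
    U = lam * (y @ y) - mu @ (H @ mu)
    _, c = np.linalg.slogdet(H)                # c = ln det(H); H is SPD
    return -0.5 * U - 0.5 * c

# Hypothetical stand-ins for a real inverse problem.
rng = np.random.default_rng(2)
m, n = 40, 25
A = rng.standard_normal((m, n))
L = np.eye(n)
y = rng.standard_normal(m)
lam, delta = 2.0, 0.5

val = log_marginal(lam, delta, A, L, y)

# The two expressions for U(lam, delta) on the slides agree:
H = lam * (A.T @ A) + delta * L
mu = np.linalg.solve(H, lam * (A.T @ y))
U1 = lam * (y @ y) - mu @ (H @ mu)
U2 = y @ (lam * np.eye(m) - lam**2 * (A @ np.linalg.solve(H, A.T))) @ y
assert np.allclose(U1, U2)
```

Using `slogdet` rather than `det` avoids overflow in the log-determinant term for larger problems.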
Partially Collapsed Gibbs: Step 1, Reduce Conditioning

Reduce conditioning in step 2 of the Gibbs sampler:

0. Choose $x_0$, and set $k=0$;

1. Sample from $p(\lambda,\delta\mid y,x_k)$ via:
   a. $\lambda_{k+1}\sim\Gamma\left(m/2+\alpha_\lambda,\ \tfrac12\|Ax_k-y\|^2+\beta_\lambda\right)$;
   b. Old: $\delta_{k+1}\sim p(\delta\mid y,x_k,\lambda_{k+1})$. New: $(x_{k+1},\delta_{k+1})\sim p(x,\delta\mid y,\lambda_{k+1})$;

2. Sample from the Gaussian $p(x\mid y,\lambda_{k+1},\delta_{k+1})$ via
   $$x_{k+1}=\left(\lambda_{k+1}A^T A+\delta_{k+1}L\right)^{-1}\left(\lambda_{k+1}A^T y+\eta\right),\qquad
   \eta\sim N\left(0,\lambda_{k+1}A^T A+\delta_{k+1}L\right).$$

3. Set $k=k+1$ and return to Step 1.

NOTE: $p(\lambda\mid y,x,\delta)$ and $p(x,\delta\mid y,\lambda)$ are not conditionally independent, so the result is not a two-stage Gibbs sampler.
Partially Collapsed Gibbs: Step 2, Collapse/Marginalize

In step 2, $x_{k+1}$ is redundant, so we can integrate it out to obtain

$$
\delta_{k+1}\sim p(\delta\mid y,\lambda_{k+1})
\stackrel{\text{d}}{=}\int_{\mathbb{R}^n}p(x,\delta\mid y,\lambda_{k+1})\,dx
\propto p(\delta)\exp\left(-\frac12 U(\lambda_{k+1},\delta)-\frac12 c(\lambda_{k+1},\delta)\right).
$$

The Partially Collapsed Hierarchical Gibbs Sampler:

0. Choose $x_0$, and set $k=0$;

1. Sample $\lambda_{k+1}\sim\Gamma\left(m/2+\alpha_\lambda,\ \tfrac12\|Ax_k-y\|^2+\beta_\lambda\right)$;

2. Sample $\delta_{k+1}\sim p(\delta\mid y,\lambda_{k+1})$;

3. Sample from the Gaussian $p(x\mid y,\lambda_{k+1},\delta_{k+1})$ via
   $$x_{k+1}=\left(\lambda_{k+1}A^T A+\delta_{k+1}L\right)^{-1}\left(\lambda_{k+1}A^T y+\eta\right),\qquad
   \eta\sim N\left(0,\lambda_{k+1}A^T A+\delta_{k+1}L\right).$$

4. Set $k=k+1$ and return to Step 1.
Chain auto-correlation plots for Partially Collapsed Gibbs

[Figures: MCMC chain histories and sample autocorrelation plots for $\lambda$ and $\delta$ with $n=1000$. For 10000 samples, ESS $\approx 9160.6$ for the $\lambda$-chain and $\approx 9348.3$ for the $\delta$-chain.]
Another option: sample directly from $p(\lambda,\delta\mid y)$

0. Initialize $\lambda^0$, $\delta^0$, and $\mathbf{C}_0\in\mathbb{R}^{2\times 2}$. Set $k=1$. Define $k_{\mathrm{total}}$.

1. Propose
   $$\begin{bmatrix}\ln(\lambda^*)\\\ln(\delta^*)\end{bmatrix}\sim
   N\left(\begin{bmatrix}\ln(\lambda^{k-1})\\\ln(\delta^{k-1})\end{bmatrix},\mathbf{C}_{k-1}\right).$$
   Set $[\lambda^k,\delta^k]^T=[\lambda^*,\delta^*]^T$ with probability
   $$\alpha=\min\left\{1,\ \frac{p(\lambda^*,\delta^*\mid y)}{p(\lambda^{k-1},\delta^{k-1}\mid y)}\right\},$$
   else set $[\lambda^k,\delta^k]^T=[\lambda^{k-1},\delta^{k-1}]^T$.

2. Update the proposal covariance:
   $$\mathbf{C}_k=\mathrm{cov}\left(\begin{bmatrix}\ln(\lambda^0)&\ln(\delta^0)\\\vdots&\vdots\\\ln(\lambda^k)&\ln(\delta^k)\end{bmatrix}\right)+\epsilon\mathbf{I},\qquad 0<\epsilon\ll 1.$$

3. If $k=k_{\mathrm{total}}$ stop, else set $k=k+1$ and return to Step 1.
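A sketch of this adaptive Metropolis scheme in Python/NumPy follows. The Gamma-product target and all constants are illustrative stand-ins, not from the slides; also note that since the random walk is in $(\ln\lambda,\ln\delta)$, the code includes the change-of-variables factor $\lambda\delta$ in the acceptance ratio, which the slide's ratio leaves implicit:

```python
import numpy as np

def log_target(lam, delta):
    # Stand-in for log p(lam, delta | y): a product of Gamma(3, 1) and
    # Gamma(4, 2) densities (up to constants), purely for illustration.
    return 2.0 * np.log(lam) - lam + 3.0 * np.log(delta) - 2.0 * delta

def adaptive_metropolis(log_target, lam0, delta0, n_steps, eps=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    u = np.empty((n_steps + 1, 2))                # chain in (ln lam, ln delta)
    u[0] = np.log([lam0, delta0])
    C = 0.1 * np.eye(2)                           # initial proposal covariance
    # Target density in log-parameters; u.sum() adds the Jacobian ln(lam*delta).
    lp = log_target(*np.exp(u[0])) + u[0].sum()
    for k in range(1, n_steps + 1):
        prop = rng.multivariate_normal(u[k - 1], C)
        lp_prop = log_target(*np.exp(prop)) + prop.sum()
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            u[k], lp = prop, lp_prop
        else:
            u[k] = u[k - 1]
        if k >= 2:                                # adapt the proposal covariance
            C = np.cov(u[: k + 1].T) + eps * np.eye(2)
    return np.exp(u)                              # back to (lam, delta)

chain = adaptive_metropolis(log_target, 1.0, 1.0, 20000)
```

For this stand-in target, the chain means should settle near the Gamma means, 3 for $\lambda$ and 2 for $\delta$.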
Chain diagnostics for AM applied to $p(\lambda,\delta\mid y)$

[Figure: chain diagnostics for the adaptive Metropolis sampler.]
Computational Bottleneck

Evaluating

$$
p(\lambda,\delta\mid y)\propto p(\lambda)p(\delta)\exp\left(-\frac12 U(\lambda,\delta)-\frac12 c(\lambda,\delta)\right)
$$

requires

$$
U(\lambda,\delta)=y^T\big(\lambda\mathbf{I}-\lambda^2 A(\lambda A^T A+\delta L)^{-1}A^T\big)y,\qquad
c(\lambda,\delta)=\ln\det(\lambda A^T A+\delta L),
$$

which in turn requires

- computing $x_{\mathrm{MAP}}=(\lambda A^T A+\delta L)^{-1}\lambda A^T y$;
- computing $\ln\det(\lambda A^T A+\delta L)$.

NOTE: For the CT test case, these can only be computed approximately.
Hierarchical Gibbs for sampling from $p(x,\lambda,\delta\mid y)$

0. Choose $x_0$, and set $k=0$;

1. Sample from $p(\lambda,\delta\mid x_k,y)$:
   1.1 Compute $\lambda_{k+1}\sim\Gamma\left(m/2+\alpha_\lambda,\ \tfrac12\|Ax_k-y\|^2+\beta_\lambda\right)$;
   1.2 Compute $\delta_{k+1}\sim\Gamma\left(n/2+\alpha_\delta,\ \tfrac12 (x_k)^T L x_k+\beta_\delta\right)$;

2. Sample from $p(x\mid\lambda_{k+1},\delta_{k+1},y)$: compute
   $$x_{k+1}=\left(\lambda_{k+1}A^T A+\delta_{k+1}L\right)^{-1}\left(\lambda_{k+1}A^T y+\eta\right),\qquad
   \eta\sim N\left(0,\lambda_{k+1}A^T A+\delta_{k+1}L\right).$$

3. Set $k=k+1$ and return to Step 1.

NOTE: step 2 can be computationally intractable.
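A compact sketch of this sampler on a hypothetical 1D smoothing problem (in Python/NumPy rather than the book's MATLAB); the Gaussian blur `A`, identity `L`, hyperprior values, and warm start are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1D smoothing problem: Gaussian blur A, identity prior precision L.
m = n = 32
idx = np.arange(n)
A = np.exp(-0.1 * (idx[:, None] - idx[None, :]) ** 2)
L = np.eye(n)
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 0.1 * rng.standard_normal(m)    # noise precision lam ~ 100

# Gamma hyperprior parameters (shape alpha, rate beta); illustrative values.
alpha_lam = beta_lam = alpha_del = beta_del = 1e-4

# Warm start from a crudely regularized reconstruction, so this short
# chain begins near the high-probability region.
x = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ y)

lams, dels = [], []
for k in range(2000):
    # Step 1.1: lam ~ Gamma(m/2 + alpha_lam, ||Ax - y||^2/2 + beta_lam);
    # NumPy's gamma takes (shape, scale = 1/rate).
    lam = rng.gamma(m / 2 + alpha_lam,
                    1.0 / (0.5 * np.sum((A @ x - y) ** 2) + beta_lam))
    # Step 1.2: delta ~ Gamma(n/2 + alpha_del, x^T L x / 2 + beta_del).
    dl = rng.gamma(n / 2 + alpha_del, 1.0 / (0.5 * (x @ L @ x) + beta_del))
    # Step 2: x ~ N(mu, H^{-1}) via the perturbed normal equations,
    # where eta ~ N(0, H) is drawn using the Cholesky factor of H.
    H = lam * (A.T @ A) + dl * L
    eta = np.linalg.cholesky(H) @ rng.standard_normal(n)
    x = np.linalg.solve(H, lam * (A.T @ y) + eta)
    lams.append(lam)
    dels.append(dl)
```

For this setup the $\lambda$-chain should hover near the noise precision used to generate the data; the dense Cholesky factorization in step 2 is exactly the cost that becomes intractable at scale.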
Gradient Scan Gibbs Sampler

Replace step 2 with $j_{k+1}$ CG iterations applied to

$$
(\lambda_{k+1}A^T A+\delta_{k+1}L)(x_k+p)=\lambda_{k+1}A^T y+\eta,
$$

where $\eta\sim N\left(0,\lambda_{k+1}A^T A+\delta_{k+1}L\right)$. Then define

$$
x_{k+1}=x_k+p_{j_{k+1}},
$$

where $p_{j_{k+1}}$ is the final CG iterate.

NOTE: if $j_k=n$, this reduces to hierarchical Gibbs.
Gradient Scan Gibbs Sampler

0. Choose $x_0$, and set $k=0$.

1. Sample from $p(\lambda,\delta\mid x_k,y)$:
   1.1 Compute $\lambda_{k+1}\sim\Gamma\left(m/2+\alpha_\lambda,\ \tfrac12\|Ax_k-y\|^2+\beta_\lambda\right)$;
   1.2 Compute $\delta_{k+1}\sim\Gamma\left(n/2+\alpha_\delta,\ \tfrac12 (x_k)^T L x_k+\beta_\delta\right)$.

2. Approximately sample from $p(x\mid\lambda_{k+1},\delta_{k+1},y)$: apply CG to
   $$(\lambda_{k+1}A^T A+\delta_{k+1}L)(x_k+p)=\lambda_{k+1}A^T y+\eta,$$
   where $\eta\sim N\left(0,\lambda_{k+1}A^T A+\delta_{k+1}L\right)$. Define
   $$x_{k+1}=x_k+p_{j_{k+1}},$$
   where $p_{j_{k+1}}$ is the $j_{k+1}$-th CG iterate.

3. If $k=k_{\mathrm{total}}$ stop; otherwise, set $k=k+1$ and return to Step 1.

NOTE: the smaller $j_k$ is, the more correlated the $x$-chain will be.
Gradient Scan Gibbs Numerical Test: $j_k=20$, $n=128^2$.

[Figures: $\lambda$- and $\delta$-chain histories for 2500 samples; histograms of $\delta$ (the prior precision), $\lambda$ (the noise precision), and $\alpha=\delta/\lambda$ (the regularization parameter); and two $100\times 100$ image panels.]
Conclusions/Takeaways

- Inverse problems have unique characteristics, making the use of Bayesian methods for their solution practical, challenging, and interesting.
- GMRFs provide a way of modelling the prior from pixel-level assumptions. However, not all GMRFs yield a well-defined posterior density in the infinite-dimensional limit.
- Placing probability densities on $\lambda$ and $\delta$ yields a hierarchical posterior density $p(x,\lambda,\delta\mid y)$.
- We provided MCMC methods, derived from the Gibbs sampler, for sampling from the posterior $p(x,\lambda,\delta\mid y)$.