TRANSCRIPT
-
Designing Optimal Spectral Filters and Low-Rank Matrices for Inverse Problems
Julianne Chung
Department of Mathematics, Virginia Tech
Joint work with:
Matthias Chung, Virginia Tech
Dianne O'Leary, University of Maryland
-
What is an inverse problem?
[Diagram] Input Signal → Physical System → Output Signal
Forward Model: input → output
Inverse Problem: output → input
-
Discrete Linear Inverse Problem
b = Aξ + δ
where
- ξ ∈ R^n - unknown parameters
- A ∈ R^{m×n} - large, ill-conditioned matrix
- δ ∈ R^m - additive noise
- b ∈ R^m - observation
Goal: Given b and A, compute an approximation of ξ
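As a concrete illustration (not from the talk), a minimal Python/NumPy sketch of such a problem, using a toy 1D Gaussian blurring matrix for A; all sizes and parameter values are illustrative:

    import numpy as np

    # Toy discrete linear inverse problem b = A xi + delta.
    n = 128
    rng = np.random.default_rng(0)
    t = np.arange(n)

    # Ill-conditioned forward operator: 1D Gaussian blurring matrix.
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)

    xi = np.sin(2 * np.pi * t / n) + (t > n // 2)   # unknown "true" signal
    delta = 0.01 * rng.standard_normal(n)           # additive noise
    b = A @ xi + delta                              # observation

    print(np.linalg.cond(A))   # huge condition number: the problem is ill-conditioned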
-
Application: Image Deblurring
b = Aξ + δ
Given: blurred image b and some information about the blurring, A
Goal: compute an approximation of the true image, ξ
-
Application: Super-Resolution Imaging
b_i = A(y_i) ξ + δ_i
Given: low-resolution (LR) images b_1, ..., b_m with

\begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix} = \begin{bmatrix} A(y_1) \\ \vdots \\ A(y_m) \end{bmatrix} \xi + \begin{bmatrix} \delta_1 \\ \vdots \\ \delta_m \end{bmatrix}, \qquad \text{i.e.,} \quad b = A(y)\,\xi + \delta
Goal: improve the parameters y and approximate the high-resolution (HR) image
[Embedded video: SRimages.avi]
-
Application: Limited-Angle Tomography
X-ray Imaging · Digital Tomosynthesis · Computed Tomography
[Embedded video: tomomovie.mov]
-
Application: Tomosynthesis Reconstruction
Given: 2D projection images
Goal: Reconstruct a 3D volume
b_i = Υ[A_i ξ] + δ_i
where Υ[·] represents nonlinear, energy-dependent transmission tomography
[Figures: true images]
-
What is an Ill-posed Inverse Problem?
Hadamard (1923): A problem is ill-posed if the solution does not exist, is not unique, or does not depend continuously on the data.
[Figures] Forward problem: true image → blurred & noisy image b; Inverse problem: inverse solution A⁻¹b
-
Regularization
Incorporate prior knowledge:
1. Knowledge about the noise in the data
2. Knowledge about the unknown solution
Goals of this work:
- Incorporate probabilistic information in the form of training data
- Compute optimal regularization:
  - Optimal Spectral Filters
  - Optimal Low-Rank Inverse Matrices
-
Outline
1. Designing Optimal Spectral Filters
   - Background on Spectral Filters
   - Computing Optimal Filters
   - Numerical Results
2. Designing Optimal Low-Rank Regularized Inverse Matrices
   - Rank-Constrained Problem
   - Bayes Problem: Theoretical Results
   - Empirical Bayes Problem: Numerical Results
3. Conclusions
-
Designing Optimal Spectral Filters
-
Regularization and Filtering
Singular Value Decomposition: let A = UΣVᵀ, where

\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n), \quad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0, \quad U^T U = I, \quad V^T V = I

For ill-posed inverse problems:
- Singular values σ_i decrease to and cluster at 0
- There is no gap separating large and small singular values
- Small singular values → highly oscillatory singular vectors
-
Discrete Picard Condition
Inverse solution: A⁻¹b

Investigate behavior of:
- Singular values, σ_i
- Singular vectors, v_i
- SVD coefficients, u_iᵀ b
- Solution coefficients, u_iᵀ b / σ_i

Two toy problems:
- 1D deconvolution
- Gravity surveying
Hansen, Discrete Inverse Problems (2010)
-
SVD Analysis
The naïve inverse solution:

\xi = A^{-1} b = V \Sigma^{-1} U^T b = \sum_{i=1}^{n} \frac{u_i^T b}{\sigma_i} v_i
-
SVD Analysis
The naïve inverse solution:

\hat{\xi} = A^{-1}(b + \delta) = V \Sigma^{-1} U^T (b + \delta) = \sum_{i=1}^{n} \frac{u_i^T (b + \delta)}{\sigma_i} v_i
    = \sum_{i=1}^{n} \frac{u_i^T b}{\sigma_i} v_i + \sum_{i=1}^{n} \frac{u_i^T \delta}{\sigma_i} v_i = \xi + \text{error}
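A small sketch of this noise amplification on a toy problem (same illustrative Gaussian-blur setup as before; not from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 128
    t = np.arange(n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)
    xi = np.sin(2 * np.pi * t / n)
    delta = 0.01 * rng.standard_normal(n)

    U, s, Vt = np.linalg.svd(A)
    xi_hat = Vt.T @ ((U.T @ (A @ xi + delta)) / s)   # naive inverse of noisy data
    noise_term = Vt.T @ ((U.T @ delta) / s)          # sum_i (u_i^T delta / sigma_i) v_i

    # The reconstruction error is exactly the amplified-noise term.
    print(np.linalg.norm(xi_hat - xi), np.linalg.norm(noise_term))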
-
Regularization via Spectral Filtering
Filtered Solution
\xi_{\text{filter}} = \sum_{i=1}^{n} \phi_i \frac{u_i^T b}{\sigma_i} v_i = V C \Sigma^{-1} \phi

where
- φ_i - filter factors
- φ ∈ R^n contains the φ_i
- C = diag(Uᵀ b)

[Figure: filter factors vs. singular values for TSVD and Tikhonov]
-
Some Filter Representations

Truncated SVD (TSVD):
\phi_i^{\mathrm{tsvd}}(\alpha) = \begin{cases} 1, & \text{if } i \le \alpha, \\ 0, & \text{else,} \end{cases} \qquad \alpha \in \{1, \ldots, n\}

Tikhonov filter:
\phi_i^{\mathrm{tik}}(\alpha) = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}, \qquad \alpha \in \mathbb{R}

Error filter:
\phi_i^{\mathrm{err}}(\alpha) = \alpha_i, \qquad \alpha \in \mathbb{R}^n

Spline filter:
\phi_i^{\mathrm{spl}}(\alpha) = s(\tau, \alpha; \sigma_i), \qquad \alpha \in \mathbb{R}^\ell, \; \ell < n
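A hedged sketch of the TSVD and Tikhonov filter factors and the resulting filtered solution (the function names are illustrative, not from the talk):

    import numpy as np

    def phi_tsvd(sigma, k):
        # TSVD filter: 1 for the k largest singular values, 0 otherwise.
        return (np.arange(len(sigma)) < k).astype(float)

    def phi_tik(sigma, alpha):
        # Tikhonov filter factors sigma_i^2 / (sigma_i^2 + alpha^2).
        return sigma**2 / (sigma**2 + alpha**2)

    def filtered_solution(U, s, Vt, b, phi):
        # xi_filter = sum_i phi_i * (u_i^T b / sigma_i) * v_i
        return Vt.T @ (phi * (U.T @ b) / s)

    sigma = np.logspace(0, -8, 50)            # illustrative decaying singular values
    print(phi_tsvd(sigma, 10)[8:12])          # sharp cutoff
    print(phi_tik(sigma, 1e-3)[[0, 25, 49]])  # smooth roll-off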
-
How to choose α?

Previous approaches (1 parameter):
- Discrepancy Principle
- Generalized Cross-Validation (GCV)
- L-Curve

Our approach to compute optimal parameters:
- Stochastic programming formulation to incorporate probabilistic information
- Use training data and numerical optimization to minimize errors
Shapiro, Dentcheva, Ruszczynski. SIAM, 2009.
Vapnik. Wiley & Sons, 1998.
Tenorio. SIAM Review, 2006.
Horesh, Haber, Tenorio. Inverse Problems, 2008.
-
Some Assumptions
Suppose we have
- a set of possible signals Ξ ⊂ R^n
- a set of possible noise samples ∆ ⊂ R^n

Select
- a signal ξ ∈ Ξ according to probability distribution P_ξ
- a noise sample δ ∈ ∆ according to probability distribution P_δ

Inverse Problem: determine ξ given A and b, where

b = Aξ + δ
-
How to define error?
Error vector: e(α, ξ, δ) = ξ_filter(α, ξ, δ) − ξ

Error function: ρ : R^n → R_0^+

Error: err(α, ξ, δ) = ρ(e(α, ξ, δ))

For example, ρ(x) = ‖x‖_p^p gives err(α, ξ, δ) = ‖e(α, ξ, δ)‖_p^p

Goal: Find optimal parameters α that minimize the average error over the set of signals and noise
-
Bayes Risk Minimization
Ideally...
Optimal filter φ(α̂), where

\hat{\alpha} = \arg\min_{\alpha} \; E_{\xi,\delta}\{\mathrm{err}(\alpha, \xi, \delta)\}

Remarks:
- Minimizing the expected value is difficult
- Use a Monte Carlo sampling approach
-
Empirical Bayes Risk Minimization

If P_ξ and P_δ are known → use Monte Carlo samples as training data
If P_ξ and P_δ are unknown but samples are given → use the samples as training data

Training data:
ξ^(1), ..., ξ^(K) - realizations of the random variable ξ
δ^(1), ..., δ^(K) - realizations of the random variable δ
b^(k) = Aξ^(k) + δ^(k), k = 1, ..., K

Empirical Bayes risk:

E_{\xi,\delta}\{\mathrm{err}(\alpha, \xi, \delta)\} \approx \frac{1}{K} \sum_{k=1}^{K} \rho(e^{(k)}(\alpha))

Optimal filter φ(α̂), where

\hat{\alpha} = \arg\min_{\alpha} \frac{1}{K} \sum_{k=1}^{K} \rho(e^{(k)}(\alpha))
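A minimal sketch of this empirical risk minimization for the one-parameter Tikhonov filter, using a simple grid search; the setup and all names are illustrative, not from the talk:

    import numpy as np

    rng = np.random.default_rng(1)
    n, K = 128, 50
    t = np.arange(n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(A)

    Xi = rng.standard_normal((n, K))                 # training signals xi^(k)
    B = A @ Xi + 0.01 * rng.standard_normal((n, K))  # b^(k) = A xi^(k) + delta^(k)

    def empirical_risk(alpha):
        # (1/K) sum_k || xi_filter(alpha; b^(k)) - xi^(k) ||_2^2
        phi = s**2 / (s**2 + alpha**2)
        Xi_rec = Vt.T @ ((phi / s)[:, None] * (U.T @ B))
        return np.mean(np.sum((Xi_rec - Xi) ** 2, axis=0))

    alphas = np.logspace(-6, 0, 61)
    alpha_hat = alphas[np.argmin([empirical_risk(a) for a in alphas])]
    print(alpha_hat)   # computed off-line, then reused for any new observation b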
-
Suggested Optimization Methods
error function \ filter | tsvd | tik   | spl   | err
Huber                   | GSS  | GN    | GN    | GN
1 < p < 2 (smoothing)   | GSS  | GN    | GN    | GN
p = 2                   | GSS  | GN    | GN    | LS
p > 2                   | GSS  | GN    | GN    | GN
p = ∞, p = 1            | GSS  | IPM-N | IPM-N | IPM-L

GSS - discrete golden section search algorithm
GN - Gauss-Newton method
LS - linear least squares system
IPM-N / IPM-L - interior point methods for nonlinear / linear problems
-
Gauss-Newton Optimization
\xi^{(k)}_{\text{filter}} = V C^{(k)} \Sigma^{-1} \phi

Jacobian:

J = \frac{1}{K} \begin{bmatrix} J^{(1)} \\ \vdots \\ J^{(K)} \end{bmatrix}, \qquad J^{(k)} = V C^{(k)} \Sigma^{-1} \phi_\alpha

where φ_α contains the partial derivatives of φ with respect to α.

Gradient and GN Hessian:

g = \frac{1}{K} \sum_{k=1}^{K} J^{(k)T} g^{(k)}, \qquad H = \frac{1}{K} \sum_{k=1}^{K} J^{(k)T} D^{(k)} J^{(k)}

where g^{(k)} and D^{(k)} contain information regarding the derivatives of ρ.
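A hedged sketch of this Gauss-Newton iteration for the scalar Tikhonov parameter with ρ = ½‖·‖₂² (so g^(k) = e^(k) and D^(k) = I); no line search or safeguards, and the setup is illustrative:

    import numpy as np

    rng = np.random.default_rng(2)
    n, K = 64, 20
    t = np.arange(n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(A)
    Xi = rng.standard_normal((n, K))
    B = A @ Xi + 0.01 * rng.standard_normal((n, K))

    alpha = 0.1
    for it in range(20):
        phi = s**2 / (s**2 + alpha**2)
        dphi = -2 * alpha * s**2 / (s**2 + alpha**2) ** 2   # d(phi_i)/d(alpha)
        g = H = 0.0
        for k in range(K):
            c = U.T @ B[:, k]                     # diagonal of C^(k)
            e = Vt.T @ (c * phi / s) - Xi[:, k]   # residual e^(k)
            J = Vt.T @ (c * dphi / s)             # J^(k) = V C^(k) Sigma^{-1} phi_alpha
            g += J @ e / K                        # gradient contribution
            H += J @ J / K                        # GN Hessian contribution (scalar)
        alpha -= g / H                            # Gauss-Newton step
    print(alpha)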
-
Special Case: 2-norm
\hat{\alpha} = \arg\min_{\alpha} \frac{1}{2K} \sum_{k=1}^{K} \|e^{(k)}\|_2^2

1. If δ ∼ N(0, β_δ I) and φ_i^err(α) = α_i, we obtain an approximate Wiener filter
2. If, in addition, ξ ∼ N(0, β_ξ I), we obtain the Tikhonov filter with α² = β_δ/β_ξ
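A short derivation sketch behind these statements, assuming the β's denote variances (E[(v_iᵀξ)²] = β_ξ, E[(u_iᵀδ)²] = β_δ); with the error filter, the risk decouples componentwise:

\[
  e_i(\alpha) = (\alpha_i - 1)\, v_i^T \xi + \alpha_i \frac{u_i^T \delta}{\sigma_i},
  \qquad
  E\left[ e_i^2 \right] = (\alpha_i - 1)^2 \beta_\xi + \alpha_i^2 \frac{\beta_\delta}{\sigma_i^2},
\]
\[
  \frac{\partial}{\partial \alpha_i} E\left[ e_i^2 \right] = 0
  \quad \Longrightarrow \quad
  \alpha_i = \frac{\sigma_i^2 \beta_\xi}{\sigma_i^2 \beta_\xi + \beta_\delta}
           = \frac{\sigma_i^2}{\sigma_i^2 + \beta_\delta / \beta_\xi},
\]

which is exactly the Tikhonov filter with α² = β_δ/β_ξ.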
-
1D Deconvolution
[Figures: convolution kernel; columns used for training; blurred image]
-
Compare to Standard Approaches
- 600 training signals
- 2800 validation signals
- Noise level: 0.001 to 0.01

For each validation signal, reconstruct with:
1. opt-Tik
2. opt-error
3. Tik-GCV
4. Tik-MSE

[Figure: box and whisker plot of errors for opt-Tik, opt-error, Tik-GCV, Tik-MSE]
-
Corresponding Error Images
[Figures: error images for opt-Tik, opt-error, Tik-GCV, Tik-MSE]
-
Pareto Curve
[Figure: average relative reconstruction error (RRE) vs. number of training signals for opt-Tik-SVD, opt-error-SVD, and opt-Tik-GSVD]

General-form Tikhonov problem:

\min_{\xi} \|A\xi - b\|_2^2 + \lambda^2 \|L\xi\|_2^2, \qquad L = \begin{bmatrix} 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \end{bmatrix}
-
2D Deconvolution

[Figures: training and validation images]

- 800 training images
- 800 validation images
- Gaussian point spread function
- Noise level: 0.1 to 0.15

Filters φ(α): opt-TSVD, opt-Tik, opt-spline, opt-error, smooth
Error functions ρ(z): Huber function, 2-norm, 4-norm
-
Numerical Results
[Figures: box plots of reconstruction errors for opt-TSVD, opt-Tik, opt-spline, opt-error, and smooth, under the Huber function, 2-norm, and 4-norm error measures]
-
Filter Factors and Error Images
[Figures: optimal filter factors vs. singular values (opt-error, opt-TSVD, opt-Tik, opt-spline) and corresponding error images, for the Huber function, 2-norm, and 4-norm error measures]
-
Summary for Optimal Filters
Computing good regularization parameters can be difficult
Use training data to get optimal parameters/filters
Different error measures and filter representations can be used
Optimal filters can be computed off-line
Chung, Chung, O'Leary. SISC, 2011.
Chung, Chung, O'Leary. JMIV, 2012.
Chung, Español, Nguyen. arXiv, 2014.
-
Designing Optimal Low-Rank Regularized Inverse Matrices
-
What is an optimal regularized inverse matrix (ORIM)?
Let Z ∈ R^{n×m} be a reconstruction matrix.

Error vector: Zb − ξ
Error function: ρ : R^n → R_0^+
Error: err(Z) = ρ(Zb − ξ) = ρ(Z(Aξ + δ) − ξ) = ρ((ZA − I_n)ξ + Zδ)

For example, ρ(y) = ‖y‖_p^p gives err(Z) = ‖Zb − ξ‖_p^p

Goal: Find a regularized inverse matrix Z ∈ R^{n×m} that minimizes

\min_{Z} \; \rho((ZA - I_n)\xi + Z\delta)
-
Rank-constrained Problem
Basic idea: enforce the ORIM Z to be low-rank

Rank-constrained problem:

\arg\min_{\mathrm{rank}(Z) \le r} \; \rho((ZA - I_n)\xi + Z\delta)
-
Why does a low-rank inverse approximation make sense?

The rank-r truncated SVD (TSVD) solution can be written as

\xi_{\mathrm{TSVD}} = \sum_{i=1}^{r} \frac{u_i^T b}{\sigma_i} v_i = V_r \Sigma_r^{-1} U_r^T b,

where
- V_r and U_r contain the first r columns of V and U, respectively
- Σ_r is the r × r principal submatrix of Σ

[MATLAB demo]
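The slide references a MATLAB demo; a rough Python analogue (illustrative sizes) showing that the rank-r TSVD reconstruction is a single low-rank matrix applied to b:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 64
    t = np.arange(n)
    A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
    U, s, Vt = np.linalg.svd(A)

    r = 15
    Z_tsvd = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # V_r Sigma_r^{-1} U_r^T

    xi = rng.standard_normal(n)
    b = A @ xi + 0.01 * rng.standard_normal(n)
    xi_tsvd = Z_tsvd @ b     # reconstruction = one matrix-vector product
    print(np.linalg.matrix_rank(Z_tsvd), xi_tsvd.shape)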
-
Bayes Risk Minimization
Suppose we
- treat ξ as a random variable with a given probability distribution
- treat δ as a random variable with a given probability distribution

Define the Bayes risk:

f(Z) = E_{\xi,\delta}\big(\rho((ZA - I_n)\xi + Z\delta)\big)

where E is the expected value.

Rank-constrained Bayes risk minimization problem:

\min_{\mathrm{rank}(Z) \le r} f(Z) = E_{\xi,\delta}\big(\rho((ZA - I_n)\xi + Z\delta)\big) \qquad (1)
-
Bayes Risk Minimization for ρ = ‖·‖₂²

\min_{\mathrm{rank}(Z) \le r} f(Z) = E_{\xi,\delta}\big(\|(ZA - I_n)\xi + Z\delta\|_2^2\big)

Assume
- ξ and δ are statistically independent
- μ_ξ = 0 (mean) and C_ξ = M_ξ M_ξᵀ (covariance) for P_ξ
- μ_δ = 0 (mean) and C_δ = η² I_m (covariance) for P_δ

Then

E_{\xi,\delta}\big(\|(ZA - I_n)\xi + Z\delta\|_2^2\big) = \|(ZA - I_n) M_\xi\|_F^2 + \eta^2 \|Z\|_F^2

In addition, if C_ξ = β² I_n,

E_{\xi,\delta}\big(\|(ZA - I_n)\xi + Z\delta\|_2^2\big) = \beta^2 \|ZA - I_n\|_F^2 + \eta^2 \|Z\|_F^2
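A quick Monte Carlo sanity check of the last identity, for C_ξ = β²I_n and C_δ = η²I_m (all names and sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 40, 30
    A = rng.standard_normal((m, n))
    Z = 0.1 * rng.standard_normal((n, m))
    beta, eta = 2.0, 0.5

    R = Z @ A - np.eye(n)
    closed = beta**2 * np.linalg.norm(R, "fro")**2 + eta**2 * np.linalg.norm(Z, "fro")**2

    N = 50_000
    Xi = beta * rng.standard_normal((n, N))       # xi ~ N(0, beta^2 I)
    Delta = eta * rng.standard_normal((m, N))     # delta ~ N(0, eta^2 I)
    mc = np.mean(np.sum((R @ Xi + Z @ Delta) ** 2, axis=0))
    print(closed, mc)   # agree up to Monte Carlo error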
-
A Theoretical Result

Theorem. Given a matrix A ∈ R^{m×n} of rank r ≤ n ≤ m and an invertible matrix M ∈ R^{n×n}, let their generalized singular value decomposition be A = UΣG⁻¹, M = GS⁻¹Vᵀ. Let η be a given parameter, nonzero if r < m. Let J ≤ r be a given positive integer. Define D = ΣS⁻²Σᵀ + η²I_m. Let the symmetric matrix H = GS⁻⁴ΣᵀD⁻¹ΣGᵀ have eigenvalue decomposition H = V̂ΛV̂ᵀ with eigenvalues ordered so that λ_j ≥ λ_i for j < i ≤ n. Then a global minimizer Ẑ ∈ R^{n×m} of the problem

\hat{Z} = \arg\min_{\mathrm{rank}(Z) \le J} \|(ZA - I_n)M\|_F^2 + \eta^2 \|Z\|_F^2

is

\hat{Z} = \hat{V}_J \hat{V}_J^T G S^{-2} \Sigma^T D^{-1} U^T,

where V̂_J contains the first J columns of V̂. Moreover, this Ẑ is the unique global minimizer if and only if λ_J > λ_{J+1}.
-
A Special Case
Theorem. A global minimizer Ẑ ∈ R^{n×m} of the problem

\hat{Z} = \arg\min_{\mathrm{rank}(Z) \le r} \|ZA - I_n\|_F^2 + \alpha^2 \|Z\|_F^2

is Ẑ = V_r Ψ_r U_rᵀ, where V_r contains the first r columns of V, U_r contains the first r columns of U, and

\Psi_r = \mathrm{diag}\!\left( \frac{\sigma_1}{\sigma_1^2 + \alpha^2}, \ldots, \frac{\sigma_r}{\sigma_r^2 + \alpha^2} \right).

Moreover, Ẑ is unique if and only if σ_r > σ_{r+1}.
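A sketch of this closed form on illustrative data; perturbing Ẑ within the rank-r feasible set should not decrease the objective:

    import numpy as np

    rng = np.random.default_rng(5)
    m, n, r, alpha = 40, 30, 10, 0.3
    A = rng.standard_normal((m, n))
    U, s, Vt = np.linalg.svd(A)

    psi = s[:r] / (s[:r] ** 2 + alpha**2)             # diagonal of Psi_r
    Z_hat = Vt[:r].T @ (psi[:, None] * U[:, :r].T)    # V_r Psi_r U_r^T

    def risk(Z):
        return (np.linalg.norm(Z @ A - np.eye(n), "fro") ** 2
                + alpha**2 * np.linalg.norm(Z, "fro") ** 2)

    Z_pert = Z_hat + 1e-3 * Vt[:r].T @ rng.standard_normal((r, m))  # still rank <= r
    print(risk(Z_hat), risk(Z_pert))   # risk(Z_hat) is the smaller value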
Remarks on the Bayes problem:
- The expected value is difficult to evaluate
- In real applications, P_ξ and P_δ are unknown
- No theory for cases with ρ ≠ ‖·‖₂²

Chung, Chung, and O'Leary (LAA 2014); Spantini et al. (arXiv 2014)
-
Empirical Bayes Risk Minimization

If P_ξ and P_δ are known → use Monte Carlo samples as training data
If P_ξ and P_δ are unknown but samples are given → use the samples as training data

Training data:
ξ^(1), ..., ξ^(K) - realizations of the random variable ξ
δ^(1), ..., δ^(K) - realizations of the random variable δ
b^(k) = Aξ^(k) + δ^(k), k = 1, ..., K

Empirical Bayes risk:

E_{\xi,\delta}\big(\rho((ZA - I_n)\xi + Z\delta)\big) \approx \frac{1}{K} \sum_{k=1}^{K} \rho(Zb^{(k)} - \xi^{(k)})

Rank-constrained empirical Bayes minimization problem:

\hat{Z} = \arg\min_{\mathrm{rank}(Z) \le r} \frac{1}{K} \sum_{k=1}^{K} \rho(Zb^{(k)} - \xi^{(k)})
-
Empirical Bayes Minimization Problem
Let Z be of rank r < min(m, n); then

Z = XY^T = \sum_{j=1}^{r} x_j y_j^T

where X = [x_1, ..., x_r] ∈ R^{n×r} and Y = [y_1, ..., y_r] ∈ R^{m×r}.

Rank-constrained problem:

(\hat{X}, \hat{Y}) = \arg\min_{X, Y} \frac{1}{K} \sum_{k=1}^{K} \rho(XY^T b^{(k)} - \xi^{(k)})
-
Special Case: 2-norm
Let B = [b^(1), b^(2), ..., b^(K)] and C = [ξ^(1), ξ^(2), ..., ξ^(K)]; then

\min_{\mathrm{rank}(Z) \le r} \frac{1}{K} \sum_{k=1}^{K} \|Zb^{(k)} - \xi^{(k)}\|_2^2 = \frac{1}{K} \|ZB - C\|_F^2 \qquad (2)

Theorem. Ẑ = P_r B† is a solution to the minimization problem (2), where V̂ is the matrix of right singular vectors of B and P = CV̂_s(V̂_s)ᵀ with s = rank(B). This solution is unique if and only if either r ≥ rank(P), or 1 ≤ r ≤ rank(P) and σ̂_r > σ̂_{r+1}, where σ̂_r and σ̂_{r+1} denote the r-th and (r+1)-st singular values of P.
Friedland and Torokhti (2007), Sondermann (1986), Chung and Chung (2013)
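A hedged sketch following the theorem's recipe, with P_r computed as the best rank-r approximation of P via a truncated SVD (the training data and sizes are synthetic and illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    n, K, r = 50, 200, 12
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)) / 3.0) ** 2)

    C = rng.standard_normal((n, K))                   # training signals xi^(k)
    B = A @ C + 0.05 * rng.standard_normal((n, K))    # observations b^(k)

    # P = C V_s V_s^T with V_s the right singular vectors of B, s = rank(B).
    Ub, sb, Vbt = np.linalg.svd(B, full_matrices=False)
    s_rank = int(np.sum(sb > sb[0] * 1e-12))
    Vs = Vbt[:s_rank].T
    P = C @ Vs @ Vs.T

    Up, sp, Vpt = np.linalg.svd(P, full_matrices=False)
    P_r = Up[:, :r] @ (sp[:r, None] * Vpt[:r])        # best rank-r approximation of P
    Z_hat = P_r @ np.linalg.pinv(B)                   # Z_hat = P_r B^dagger

    # A new inverse problem is then solved by a single matvec: Z_hat @ b.
    print(np.linalg.matrix_rank(Z_hat), np.linalg.norm(Z_hat @ B - C, "fro"))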
-
For general error measures and large-scale problems
Rank-ℓ update approach for computing a solution:

Initialize Ẑ = 0_{n×m}, X̂ = [ ], Ŷ = [ ]
while rank(Ẑ) < r
    • Compute matrices X̂_ℓ ∈ R^{n×ℓ} and Ŷ_ℓ ∈ R^{m×ℓ} such that
        (X̂_ℓ, Ŷ_ℓ) = arg min_{X ∈ R^{n×ℓ}, Y ∈ R^{m×ℓ}} (1/K) Σ_{k=1}^{K} ρ((Ẑ + XYᵀ)b^(k) − ξ^(k))
    • Update the inverse matrix approximation: Ẑ ← Ẑ + X̂_ℓ Ŷ_ℓᵀ
    • Update the solutions: X̂ ← [X̂, X̂_ℓ], Ŷ ← [Ŷ, Ŷ_ℓ]
end
Chung and Chung (Inverse Problems, 2014)
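A hedged Python sketch of this loop for ℓ = 1 and ρ = ‖·‖₂², solving each rank-1 subproblem by alternating least squares; this illustrates the scheme under those assumptions, not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(7)
    n, K, r = 50, 200, 10
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)) / 3.0) ** 2)
    C = rng.standard_normal((n, K))                   # training signals
    B = A @ C + 0.05 * rng.standard_normal((n, K))    # training observations

    Z = np.zeros((n, n))                              # here m = n
    for _ in range(r):                                # one rank-1 update per pass
        R = C - Z @ B                                 # residual of current approximation
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        for _ in range(30):                           # ALS on min ||x y^T B - R||_F
            w = B.T @ y
            x = R @ w / (w @ w)
            y = np.linalg.lstsq(B.T, R.T @ x / (x @ x), rcond=None)[0]
        Z += np.outer(x, y)                           # Z <- Z + x y^T
        print(np.linalg.norm(Z @ B - C, "fro"))       # non-increasing training error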
-
Empirical Bayes Problem
Remarks:
- Knowledge about the forward model can be incorporated but is not required
- An adequate amount of training data is needed
- Solve inverse problems with only a matrix-vector multiplication: Ẑb
- The framework allows more general error measures and more realistic probability distributions
-
Numerical Results: 1D Example

- 10,000 signal observations, length 150
- Gaussian blur, white noise level 0.01
- Compare methods:
  - TSVD-Â: estimate Â from training data, then use TSVD
  - TSVD-A: requires A
  - ORIM2: uses training data
[Figures: four example signals (true and blurred) vs. time]
-
Reconstruction Errors for Validation Data
[Figures: validation risk f_K(Z) vs. rank r for TSVD-Â, TSVD-A, and ORIM2; box plots of sample errors for each method]
-
Pareto Curves
[Figures: sample error on the training and validation sets vs. number of training signals (median error with 25th to 75th percentile band)]
-
Numerical Results: 2D Example
- 6,000 observations, 128 × 128
- Spatially invariant Gaussian blur, reflexive boundary conditions
- Gaussian white noise, levels ranging from 0.1 to 0.15
- Compare methods:
  - TSVD-Â: estimate Â from training data, then use TSVD
  - TSVD-A: requires A
  - ORIM2: uses training data
-
Reconstruction Errors for Validation Data
[Figures: validation risk f_K(Z) vs. rank r for TSVD-Â, TSVD-A, and ORIM2; density of sample errors for ORIM2 and TSVD-Â]
-
Reconstructed Images
[Figures: true and observed images, with reconstructions by ORIM2, TSVD-A, and TSVD-Â]
-
Conclusions
-
Concluding Remarks
New framework for solving inverse problems.

Bayes problem:
- Theoretical results can be derived for the 2-norm
- The Bayes problem provides insight

Empirical Bayes problem:
- Use training data to get
  - optimal spectral filters, for cases where A and its SVD are available
  - an optimal low-rank regularized inverse matrix, for cases where the forward model is unknown
- Incorporate probabilistic information
- Optimal filters and matrices can be computed off-line
- Reconstruction can be done efficiently and the quality is good
Thank you!!
-
Some References on Inverse Problems
Discrete Inverse Problems: Insights and Algorithms - Per Christian Hansen
Deblurring Images: Matrices, Spectra, and Filtering - Hansen, Nagy, and O'Leary
Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing - Calvetti and Somersalo
Computational Methods for Inverse Problems - Vogel
Introduction to Inverse Problems in Imaging - Bertero and Boccacci
Linear and Nonlinear Inverse Problems with Practical Applications - Mueller and Siltanen