Designing Optimal Spectral Filters and Low-Rank Matrices for Inverse Problems

Julianne Chung
Department of Mathematics, Virginia Tech

Joint work with:
Matthias Chung, Virginia Tech
Dianne O'Leary, University of Maryland



  • What is an inverse problem?

    Forward Model: Input Signal → Physical System → Output Signal

    Inverse Problem: Given the output signal and the forward model, recover the input signal

  • Discrete Linear Inverse Problem

    b = Aξ + δ

    where
    ξ ∈ R^n - unknown parameters
    A ∈ R^{m×n} - large, ill-conditioned matrix
    δ ∈ R^m - additive noise
    b ∈ R^m - observation

    Goal: Given b and A, compute an approximation of ξ
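As a sketch, the model b = Aξ + δ can be simulated numerically; the 1D Gaussian blurring matrix, the bump signal, and the noise level below are hypothetical choices, not the examples from the talk:

```python
import numpy as np

# Hypothetical test problem: A is a 1D Gaussian blurring matrix (forward model),
# xi is a smooth true signal, delta is additive white noise, b the observation.
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)        # normalize each row of the blur
xi = np.exp(-((t - 0.4) / 0.1) ** 2)     # unknown parameters (true signal)

rng = np.random.default_rng(0)
delta = 0.01 * rng.standard_normal(n)    # additive noise
b = A @ xi + delta                       # observation: b = A xi + delta

# A is ill-conditioned: its singular values decay toward zero
s = np.linalg.svd(A, compute_uv=False)
print(s[0] / s[-1])                      # very large condition number
```

The printed ratio illustrates the ill-conditioning that makes naive inversion fail.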

  • Application: Image Deblurring

    b = Aξ + δ

    Given: Blurred image b and some information about the blurring, A

    Goal: Compute an approximation of the true image, ξ

  • Application: Super-Resolution Imaging

    b_i = A(y_i) ξ + δ_i

    Given: low-resolution (LR) images. Stacking all frames,

    [b_1; ... ; b_m] = [A(y_1); ... ; A(y_m)] ξ + [δ_1; ... ; δ_m],  i.e.,  b = A(y) ξ + δ

    Goal: Improve the parameters y and approximate the high-resolution (HR) image

  • Application: Limited-Angle Tomography

    X-ray Imaging, Digital Tomosynthesis, Computed Tomography

  • Application: Tomosynthesis Reconstruction

    Given: 2D projection images

    Goal: Reconstruct a 3D volume

    b_i = Υ[A_i ξ] + δ_i

    where Υ[·] represents nonlinear, energy-dependent transmission tomography

  • What is an Ill-posed Inverse Problem?

    Hadamard (1923): A problem is ill-posed if the solution
    does not exist,
    is not unique, or
    does not depend continuously on the data.

    (Figures: true image; blurred and noisy image b from the forward problem; the naive inverse solution A^{-1} b, dominated by amplified noise)

  • Regularization

    Incorporate prior knowledge:
    1 Knowledge about the noise in the data
    2 Knowledge about the unknown solution

    Goals of this work:
    Incorporate probabilistic information in the form of training data
    Compute optimal regularization:
      Optimal Spectral Filters
      Optimal Low-Rank Inverse Matrices

  • Outline

    1 Designing Optimal Spectral Filters
      Background on Spectral Filters
      Computing Optimal Filters
      Numerical Results

    2 Designing Optimal Low-Rank Regularized Inverse Matrices
      Rank-Constrained Problem
      Bayes Problem: Theoretical Results
      Empirical Bayes Problem: Numerical Results

    3 Conclusions

  • Designing Optimal Spectral Filters


  • Designing Optimal Spectral Filters

    Regularization and Filtering

    Singular Value Decomposition: Let A = U Σ V^T, where

    Σ = diag(σ_1, σ_2, ..., σ_n), σ_1 ≥ σ_2 ≥ ··· ≥ σ_n ≥ 0, U^T U = I, V^T V = I

    For ill-posed inverse problems:
    Singular values σ_i decrease to and cluster at 0
    There is no gap separating large and small singular values
    Small singular values ⇒ highly oscillatory singular vectors

  • Designing Optimal Spectral Filters

    Discrete Picard Condition

    Inverse solution: A^{-1} b

    Investigate the behavior of:
    Singular values, σ_i
    Singular vectors, v_i
    SVD coefficients, u_i^T b
    Solution coefficients, u_i^T b / σ_i

    Two toy problems: 1D deconvolution, gravity surveying

    Hansen, Discrete Inverse Problems (2010)

  • Designing Optimal Spectral Filters

    SVD Analysis

    The naive inverse solution:

    ξ = A^{-1} b = V Σ^{-1} U^T b = ∑_{i=1}^n (u_i^T b / σ_i) v_i

  • Designing Optimal Spectral Filters

    SVD Analysis

    The naive inverse solution with noisy data:

    ξ̂ = A^{-1} (b + δ) = V Σ^{-1} U^T (b + δ)
      = ∑_{i=1}^n (u_i^T (b + δ) / σ_i) v_i
      = ∑_{i=1}^n (u_i^T b / σ_i) v_i + ∑_{i=1}^n (u_i^T δ / σ_i) v_i
      = ξ + error
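The noise amplification in the last sum can be seen directly: dividing u_i^T δ by tiny σ_i blows up the error. A minimal sketch, using a hypothetical 1D Gaussian blur as the test problem:

```python
import numpy as np

# Sketch of noise amplification: the naive inverse divides u_i^T delta by
# small sigma_i. The 1D Gaussian blur test problem here is hypothetical.
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)
xi = np.exp(-((t - 0.5) / 0.1) ** 2)

rng = np.random.default_rng(1)
delta = 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
coeffs = (U.T @ (A @ xi + delta)) / s    # u_i^T (b + delta) / sigma_i
xi_naive = Vt.T @ coeffs                 # naive inverse solution = xi + error

print(np.linalg.norm(delta))             # small noise in the data ...
print(np.linalg.norm(xi_naive - xi))     # ... enormously amplified in the solution
```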

  • Designing Optimal Spectral Filters

    Regularization via Spectral Filtering

    Filtered solution:

    ξ_filter = ∑_{i=1}^n φ_i (u_i^T b / σ_i) v_i = V C Σ^{-1} φ

    φ_i - filter factors
    φ ∈ R^n contains the φ_i
    C = diag(U^T b)

    (Plot: filter factors versus singular values for the TSVD and Tikhonov filters)
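The filtered solution can be sketched for the TSVD and Tikhonov filter factors; the blur test problem and the parameter values (truncation level 15, α = 0.01) are hypothetical illustrations:

```python
import numpy as np

# Filtered solution xi_filter = sum_i phi_i (u_i^T b / sigma_i) v_i for the
# TSVD and Tikhonov filter factors; test problem and parameters hypothetical.
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)
xi = np.exp(-((t - 0.5) / 0.1) ** 2)
rng = np.random.default_rng(2)
b = A @ xi + 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b                                        # SVD coefficients u_i^T b

phi_tsvd = (np.arange(1, n + 1) <= 15).astype(float)  # keep the 15 largest modes
phi_tik = s**2 / (s**2 + 0.01**2)                     # sigma_i^2/(sigma_i^2 + alpha^2)

xi_tsvd = Vt.T @ ((phi_tsvd / s) * beta)              # TSVD filtered solution
xi_tik = Vt.T @ ((phi_tik / s) * beta)                # Tikhonov filtered solution

print(np.linalg.norm(xi_tsvd - xi) / np.linalg.norm(xi))
print(np.linalg.norm(xi_tik - xi) / np.linalg.norm(xi))
```

Both filters damp the components with small σ_i, so the relative errors stay small where the naive inverse diverges.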

  • Designing Optimal Spectral Filters

    Some Filter Representations

    Truncated SVD (TSVD):
    φ_i^tsvd(α) = 1 if i ≤ α, 0 otherwise,  α ∈ {1, ..., n}

    Tikhonov filter:
    φ_i^tik(α) = σ_i^2 / (σ_i^2 + α^2),  α ∈ R

    Error filter:
    φ_i^err(α) = α_i,  α ∈ R^n

    Spline filter:
    φ_i^spl(α) = s(τ, α; σ_i),  α ∈ R^ℓ, ℓ < n

  • Designing Optimal Spectral Filters

    How to choose α?

    Previous approaches (1 parameter):
    Discrepancy Principle
    Generalized Cross-Validation (GCV)
    L-Curve

    Our approach to compute optimal parameters:
    Stochastic programming formulation to incorporate probabilistic information
    Use training data and numerical optimization to minimize errors

    Shapiro, Dentcheva, Ruszczynski. SIAM, 2009.
    Vapnik. Wiley & Sons, 1998.
    Tenorio. SIAM Review, 2006.
    Horesh, Haber, Tenorio. Inverse Problems, 2008.

  • Designing Optimal Spectral Filters

    Some Assumptions

    Suppose we have
    a set of possible signals Ξ ⊆ R^n
    a set of possible noise samples ∆ ⊆ R^m

    Select
    a signal ξ ∈ Ξ according to probability distribution P_ξ
    a noise sample δ ∈ ∆ according to probability distribution P_δ

    Inverse Problem: determine ξ given A and b, where

    b = Aξ + δ

  • Designing Optimal Spectral Filters

    How to define error?

    Error vector: e(α, ξ, δ) = ξ_filter(α, ξ, δ) − ξ

    Error function: ρ : R^n → R_0^+

    Error: err(α, ξ, δ) = ρ(e(α, ξ, δ))

    For example, ρ(x) = ‖x‖_p^p gives err(α, ξ, δ) = ‖e(α, ξ, δ)‖_p^p

    Goal: Find optimal parameters α that minimize the average error over the set of signals and noise

  • Designing Optimal Spectral Filters

    Bayes Risk Minimization

    Ideally, use the optimal filter φ(α̌), where

    α̌ = arg min_α E_{ξ,δ}{ err(α, ξ, δ) }

    Remarks:
    Minimizing the expected value is difficult
    Use a Monte Carlo sampling approach

  • Designing Optimal Spectral Filters

    Empirical Bayes Risk Minimization

    If P_ξ and P_δ are known → use Monte Carlo samples as training data
    If P_ξ and P_δ are unknown but samples are given → use the samples as training data

    Training data:
    ξ^(1), ..., ξ^(K) realizations of the random variable ξ
    δ^(1), ..., δ^(K) realizations of the random variable δ
    b^(k) = A ξ^(k) + δ^(k), k = 1, ..., K

    Empirical Bayes risk:

    E_{ξ,δ}{ err(α, ξ, δ) } ≈ (1/K) ∑_{k=1}^K ρ(e^(k)(α))

    Optimal filter φ(α̂), where

    α̂ = arg min_α (1/K) ∑_{k=1}^K ρ(e^(k)(α))
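For the one-parameter Tikhonov filter, the empirical Bayes risk can be minimized by a simple parameter sweep. A sketch with a hypothetical training distribution (random Gaussian bumps), not the data from the talk:

```python
import numpy as np

# Sketch of empirical Bayes risk minimization for the Tikhonov filter: pick
# alpha-hat minimizing (1/K) sum_k ||e^(k)(alpha)||_2^2 over training pairs.
n, K = 64, 50
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(A)

rng = np.random.default_rng(3)
centers = rng.uniform(0.2, 0.8, K)
Xi = np.exp(-((t[:, None] - centers[None, :]) / 0.1) ** 2)  # columns xi^(k)
B = A @ Xi + 1e-3 * rng.standard_normal((n, K))             # b^(k) = A xi^(k) + delta^(k)

def risk(alpha):
    """Empirical Bayes risk (1/K) sum_k ||xi_tik(alpha; b^(k)) - xi^(k)||_2^2."""
    phi = s**2 / (s**2 + alpha**2)
    E = Vt.T @ ((phi / s)[:, None] * (U.T @ B)) - Xi        # columns e^(k)(alpha)
    return np.mean(np.sum(E**2, axis=0))

alphas = np.logspace(-5, 0, 60)
risks = np.array([risk(a) for a in alphas])
alpha_hat = alphas[risks.argmin()]
print(alpha_hat)
```

The risk curve is large for tiny α (noise amplification) and for large α (over-smoothing), so the minimizer lies in between; in the talk's framework this sweep is replaced by the optimization methods listed on the following slide.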

  • Designing Optimal Spectral Filters

    Suggested Optimization Methods

    error function \ filter   tsvd   tik    spl    err
    Huber                     GSS    GN     GN     GN
    1 < p < 2 (smoothing)     GSS    GN     GN     GN
    p = 2                     GSS    GN     GN     LS
    p > 2                     GSS    GN     GN     GN
    p = ∞, p = 1              GSS    IPM-N  IPM-N  IPM-L

    GSS - discrete golden section search algorithm
    GN - Gauss-Newton method
    LS - linear least squares system
    IPM-N / IPM-L - interior point methods for nonlinear / linear problems

  • Designing Optimal Spectral Filters

    Gauss-Newton Optimization

    ξ_filter^(k) = V C^(k) Σ^{-1} φ

    Jacobian:

    J = (1/K) [J^(1); ... ; J^(K)],  where J^(k) = V C^(k) Σ^{-1} φ_α
    φ_α - partial derivatives of φ with respect to α

    Gradient and GN Hessian:

    g = (1/K) ∑_{k=1}^K (J^(k))^T g^(k),  H = (1/K) ∑_{k=1}^K (J^(k))^T D^(k) J^(k)

    g^(k) and D^(k) contain information regarding derivatives of ρ

  • Designing Optimal Spectral Filters

    Special Case: 2-norm

    α̂ = arg min_α (1/(2K)) ∑_{k=1}^K ‖e^(k)‖_2^2

    1 If δ ∼ N(0, β_δ I) and φ_i^err(α) = α_i, we approximate the Wiener filter
    2 If, in addition, ξ ∼ N(0, β_ξ I), we get the Tikhonov filter with α = β_δ / β_ξ

  • Designing Optimal Spectral Filters

    1D Deconvolution

    (Figures: convolution kernel, columns used for training, and blurred image)

  • Designing Optimal Spectral Filters

    Compare to Standard Approaches

    600 training signals
    2800 validation signals
    Noise level: 0.001 − 0.01

    For each validation signal, reconstruct with:
    1 opt-Tik
    2 opt-error
    3 Tik-GCV
    4 Tik-MSE

    (Box-and-whisker plot of the errors for opt-Tik, opt-error, Tik-GCV, and Tik-MSE)

  • Designing Optimal Spectral Filters

    Corresponding Error Images

    (Error images for opt-Tik, opt-error, Tik-GCV, and Tik-MSE)

  • Designing Optimal Spectral Filters

    Pareto Curve

    (Plot: average RRE versus number of training signals for opt-Tik-SVD, opt-error-SVD, and opt-Tik-GSVD)

    min_ξ ‖Aξ − b‖_2^2 + λ^2 ‖Lξ‖_2^2,  with L the bidiagonal first-difference operator with rows (1, −1)

  • Designing Optimal Spectral Filters

    2D Deconvolution

    (Training and validation images)

    800 training images
    800 validation images
    Gaussian point spread function
    Noise level: 0.1 − 0.15

    Filter, φ(α): opt-TSVD, opt-Tik, opt-spline, opt-error, smooth
    Error function, ρ(z): Huber function, 2-norm, 4-norm

  • Designing Optimal Spectral Filters

    Numerical Results

    (Box plots of reconstruction errors for opt-TSVD, opt-Tik, opt-spline, opt-error, and smooth under the Huber-function, 2-norm, and 4-norm error measures)

  • Designing Optimal Spectral Filters

    Filter Factors and Error Images

    (Plots: filter factors versus singular values for opt-error, opt-TSVD, opt-Tik, and opt-spline under the Huber-function, 2-norm, and 4-norm error measures, with the corresponding error images)

  • Designing Optimal Spectral Filters

    Summary for Optimal Filters

    Computing good regularization parameters can be difficult
    Use training data to get optimal parameters/filters
    Different error measures and filter representations can be used
    Optimal filters can be computed off-line

    Chung, Chung, O'Leary. SISC, 2011.
    Chung, Chung, O'Leary. JMIV, 2012.
    Chung, Español, Nguyen. ArXiv, 2014.

  • Designing Optimal Low-Rank Regularized Inverse Matrices


  • Designing Optimal Low-Rank Regularized Inverse Matrices

    What is an optimal regularized inverse matrix (ORIM)?

    Let Z ∈ R^{n×m} be a reconstruction matrix

    Error vector: Zb − ξ
    Error function: ρ : R^n → R_0^+
    Error: err(Z) = ρ(Zb − ξ) = ρ(Z(Aξ + δ) − ξ) = ρ((ZA − I_n)ξ + Zδ)

    For example, ρ(y) = ‖y‖_p^p gives err(Z) = ‖Zb − ξ‖_p^p

    Goal: Find a regularized inverse matrix Z ∈ R^{n×m} that minimizes

    min_Z ρ((ZA − I_n)ξ + Zδ)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Rank-Constrained Problem

    Basic idea: enforce the ORIM Z to be low-rank

    Rank-constrained problem:

    arg min_{rank(Z) ≤ r} ρ((ZA − I_n)ξ + Zδ)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Why does a low-rank inverse approximation make sense?

    The rank-r truncated SVD (TSVD) solution can be written as

    ξ_TSVD = ∑_{i=1}^r (u_i^T b / σ_i) v_i = V_r Σ_r^{-1} U_r^T b,

    where V_r and U_r contain the first r columns of V and U, respectively, and Σ_r is the r × r principal submatrix of Σ

    (Matlab demo)
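A small numerical check (with a hypothetical random test matrix, not the talk's demo) that Z_r = V_r Σ_r^{-1} U_r^T is itself a rank-r reconstruction matrix reproducing the TSVD solution:

```python
import numpy as np

# The rank-r TSVD reconstruction matrix Z_r = V_r Sigma_r^{-1} U_r^T:
# applying it to b reproduces the TSVD solution. Test matrix is hypothetical.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 30))
b = rng.standard_normal(40)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 10
Zr = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T   # rank-r inverse matrix
xi_tsvd = Zr @ b

# identical to the filtered sum  sum_{i<=r} (u_i^T b / sigma_i) v_i
xi_sum = sum((U[:, i] @ b / s[i]) * Vt[i] for i in range(r))
print(np.allclose(xi_tsvd, xi_sum))   # True
```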

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Bayes Risk Minimization

    Suppose we
    treat ξ as a random variable with a given probability distribution
    treat δ as a random variable with a given probability distribution

    Define the Bayes risk

    f(Z) = E_{ξ,δ}(ρ((ZA − I_n)ξ + Zδ)),

    where E is the expected value.

    Rank-constrained Bayes risk minimization problem:

    min_{rank(Z) ≤ r} f(Z) = E_{ξ,δ}(ρ((ZA − I_n)ξ + Zδ))   (1)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Bayes Risk Minimization for ρ = ‖·‖_2^2

    min_{rank(Z) ≤ r} f(Z) = E_{ξ,δ}(‖(ZA − I_n)ξ + Zδ‖_2^2)

    Assume:
    ξ and δ are statistically independent
    μ_ξ = 0 (mean) and C_ξ = M_ξ M_ξ^T (covariance) for P_ξ
    μ_δ = 0 (mean) and C_δ = η^2 I_m (covariance) for P_δ

    Then

    E_{ξ,δ}(‖(ZA − I_n)ξ + Zδ‖_2^2) = ‖(ZA − I_n) M_ξ‖_F^2 + η^2 ‖Z‖_F^2

    In addition, if C_ξ = β^2 I_n,

    E_{ξ,δ}(‖(ZA − I_n)ξ + Zδ‖_2^2) = β^2 ‖ZA − I_n‖_F^2 + η^2 ‖Z‖_F^2

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    A Theoretical Result

    Theorem. Given a matrix A ∈ R^{m×n} of rank r ≤ n ≤ m and an invertible matrix M ∈ R^{n×n}, let their generalized singular value decomposition be A = U Σ G^{-1}, M = G S^{-1} V^T. Let η be a given parameter, nonzero if r < m. Let J ≤ r be a given positive integer. Define D = Σ S^{-2} Σ^T + η^2 I_m. Let the symmetric matrix H = G S^{-4} Σ^T D^{-1} Σ G^T have eigenvalue decomposition H = V̂ Λ V̂^T with eigenvalues ordered so that λ_j ≥ λ_i for j < i ≤ n. Then a global minimizer Ẑ ∈ R^{n×m} of the problem

    Ẑ = arg min_{rank(Z) ≤ J} ‖(ZA − I_n)M‖_F^2 + η^2 ‖Z‖_F^2

    is

    Ẑ = V̂_J V̂_J^T G S^{-2} Σ^T D^{-1} U^T,

    where V̂_J contains the first J columns of V̂. Moreover, this Ẑ is the unique global minimizer if and only if λ_J > λ_{J+1}.

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    A Special Case

    Theorem. A global minimizer Ẑ ∈ R^{n×m} of the problem

    Ẑ = arg min_{rank(Z) ≤ r} ‖ZA − I_n‖_F^2 + α^2 ‖Z‖_F^2

    is Ẑ = V_r Ψ_r U_r^T, where V_r contains the first r columns of V, U_r contains the first r columns of U, and Ψ_r = diag(σ_1/(σ_1^2 + α^2), ..., σ_r/(σ_r^2 + α^2)). Moreover, Ẑ is unique if and only if σ_r > σ_{r+1}.

    Remarks on the Bayes problem:
    The expected value is difficult to evaluate
    In real applications, P_ξ and P_δ are unknown
    No theory for cases with ρ ≠ ‖·‖_2^2

    Chung, Chung, and O'Leary (LAA 2014); Spantini et al. (ArXiv 2014)
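One sanity check of the special case (my own, on a hypothetical random matrix): with no rank constraint active, i.e. r = n, the minimizer V_r Ψ_r U_r^T coincides with the classical Tikhonov regularized inverse (A^T A + α^2 I)^{-1} A^T:

```python
import numpy as np

# Check: with full rank r = n, Z-hat = V_r Psi_r U_r^T equals the Tikhonov
# inverse (A^T A + alpha^2 I)^{-1} A^T. Test matrix is hypothetical.
rng = np.random.default_rng(4)
m, n, alpha = 25, 15, 0.5
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

Psi = np.diag(s / (s**2 + alpha**2))            # diag(sigma_i/(sigma_i^2+alpha^2))
Z_hat = Vt.T @ Psi @ U.T                        # ORIM with r = n
Z_tik = np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T)
print(np.allclose(Z_hat, Z_tik))   # True
```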

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Empirical Bayes Risk Minimization

    If P_ξ and P_δ are known → use Monte Carlo samples as training data
    If P_ξ and P_δ are unknown but samples are given → use the samples as training data

    Training data:
    ξ^(1), ..., ξ^(K) realizations of the random variable ξ
    δ^(1), ..., δ^(K) realizations of the random variable δ
    b^(k) = A ξ^(k) + δ^(k), k = 1, ..., K

    Empirical Bayes risk:

    E_{ξ,δ}(ρ((ZA − I_n)ξ + Zδ)) ≈ (1/K) ∑_{k=1}^K ρ(Z b^(k) − ξ^(k))

    Rank-constrained empirical Bayes minimization problem:

    Ẑ = arg min_{rank(Z) ≤ r} (1/K) ∑_{k=1}^K ρ(Z b^(k) − ξ^(k))

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Empirical Bayes Minimization Problem

    Let Z be of rank r < min(m, n); then

    Z = X Y^T = ∑_{j=1}^r x_j y_j^T

    where X = [x_1, ..., x_r] ∈ R^{n×r} and Y = [y_1, ..., y_r] ∈ R^{m×r}

    Rank-constrained problem:

    (X̂, Ŷ) = arg min_{X,Y} (1/K) ∑_{k=1}^K ρ(X Y^T b^(k) − ξ^(k))

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Special Case: 2-norm

    Let B = [b^(1), b^(2), ..., b^(K)] and C = [ξ^(1), ξ^(2), ..., ξ^(K)]; then

    min_{rank(Z) ≤ r} (1/K) ∑_{k=1}^K ‖Z b^(k) − ξ^(k)‖_2^2 = (1/K) ‖ZB − C‖_F^2   (2)

    Theorem. Ẑ = P_r B† is a solution to the minimization problem (2), where Ṽ is the matrix of right singular vectors of B, P = C Ṽ_s (Ṽ_s)^T with s = rank(B), and P_r denotes a best rank-r approximation of P. This solution is unique if and only if either r ≥ rank(P), or 1 ≤ r ≤ rank(P) and σ̄_r > σ̄_{r+1}, where σ̄_r and σ̄_{r+1} denote the r-th and (r+1)-st singular values of P.

    Friedland and Torokhti (2007); Sondermann (1986); Chung and Chung (2013)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    For General Error Measures and Large-Scale Problems

    Rank-ℓ update approach for computing a solution:

    Initialize Ẑ = 0_{n×m}, X̂ = [ ], Ŷ = [ ]
    while rank(Ẑ) < r
      • Compute matrices X̂_ℓ ∈ R^{n×ℓ} and Ŷ_ℓ ∈ R^{m×ℓ} such that
        (X̂_ℓ, Ŷ_ℓ) = arg min_{X ∈ R^{n×ℓ}, Y ∈ R^{m×ℓ}} (1/K) ∑_{k=1}^K ρ((Ẑ + X Y^T) b^(k) − ξ^(k))
      • Update the inverse matrix approximation: Ẑ ← Ẑ + X̂_ℓ Ŷ_ℓ^T
      • Update the solutions: X̂ ← [X̂, X̂_ℓ], Ŷ ← [Ŷ, Ŷ_ℓ]
    end

    Chung and Chung (Inverse Problems, 2014)
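A minimal sketch of the rank-ℓ update idea for ℓ = 1 and ρ = ‖·‖_2^2: each pass fits one rank-1 term x y^T to the current training residual by alternating exact least-squares steps. This is my own simplified stand-in for the subproblem solver (the paper's method handles general ρ); all data are hypothetical:

```python
import numpy as np

# Greedy rank-1 updates for the empirical 2-norm objective (1/K)||Z B - C||_F^2,
# each fitted by alternating least squares. Training data are hypothetical.
rng = np.random.default_rng(6)
n, m, K, r = 15, 20, 80, 4
A = rng.standard_normal((m, n))
C = rng.standard_normal((n, K))                  # columns xi^(k)
B = A @ C + 0.01 * rng.standard_normal((m, K))   # columns b^(k)

BBt = B @ B.T
Z = np.zeros((n, m))
errors = []
for _ in range(r):                               # while rank(Z) < r
    R = C - Z @ B                                # residual on the training set
    y = rng.standard_normal(m)
    for _ in range(25):                          # alternating least squares
        q = B.T @ y
        x = R @ q / (q @ q)                      # optimal x for fixed y
        y = np.linalg.solve(BBt, B @ (R.T @ x)) / (x @ x)  # optimal y for fixed x
    Z = Z + np.outer(x, y)                       # rank-1 update: Z <- Z + x y^T
    errors.append(np.linalg.norm(Z @ B - C))

print(errors)                                    # training error decreases
```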

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Empirical Bayes Problem

    Remarks:
    Knowledge about the forward model can be incorporated, but is not required
    An adequate number of training data is needed
    Inverse problems are solved with only a matrix-vector multiplication: Ẑb
    The framework allows more general error measures and more realistic probability distributions

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Numerical Results: 1D Example

    10,000 signal observations, length 150
    Gaussian blur, white noise level 0.01

    Compare methods:
    TSVD-Â: estimate Â from training data, then use TSVD
    TSVD-A: requires A
    ORIM2: uses training data

    (Plots: four true and blurred validation signals)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Reconstruction Errors for Validation Data

    (Plots: f_K(Z) versus rank r, and sample errors, for TSVD-Â, TSVD-A, and ORIM2)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Pareto Curves

    (Plots: sample error for the training and validation sets versus number of training signals, showing the median error and the 25th-75th percentile band)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Numerical Results: 2D Example

    6,000 observations, 128 × 128
    Spatially invariant Gaussian blur, reflexive boundary conditions
    Gaussian white noise, levels ranging from 0.1 to 0.15

    Compare methods:
    TSVD-Â: estimate Â from training data, then use TSVD
    TSVD-A: requires A
    ORIM2: uses training data

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Reconstruction Errors for Validation Data

    (Plots: f_K(Z) versus rank r for TSVD-Â, TSVD-A, and ORIM2; sample-error densities for ORIM2 and TSVD-Â)

  • Designing Optimal Low-Rank Regularized Inverse Matrices

    Reconstructed Images

    (True and observed images with the ORIM2, TSVD-A, and TSVD-Â reconstructions)

  • Conclusions


  • Conclusions

    Concluding Remarks

    A new framework for solving inverse problems.

    Bayes problem:
    Theoretical results can be derived for the 2-norm
    The Bayes problem provides insight

    Empirical Bayes problem:
    Use training data to get
      optimal spectral filters - for cases where A and its SVD are available
      an optimal low-rank regularized inverse matrix - for cases where the forward model is unknown

    Incorporate probabilistic information
    Optimal filters and matrices can be computed off-line
    Reconstruction can be done efficiently, and the quality is good

    Thank you!!

  • Conclusions

    Some References on Inverse Problems

    Discrete Inverse Problems: Insights and Algorithms - Per Christian Hansen
    Deblurring Images: Matrices, Spectra, and Filtering - Hansen, Nagy, and O'Leary
    Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing - Calvetti and Somersalo
    Computational Methods for Inverse Problems - Vogel
    Introduction to Inverse Problems in Imaging - Bertero and Boccacci
    Linear and Nonlinear Inverse Problems with Practical Applications - Mueller and Siltanen