
7/28/2019 Comment on An Iterative Method of Imaging

arXiv:astro-ph/9812359v1 18 Dec 1998
    Comment on An iterative method of imaging

    D. D. Dixon

    University of California

    Institute of Geophysics and Planetary Physics

    Riverside, CA 92521

    [email protected]

    June 29, 2011

    Abstract

Image processing is an increasingly important aspect of the analysis of data from X- and $\gamma$-ray astrophysics missions. In this paper, I review a method proposed by Kebede (L. W. Kebede 1994, ApJ, 423, 878), and point out an error in the derivation of this method. It turns out that this error is not debilitating; the method still works in some sense, but as published is rather sub-optimal, as demonstrated both on theoretical grounds and via a set of examples.

    1 Introduction

Image processing techniques are increasingly being employed to aid in the interpretation of data from X- and $\gamma$-ray astrophysics missions, both to suppress noise from low photon statistics and to invert instrumental responses when required. An excellent example of this is for Compton telescopes, such as COMPTEL ([Schönfelder et al. 1993]), where directional information of detected photons has a complex relationship to the measured quantities, source count rates are relatively low, and background is high.

In Kebede (1994) (hereafter referred to as K94), a method is presented for unfolding data from the instrumental response. Further examination of K94 reveals certain mathematical and conceptual errors. I will review and correct these in this paper, show specific examples of the application of the proposed method, and compare with other similar simple algorithms. Some claims of Kebede (1994) are discussed, and those which are incorrect are noted. Interestingly, the claim that "The method totally eliminates the possibility of any error amplification" ([Kebede 1994]), while misleading, turns out to be true. However, we also show that while the method of K94 may not amplify noise in the data, it also does nothing to suppress noise, rather passing it through to the estimate. Whether or not this is considered a useful property may be a function of one's particular application, though it seems for most scientific studies that some suppression of the noise (easily accomplished) is desirable.

To facilitate direct comparison with K94, we shall adopt its somewhat non-standard notation and nomenclature. The transpose of a matrix $R$ (termed the "converse" by Kebede) is denoted with a tilde, i.e., $\tilde{R} \equiv R^T$. The matrix factors from the singular value decomposition of $R = USV^T$ are instead given as $A$, $\Sigma$, $B$.

    2 Mathematical Formalism

The problem under consideration is essentially that of inverting a discretized version of a linear integral equation, given an instrument response kernel and data perturbed by noise. This may be formulated as a matrix equation

$$ Rs \approx d \qquad (1) $$


where $R$ is an $M \times N$ matrix describing the discretized instrument response, $d$ is an $M$-vector representing the binned data, and $s$ is an $N$-vector representing the source field flux distribution, usually specified as a set of pixels (note that K94 uses $S$ and $D$; we have changed to lower case to avoid confusion between matrices and vectors). The approximate equality is a result of noise. If $M \le N$, then equality can potentially be achieved, but this is not usually a desirable goal, for reasons which are generally well-known ([Groetsch 1984]). In particular, this leads to a serious overfit of the data, accounting for every little feature whether simply noise or not.
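As an illustrative sketch (not from K94), the discretized system of eqn. 1 can be set up for a hypothetical one-dimensional "instrument"; the Gaussian-blur response, its width, and the noise level below are all invented for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: N source pixels seen through M data bins via a
# Gaussian-blur response R, standing in for a real instrument response.
M, N = 24, 16
rows = np.arange(M)[:, None] * (N - 1) / (M - 1)  # map data-bin centers onto the pixel axis
cols = np.arange(N)[None, :]
R = np.exp(-0.5 * ((rows - cols) / 1.5) ** 2)     # M x N discretized response

s_true = np.zeros(N)
s_true[5] = 1.0                                   # a single unit point source
d = R @ s_true + 0.01 * rng.standard_normal(M)    # data: Rs plus noise

print(np.linalg.norm(R @ s_true - d))             # small residual: the "approximately" in eqn. 1
```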

The approach in K94 is to find constants $\sigma_i$ and vectors $a_i$ and $b_i$ such that

$$ R b_i = \sigma_i a_i $$
$$ \tilde{R} a_i = \sigma_i b_i. \qquad (2) $$

Though K94 never refers to it as such, eqs. 2 define the singular value decomposition (SVD) of the matrix $R$, with $\sigma_i$ being the singular values, $a_i$ the left singular vectors, and $b_i$ the right singular vectors. The $a_i$ form a complete orthonormal basis, as do the $b_i$. K94 uses the rather confusing term "eigenvalues" for the $\sigma_i$, as opposed to the more standard "singular values". The $\sigma_i$ are the square roots of the eigenvalues of $\tilde{R}R$, but these are not the eigenvalues of $R$, which are undefined for $M \neq N$.

K94 suggests least-squares solution of eqn. 1, given schematically by

$$ s = R^{-1} d. \qquad (3) $$

Note that this makes no reference to the data statistics. Data bins with low expected counts will receive equal weight as those with high expected counts. We address this point later. For non-square $R$ the standard matrix inverse is not defined; however, one can use the generalized left inverse ([Campbell & Meyer 1979]). Equation 2 implies $R = A\Sigma\tilde{B}$, with $A$ and $B$ orthonormal matrices whose columns are the $a_i$ and $b_i$ respectively, and $\Sigma = \mathrm{diag}(\sigma_i)$. As is well-known ([Campbell & Meyer 1979]), the generalized left inverse of $R$ is given by

$$ R^{-1}_L = B \Sigma^{-1} \tilde{A}. \qquad (4) $$

Substituting this in eqn. 3 yields the least-squares solution for $s$, i.e., the $s$ which minimizes $||Rs - d||^2$.

If $R$ is singular, the least-squares solution is not unique, and $\Sigma$ contains zero elements on the diagonal. The SVD inverse is computed by setting $\Sigma^{-1} = \mathrm{diag}(1/\sigma_i)$ for $\sigma_i \neq 0$, and 0 otherwise. The SVD inverse then chooses the solution which minimizes the Euclidean norm of $s$ (given by $||s||_2$).
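A minimal sketch of the generalized left inverse of eqn. 4, with the zero-singular-value convention just described (the tolerance is an arbitrary numerical choice, not from the references):

```python
import numpy as np

def left_inverse(R, tol=1e-12):
    """Generalized left inverse B Sigma^{-1} A~ of eqn. 4, zeroing 1/sigma_i
    for (numerically) zero singular values, as described in the text."""
    U, sigma, Vt = np.linalg.svd(R, full_matrices=False)
    safe = np.where(sigma > 0, sigma, 1.0)                  # avoid division by zero
    inv_sigma = np.where(sigma > tol * sigma.max(), 1.0 / safe, 0.0)
    return Vt.T @ np.diag(inv_sigma) @ U.T

rng = np.random.default_rng(2)
R = rng.standard_normal((24, 16))
# For a full-rank matrix this coincides with the Moore-Penrose pseudoinverse.
assert np.allclose(left_inverse(R), np.linalg.pinv(R))
```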

Linear inverse theory tells us that the solution of eqn. 1 is almost inevitably unstable for the types of systems we encounter experimentally ([Groetsch 1984]). This instability is related to the fact that our instrument has finite resolving power, and nearby pixels may have a high degree of potential confusion. The singular value spectrum reflects this, generally decaying rapidly, and such matrices are termed "ill-conditioned" ([Golub & Van Loan 1989]). From eqn. 4 we see that a small singular value makes a large contribution to the generalized inverse, and this tends to lead to noise amplification. To reduce these effects, one must "regularize" the inverse, which for our purposes means suppressing the effects of small singular values. This is the claim for the method of K94 which, as it turns out, actually achieves this in an odd fashion.
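The noise amplification can be seen directly in a toy example (all numbers here are invented): a strongly blurring response has rapidly decaying singular values, and the unregularized inverse multiplies the noise by their reciprocals:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
x = np.arange(N)
# A strongly blurring "instrument": Gaussian PSF of width 3 pixels.
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
R /= R.sum(axis=1, keepdims=True)

sigma = np.linalg.svd(R, compute_uv=False)
print(sigma[0] / sigma[-1])                     # enormous condition number

s_true = np.zeros(N)
s_true[16] = 1.0
d = R @ s_true + 1e-3 * rng.standard_normal(N)  # small noise on the data
s_hat = np.linalg.pinv(R) @ d                   # unregularized inverse

print(np.abs(s_hat).max())                      # large oscillations: noise amplified
```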

    3 The Method

The derivation of K94 is limited to the case $M \ge N$ and non-singular $R$, for which the solution of eqn. 1 is

$$ s = (\tilde{R}R)^{-1} \tilde{R} d. \qquad (5) $$

K94 then notes that $s$ and $\tilde{R}d$ can be related via a diagonal matrix ($s$ and $\tilde{R}d$ are both $N$-vectors). Note that this is not the matrix $(\tilde{R}R)^{-1}$, which is very definitely not diagonal for the cases of interest. To be more specific, one can relate $s$ and $\tilde{R}d$ via a diagonal matrix, but this matrix necessarily depends on $R$ and $d$. If we denote this matrix as $T$, then a little algebra shows that for $s$ given by eqn. 5,

$$ T_{kk} = \frac{s_k}{(\tilde{R}d)_k} = \frac{((\tilde{R}R)^{-1}\tilde{R}d)_k}{(\tilde{R}d)_k}. \qquad (6) $$

Thus, finding $T$ and finding $s$ are equivalent operations when estimating the unregularized inverse. K94


asserts that $T$ is non-negative, but this will not be true in general, since the $s$ given by eqn. 5 is not non-negative. In fact, the noise amplification due to small singular values guarantees large positive and negative oscillations, which is exactly the problem one is trying to mitigate. As we shall see below, eqn. 6 suggests an iteration for estimating $T$ (or $s$) which under certain conditions does force $T$ ($s$) to be non-negative, but in this case $s$ is no longer the least-squares solution of eqn. 5.
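The equivalence expressed by eqn. 6 is easy to verify numerically. In this sketch (with an arbitrary random $R$ and $d$, chosen purely for illustration), the diagonal $T$ is built from the least-squares $s$ and reproduces it exactly:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 20, 12
R = rng.standard_normal((M, N))
d = rng.standard_normal(M)

s = np.linalg.solve(R.T @ R, R.T @ d)   # least-squares solution of eqn. 5
T = np.diag(s / (R.T @ d))              # the diagonal matrix of eqn. 6

# T reproduces s from R~ d, but note that T itself was built from R and d,
# and its entries are not non-negative in general.
assert np.allclose(T @ (R.T @ d), s)
print((s / (R.T @ d)).min())
```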

K94 goes on to suggest the following iteration on $s$ (K94, eqn. (11)):

$$ s^{(n+1)} = T^{(n+1)} \tilde{R} d. \qquad (7) $$

The next question is how to compute $T^{(n+1)}$ from $R$, $d$, and $s^{(n)}$. Equation (12) of K94 gives the answer as

$$ T^{(n+1)}_{kk} = \frac{s^{(n)}_k}{(\tilde{R}d)^{(n)}_k}. \qquad (8) $$

K94 does not elucidate the meaning of $(\tilde{R}d)^{(n)}_k$, in particular what the superscript means, since $\tilde{R}d$ is just given by $R$ and $d$ and has nothing to do with the iteration. Referring to eqn. 6, we might guess that this is supposed to mean what $\tilde{R}d$ would be if $s^{(n)}$ were the "true" image, in which case the iteration would be

$$ T^{(n+1)}_{kk} = \frac{s^{(n)}_k}{(\tilde{R}Rs^{(n)})_k}. \qquad (9) $$

The next step in the derivation apparently contains an algebraic error. K94 (eqn. 13) gives the following expression:

$$ \tilde{R}RB = B\Sigma, \qquad (10) $$

which is simply incorrect. Referring to the SVD of $R$, and remembering that $A$ and $B$ are orthonormal matrices (i.e., $\tilde{A}A$ is the identity), we actually find

$$ \tilde{R}RB = B\Sigma\tilde{A}A\Sigma\tilde{B}B = B\Sigma^2. \qquad (11) $$

This difference then explains the form of eqn. (14) of K94, which gives the iteration as

$$ s^{(n+1)}_k = \frac{s^{(n)}_k (\tilde{R}d)_k}{\sum_i \sum_j \sigma_j b_{jk} b_{ji} s^{(n)}_i}. \qquad (12) $$

Note that the denominator of eqn. 12 is just $(B\Sigma\tilde{B}s^{(n)})_k$. If one referred to eqn. 10, and incorrectly substituted $\tilde{R}R = B\Sigma\tilde{B}$ into eqn. 9, eqn. 12 would result.

    4 Discussion

Following the derivation of eqn. 12 (K94, eqn. (14)), the author makes the following statement: "Notice how the solution given in equation (14) virtually ignores the small eigenvalues"; remember that by "eigenvalues" Kebede is actually referring to the singular values of $R$. Let us examine this statement further, especially in light of the errors leading to eqn. 12.

We begin by considering the correct version of eqn. 12, given by

$$ s^{(n+1)}_k = \frac{s^{(n)}_k (\tilde{R}d)_k}{(\tilde{R}Rs^{(n)})_k}, \qquad (13) $$

where we've simplified the notation a bit and haven't bothered expanding $\tilde{R}R$ in terms of its SVD factors, since there's no compelling reason to do so. In fact, one encounters many cases where $R$ is too large to explicitly construct, but where matrix-vector products can be calculated via fast implicit algorithms. A simple example of this would be an imaging system with a translationally-invariant point-spread function (PSF), for which the products $Rs$ and $\tilde{R}d$ can be calculated via the FFT and convolution theorem. For a large number of image/data pixels, explicit calculation and use of the SVD factors would involve massive computational overhead.
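As an illustrative sketch of the corrected iteration (eqn. 13) with such implicit FFT-based products (the PSF, grid size, source positions, and iteration count are all invented; a 1-D, wrap-around geometry is used for brevity):

```python
import numpy as np

# R v and R~ v as implicit FFT products for a periodic, shift-invariant PSF,
# so R is never formed explicitly.
def Rmul(psf_ft, v):                   # R v : circular convolution with the PSF
    return np.fft.irfft(psf_ft * np.fft.rfft(v), n=v.size)

def Rtmul(psf_ft, v):                  # R~ v : circular correlation with the PSF
    return np.fft.irfft(np.conj(psf_ft) * np.fft.rfft(v), n=v.size)

N = 64
x = np.arange(N)
psf = np.exp(-0.5 * (np.minimum(x, N - x) / 2.0) ** 2)
psf /= psf.sum()                       # PSF normalized to unity, wrapped at the boundary
psf_ft = np.fft.rfft(psf)

s_true = np.zeros(N)
s_true[20], s_true[40] = 1.0, 0.5      # two invented point sources
d = Rmul(psf_ft, s_true)               # noise-free data for this demonstration

s = np.ones(N)                         # positive starting point
Rtd = Rtmul(psf_ft, d)
r0 = np.linalg.norm(Rmul(psf_ft, s) - d)
for _ in range(500):                   # eqn. 13, applied elementwise
    s *= Rtd / Rtmul(psf_ft, Rmul(psf_ft, s))
r = np.linalg.norm(Rmul(psf_ft, s) - d)

print(r0, r)                           # residual decreases from the starting point
```

With a positive starting point and non-negative $R$ and $d$, the iterate stays non-negative throughout, as the text describes.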

Since we are not advocating use of eqn. 13, we won't discuss the convergence properties, except to note that in numerical experiments it converges monotonically in the squared residual $||Rs - d||^2$. Of more interest is what it converges to. Not surprisingly, the stable fixed point of the iteration is just the solution given by eqn. 5; proof of this is simple and follows directly from eqn. 13. The iteration also has saddle points, the trivial example being $s = 0$, which does lead to an interesting point. For $R, d \ge 0$ (as is often the case for imaging systems), if the initial value $s^{(0)}$ is positive then eqn. 13 converges to a


saddle point for which $s$ is non-negative. This $s$ is not the solution to eqn. 5, but that's a good thing, since the non-negativity constraint is not only physical, but also serves to stabilize the solution for ill-conditioned $R$ ([Dixon et al. 1996]).

Let us now consider eqn. 12 at face value, and see what it implies for the estimation of $s$. From eqn. 12, the convergence condition $s^{(n+1)} = s^{(n)}$ gives the solution as

$$ s^{(F)}_k = \frac{s^{(F)}_k (\tilde{R}d)_k}{(B\Sigma\tilde{B}s^{(F)})_k}, \qquad (14) $$

where the superscript $(F)$ denotes the value at convergence, i.e., the fixed point. Solving for $s^{(F)}$, again using the SVD factorization of $R$ and the orthonormality of $B$, we find the stable fixed point to be

$$ s^{(F)} = B\Sigma^{-1}\tilde{B}\tilde{R}d = B\Sigma^{-1}\tilde{B}B\Sigma\tilde{A}d = B\tilde{A}d. \qquad (15) $$
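The fixed point $s^{(F)} = B\tilde{A}d$ and its independence of $\Sigma$ can be checked directly; a random square $R$ is used here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
M = N = 16
R = rng.standard_normal((M, N))
U, sigma, Vt = np.linalg.svd(R)          # A = U, B = Vt.T in K94's notation
d = rng.standard_normal(M)

# Fixed point claimed by eqn. 15: s(F) = B A~ d -- the singular values cancel.
s_fix = Vt.T @ (U.T @ d)

# Check it against the fixed-point condition of eqn. 14: (B Sigma B~ s)_k = (R~ d)_k.
lhs = R.T @ d                            # (R~ d)_k
rhs = Vt.T @ (sigma * (Vt @ s_fix))      # (B Sigma B~ s(F))_k
assert np.allclose(lhs, rhs)
```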

Interestingly, we see from eqn. 15 that not only does the iteration of eqn. 12 "virtually ignore" the small singular values, but in fact ignores them completely! So the claim of K94 that this method "totally eliminates . . . error amplification" is true, strictly speaking, since it is $\Sigma^{-1}$ which leads to this phenomenon. On first glance a reader might take this statement to mean that noise itself is eliminated in the estimate, which is not the case. The orthonormality of $A$ and $B$ implies that noise is propagated through to the estimate. If the noise were white, this orthonormality implies that the noise in the estimate is also white and of the same magnitude as in the data. The situation is less clear for Poisson noise, but generally one might expect the image pixel noise to be similar on average to that in the data pixels; examples given below will illustrate this. Note that the non-negativity implied by eqn. 12 implies some noise suppression, but only for images where the average intensity level is somewhat larger than the noise level.

Equation 15 represents a regularized estimate of $s$, and might be considered a variant on the damped SVD (DSVD) method ([Ekstrom & Rhodes 1974]). To obtain the DSVD estimate, one purposely suppresses the effect of small singular values with a damping function. Here, our "damping function" would be simply the singular values themselves, so they cancel out the inverse. Naively this may sound like a good idea, but it actually ignores a lot of the information provided by the singular values ([Groetsch 1984]). If nothing else, the regularization provided by use of eqn. 15 is always the same; we have no control over it. Yet in actual situations, we need to control the amount of regularization, since we'd apply more or less depending on the signal-to-noise of the data. DSVD methods in general allow such control via the specification of a cutoff value, where the damping function drops to zero (or very small values). We might conceive of controlling the amount of regularization by stopping the iteration early. This is often employed in iterative schemes such as conjugate gradient ([van der Sluis & van der Vorst 1990]), LSQR ([Paige & Saunders 1982]), expectation maximization ([Knödlseder 1997]), and Maximum Entropy ([Knödlseder 1997]), where it is found either empirically or rigorously that the early iterations tend to pick out the statistically interesting structure, while the later ones tend to just amplify the noise. For the iterations described herein and in K94 we don't pursue this mathematically, but show examples below.

    5 A brief note on statistics

As we stated above, the formulation of K94 makes no reference to the data statistics, nor the statistical interpretation of the converged solution. The formulation is easily modified, though, so that eqn. 13 is directly related to Maximum Likelihood estimation of the pixel fluxes. Consider modification of eqn. 1 to the following:

$$ Q^{-1/2} R s \approx Q^{-1/2} d, \qquad (16) $$

where $Q$ is the (symmetric positive-definite) covariance matrix of the noise in $d$; consider this to be a constant for the moment, with the noise Gaussian distributed. Least-squares solution of eqn. 16 corresponds to $\chi^2$-minimization, i.e.,

$$ \min_s ||Q^{-1/2}Rs - Q^{-1/2}d||^2 = \min_s (Rs-d)^T Q^{-1} (Rs-d), \qquad (17) $$

where we've reverted to the superscript $T$ notation to denote the vector transpose. For independent observations in $d$, $Q$ is diagonal and eqn. 17 reduces to the more familiar form of the $\chi^2$. Minimization of eqn. 17 implies the solution given by the condition

$$ \tilde{R}Q^{-1}Rs = \tilde{R}Q^{-1}d. \qquad (18) $$

The iteration of eqn. 13 then becomes

$$ s^{(n+1)}_k = \frac{s^{(n)}_k (\tilde{R}Q^{-1}d)_k}{(\tilde{R}Q^{-1}Rs^{(n)})_k}. \qquad (19) $$

Use of the iteration of K94 would require the calculation of the singular factors for the matrix $Q^{-1/2}R$, but it's not at all clear what the answer means statistically, since $\Sigma$ carries much of the statistical information in the SVD estimate (i.e., if we consider the estimate as an expansion in the $b_i$, then the $\sigma_i^{-2}$ are the statistical variances of the corresponding coefficients). Since $\Sigma$ is completely cancelled in K94's approach, this statistical information is lost.
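The weighted normal equations of eqn. 18 can be checked numerically. In this sketch, $Q$ is a diagonal matrix of invented per-bin variances, and the solution is verified to zero the gradient of the $\chi^2$ of eqn. 17:

```python
import numpy as np

rng = np.random.default_rng(8)
M, N = 20, 12
R = rng.standard_normal((M, N))
d = rng.standard_normal(M)
q = rng.uniform(0.5, 2.0, size=M)         # invented per-bin noise variances, Q = diag(q)

Qinv = np.diag(1.0 / q)
s = np.linalg.solve(R.T @ Qinv @ R, R.T @ Qinv @ d)   # eqn. 18

# The same s minimizes (Rs - d)^T Q^{-1} (Rs - d): its gradient vanishes,
# and low-variance bins receive correspondingly larger weight.
grad = 2.0 * R.T @ Qinv @ (R @ s - d)
assert np.allclose(grad, 0.0)
```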

Let us now consider the case of Poisson noise, encountered for photon counting experiments. It can be shown that maximizing the Poisson likelihood function over the pixel fluxes $s$ also implies a condition of the form of eqn. 18 ([Wheaton et al. 1995]), except that $Q$ now has a dependence on the solution, given by

$$ Q = \mathrm{diag}(Rs). \qquad (20) $$

Substituting the $n$th iterate $s^{(n)}$ into eqn. 19, we find

$$ s^{(n+1)}_k = \frac{s^{(n)}_k \left(\tilde{R}\,\frac{d}{Rs^{(n)}}\right)_k}{\left(\tilde{R}\,\frac{Rs^{(n)}}{Rs^{(n)}}\right)_k} = \frac{s^{(n)}_k \left(\tilde{R}\,\frac{d}{Rs^{(n)}}\right)_k}{\sum_j R_{jk}}, \qquad (21) $$

where the division of vectors above is taken to be element-by-element. This is simply the Expectation Maximization Maximum Likelihood (EMML) algorithm ([Hebert, Leahy, & Singh 1990]), also known

Figure 1: Response for a single Compton scatter angle for a unit source located at pixel (16,16). This response is normalized to unit integral. Units are counts.

as Richardson-Lucy ([Richardson 1972, Lucy 1974]) from a different derivation. This iteration does converge to the Poisson Maximum Likelihood solution in terms of the pixel fluxes $s$.
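A minimal EMML / Richardson-Lucy sketch of eqn. 21 (the response, source fluxes, and background level below are invented for illustration):

```python
import numpy as np

def emml(R, d, n_iter=200):
    """EMML / Richardson-Lucy multiplicative update of eqn. 21:
    s_k <- s_k * (R~ (d / R s))_k / sum_j R_jk, for Poisson data d."""
    s = np.ones(R.shape[1])               # positive starting point
    col_sums = R.sum(axis=0)              # sum_j R_jk
    for _ in range(n_iter):
        s *= (R.T @ (d / (R @ s))) / col_sums
    return s

rng = np.random.default_rng(7)
x = np.arange(32)
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
R /= R.sum(axis=0, keepdims=True)         # columns normalized to unit efficiency

s_true = np.zeros(32)
s_true[10], s_true[22] = 50.0, 30.0
d = rng.poisson(R @ s_true + 5.0).astype(float)   # sources plus a flat background

s_hat = emml(R, d)
assert s_hat.min() >= 0.0                 # EMML preserves non-negativity
# With unit column sums the total flux matches the total counts at every step.
assert abs(s_hat.sum() - d.sum()) < 1e-6 * d.sum()
```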

    6 Examples

To illustrate some aspects of the discussion above, we will show some results from an idealized scenario, as well as a more realistic simulation. As in K94, we employ a typical Compton telescope response for a single Compton scatter angle. The idealized PSF is computed over a $32 \times 32$ pixel grid, and shown in Figure 1. For simplicity, we don't worry about finite size or edge effects, and simply assume the PSF is normalized to unity, and wrap it around the boundaries as appropriate. The response $R$ is computed from all possible translates of this PSF.

Our first example is a unit source located at (16, 16), shown in Figure 2. The dataset is simply the corresponding PSF, with no background or noise added. Solution via eqn. 5 gives, not surprisingly, the exact answer, i.e. Figure 2. The regularized solution


Figure 2: Test case 1, unit source at pixel (16,16). The corresponding data is noise-free, and simply given by the PSF in Figure 1.

Figure 3: Estimated inverse for test case 1 via eqn. 15. Note that even though the data is noise-free, we have substantially underresolved the source.

Figure 4: Estimate for test case 1, using 100 steps of the iteration of eqn. 13.

Figure 5: Estimate for test case 1, using 100 steps of the iteration of Paper I, given by eqn. 12.


Figure 6: Test case 2, with multiple point sources of various strengths.

of eqn. 15 is shown in Figure 3, while the answers after 100 iterations of eqs. 13 and 12 are shown in Figures 4 and 5 respectively. Note that Figure 3 is quite spread compared to Figure 2, despite the fact that there is no noise or background in the data. This is an admittedly extreme example, but a properly regularized technique would allow one to take the statistics into account by varying the degree of regularization. The cancellation of the singular values in eqn. 15 implies that the $b_i$ corresponding to small singular values get larger weight in the solution. Since these $b_i$ typically correspond to large-scale or smooth functions, oversmoothing is not a surprising effect. Comparison of Figures 4 and 5 also demonstrates this limitation, since the results from eqn. 12 are inherently limited to be no better than the solution of eqn. 15 in terms of resolving power.

The second example uses several point sources of varying intensity, shown in Figure 6. We generate data by convolution with the PSF and add a constant background such that the integrated source-to-background ratio is 10% (which would be extremely good for existing Compton telescopes, but useful for purposes of demonstration). Poisson random numbers are then generated for these expected count levels, resulting in the simulated data shown in Figure 7.

Figure 7: Data for test case 2, with a uniform background added to give an integrated source-to-background ratio of 10%, and Poisson noise. The overplotted contours show the noise-free data. Units are counts.

Figure 8: Direct least-squares estimate for test case 2. The large positive/negative fluctuations are the result of noise amplification from small singular values.


Figure 9: Estimate for test case 2 from eqn. 15. Note that the solution is much more stable compared with Figure 8, though noise roughly of the same magnitude as in the data is present, as expected from the orthonormality of $A$ and $B$.

Figure 10: Estimate for test case 2 after 100 steps of the iteration of Paper I (eqn. 12). Note some regularization compared to Figure 9, due solely to stopping the iteration before convergence. Examination of Figure 9 indicates that non-negativity does not play a role, since due to the background level the stable fixed point solution is everywhere positive.

Figure 11: Estimate for test case 2 from the 100th iteration of eqn. 13.

Figure 12: Estimate for test case 2 after 100 steps of EMML (eqn. 21).


Figure 13: Estimate for test case 2 after 100 steps of yet another iteration, where we replace the $\Sigma$ term in the denominator of eqn. 12 with $\Sigma^3$.

For these simple examples, we make no attempt to fit or otherwise subtract the background, so the estimates will include a uniform background level as well. The least-squares solution is shown in Figure 8, and exhibits precisely the large oscillations we wish to suppress with regularization. The regularized direct solution of eqn. 15 is given in Figure 9; as we expect, the large oscillations are damped, since the small singular values have no effect, but plenty of noise is still evident. If we compare to Figure 10, computed via 100 iterations of eqn. 12, we see better noise suppression. However, the result of 100 iterations of eqn. 13 in Figure 11 is certainly qualitatively better in terms of noise suppression and source resolution. The result from 100 steps of the EMML iteration of eqn. 21 is shown in Figure 12, and appears comparable with Figure 11. Just for fun, we also computed a result where we replaced the $\Sigma$ term in eqn. 12 with $\Sigma^3$ (there is a $\Sigma^2$ term implicit in eqn. 13). Shown in Figure 13, this map seems qualitatively better yet. Quantitatively, of course, only eqn. 13 will give correct photometry, since it corresponds to the case where we use the proper generalized inverse.

    7 Final comments

I have shown above that the deconvolution method of Kebede (1994) appears to be erroneously derived. Kebede's final expression (eqn. 12), however, does provide a regularized estimate of the inverse, where the regularization is caused by the exact cancellation of the singular values. It is left to the reader to decide if this is a positive characteristic of Kebede's published algorithm, though the above discussion and examples would seem to indicate that it does not perform particularly well, even when compared with similar simple approaches. I have shown how to explicitly include statistical information, but also that much of this is lost due to the cancellation of the singular values.

I close with one final comment on the efficiency of the method. K94 claims that ". . . it takes very little computing time to run a program written based on this iterative method regardless of the size of the problem." However, this is clearly not true. The computational complexity of the SVD scales very badly with the problem size, going like $aMN^2 + bN^3$ for an $M \times N$ matrix ([Golub & Van Loan 1989]). For a square image and data with $J \times J$ pixels, this would be $O(J^6)$, which is terrible. Whether one uses eqn. 12 or eqn. 15, the SVD must be calculated explicitly, which would impose a heavy computational burden for all but very small images.

    Acknowledgements

DDD thanks Prof. Allen Zych for helpful comments. This work was partially supported by NASA Grant NAG5-5116.

    References

[Campbell & Meyer 1979] Campbell, S. L. & Meyer, C. D. 1979, Generalized Inverses of Linear Transformations (Copp Clark Pitman: Toronto).

[Dixon et al. 1996] Dixon, D. D. et al. 1996, ApJ, 457, 789.

[Ekstrom & Rhodes 1974] Ekstrom, M. P. & Rhodes, R. L. 1974, J. Comp. Phys., 14, 319.

[Golub & Van Loan 1989] Golub, G. H. & Van Loan, C. F. 1989, Matrix Computations, 2nd ed. (Johns Hopkins University Press: Baltimore).

[Groetsch 1984] Groetsch, C. W. 1984, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind (Pitman: Boston).

[Hebert, Leahy, & Singh 1990] Hebert, T., Leahy, R., & Singh, M. 1990, JOSA, 7, 1305.

[Lucy 1974] Lucy, L. B. 1974, AJ, 79, 745.

[Kebede 1994] Kebede, L. W. 1994, ApJ, 423, 878.

[Knödlseder 1997] Knödlseder, J. 1997, Ph.D. Thesis, l'Université Paul Sabatier.

[Paige & Saunders 1982] Paige, C. C. & Saunders, M. A. 1982, ACM Trans. Math. Software, 8, 43.

[Richardson 1972] Richardson, W. H. 1972, JOSA, 62, 55.

[Schönfelder et al. 1993] Schönfelder et al. 1993, ApJS, 86, 657.

[van der Sluis & van der Vorst 1990] van der Sluis, A. & van der Vorst, H. A. 1990, Lin. Alg. Appl., 130, 257.

[Wheaton et al. 1995] Wheaton, Wm. A. et al. 1995, ApJ, 438, 322.
