

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING
Int. J. Numer. Meth. Engng 2012; 89:337–353
Published online 24 August 2011 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/nme.3247

A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

Bangti Jin 1,*, Taufiquar Khan 2 and Peter Maass 3

    1 Department of Mathematics and Institute for Applied Mathematics and Computational Science, Texas A&M University, College Station, TX 77843-3368, U.S.A.

2 Department of Mathematical Sciences, Clemson University, Clemson, SC 29634-0975, U.S.A.
3 Center for Industrial Mathematics, University of Bremen, Bremen D-28334, Germany

    SUMMARY

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography

    problem of determining a conductivity parameter from boundary measurements. The sparsity of the inho-mogeneity with respect to a certain basis is a priori assumed. The proposed approach is motivated by aTikhonov functional incorporating a sparsity-promoting ` 1 -penalty term, and it allows us to obtain quanti-tative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type was proposed.Numerical results for several two-dimensional problems with both single and multiple convex and noncon-vex inclusions were presented to illustrate the features of the proposed algorithm and were compared withone conventional approach based on smoothness regularization. Copyright 2011 John Wiley & Sons, Ltd.

    Received 26 October 2010; Revised 10 May 2011; Accepted 14 May 2011

    KEY WORDS : electrical impedance tomography, reconstruction algorithm, sparsity regularization

    1. INTRODUCTION

Electrical impedance tomography (EIT) is a diffusive imaging modality for determining the electrical conductivity/resistivity distribution of an object from boundary measurements. The experimental setup is as follows. One first attaches a set of electrodes to the surface of the object, for example, a human body, then injects an electrical current through these electrodes and measures the resulting electrical potential on these electrodes. In practice, several input currents are applied, and the induced electrical potentials are measured. The inverse problem is to determine the unknown (spatially varying) physical electrical conductivity from such noisy measurements. Potential applications include noninvasive medical imaging, for example, detecting skin cancer and locating epileptic foci [1, 2], and nondestructive testing, for example, locating resistivity anomalies due to the presence of minerals or other contaminated sites [3], monitoring of oil and gas mixtures in oil pipelines [4], and flow measurement in pneumatic conveying [5].

As is typical of many inverse coefficient problems with differential equations, EIT suffers from a high degree of nonlinearity and severe ill-posedness. However, its broad prospective applications have aroused significant interest in designing numerical techniques for its efficient solution. A large number of reconstruction methods, which can roughly be divided into two groups, have been developed in the literature: (i) general-purpose approaches based on various regularization strategies and (ii) specific approaches based on analytical considerations.

*Correspondence to: Bangti Jin, Department of Mathematics, Texas A&M University, College Station, TX 77843-3368, U.S.A.

    E-mail: [email protected]

    Copyright 2011 John Wiley & Sons, Ltd.


    338 B. JIN, T. KHAN AND P. MAASS

The former group includes variational-type methods of minimizing a certain discrepancy functional, for example, least-squares fitting for the linearized or fully nonlinear model, either of Tikhonov type or of iterative regularization type. One prominent idea is linearization [6–8], which can often give reasonable reconstructions and thus is widely used in practice. One such approach is NOSER [8], which applies one step of a Newton method with a constant conductivity as the initial guess. In practice, some sort of regularization is often included to restore the

numerical stability. In [9], the authors investigated the standard Tikhonov regularization method for EIT in unbounded domains. In [10] and [11], the Mumford–Shah functional and total variation regularization, which are suitable for reconstructing piecewise constant conductivity coefficients, were studied. The latter group relies on refined analytical results, for example, spectral analysis, and includes the factorization method [12], the d-bar method [13], and so on. These methods can be effective for specific types of conductivity distributions. A closely related problem is inclusion detection (see, e.g., [14] for some analytical bounds on the inclusion size). The aforementioned studies focus on deterministic inversion techniques. An alternative approach, statistical inversion [15], can shed interesting insights into EIT reconstructions.

Despite this impressive progress, there remains a significant interest in developing new algorithms with a focus on identifying useful information and on fully exploiting a priori knowledge in the hope of achieving better resolution. For conductivity distributions that consist of an unknown but essentially uninteresting background plus a number of interesting features that have relatively simple mathematical descriptions, the ideas related to the concept of sparsity seem to offer an extremely promising way forward. In the area of biomedical imaging, the sparsity idea dates back at least to [16, 17], where an ℓ1 minimum-norm solution was sought. Recently, the idea of sparsity has been popularized by [18]. The basic idea is to incorporate a sparsity-promoting ℓ1 penalty into the Tikhonov functional. The use of the ℓ1 penalty for EIT reconstruction traces back to [19], where a Huberized ℓ1 penalty was derived from the Bayesian framework. Statistically, sparsity regularization amounts to enforcing an ℓ1 prior on the expansion coefficients (in a certain basis). This is nowadays well known, and in EIT, it was discussed in [20]. In [20], two numerical methods, that is, a Gauss–Newton method based on a smooth approximation of the L1 penalty [20, pp. 1508] and MCMC sampling, were also suggested for obtaining an approximate solution. The numerical results for the complete electrode model demonstrated the feasibility of the L1 approach. However, the model in [20] is finite-dimensional because of its focus on statistical inversion, and its extension to a functional analytic setting is nontrivial, as a direct generalization may be ill-defined.

To the best of our knowledge, the sparsity idea has not been systematically explored for EIT so far. The goal of the present study is to develop a novel sparse reconstruction algorithm and to demonstrate its excellent performance in EIT imaging. We shall consider an approach that combines a data-fitting term J(σ) = (1/2)‖γF(σ)j − φ^δ‖²_{L²(Γ)} with an ℓ1 penalty, where γF(σ)j is the parameter-to-data map (cf. Section 2). We aim at deriving an applicable sparse reconstruction algorithm of iterative soft shrinkage type by approximately minimizing this functional. Let σ0 be a known background. Then the standard iterative soft shrinkage algorithm [18, 21] for reconstructing the inhomogeneity δσ = σ − σ0 takes the form

$$\delta\sigma_{i+1} = S_{s\alpha}\big(\delta\sigma_i - s\,(\gamma F'(\sigma_i)j)^*(\gamma F(\sigma_i)j - \phi^\delta)\big), \qquad \sigma_{i+1} = \sigma_0 + \delta\sigma_{i+1},$$

where S is the soft shrinkage operator. The iteration consists of two steps: a gradient descent step and a shrinkage step. Intuitively, the latter promotes the sparsity of the reconstruction, as it zeros out small coefficients. Unfortunately, a straightforward implementation of this basic algorithm fails to yield reasonable results. In order to obtain improved reconstructions, we exploit two essential ingredients, namely an intermediate Sobolev smoothing step and an adaptive step size selection by the Barzilai–Borwein (BB) rule. Partial theoretical justifications of our approach, for example, well-posedness and convergence rates, can be found in the companion theoretical paper [22].
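To make the two-step structure concrete, the sketch below runs the basic iterated soft shrinkage scheme on a toy linear least-squares problem; the matrix A, the data b, and all parameter values are illustrative stand-ins and have nothing to do with the EIT forward operator of the paper.

```python
import numpy as np

def soft_shrink(x, t):
    """Componentwise soft shrinkage S_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy linear analogue: minimize 0.5*||A x - b||^2 + alpha*||x||_1 for a
# sparse ground truth x_true. Sizes, seed, and alpha are illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 33]] = [2.0, -1.5, 1.0]
b = A @ x_true
alpha = 0.1
s = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 1/L, L = ||A||_2^2

x = np.zeros(40)
for _ in range(500):
    grad = A.T @ (A @ x - b)                   # gradient descent step
    x = soft_shrink(x - s * grad, s * alpha)   # shrinkage step (promotes sparsity)

print(np.flatnonzero(np.abs(x) > 0.5))  # support of the reconstruction
```

With a fixed step size this is exactly the basic iteration above; the paper's point is that for EIT this plain version fails, motivating the smoothing and BB ingredients of Section 3.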

The remaining part of the paper is organized as follows. In Section 2, we describe the basic mathematical model for EIT, that is, the continuum model, the sparsity constraints, as well as the resulting Tikhonov functional. Then, we describe the intricacies of the proposed reconstruction technique



    SPARSITY RECONSTRUCTION IN ELECTRICAL IMPEDANCE TOMOGRAPHY 339

motivated by minimizing such functionals in Section 3. The numerical results for conductivities with convex/nonconvex inclusions are presented and compared with the reconstructions by a more conventional approach based on smoothness regularization in Section 4. The results show that the sparsity approach yields quantitatively correct reconstructions when the inclusion does have a sparse representation.

    2. MATHEMATICAL MODEL AND SPARSITY CONSTRAINT

Let Ω be an open bounded domain in R^d (d = 2, 3) and Γ be its boundary, which is assumed to be smooth. The EIT forward problem is often modeled by the elliptic partial differential equation

$$\nabla\cdot(\sigma\nabla u) = 0 \ \text{ in } \Omega, \qquad \sigma\frac{\partial u}{\partial n} = j \ \text{ on } \Gamma, \qquad (1)$$

where u and j denote the electrical potential and current, respectively. To ensure the solvability of the forward problem, we require ∫_Γ j ds = 0, that is, j ∈ H̃^{-1/2}(Γ) = {v ∈ H^{-1/2}(Γ) : ∫_Γ v ds = 0}, and we enforce ∫_Γ u ds = 0 to guarantee uniqueness of the solution, that is, u ∈ H̃^1(Ω) = {v ∈ H^1(Ω) : ∫_Γ v ds = 0} (equipped with the equivalent norm ‖v‖²_{H̃¹(Ω)} = ∫_Ω |∇v|² dx). The condition ∫_Γ u ds = 0 imposes a grounding condition for the electrical potential u. We shall denote the forward operator by F(σ), that is, u = F(σ)j. Let the trace operator be γ, which restricts a function u ∈ H̃^1(Ω) to the boundary Γ, that is, γu ∈ H̃^{1/2}(Γ). In practice, we experimentally acquire the data φ^δ ∈ L²(Γ) by

$$\phi^\delta = \gamma F(\sigma^\dagger)j + \zeta,$$

where ζ denotes the noise due to the imprecise measurement procedure and σ† denotes the unknown physical conductivity. The goal is to obtain an approximation to the physical conductivity from several noisy measurements due to different applied currents.

Before proceeding to the inversion procedure, we first derive the linearized operator γF′(σ)j and its adjoint, which are needed for carrying out the numerical algorithm in Section 3. The derivations below are heuristic but can be rigorously justified (see, e.g., [10] and [22]). First, we turn to the linearized operator. Let ϑ be an arbitrary direction for the variation of σ, with ϑ vanishing on Γ. Then, by subtracting the governing equations for F(σ)j and F(σ + ϑ)j and ignoring the higher-order terms, we arrive at

$$\nabla\cdot(\sigma\nabla w) = -\nabla\cdot(\vartheta\nabla F(\sigma)j) \ \text{ in } \Omega, \qquad \sigma\frac{\partial w}{\partial n} = 0 \ \text{ on } \Gamma.$$

The mapping from ϑ to w obviously defines a linear operator, and it can be shown that it is the linearization of the forward map F(σ)j around σ. We shall denote it by F′(σ)j. The previous equation is often known as the sensitivity problem, relating the change w in the solution u = F(σ)j to the change ϑ in the conductivity σ.

The adjoint of the linearized operator is needed for several purposes, in particular in deriving a formula for the gradient of the discrepancy. The following theorem seems well known (see, e.g., [23, Lemma 5]). We include a short proof because of its importance to our later discussions.

Theorem 2.1
The adjoint of the operator γF′(σ)j : L^∞(Ω) → L²(Γ) is given by

$$(\gamma F'(\sigma)j)^* : L^2(\Gamma) \to (L^\infty(\Omega))', \qquad f \mapsto -\nabla\tilde{u}\cdot\nabla F(\sigma)j,$$



Corollary 2.1
The gradient of the discrepancy J is given by

$$J'(\sigma)\vartheta = \int_\Omega (\gamma F'(\sigma)j)^*\big(\gamma F(\sigma)j - \phi^\delta\big)\,\vartheta\;dx.$$

3. NUMERICAL ALGORITHM

We can now describe an iterative reconstruction algorithm by approximately minimizing the functional with respect to σ for a given pair (j, φ^δ). The extension to multiple data sets {(j_k, φ_k^δ)}_{k=1}^N is straightforward by modifying the discrepancy J into

$$J(\sigma) = \frac{1}{2}\sum_{k=1}^N w_k \big\|\gamma F(\sigma)j_k - \phi_k^\delta\big\|^2_{L^2(\Gamma)}.$$

Statistically speaking, a proper choice of the weights w_k should depend on the inverse variance of each data set (see [15, Chap. 3] for details); in the absence of such knowledge, the w_k may all be set to unity. The complete procedure is listed in Algorithm 1. Here, the background σ0 is assumed to be known. We note that the background σ0 can be quite arbitrary, for example, discontinuous.

Algorithm 1 Sparse reconstruction algorithm
1: Given σ0 and α, set δσ0 = 0
2: for i = 1, ..., I do
3:   Compute σ_i = σ0 + δσ_i;
4:   Compute the gradient J′(σ_i);
5:   Compute the H¹0-gradient J′_s(σ_i);
6:   Determine the step size s_i;
7:   Update the inhomogeneity by δσ_{i+1} = δσ_i − s_i J′_s(σ_i);
8:   Threshold δσ_{i+1} by S_{s_i α}(δσ_{i+1});
9:   Check the stopping criterion.
10: end for
11: Output an approximate minimizer δσ.
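A minimal Python skeleton of this loop is sketched below; the problem-specific pieces (the Sobolev gradient of Section 3.1 and the step size rule of Section 3.3) are passed in as callables, and all names and default values are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def soft_shrink(x, t):
    """Componentwise soft shrinkage S_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_reconstruction(sigma0, grad_smooth, step_size, alpha,
                          max_iter=500, s_stop=1e-3):
    """Skeleton of Algorithm 1.
    grad_smooth(sigma): smoothed gradient J'_s(sigma)        (steps 4-5)
    step_size(d, d_prev, g, g_prev): step s_i, e.g. BB rule  (step 6)
    """
    delta = np.zeros_like(sigma0)            # step 1: delta_sigma_0 = 0
    delta_prev, g_prev = None, None
    for _ in range(max_iter):                # step 2
        sigma = sigma0 + delta               # step 3
        g = grad_smooth(sigma)               # steps 4-5
        s = step_size(delta, delta_prev, g, g_prev)  # step 6
        if s < s_stop:                       # step 9: stagnation test
            break
        new_delta = soft_shrink(delta - s * g, s * alpha)  # steps 7-8
        delta_prev, g_prev, delta = delta, g, new_delta
    return sigma0 + delta                    # step 11

# Sanity check on a separable toy functional J(sigma) = 0.5*||sigma - t||^2,
# whose minimizer over delta with the l1 penalty is the shrinkage of t.
t = np.array([1.0, -0.05, 0.5])
sigma_rec = sparse_reconstruction(np.zeros(3),
                                  grad_smooth=lambda sg: sg - t,
                                  step_size=lambda d, dp, g, gp: 0.5,
                                  alpha=0.1)
```

The toy check at the end verifies only the control flow; for EIT, grad_smooth would wrap the forward and adjoint PDE solves described in Sections 2 and 3.1.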

Since the work [18], optimization problems involving the ℓ1 penalty have attracted intensive interest [21, 24, 25]. The minimization problem is complicated by the nonsmoothness of the ℓ1 penalty, the high degree of nonlinearity of the discrepancy J(σ), and the severe ill-posedness of the inverse problem. The basic algorithm (iterated soft shrinkage) for updating the inhomogeneity δσ and σ_i = σ0 + δσ_i by minimizing the functional formally reads

$$\delta\sigma_{i+1} = S_{s\alpha}\big(\delta\sigma_i - s\,(\gamma F'(\sigma_i)j)^*(\gamma F(\sigma_i)j - \phi^\delta)\big),$$

with an appropriately chosen step size s and a shrinkage operator S. However, as we mentioned earlier, a direct application of this algorithm does not yield meaningful results.

We capitalize on recent works, notably [21, 24, 25], to propose an efficient sparsity reconstruction algorithm for the EIT problem. The approach is sketched in Algorithm 1. The essential ingredients consist of calculating the gradient J′ of the discrepancy J (steps 4 and 5, see Section 3.1) and selecting a step size with the popular BB rule (step 6, see Section 3.3, where the stopping criterion at step 9 is also briefly discussed). Steps 7 and 8 of thresholded gradient descent are described in Section 3.2. Our experience indicates that the use of the smoothed gradient and the BB rule is key to good algorithmic performance.

    3.1. Gradient of the discrepancy

By Corollary 2.1, evaluating the gradient J′ involves simply solving an adjoint problem. Unfortunately, a direct application of this gradient does not give reasonable reconstructions. This is attributed to its insufficient regularity. We recall that the gradient depends on the metric of the


underlying space for the conductivity σ. In Corollary 2.1, the metric is L²(Ω), and the gradient is defined via the duality pairing

$$J'(\sigma)\delta\sigma = \int_\Omega J'(\sigma)\,\delta\sigma\,dx = \langle J'(\sigma), \delta\sigma\rangle_{L^2(\Omega)}.$$

Thus, J′(σ) is an element of the dual space of L^∞(Ω) and likely does not have sufficient regularity for a gradient descent step. Alternatively, we may take the H¹0(Ω) metric for the conductivity σ, that is, we define J′_s(σ) via

$$J'(\sigma)\delta\sigma = \langle J'_s(\sigma), \delta\sigma\rangle_{H^1_0(\Omega)}.$$

Integration by parts yields

$$-\Delta J'_s(\sigma) + J'_s(\sigma) = J'(\sigma) \ \text{ in } \Omega$$

with a homogeneous Dirichlet boundary condition. The gradient J′_s is also known as the Sobolev gradient in the literature [26]. The gradient J′_s naturally satisfies the desired boundary condition, and it is a preconditioning of the conventional L² gradient. In fact, the Sobolev gradient J′_s(σ) coincides with i*J′(σ), where i* is the adjoint of the embedding operator i from H¹0(Ω) into L²(Ω). In other words, we equip the admissible set A with the H¹0(Ω) norm, thereby implicitly restricting the admissible conductivity to a smoother subset. Numerically, evaluating the Sobolev gradient J′_s(σ) involves solving one extra well-posed forward problem and can be carried out in a computationally efficient manner.

As for the gradient J′_s, one has the following regularity result; for its proof, we refer to Appendix A. The theorem shows that it has the same regularity as the underlying space H¹0(Ω), thereby ensuring the validity of the update of the inhomogeneity.

Theorem 3.1
Let σ ∈ A. Assume that the boundary condition satisfies j ∈ L^s(Γ) ∩ H̃^{-1/2}(Γ) for some s > 2, and further that d = 2, or d = 3 with c0 in (3) sufficiently close to c1. Then the smoothed gradient J′_s satisfies J′_s ∈ H¹0(Ω).

3.2. Soft shrinkage operator

With the gradient J′_s at hand, we can locally approximate the functional Ψ(σ) = Ψ(σ0 + δσ) with a quadratic form

$$\Psi(\sigma_0 + \delta\sigma) \approx \Psi(\sigma_0 + \delta\sigma_i) + \langle \delta\sigma - \delta\sigma_i,\, J'_s(\sigma_i)\rangle_{H^1(\Omega)} + \frac{1}{2s_i}\|\delta\sigma - \delta\sigma_i\|^2_{H^1(\Omega)} + \alpha\|\delta\sigma\|_{\ell^1},$$

where s_i is a step size to be discussed in Section 3.3. We then attempt to minimize this proxy at each step. The adoption of the H¹(Ω) inner product is related to the use of the smoothed gradient, which, as explained earlier, is intrinsically the gradient with respect to H¹0(Ω). This approximation is inspired by the recent work [25], with the Sobolev gradient in place of the conventional L² gradient. The proxy is convex, in contrast to the original nonconvex functional. If the basis {ψ_k} is orthonormal in H¹0(Ω), the proxy is separable in δσ, as in the surrogate function approach [18], and thus lends itself to closed-form solutions. Up to an unimportant constant, the proxy problem is equivalent to

$$\min_{\delta\sigma}\ \frac{1}{2s_i}\big\|\delta\sigma - (\delta\sigma_i - s_i J'_s(\sigma_i))\big\|^2_{H^1(\Omega)} + \alpha\|\delta\sigma\|_{\ell^1}. \qquad (4)$$

We first introduce the soft shrinkage operator S_α, defined componentwise for any α > 0 by

$$S_\alpha(\delta\sigma) = \operatorname{sign}(\delta\sigma)\max(|\delta\sigma| - \alpha, 0),$$

where sign is the standard sign function. Here and below, we have slightly abused notation for the quantity in the bracket by identifying δσ with the sequence of its expansion coefficients with respect


to the basis {ψ_k}. Assisted by the soft shrinkage operator S_α, we can write down explicitly the solution to the proxy problem (4) as

$$\delta\sigma_{i+1} = S_{s_i\alpha}\big(\delta\sigma_i - s_i J'_s(\sigma_i)\big).$$

So its solution involves a gradient descent step followed by a soft shrinkage step (steps 7 and 8 of Algorithm 1). It is easy to see that this step sets all small coefficients to zero, thereby promoting the sparsity of the reconstruction.
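A direct NumPy implementation of the componentwise operator (an illustrative sketch, operating on a plain coefficient vector):

```python
import numpy as np

def soft_shrink(x, alpha):
    """S_alpha(x) = sign(x) * max(|x| - alpha, 0), applied componentwise."""
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

# Components with magnitude below alpha are zeroed; the rest move toward 0.
print(soft_shrink(np.array([2.0, -0.3, 0.5, -1.2]), 0.5))
```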

Remark 3.1
The soft shrinkage operation minimizes the proxy (4) exactly only in the case of an orthonormal basis/frame {ψ_k}. It is well known that, in this case, the converged solution of Algorithm 1 corresponds to a minimizer of the Tikhonov functional. However, the algorithm is still well defined for any basis/frame {ψ_k}, irrespective of its orthonormality, and can still be used as a sparse reconstruction algorithm.

Remark 3.2
To achieve further flexibility of the reconstruction procedure, one might adopt a weighted H¹0(Ω) metric induced by the inner product ⟨δσ, δσ⟩ = ⟨∇δσ, ∇δσ⟩_{L²} + η⟨δσ, δσ⟩_{L²}, with the weight η controlling the degree of smoothing. This entails only minor changes to Algorithm 1.

3.3. Barzilai–Borwein step length

Usual gradient algorithms, for example, steepest descent methods, suffer from slow convergence. One recent breakthrough in enhancing their convergence behavior is due to Barzilai and Borwein [27], who, in 1988, developed an ingenious step size selection rule (the BB rule). It performs much better than the standard steepest descent algorithm. The basic idea is to mimic the Hessian by s^{-1}I over the most recent step, so that

$$s^{-1}(\delta\sigma_i - \delta\sigma_{i-1}) \approx J'_s(\sigma_i) - J'_s(\sigma_{i-1})$$

approximately holds. This equation may not have a solution, so it is solved in a least-squares sense, that is,

$$s_i^{-1} = \arg\min_{s^{-1}}\ \big\|s^{-1}(\delta\sigma_i - \delta\sigma_{i-1}) - (J'_s(\sigma_i) - J'_s(\sigma_{i-1}))\big\|^2_{H^1(\Omega)}.$$

This gives rise to

$$s_i^{-1} = \frac{\langle \delta\sigma_i - \delta\sigma_{i-1},\ J'_s(\sigma_i) - J'_s(\sigma_{i-1})\rangle_{H^1(\Omega)}}{\langle \delta\sigma_i - \delta\sigma_{i-1},\ \delta\sigma_i - \delta\sigma_{i-1}\rangle_{H^1(\Omega)}}.$$
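In a discrete setting, and with the Euclidean inner product standing in for the H¹(Ω) one, the rule above can be sketched as follows; the clamp to [s_min, s_max] anticipates the safeguard quoted in Section 4, and the fallback value when no positive curvature is observed is an illustrative choice.

```python
import numpy as np

def bb_step(delta, delta_prev, g, g_prev, s_min=1.0, s_max=2048.0):
    """Barzilai-Borwein step: 1/s = <dd, dg> / <dd, dd> over the last step,
    where dd = delta_i - delta_{i-1} and dg = J'_s(sigma_i) - J'_s(sigma_{i-1})."""
    dd = delta - delta_prev
    dg = g - g_prev
    curvature = np.dot(dd, dg)
    if curvature <= 0.0:          # no usable curvature information
        return s_max
    return float(np.clip(np.dot(dd, dd) / curvature, s_min, s_max))

# On a quadratic with Hessian diag(1, 4), the BB step is a Rayleigh-type
# quotient lying between the inverse extreme curvatures 1/4 and 1.
H = np.diag([1.0, 4.0])
d_prev, d = np.array([1.0, 1.0]), np.array([0.5, 0.8])
s = bb_step(d, d_prev, H @ d, H @ d_prev, s_min=0.1, s_max=10.0)
```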

This is one basic variant among the many popular BB rules [27, 28]. In the present study, we restrict our attention to this simplest choice. The BB rule is nonmonotone in that the functional value does not decrease monotonically, although monotonicity is practically very desirable. It is known that enforcing monotonicity would destroy the fast convergence behavior of the rule: the occasional increase of the functional value is essential for the observed fast convergence. In order to retain the global convergence of the algorithm, we follow [25, 29] by choosing the step length s to enforce the weak monotonicity

$$\Psi\big(\sigma_0 + S_{s\alpha}(\delta\sigma_i - sJ'_s(\sigma_i))\big) \le \max_{i-M+1 \le k \le i} \Psi(\sigma_k) - \frac{\tau}{2s}\big\|S_{s\alpha}(\delta\sigma_i - sJ'_s(\sigma_i)) - \delta\sigma_i\big\|^2_{H^1(\Omega)}, \qquad (5)$$

where τ is a small number and M ≥ 1 is an integer. The case M = 1 reduces to the standard monotone rule. In our implementation, we use the step size produced by the BB rule as the initial guess at each inner iteration and then decrease it geometrically by a factor q until criterion (5) is satisfied. In addition, the initial step size is constrained to [s_min, s_max], and the iteration is terminated when s_i falls below s_stop, that is, when the iteration is deemed to stagnate. Alternatively, the algorithm is terminated when the maximum number I of iterations is reached. We remark that, upon convergence, the BB rule does not affect the minimizer; it affects only the computational efficiency.
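The acceptance test combined with the geometric step reduction can be sketched as follows; the functional psi is supplied by the caller, the Euclidean norm stands in for the H¹(Ω) one, and the default constants mirror the values quoted in Section 4 but are otherwise illustrative.

```python
import numpy as np

def soft_shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def backtrack(psi, delta, g, s0, alpha, psi_hist,
              tau=1e-5, q=2.0, M=5, s_stop=1e-3):
    """Shrink the trial step s (starting from the BB guess s0) by the factor
    q until the weak monotonicity condition holds against the maximum of the
    last M functional values. Returns the accepted iterate and step."""
    ref = max(psi_hist[-M:])                 # reference over recent iterates
    s = s0
    while s >= s_stop:
        cand = soft_shrink(delta - s * g, s * alpha)
        decrease = tau / (2.0 * s) * np.dot(cand - delta, cand - delta)
        if psi(cand) <= ref - decrease:      # weak monotonicity accepted
            return cand, s
        s /= q                               # geometric step reduction
    return delta, 0.0                        # stagnation: caller stops

# Toy functional Psi(delta) = 0.5*||delta - t||^2 + alpha*||delta||_1:
# the BB guess s0 = 1 is accepted immediately.
t = np.array([1.0, 0.0, -2.0])
alpha = 0.1
psi = lambda d: 0.5 * np.dot(d - t, d - t) + alpha * np.abs(d).sum()
delta = np.zeros(3)
cand, s = backtrack(psi, delta, delta - t, s0=1.0, alpha=alpha,
                    psi_hist=[psi(delta)])
```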

Before proceeding to the numerical results, we would like to add that the convergence of sparse reconstruction algorithms of soft shrinkage type for general linear and nonlinear inverse problems


has been studied in [18, 21]. In [25], the authors discussed the BB rule (with backtracking) for step size selection and proved convergence to a stationary point.

    4. NUMERICAL EXPERIMENTS AND DISCUSSIONS

In this section, we present some numerical results for simulated data to illustrate the features of our approach. We have tested it with a single inclusion at different locations as well as with multiple inclusions of varying conductivity. Its features are well illustrated by the following examples. For the reconstruction, we utilize multiple, say N, data sets and consider the discrepancy

$$J(\sigma) = \frac{1}{2}\sum_{k=1}^N w_k \big\|\gamma F(\sigma)j_k - \phi_k^\delta\big\|^2_{L^2(\Gamma)},$$

where {(j_k, φ_k^δ)}_{k=1}^N are N sets of input currents and respective noisy potential measurements and {w_k}_{k=1}^N are the weights given to each data set. We use the first five sinusoidal currents (N = 5) and put an equal weight (unity) on every data set. The noisy data are generated by adding Gaussian errors to the exact data pointwise as

$$\phi_k^\delta = \phi_k + \varepsilon \max_{x\in\Gamma}|\phi_k|\,\xi,$$

where ξ are standard normal random variables and ε refers to the

relative noise level.

The forward and adjoint problems were discretized using piecewise linear finite elements with 1032 triangular elements (see Figure 1 for an exemplary mesh). The exact data were generated using a different and finer mesh to avoid the inverse crime and were then interpolated to the coarse mesh. The basis {ψ_k} was taken to be the finite element basis. In Algorithm 1, the maximum number I of iterations was taken to be 500. The step size bounds s_min and s_max were set to 1 and 2048, respectively, with s_stop = 1×10⁻³, q = 2 and M = 5, and the parameter τ = 1×10⁻⁵. The choice of the regularization parameter α is very important in all regularization methods, and so it is in sparsity regularization. Here, we shall not delve into this issue; we will use a single value, which proves sufficient, for all the examples by the proposed algorithm. The specific value is manually determined for Example 4.1 and then used for the rest. Nonetheless, to illustrate the sensitivity of the approach with respect to this parameter, we present results for varying values of α for one of the test examples, that is, Example 4.4.
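The pointwise noise model described above can be reproduced in a few lines; the sine trace standing in for an exact boundary potential is an illustrative assumption.

```python
import numpy as np

def add_noise(phi, eps, rng):
    """phi_delta = phi + eps * max|phi| * xi, with xi standard normal,
    added pointwise to one exact potential trace phi."""
    return phi + eps * np.max(np.abs(phi)) * rng.standard_normal(phi.shape)

rng = np.random.default_rng(0)
phi = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
phi_noisy = add_noise(phi, 0.03, rng)   # 3% relative noise, as in this section
```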

For the purpose of comparison, we also present the reconstructions by a conventional algorithm based on H¹ semi-norm regularization [20, 30–32]. The complete algorithm is listed in Appendix B for the reader's convenience. The regularization parameter for the smoothness regularization is manually selected to give a feasible reconstruction. The values of α used for the reconstructions presented below are summarized in Table I. We have tested the approach at several different noise levels, and the results are comparable. Therefore, we present results only for one medium noise level, that is, ε = 3% (relative noise). The resulting levels of absolute noise are given in the first row of Table II.

Figure 1. Mesh for the conductivity: (a) schematic plot of the conductivity and (b) mesh for representing the conductivity.


Table I. The value of the regularization parameter α for each example.

Example      4.1      4.2      4.3      4.4      4.5      4.6
Smoothness   4.64e-3  1.00e-3  1.00e-3  1.00e-2  2.15e-3  2.15e-3
Sparsity     1.23e-4  1.23e-4  1.23e-4  1.23e-4  1.23e-4  1.23e-4

Table II. Numerical results for 3% relative pointwise noise. The notations δ, E, and e stand for the absolute noise level, the residual, and the reconstruction error, respectively.

Example                4.1      4.2      4.3      4.4      4.5      4.6
Absolute noise δ       1.14e-1  7.79e-2  7.35e-2  1.43e-1  7.57e-2  1.13e-1
Residual E (sparsity)  1.20e-1  8.91e-2  9.59e-2  1.41e-1  8.28e-2  1.20e-1
Error e (sparsity)     1.06     2.11     3.15     5.10e-1  1.57     8.75e-1
Error e (smoothness)   1.88     3.29     3.40     7.18e-1  2.79     1.73

Figure 2. Reconstructions for Example 4.1 with ε = 3% noise. Here and below, we use the notation (a) exact, (b) smoothness, and (c) sparsity to denote the true conductivity distribution, the reconstruction by smoothness regularization (cf. Algorithm 2 in Appendix B), and the reconstruction by the proposed reconstruction algorithm (cf. Algorithm 1), respectively.

    4.1. Single circular inclusion

A first example is a single inclusion.

Example 4.1
The true conductivity field consists of a homogeneous background plus one single circular inclusion centered at (0.5, 0.45) with radius 0.3. The conductivities of the background and the inclusion are 1 and 6, respectively.

The reconstructions by the smoothness (Appendix B) and sparsity (Algorithm 1) regularizations are shown in Figure 2(b) and (c), respectively. The former is overall smooth, which is typical of such smoothness regularization. The location of the inclusion is reasonably retrieved: its support is slightly larger than that of the exact one; in particular, it extends towards the center of the domain. More importantly, the magnitude of the inclusion conductivity is significantly underestimated at 2.7 in comparison with the true value 6. In contrast, the sparsity reconstruction is more localized at the correct location, and the estimate 5.7 is close to the true value 6, that is, quantitatively correct. To measure the accuracy of an approximation σ, we compute the L²(Ω) error e = ‖σ − σ†‖_{L²(Ω)}, where σ† is the unknown physical conductivity. The errors for both reconstructions are shown in Table II. As expected, the proposed algorithm gives a smaller error than the smoothness approach.

The convergence of the algorithm is rather steady, although the step size selection does not enforce a monotone decrease of the functional value. As is typical of gradient-type methods, the algorithm first decreases the functional value rapidly and then slows down considerably as the iteration proceeds (see Figure 3(b)). The residual E = (2J(σ_i))^{1/2} exhibits a similar behavior (cf. Figure 3(a)). Upon convergence, for Algorithm 1, the residual E is 1.20×10⁻¹, which is close to the true noise


Figure 3. Convergence behavior of Algorithm 1 for Example 4.1 with ε = 3% noise: (a) residual E, (b) functional Ψ, and (c) L² error e.

Figure 4. Reconstructions for Example 4.2 with ε = 3% noise: (a) exact, (b) smoothness, and (c) sparsity.

level δ = 1.14×10⁻¹ (Table II). This indicates that the discrepancy principle might be adopted for selecting an appropriate regularization parameter α. Overall, the algorithm is quite effective in reducing the error (see Figure 3(c)). We note that the convergence behavior of the algorithm for the other examples is similar and thus is not repeated.

    4.2. Multiple circular inclusions

Example 4.2
The true conductivity field consists of a homogeneous background and four circular inclusions: two centered at (0, 0.6) and (0, −0.6) with radius 0.3 and two centered at (0.75, 0) and (−0.75, 0) with radius 0.2. The conductivities of the background and the inhomogeneities are 1 and 6, respectively.

Multiple inclusions are challenging for some existing numerical algorithms, which can only recover the convex envelope. Nonetheless, both the smoothness and the sparsity regularization give reasonable reconstructions (see Figure 4). In the smoothness reconstruction, the four inclusions are correctly identified; however, their magnitude is severely underestimated, and their shape is grossly smoothed. These drawbacks are partially remedied by the proposed reconstruction technique, with a sharper localization of the inclusions and a more accurate estimate of their conductivity.

    4.3. Toric inclusion

Example 4.3
The true conductivity field consists of a homogeneous background and one toric inclusion centered at (0, 0.15) with inner and outer radii 0.6 and 0.8, respectively. The conductivities of the background and the inhomogeneity are 1 and 6, respectively.


Toric inclusions are among the most challenging objects to recover. The reconstructions are shown in Figure 5. A toric inclusion can barely be discerned in the smooth reconstruction, whereas it stands out clearly in the sparsity reconstruction. Further, the magnitude of the conductivity is reasonably reconstructed. However, the part closer to the center is less accurately resolved, in accordance with the fact that inclusions near the center are more difficult to detect.

    4.4. Two circular inclusions with different magnitudes

Example 4.4
The true conductivity field consists of a homogeneous background and two circular inclusions centered at (0, 0.6) and (0, −0.6), each with radius 0.3. The conductivity of the background is 1, and the conductivities of the upper and lower inclusions are 3 and 0.4, respectively.

The simultaneous presence of inclusions with both higher and lower conductivity than the background is challenging, and the simultaneous detection of such regions is an unsolved issue for some direct reconstruction algorithms. However, it poses no difficulty to either algorithm (see Figure 6). The smoothness reconstruction gives a reasonable indication of the higher and lower conductive regions. However, their conductivity estimates are inaccurate. The proposed approach gives better localization and conductivity estimates. It is observed that the conductivities of both regions are slightly underestimated.

We would like to mention that the proposed method is not sensitive to the regularization parameter α. To illustrate the point, we show in Figure 7 the reconstructions for several different parameter values that span several orders of magnitude. As expected, the details of the reconstructions change slightly as the regularization parameter varies: there are some small spurious oscillations as α decreases to a very small value. Nonetheless, the overall structure of the reconstructions, in terms of conductivity magnitude and inclusion locations, remains fairly stable.


    Figure 5. Reconstructions for Example 4.3 with ε = 3% noise: (a) exact, (b) smoothness, and (c) sparsity.


    Figure 6. Reconstructions for Example 4.4 with ε = 3% noise: (a) exact, (b) smoothness, and (c) sparsity.


    348 B. JIN, T. KHAN AND P. MAASS

    Figure 7. Reconstructions for Example 4.4 with ε = 3% noise by the sparsity reconstruction technique with various regularization parameters (α = 9.50×10⁻⁶, 1.79×10⁻⁵, 3.41×10⁻⁵, 6.45×10⁻⁵, 1.23×10⁻⁴, 2.32×10⁻⁴, 4.40×10⁻⁴, 8.35×10⁻⁴, and 1.58×10⁻³).

    4.5. Discontinuous background

    Example 4.5
    The true conductivity field consists of a discontinuous background and two circular inclusions centered at (0, 0.6) and (0, -0.6). The background conductivities are 1.5 and 1 on the upper-half and lower-half circle, respectively, and those of the inclusions are 6.5 and 6.

    This example illustrates the feasibility of our approach for nonhomogeneous backgrounds. Again, a sharp localization of the inclusions and an accurate estimate of the conductivities are observed in comparison with the exact conductivity and the smoothness reconstruction (see Figure 8). The quality of the reconstruction is comparable with that of the previous examples; thus, the presence of discontinuities in the background does not affect the reconstruction.

    4.6. Imprecise background

    Example 4.6
    The conductivity field consists of an oscillatory background plus one circular inclusion centered at (0.5, 0.45) with radius 0.3. The means of the conductivities of the background and the inclusion are 1 and 6, respectively; however, they are subject to pointwise random oscillations distributed uniformly in [-0.2, 0.2].



    Figure 8. Reconstructions for Example 4.5 with ε = 3% noise: (a) exact, (b) smoothness, and (c) sparsity.


    Figure 9. Reconstructions for Example 4.6 with ε = 3% noise: (a) exact, (b) smoothness, and (c) sparsity.

    This last example showcases the approach for an imprecise background estimate, which occurs in practical scenarios, for example, porous media and concrete. The reconstructions are shown in Figure 9, where we have used the mean value as the initial guess σ₀. Although the background estimate σ₀ is not accurate, the reconstruction deteriorates only slightly compared with that of Example 4.1 in terms of magnitude, and represents a qualitatively acceptable approximation because the location of the inclusion is well retrieved. Hence, the proposed algorithm still yields better reconstructions than the smoothness approach.
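For concreteness, an oscillatory phantom of this kind can be generated as in the following NumPy sketch; the grid resolution, the Cartesian approximation of the disk domain, and all variable names are our own assumptions rather than the authors' finite element setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cartesian grid over [-1, 1]^2, masked to the unit disk (an assumed
# stand-in for the paper's finite element mesh).
n = 64
x, y = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n))
disk = x**2 + y**2 <= 1.0

# Background of mean 1 and a circular inclusion of mean 6 centered at
# (0.5, 0.45) with radius 0.3, as in Example 4.6.
sigma = np.ones((n, n))
inc = (x - 0.5)**2 + (y - 0.45)**2 <= 0.3**2
sigma[inc] = 6.0

# Pointwise random oscillations distributed uniformly in [-0.2, 0.2].
sigma += rng.uniform(-0.2, 0.2, size=(n, n))
sigma[~disk] = 0.0  # values outside the domain play no role
```

Feeding such a perturbed field to the forward solver, while starting the reconstruction from the constant mean values, mimics the imprecise-background scenario of this example.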

    5. CONCLUDING REMARKS

    We have presented a novel image reconstruction technique for electrical impedance tomography based on sparsity concepts. It is adapted from the classical iterative soft shrinkage algorithm, and its main ingredients include a Sobolev smoothing of the estimated gradients, a soft shrinkage iteration, and an adaptive step size selection based on the Barzilai-Borwein (BB) rule. The algorithm is numerically illustrated on several examples with single/multiple convex/nonconvex inclusions and is compared with a conventional reconstruction approach based on smoothness regularization. The results indicate that the proposed technique can yield quantitatively acceptable reconstructions in terms of the locations as well as the conductivity magnitudes of the inclusions, and that these compare favorably with those obtained with the conventional approach.
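As a hedged illustration of those ingredients, the sketch below applies soft shrinkage with Barzilai-Borwein step sizes to a linear least-squares surrogate of the EIT problem; the Sobolev gradient smoothing and the nonlinear forward map are deliberately omitted, and all function names are ours, not the authors'.

```python
import numpy as np

def soft_shrink(x, t):
    """Componentwise soft-thresholding, the proximal map of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_bb(A, b, alpha, iters=200):
    """Iterative soft shrinkage with BB step sizes applied to the linear
    surrogate min_x ||A x - b||^2 / 2 + alpha * ||x||_1."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)                       # gradient of the data fit
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe initial step size
    for _ in range(iters):
        x_new = soft_shrink(x - step * g, step * alpha)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-14:                       # BB1 rule, kept positive
            step = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x
```

For an orthonormal system the iteration reduces to a single shrinkage of the data, which makes the scheme easy to sanity-check before plugging in a linearized EIT forward map.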

    This study represents only a first step in sparsity regularization for EIT imaging, and there are a number of problems deserving further research. First, real-life applications usually require the complete electrode model [32-34]. For a preliminary evaluation of the proposed approach on the electrode model and real experimental data, we refer to [35]. Second, although one single value of the regularization parameter has worked reasonably for all the examples herein, a systematic selection strategy is desirable in practice. This might require knowledge of either the noise level or the expected energy (a certain norm) of the solution. We refer to [36, 37] for some recent theoretical progress in the case of linear inverse problems. Third, it is of immense interest to extend the approach to closely related imaging modalities, for example, diffuse optical tomography. Lastly, a careful numerical analysis of


    the reconstruction technique, for example, convergence analysis, would provide further insights into its excellent performance.

    ACKNOWLEDGEMENTS

    The authors are grateful to two anonymous referees for their constructive comments that have led to a significant improvement in the presentation of the manuscript. The first author was substantially supported by the Alexander von Humboldt Foundation through a postdoctoral researcher fellowship and was partially supported by Award No. KUS-C1-016-04, made by King Abdullah University of Science and Technology (KAUST). The second author thanks the US National Science Foundation for supporting the work on this project by grant DMS 0915214, and the third author would like to thank the German Science Foundation for supporting the work through grant MA 1657/18-1.

    APPENDIX A: PROOF OF THEOREM 3.1

    We first recall the following lemma. Here, we denote by $q'$ the conjugate exponent of $q > 1$, that is, $(1/q) + (1/q') = 1$.

    Lemma A.1
    Let $g \in L^s(\Omega)$; then the mapping $v \mapsto \int_\Omega g v \,\mathrm{d}x$ belongs to $(W^{1,q'}(\Omega))'$ for all $s > qd/(q+d)$, respectively for all $s > 1$, if $1 < q' < d$, respectively if $q' > d$.

    Proof
    If $v \in W^{1,q'}(\Omega)$, then $v \in L^{q'd/(d-q')}(\Omega)$ if $1 < q' < d$, and $v \in L^p(\Omega)$ for any $p < \infty$ if $q' > d$, by the trace theorem and the Sobolev embedding theorem [38]. Thus, the mapping $v \mapsto \int_\Omega g v \,\mathrm{d}x$ belongs to $(W^{1,q'}(\Omega))'$ if $s > (q'd/(d-q'))' = qd/(q+d)$ for $1 < q' < d$, or $s > 1$ otherwise. This completes the proof.

    In particular, $L^s(\Omega) \subset (H^1(\Omega))'$ for $s > 1$ and $s > 6/5$ for $d = 2, 3$, respectively. To prove Theorem 3.1, we need Meyers' celebrated gradient estimate [39, 40], which reads as follows.

    Theorem A.1
    For any $\sigma \in \mathcal{A}$, there exists a constant $Q > 2$, depending only on $d$, $\Omega$, and the bounds of $\mathcal{A}$, which tends to $+\infty$ and to $2$ as the ratio of the lower to the upper bound tends to $1$ and to $0$, respectively, such that for any $q \in (2, Q)$, any $s \in [q - (q/d), +\infty)$, and $j \in L^s(\Gamma) \cap \tilde{H}^{-1/2}(\Gamma)$, the solution $u \in \tilde{H}^1(\Omega)$ to the Neumann problem
    \[
    \nabla \cdot (\sigma \nabla u) = 0 \ \text{in } \Omega
    \qquad \text{and} \qquad
    \sigma \frac{\partial u}{\partial n} = j \ \text{on } \Gamma
    \]
    satisfies the estimate
    \[
    \| u \|_{W^{1,q}(\Omega)} \leq C \| j \|_{L^s(\Gamma)}.
    \]

    Now, we are in a position to state the proof of Theorem 3.1.

    Proof of Theorem 3.1
    First, we consider the adjoint solution $w$ associated with the residual $F(\sigma)j - U^\delta$ (see Theorem 2.1). By the Sobolev embedding and trace theorems, $H^1(\Omega)$ embeds continuously into $L^2(\Gamma)$, and $U^\delta \in L^2(\Gamma)$ holds. Therefore, $F(\sigma)j - U^\delta \in L^2(\Gamma)$, and appealing again to Theorem A.1,
    \[
    w \in W^{1,q}(\Omega) \quad \text{for any } q \in
    \begin{cases}
    (2, \min(Q, 4)), & d = 2, \\
    (2, \min(Q, 3)), & d = 3.
    \end{cases}
    \]
    In order to show the theorem, we only need to show that
    \[
    \nabla F(\sigma) j \cdot \nabla w \in (H^1(\Omega))'.
    \]


    Consequently, by Lemma A.1, it suffices to show that the term $\nabla F(\sigma) j \cdot \nabla w$ belongs to $L^s(\Omega)$ for $s > 1$ and $s > 6/5$ for $d = 2$ and $d = 3$, respectively.

    We discuss the two cases $d = 2$ and $d = 3$ separately. In the case $d = 2$, we have $\nabla F(\sigma) j \in L^2(\Omega)$ by noting $j \in \tilde{H}^{-1/2}(\Gamma)$, and $\nabla w \in L^r(\Omega)$ for some $r > 2$ from the earlier discussion; thus, the condition $\nabla F(\sigma) j \cdot \nabla w \in L^s(\Omega)$ for some $s > 1$ is verified. In the case $d = 3$, it suffices to ensure that both $\nabla F(\sigma) j$ and $\nabla w$ belong to $L^{12/5}(\Omega)$, which, by Theorem A.1, is satisfied for $12/5 < Q$ and $s > 8/5$. The former is verified by Theorem A.1, since $Q \to +\infty$ as the ratio of the bounds tends to $1$.

    Remark A.1
    Under the conditions of Theorem 3.1 with $s$ sufficiently large, the smoothed gradient $J'_s$ is actually an element of $C(\overline{\Omega})$, the space of continuous functions on $\overline{\Omega}$, as a consequence of the Sobolev embedding theorem [38].

    APPENDIX B: BENCHMARK ALGORITHM

    The benchmark algorithm consists of minimizing the Tikhonov functional
    \[
    J(\sigma) = \| U^\delta - F(\sigma) j \|^2_{L^2(\Gamma)} + \alpha \| \nabla \sigma \|^2_{L^2(\Omega)}.
    \]

    To find a minimizer of the functional $J$, we employ a Gauss–Newton-type method. Specifically, the forward operator $F(\sigma) j$ is linearized around an initial guess $\sigma_0$, that is,
    \[
    F(\sigma) j = F(\sigma_0) j + H (\sigma - \sigma_0) + \text{h.o.t.},
    \]

    where $H = \nabla_\sigma F(\sigma_0) j$ is the Jacobian of the forward mapping with respect to the parameter $\sigma$, evaluated at $\sigma_0$. Upon substituting it into the functional $J$ and ignoring the higher-order term (h.o.t.), we obtain the linearized problem
    \[
    \min_{\sigma \in \mathcal{A}} \; \| H (\sigma - \sigma_0) - (U^\delta - F(\sigma_0) j) \|^2_{L^2(\Gamma)} + \alpha \| \nabla \sigma \|^2_{L^2(\Omega)}.
    \]

    The solution is explicitly given by the linear system
    \[
    (H^T H + \alpha \nabla^T \nabla) \sigma = H^T (U^\delta - F(\sigma_0) j) + H^T H \sigma_0.
    \]
    This system can be solved directly to obtain a (hopefully) better approximation. Then, we iteratively update the reconstruction by taking the solution as the new linearization point. In practice, the iterative procedure achieves convergence within a few iterations. The complete procedure is listed in Algorithm 2. The stopping criterion can be based on monitoring the relative change of consecutive iterates.

    Algorithm 2 Reconstruction algorithm based on smoothness
    1: Set $\alpha$ and the initial guess $\sigma_0$
    2: for $i = 1, \ldots, I$ do
    3:   Compute the Jacobian $H_i = \nabla_\sigma F(\sigma_{i-1}) j$;
    4:   Update to $\sigma_i$ by solving the linearized problem
         \[
         (H_i^T H_i + \alpha \nabla^T \nabla) \sigma = H_i^T (U^\delta - F(\sigma_{i-1}) j) + H_i^T H_i \sigma_{i-1}.
         \]
    5:   Check a stopping criterion.
    6: end for
    7: Output an approximate minimizer $\sigma$.
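For a discretized model, a single pass of steps 3-4 reduces to one regularized linear solve. The sketch below assumes the Jacobian H, the residual, and a discrete gradient matrix G are supplied by an external forward solver (not shown); all names are illustrative, not the authors' implementation.

```python
import numpy as np

def gauss_newton_step(H, residual, G, sigma_prev, alpha):
    """One smoothness-regularized Gauss-Newton update: solve
    (H^T H + alpha * G^T G) sigma = H^T residual + H^T H sigma_prev,
    where G plays the role of the discrete gradient in the penalty."""
    lhs = H.T @ H + alpha * (G.T @ G)
    rhs = H.T @ residual + H.T @ (H @ sigma_prev)
    return np.linalg.solve(lhs, rhs)
```

Iterating this step with a freshly assembled Jacobian and residual at each iterate, and stopping on the relative change of consecutive iterates, reproduces the structure of Algorithm 2.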


    REFERENCES

    1. Holder DS (ed.). Electrical Impedance Tomography: Methods, History and Applications. Institute of Physics Publishing: Bristol, 2005.
    2. Bayford RH. Bioimpedance tomography (electrical impedance tomography). Annual Review of Biomedical Engineering 2006; 8:63–91.
    3. Daily W, Ramirez A, LaBrecque D, Nitao J. Electrical resistivity tomography of vadose water movement. Water Resources Research 1992; 28(5):1429–1442.
    4. Isaksen O, Dico AS, Hammer EA. A capacitance-based tomography system for interface measurement in separation vessels. Measurement Science and Technology 1994; 5(10):1262–1271.
    5. Brodowicz K, Maryniak L, Diakowski T. Application of capacitance tomography for pneumatic conveying processes. In Proc. 1st ECAPT Conf. (European Concerted Action on Process Tomography), Manchester, 26–29 March 1992, Beck MS, Campogrande E, Morris M, Williams RA, Waterfall RC (eds): Southampton, 1992; 361–368.
    6. Wexler A, Fry B, Neuman MR. Impedance-computed tomography algorithm and system. Applied Optics 1985; 24(23):3985–3992.
    7. Yorkey TJ, Webster JG, Tompkins WJ. Comparing reconstruction algorithms for electrical impedance tomography. IEEE Transactions on Biomedical Engineering 1987; 34(11):843–852.
    8. Cheney M, Isaacson D, Newell JC, Simske S, Goble J. NOSER: An algorithm for solving the inverse conductivity problem. International Journal of Imaging Systems and Technology 1990; 2(2):66–75.
    9. Lukaschewitsch M, Maass P, Pidcock M. Tikhonov regularization for electrical impedance tomography on unbounded domains. Inverse Problems 2003; 19(3):585–610.
    10. Rondi L, Santosa F. Enhanced electrical impedance tomography via the Mumford–Shah functional. ESAIM: Control, Optimisation and Calculus of Variations 2001; 6:517–538.
    11. Chung ET, Chan TF, Tai X-C. Electrical impedance tomography using level set representation and total variational regularization. Journal of Computational Physics 2005; 205(1):357–372.
    12. Kirsch A, Grinberg N. The Factorization Method for Inverse Problems. Oxford University Press: Oxford, 2008.
    13. Isaacson D, Mueller JL, Newell JC, Siltanen S. Reconstructions of chest phantoms by the D-bar method for electrical impedance tomography. IEEE Transactions on Medical Imaging 2004; 23(7):821–828.
    14. Alessandrini G, Rosset E. The inverse conductivity problem with one measurement: bounds on the size of the unknown object. SIAM Journal on Applied Mathematics 1998; 58(4):1060–1071.
    15. Kaipio J, Somersalo E. Statistical and Computational Inverse Problems. Springer-Verlag: New York, 2005.
    16. Matsuura K, Okabe Y. Selective minimum-norm solution of the biomagnetic inverse problem. IEEE Transactions on Biomedical Engineering 1995; 42(6):608–615.
    17. Uutela K, Hämäläinen M, Somersalo E. Visualization of magnetoencephalographic data using minimum current estimates. NeuroImage 1999; 10(2):173–180.
    18. Daubechies I, Defrise M, De Mol C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics 2004; 57(11):1413–1457.
    19. Martin T, Idier J. A FEM-based nonlinear MAP estimator in electrical impedance tomography. 1997 International Conference on Image Processing (ICIP'97), Volume 2, 1997; 684–687.
    20. Kaipio JP, Kolehmainen V, Somersalo E, Vauhkonen M. Statistical inversion and Monte Carlo sampling methods in electrical impedance tomography. Inverse Problems 2000; 16(5):1487–1522.
    21. Bredies K, Lorenz DA, Maass P. A generalized conditional gradient method and its connection to an iterative shrinkage method. Computational Optimization and Applications 2009; 42(2):173–193.
    22. Jin B, Maass P. An analysis of electrical impedance tomography with applications to Tikhonov regularization. DFG-SPP 1324, Preprint 70, 2010. (Available from: http://www.dfg-spp1324.de/download/preprints/preprint070.pdf) [20 December 2010].
    23. Borcea L. Electrical impedance tomography. Inverse Problems 2002; 18(6):R99–R136.
    24. Bonesky T, Bredies K, Lorenz DA, Maass P. A generalized conditional gradient method for nonlinear operator equations with sparsity constraints. Inverse Problems 2007; 23(5):2041–2058.
    25. Wright SJ, Nowak RD, Figueiredo M. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing 2009; 57(7):2479–2493.
    26. Neuberger JW. Sobolev Gradients and Differential Equations. Springer-Verlag: Berlin, 1997.
    27. Barzilai J, Borwein JM. Two-point step size gradient methods. IMA Journal of Numerical Analysis 1988; 8(1):141–148.
    28. Dai Y-H, Hager WW, Schittkowski K, Zhang H. The cyclic Barzilai–Borwein method for unconstrained optimization. IMA Journal of Numerical Analysis 2006; 26(3):604–627.
    29. Grippo L, Lampariello F, Lucidi S. A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 1986; 23(4):707–716.
    30. Lionheart WRB. EIT reconstruction algorithms: pitfalls, challenges and recent developments. Physiological Measurement 2004; 25(1):125–142.
    31. Soleimani M, Lionheart WRB. Nonlinear image reconstruction for electrical capacitance tomography using experimental data. Measurement Science and Technology 2005; 16(10):1987–1996.
    32. Karhunen K, Seppänen A, Lehikoinen A, Monteiro PJM, Kaipio JP. Electrical resistance tomography imaging of concrete. Cement and Concrete Research 2010; 40(1):137–145.


    33. Somersalo E, Cheney M, Isaacson D. Existence and uniqueness for electrode models for electric current computed tomography. SIAM Journal on Applied Mathematics 1992; 52(4):1023–1040.
    34. Borsic A, Graham BM, Adler A, Lionheart WRB. In vivo impedance imaging with total variation regularization. IEEE Transactions on Medical Imaging 2010; 29(1):44–54.
    35. Gehre M, Kluth T, Lipponen A, Jin B, Seppänen A, Kaipio J, Maass P. Sparsity reconstruction in electrical impedance tomography: an experimental evaluation. Journal of Computational and Applied Mathematics 2011, in press.
    36. Ito K, Jin B, Zou J. A new choice rule for regularization parameters in Tikhonov regularization. Applicable Analysis 2010, in press.
    37. Jin B, Lorenz DA. Heuristic parameter-choice rules for convex variational regularization based on error estimates. SIAM Journal on Numerical Analysis 2010; 48(3):1208–1229.
    38. Evans LC, Gariepy RF. Measure Theory and Fine Properties of Functions. CRC Press: Boca Raton, 1992.
    39. Meyers NG. An L^p-estimate for the gradient of solutions of second order elliptic divergence equations. Annali della Scuola Normale Superiore di Pisa, Classe di Scienze, 3e série 1963; 17:189–206.
    40. Gallouet T, Monier A. On the regularity of solutions to elliptic equations. Rendiconti di Matematica e delle sue Applicazioni, Serie VII 1999; 19(4):471–488 (2000).
