

Research Article: Backtracking-Based Simultaneous Orthogonal Matching Pursuit for Sparse Unmixing of Hyperspectral Data

Fanqiang Kong,1 Wenjun Guo,1 Yunsong Li,2 Qiu Shen,1 and Xin Liu1

1College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an 710071, China

Correspondence should be addressed to Fanqiang Kong; [email protected]

Received 12 November 2014; Revised 3 April 2015; Accepted 3 April 2015

Academic Editor: Kishin Sadarangani

Copyright © 2015 Fanqiang Kong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Sparse unmixing is a promising semisupervised approach that assumes the observed signatures of a hyperspectral image can be expressed as a linear combination of only a few spectral signatures (endmembers) from an available spectral library. The simultaneous orthogonal matching pursuit (SOMP) algorithm is a typical simultaneous greedy algorithm for sparse unmixing, which finds the optimal subset of signatures for the observed data from a spectral library. However, the number of endmembers selected by SOMP is still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers. This paper presents a variant of SOMP, termed backtracking-based SOMP (BSOMP), for sparse unmixing of hyperspectral data. As an extension of SOMP, BSOMP incorporates a backtracking technique to test the reliability of the previously chosen endmembers and then deletes the unreliable ones. Through this modification, BSOMP can select the true endmembers more accurately than SOMP. Experimental results on both simulated and real data demonstrate the effectiveness of the proposed algorithm.

1. Introduction

With the rapid development of space technology, hyperspectral remote sensing imagery has gained more and more attention in many application domains, such as environmental monitoring, target detection, material identification, mineral exploration, and military surveillance. However, due to the low spatial resolution of the hyperspectral imaging sensor, each pixel in a hyperspectral image often contains a mixture of several different materials. To deal with the problem of spectral mixing, hyperspectral unmixing is used to decompose each pixel's spectrum in order to identify and quantify the fractional abundances of the pure spectral signatures, or endmembers, in each mixed pixel [1, 2]. There are two basic models of hyperspectral unmixing used to analyze the mixed-pixel problem: the linear mixture model and the nonlinear mixture model. The linear mixture model assumes that each mixed pixel can be expressed as a linear combination of endmembers weighted by their corresponding abundances [3]. The linear mixture model has been widely applied for spectral unmixing due to its computational tractability and flexibility. Under the linear mixture model, traditional linear spectral unmixing algorithms based on geometry [4-8], statistics [5, 9], and nonnegative matrix factorization [10-12] have been proposed. However, some of these methods [10-12] are unsupervised and can extract virtual endmembers with no physical meaning. In [5], the presence in the data of at least one pure pixel per endmember is assumed. If the pure pixel assumption is not fulfilled because of inadequate spatial resolution and the microscopic mixture of distinct materials, the unmixing results will not be accurate and the unmixing process becomes a rather challenging task.

Sparse unmixing, as a semisupervised method, has been proposed to overcome this challenge. It assumes that the observed image can be expressed as a linear combination of spectral signatures from a spectral library that is known in advance [3, 13]. However, because the number of spectral signatures in the spectral library is much larger than the number of endmembers in the hyperspectral image, the sparse unmixing model is combinatorial, and it is difficult to find a unique, stable, and optimal solution. Fortunately, sparse linear regression techniques [14, 15] can be used to solve it.

Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2015, Article ID 842017, 17 pages. http://dx.doi.org/10.1155/2015/842017


Several sparse regression techniques, such as greedy algorithms (GAs) [16-19] and convex relaxation methods [20-23], are usually adopted to solve the sparse unmixing problem. Convex relaxation methods such as SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22] use the alternating direction method of multipliers to efficiently solve the ℓ1-norm sparse regression problem, which decomposes a complex problem into several simpler ones. Convex relaxation methods can obtain the global sparse optimization solution and are more sophisticated than the GAs; however, they are also far more computationally demanding. The GAs, such as OMP [16], SP [17], and CGP [18, 19], select in each iteration one or more potential endmembers from the spectral library, namely those with the largest correlation with the current residual of the input signal. The GAs can obtain an approximate solution of the ℓ0-norm problem without smoothing the penalty function and have low computational complexity. However, the endmember selection criterion of the GAs is not optimal in the sense of minimizing the residual of the new approximation: a nonexisting endmember, once selected into the support set, will never be deleted. The GAs therefore tend to be trapped in local optima and are likely to miss some of the actual endmembers. To solve the local-optimum problem of the GAs, several representative simultaneous greedy algorithms (SGAs) have been presented, including simultaneous subspace pursuit (SSP) [24], subspace matching pursuit (SMP) [24], and simultaneous orthogonal matching pursuit (SOMP) [25]. These SGAs divide the whole hyperspectral image into several blocks and pick some potential endmembers from the spectral library in each block. The endmembers picked in each block are then combined into the endmember set of the whole hyperspectral data. Finally, the abundances are estimated using the whole hyperspectral data with the obtained endmember set. The SGAs have the same low computational complexity as the GAs and can find the actual endmembers far more accurately. The SGAs adopt a block-processing strategy to efficiently mitigate the local-optimum problem, but the endmembers picked across all the blocks are not all actual endmembers, and the nonexisting endmembers affect the estimation of the abundances corresponding to the actual endmembers. To address this drawback of the SGAs, RSFoBa [26] combines a forward greedy step and a backward greedy step, which selects the actual endmembers more accurately than the SGAs.
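The greedy selection-and-refit loop just described can be sketched in a few lines. The following is an illustrative OMP implementation, not the authors' code; the variable names, the fixed atom budget, and the tolerance are our own assumptions:

```python
import numpy as np

def omp(A, y, max_atoms, tol=1e-6):
    """Orthogonal matching pursuit sketch: greedily pick the library
    column most correlated with the residual, then refit by least
    squares on the current support."""
    m = A.shape[1]
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(max_atoms):
        # index of the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(m)
    x[support] = coef
    return x, support
```

Note that once an atom enters `support` it is never removed; this is exactly the limitation of the GAs discussed above and the motivation for the backtracking step introduced later.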

Inspired by the existing SGA methods, we propose a sparse unmixing algorithm, termed backtracking-based simultaneous orthogonal matching pursuit (BSOMP), in this paper. Similar to SOMP and SMP, it uses a block-processing strategy: it divides the whole hyperspectral image into several blocks, picks some potential endmembers from the spectral library in each block, and adds them to the estimated endmember set. Furthermore, BSOMP incorporates a backtracking process to test the reliability of the previously chosen endmembers and then deletes the unreliable endmembers from the estimated endmember set in each iteration. Through this modification, BSOMP can identify the true endmember set more accurately than the other considered SGA methods.

The remainder of the paper is organized as follows. Section 2 introduces the simultaneous sparse unmixing model. In Section 3, we present the proposed BSOMP algorithm and give a theoretical analysis of it. Section 4 provides a quantitative comparison between BSOMP and previous sparse unmixing algorithms using both simulated and real hyperspectral data. Finally, we conclude in Section 5.

2. Simultaneous Sparse Unmixing Model

The linear sparse unmixing model assumes that the observed spectrum of a mixed pixel is a linear combination of a few spectral signatures present in a known spectral library. Let y ∈ R^L denote the measured spectrum vector of a mixed pixel with L bands, and let A ∈ R^(L×m) denote a spectral library with L bands and m signatures. Then the linear sparse unmixing model can be expressed as follows [23]:

y = Ax + n, (1)

where x ∈ R^m denotes the fractional abundance vector with regard to the library A and n ∈ R^L is the noise. Considering physical constraints, the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) are imposed on the linear sparse model as follows:

x ≥ 0,   Σ_(i=1)^m x_i = 1, (2)

where x_i is the i-th element of x.

The sparse unmixing problem can then be expressed as follows:

min_x ||x||_0   s.t. ||y − Ax||_2^2 ≤ δ, x ≥ 0, (3)

where ||x||_0 (called the ℓ0 norm) denotes the number of nonzero entries in x and δ ≥ 0 is the tolerated error due to the noise and model error. It is worth mentioning that we do not explicitly add the ASC in (3), because hyperspectral libraries generally contain only nonnegative components and the nonnegativity of the sources automatically imposes a generalized ASC [23].

The simultaneous sparse unmixing model assumes that several input signals can be expressed as different linear combinations of the same elementary signals. This means that all the pixels in the hyperspectral image are constrained to share the same subset of endmembers selected from the spectral library. Then we can use SGA methods for sparse unmixing, and the sparse unmixing model in (1) becomes

Y = AX + N, (4)

where Y ∈ R^(L×K) denotes the hyperspectral data matrix with L bands and K mixed pixels, A ∈ R^(L×m) denotes the L × m spectral library, X ∈ R^(m×K) denotes the fractional abundance matrix, each column of which corresponds to the abundance fractions of one mixed pixel, and N ∈ R^(L×K) is the noise matrix.

Part 1 (SOMP)
(1) Initialize hyperspectral data Y and spectral library A.
(2) Divide hyperspectral data Y into several blocks Y = [Y_1, Y_2, ..., Y_B] and initialize the index set S_SOMP = ∅.
(3) For each block b do
(4) Set index set S_b = ∅ and iteration counter l = 1. Initialize the residual of block b: R^0 = Y_b.
(5) While stopping criterion 1 has not been met do
(6) Compute the index of the member of A best correlated with the current residual: j = argmax_i ||(R^(l−1))^T A_i||_2, where A_i is the i-th column of A.
(7) Update the support set S_b = S_b ∪ {j}.
(8) Compute X^l = (A*_(S_b) A_(S_b))^(−1) A*_(S_b) Y_b, where A_(S_b) is the matrix containing the columns of A indexed by S_b.
(9) Update the residual R^l = Y_b − A_(S_b) X^l.
(10) l ← l + 1.
(11) End while
(12) Set S_SOMP = S_SOMP ∪ S_b.
(13) End for
Part 2 (Backtracking process)
(14) Initialize hyperspectral data Y, the index set S_t = S_SOMP, and A_t = A_(S_SOMP), where A_t is the matrix containing the columns of A indexed by S_t.
(15) While stopping criterion 2 has not been met do
(16) Compute the solution X_t = (A*_t A_t)^(−1) A*_t Y.
(17) Find the member of A_t having the lowest abundance: index ← argmin_j ||X_t^j||, where X_t^j is the j-th row of X_t.
(18) Remove the member having the lowest fractional abundance from the index set, S_t ← S_t \ {index}, and from the endmember set, A_t ← A_t \ A_index.
(19) t ← t + 1.
(20) End while
Part 3 (Abundance estimation)
(21) Estimate the abundances using the original hyperspectral data matrix and the endmember set under the nonnegativity constraint: X ← argmin_X ||A_t X − Y||_F subject to X ≥ 0.

Algorithm 1: BSOMP for hyperspectral sparse unmixing.

Under the simultaneous sparse unmixing model, the simultaneous sparse unmixing problem can be expressed as follows:

simultaneous sparse unmixing problem can be expressed asfollows

min119883

119883rowminus0

st 119884 minus 1198601198832

119865le 120575

119883 ge 0

(5)

where 119883119865denotes the Frobenius norm of 119883 119883rowminus0 is

the number of nonzero rows in matrix 119883 [25] and 120575 is thetolerated error due to the noise and model error

It should be noted that the model in (5) is reasonablebecause there should be only a few rows with nonzero entriesin the abundance matrix in light of only a small number ofendmembers in the hyperspectral image compared with thedimensionality of the spectral library [22]

3. Backtracking-Based SOMP

In this section, we first present our new algorithm, BSOMP, for sparse unmixing of hyperspectral data. We then give a theoretical analysis and a convergence theorem for the proposed algorithm.

3.1. Statement of Algorithm. The whole process of using BSOMP for sparse unmixing of hyperspectral data is summarized in Algorithm 1. The algorithm includes three main parts: SOMP for endmember selection, backtracking processing, and abundance estimation. In the first part, we adopt a block-processing strategy for SOMP to efficiently select endmembers. This strategy divides the whole hyperspectral image into several blocks; then, in each block, SOMP picks several potential endmembers from the spectral library and adds them to the estimated endmember set. In the second part, we utilize a backtracking strategy to remove some wrongly chosen endmembers from the estimated endmember set and identify the true endmember set more accurately. The backtracking processing is halted when the maximum total correlation between an endmember in the spectral library and the residual drops below the threshold δ. Finally, the abundances are estimated using the obtained endmember set under the nonnegativity constraint.
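The three parts described above can be sketched as follows. This is a simplified, hypothetical rendering of Algorithm 1, not the authors' implementation: the fixed block size, fixed iteration and removal counts (in place of the paper's stopping criteria), and the unconstrained least-squares abundance step (the paper enforces X ≥ 0) are all our own assumptions:

```python
import numpy as np

def bsomp(Y, A, block_size=10, iters_per_block=5, n_remove=2):
    """Sketch of BSOMP: SOMP per pixel block, then backtracking removal,
    then abundance estimation on the pruned endmember set."""
    L, K = Y.shape
    support = set()
    # Part 1: SOMP on each block of pixels
    for start in range(0, K, block_size):
        Yb = Y[:, start:start + block_size]
        Sb, R = [], Yb.copy()
        for _ in range(iters_per_block):
            # library column best correlated with the block residual
            j = int(np.argmax(np.linalg.norm(R.T @ A, axis=0)))
            if j not in Sb:
                Sb.append(j)
            Xl, *_ = np.linalg.lstsq(A[:, Sb], Yb, rcond=None)
            R = Yb - A[:, Sb] @ Xl
        support |= set(Sb)
    # Part 2: backtracking -- drop members with the smallest abundance
    S = sorted(support)
    for _ in range(n_remove):
        if len(S) <= 1:
            break
        Xt, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
        weakest = int(np.argmin(np.linalg.norm(Xt, axis=1)))
        S.pop(weakest)
    # Part 3: abundance estimation on the pruned endmember set
    X, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
    return S, X
```

The backtracking loop is what distinguishes this sketch from plain SOMP: an endmember admitted in Part 1 can still be discarded in Part 2 if its fitted abundance is negligible over the whole image.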

In the following paragraphs, we provide some definitions and then give two stopping conditions for the BSOMP algorithm.


Definition 1. If x is a vector, where x_i is the i-th element of x, the ℓp norm of x is defined as

||x||_p = (Σ_i |x_i|^p)^(1/p) for 1 ≤ p < ∞,
||x||_∞ = max_i |x_i|. (6)

Definition 2. The (p, q) operator norm of a matrix A is defined as

||A||_(p,q) = max_(x≠0) ||Ax||_q / ||x||_p. (7)

Several of the (p, q) operator norms can be computed easily using the following lemma [25, 27].

Lemma 3. (1) The (1, q) operator norm is the maximum ℓq norm of any column of A. (2) The (2, 2) operator norm is the maximum singular value of A. (3) The (p, ∞) operator norm is the maximum ℓp norm of any row of A.
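Lemma 3 is easy to check numerically for a concrete matrix; the following small sketch is illustrative and not part of the paper:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])

# (1, q) operator norm = maximum l_q norm of any column (here q = 2)
norm_1_2 = max(np.linalg.norm(A[:, j]) for j in range(A.shape[1]))

# (2, 2) operator norm = maximum singular value of A
norm_2_2 = np.linalg.svd(A, compute_uv=False)[0]

# (p, inf) operator norm = maximum l_p norm of any row (here p = 2)
norm_2_inf = max(np.linalg.norm(A[i, :]) for i in range(A.shape[0]))
```

For this matrix the column-wise value is √10, the row-wise value is √9.25, and the spectral norm lies between the largest column norm and the Frobenius norm, as the lemma and standard norm inequalities require.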

We give a stopping condition for the first part of the BSOMP algorithm:

| ||R^l||_F − ||R^(l−1)||_F | / ||R^(l−1)||_F ≤ σ, (8)

where ||R^l||_F is the Frobenius norm of the residual corresponding to the l-th iteration and σ is a threshold.

For the second part of the BSOMP algorithm, we give a stopping condition for deciding when to halt the iteration:

| ||A* R_t||_(∞,∞) − ||A* R_(t−1)||_(∞,∞) | / ||A* R_t||_(∞,∞) ≤ δ, (9)

where R_t is the residual corresponding to the t-th iteration, A* denotes the conjugate transpose of A, ||A* R_t||_(∞,∞) is the maximum total correlation between an endmember in the spectral library A and the residual R_t, and δ is a threshold.

3.2. Theoretical Analysis. In this section, we introduce some definitions and then give a theoretical analysis of the BSOMP algorithm.

Definition 4. The coherence μ of a spectral library A equals the maximum correlation between two distinct spectral signatures [28]:

μ = max_(k≠h) |⟨A_k, A_h⟩|, (10)

where A_k is the k-th column of A. If the coherence parameter is small, each pair of spectral signatures is nearly orthogonal.

Definition 5. The cumulative coherence [29, 30] is defined as

μ(k) = max_(|Λ|≤k) max_(j∉Λ) Σ_(i∈Λ) |⟨A_i, A_j⟩|, (11)

where the index set Λ ⊆ S_all and S_all is the support of all the spectral signatures in the spectral library A. The cumulative coherence measures the maximum total correlation between a fixed spectral signature and k distinct spectral signatures. In particular, μ(0) = 0.
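For a library with ℓ2-normalized columns, both quantities reduce to simple operations on the Gram matrix. The following sketch is illustrative and not from the paper:

```python
import numpy as np

def coherence(A):
    """Maximum absolute inner product between two distinct columns
    of A (columns assumed l2-normalized)."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

def cumulative_coherence(A, k):
    """mu(k): for each fixed column j, sum its k largest absolute
    correlations with other columns; return the maximum over j."""
    if k == 0:
        return 0.0
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    # per column, keep the k largest off-diagonal correlations
    return float(np.sort(G, axis=0)[-k:, :].sum(axis=0).max())
```

Note that μ(1) coincides with the coherence of Definition 4, and μ(k) is nondecreasing in k.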

Definition 6. Given the hyperspectral image data matrix Y ∈ R^(L×K) and the available spectral library matrix A ∈ R^(L×m):

S_opt is an index set which lists at most T endmembers present in the hyperspectral image scene.
A_opt is the optimal submatrix of the spectral library A, which contains the T columns indexed in S_opt.
X_opt is the optimal abundance matrix corresponding to A_opt.
Y_opt is a T-term best approximation of Y, with Y_opt = A_opt X_opt.
S_t is an index set which lists k endmembers after t iterations; we assume S_opt ⊆ S_t.
A_t is the submatrix of the spectral library A which contains the k columns indexed in S_t.
X_t is the abundance matrix corresponding to A_t.
Y_t is the reconstruction of Y by the BSOMP algorithm after t iterations (assuming that the BSOMP algorithm selects one nonexisting endmember to remove from the index set S_t in each iteration), with Y_t = A_t X_t.
R_t is the residual error of Y after t iterations, with R_t = Y − Y_t.

Theorem 7. Suppose that the best approximation Y_opt of the hyperspectral image data matrix Y over the index set S_opt satisfies the error bound ||A*(Y − Y_opt)||_(∞,∞) ≤ τ, that the first part of BSOMP has selected k potential endmembers from the spectral library to constitute the initial index set S_t, and that S_opt ⊆ S_t. After iteration t of the second part of BSOMP, halt the second part of the algorithm if

||A* R_t||_(∞,∞) ≤ [(1 − μ(T)) / (1 − μ(T) − μ(k − T))] τ, (12)

where k ≥ T. If this inequality fails, then BSOMP must remove another nonexisting endmember in iteration (t + 1).

The proof of Theorem 7 is given in the Appendix. If the second part of the BSOMP algorithm terminates at the end of iteration t, we may conclude that the BSOMP algorithm has removed t nonexisting endmembers from S_t and has chosen the k optimal endmembers.

4. Experimental Results

In this section, we use both simulated and real hyperspectral data to demonstrate the performance of the proposed algorithm. The BSOMP algorithm is compared with three SGAs (SOMP [25], SMP [24], and RSFoBa [26]) and three convex relaxation methods (i.e., SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22]). All the considered algorithms take the abundance nonnegativity constraint into account. The TV regularizer used in SUnSAL-TV is a nonisotropic one. For the RSFoBa algorithm, the ℓ∞ norm is considered in our experiment. As mentioned above, when the spectral library is overcomplete the GAs behave worse than the SGAs and the convex relaxation methods; thus we do not include the GAs in our experiment.

[Figure 1: five reflectance spectra plotted against wavelength (0.5-2.5 μm): Axinite HS342.3B, Samarium Oxide GDS36, Chrysocolla HS297.3B, Niter GDS43 (K-saltpeter), and Anthophyllite HS286.3B.]

Figure 1: Five spectral signatures from the USGS library used in our simulated data experiments. The title of each subimage denotes the mineral corresponding to the signature.

In the simulated data experiments, we evaluated the performance of the sparse unmixing algorithms under different SNR levels of noise. The signal-to-reconstruction error (SRE) [21] is used to measure the quality of the reconstructed abundances of the spectral endmembers:

SRE = E[||X||_2^2] / E[||X − X̂||_2^2],
SRE (dB) = 10 log_10(SRE), (13)

where X denotes the true abundances of the spectral endmembers and X̂ denotes the reconstructed abundances of the spectral endmembers.
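The SRE of (13) translates directly into code; in the minimal sketch below, estimating the expectations by averaging over pixels is our own assumption:

```python
import numpy as np

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error in dB:
    SRE = E[||x||_2^2] / E[||x - x_hat||_2^2], with the expectation
    estimated as the mean over pixels (columns)."""
    signal = np.mean(np.sum(X_true ** 2, axis=0))          # mean abundance energy per pixel
    error = np.mean(np.sum((X_true - X_est) ** 2, axis=0))  # mean reconstruction error per pixel
    return 10.0 * np.log10(signal / error)
```

A perfect reconstruction drives the SRE toward infinity, while larger abundance errors push it down, so higher SRE (dB) means better unmixing.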

4.1. Evaluation with Simulated Data. In the simulated experiments, the United States Geological Survey (USGS) digital spectral library (splib06a) [31] is used to build the spectral library A. The reflectance values of 498 spectral signatures are measured over 224 spectral bands distributed uniformly in the interval 0.4-2.5 μm (A ∈ R^(224×498)). Fifteen spectral signatures are chosen from the spectral library to generate our simulated data. Figure 1 shows five spectral signatures used for all the simulated data experiments. The other ten spectral signatures, not displayed in the figure, are Monazite HS255.3B, Zoisite HS347.3B, Rhodochrosite HS67, Neodymium Oxide GDS34, Pigeonite HS199.3B, Meionite WS700Hlsep, Spodumene HS210.3B, Labradorite HS17.3B, Grossular WS484, and Wollastonite HS348.3B.

In the Simulated Data 1 experiment, we generated seven datacubes of 50 × 50 pixels and 224 bands per pixel, each containing a different number of endmembers: 3, 5, 7, 9, 11, 13, and 15. In each simulated pixel, the fractional abundances of the endmembers were randomly generated following a Dirichlet distribution [23]. Note that there was no pure pixel in Simulated Data 1, and the fractional abundances of the endmembers were less than 0.8. After Simulated Data 1 was generated, Gaussian white noise or correlated noise was added at different levels of the signal-to-noise ratio (SNR): 20, 25, 30, 35, 40, 45, and 50 dB. The Gaussian white noise was generated using the awgn function in MATLAB. The correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.

[Figure 2: plot of SRE (dB) versus endmember number (4-14) for CLSUnSAL, SUnSAL, SMP, SOMP, RSFoBa, BSOMP, and SUnSAL-TV.]

Figure 2: Results on Simulated Data 1 with 30 dB white noise. SRE as a function of endmember number.
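The noise-addition step can be sketched in NumPy as an illustrative stand-in for MATLAB's awgn and for the low-pass filtering used to produce correlated noise; the moving-average filter length and the random seed are our own assumptions:

```python
import numpy as np

def add_white_noise(Y, snr_db, rng=np.random.default_rng(0)):
    """Add zero-mean Gaussian white noise at the requested SNR (dB)."""
    signal_power = np.mean(Y ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10.0)
    return Y + rng.normal(0.0, np.sqrt(noise_power), Y.shape)

def add_correlated_noise(Y, snr_db, filt_len=11, rng=np.random.default_rng(0)):
    """Low-pass filter i.i.d. Gaussian noise along the spectral axis,
    then rescale it to the requested SNR before adding it to Y."""
    noise = rng.normal(0.0, 1.0, Y.shape)
    kernel = np.ones(filt_len) / filt_len  # simple moving-average low-pass
    noise = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, noise)
    signal_power = np.mean(Y ** 2)
    target_power = signal_power / 10 ** (snr_db / 10.0)
    noise *= np.sqrt(target_power / np.mean(noise ** 2))
    return Y + noise
```

Filtering along axis 0 correlates the noise across spectral bands, which is the qualitative property the correlated-noise experiments rely on.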

Simulated Data 2 was generated using nine randomly selected spectral signatures from the spectral library A, with 100 × 100 pixels and 224 bands per pixel, provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth. The true abundances of three endmembers are illustrated in Figure 2. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added to Simulated Data 2 as well.

In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to obtain better performance. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter σ of SOMP and SMP was set to 1·10^−2. The stopping threshold parameters σ and δ of BSOMP were set to 1·10^−2 and 0.1. RSFoBa was tested using different values of the parameters λ and σ: 0 and 5·10^−3 for the Simulated Data 1 experiment, and 1 and 1·10^−2 for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of the parameters λ and λ_TV: 1·10^−2 and 1·10^−2, 1·10^−2 and 1·10^−2, 1·10^−2 and 5·10^−3, 5·10^−3 and 1·10^−3, 1·10^−3 and 5·10^−4, 5·10^−4 and 1·10^−4, and 5·10^−4 and 1·10^−4 for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively. SUnSAL and CLSUnSAL were tested using different values of the parameter λ: 1·10^−2, 5·10^−3, 1·10^−3, 5·10^−4, 5·10^−4, 5·10^−4, and 5·10^−4 for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively.

Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of endmember number using the different tested methods. The SREs of all the algorithms decrease as the endmember number increases.

[Figure 3: plot of SRE (dB) versus SNR (20-50 dB) for CLSUnSAL, SUnSAL, SMP, SOMP, RSFoBa, BSOMP, and SUnSAL-TV.]

Figure 3: Results on Simulated Data 1 with white noise when the endmember number is eleven. SRE as a function of SNR.

We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. All the SGA methods have performance comparable to SUnSAL-TV, and BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. We can find that all the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL. This result indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers. Among the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of endmember number and SNR, respectively. As shown in Figures 4 and 5, BSOMP achieves an overall better performance than the other algorithms.

Figure 6 shows the number of potential endmembers obtained from the spectral library by all the SGA methods, which adopt the block-processing strategy to effectively pick up all the actual endmembers. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and degrade the performance of SOMP. BSOMP, however, incorporates a backtracking process to remove wrongly chosen endmembers from the estimated endmember set, and can therefore select the actual endmembers more accurately than SOMP.

Mathematical Problems in Engineering 7

[Figure 4 (plot, SRE (dB) versus endmember number, 4-14, for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP): Results on Simulated Data 1 with 30 dB correlated noise; SRE as a function of endmember number.]

Differing from RSFoBa, BSOMP uses a backtracking process over the whole hyperspectral data to remove wrongly chosen endmembers from the endmember set estimated in the previous block processing, which identifies the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.
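The backtracking idea can be sketched as follows: after the forward (block-wise) passes have produced a candidate endmember set, each candidate is tentatively dropped and the whole-image least-squares residual is recomputed; candidates whose removal barely changes the residual are deemed unreliable and deleted. This is only a schematic of the backward step (the relative threshold `delta` and the plain least-squares refit are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def backward_prune(A, Y, support, delta=0.1):
    """Schematic backward step: delete candidate endmembers whose
    removal increases the full-image residual by less than a relative
    factor delta. A: library (L x m), Y: data (L x K), support: list
    of candidate column indices."""
    support = list(support)
    changed = True
    while changed and len(support) > 1:
        changed = False
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        base = np.linalg.norm(Y - A[:, support] @ X)
        for j in list(support):
            trial = [i for i in support if i != j]
            Xt, *_ = np.linalg.lstsq(A[:, trial], Y, rcond=None)
            res = np.linalg.norm(Y - A[:, trial] @ Xt)
            if res <= (1 + delta) * base:  # dropping j costs little: unreliable
                support = trial
                changed = True
                break
    return support

# toy check: true endmembers are columns 2 and 7; 11 is a spurious pick
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
A /= np.linalg.norm(A, axis=0)
Y = A[:, [2, 7]] @ rng.random((2, 30)) + 0.01 * rng.standard_normal((50, 30))
print(backward_prune(A, Y, [2, 7, 11]))  # the spurious atom 11 is deleted
```

Dropping a true endmember inflates the residual sharply, so only spurious candidates pass the test.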

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise as a function of SNR, respectively. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This result indicates that the block-processing strategy of the SGA methods effectively picks up all the actual endmembers, and that the spatial-contextual coherence regularizer incorporated by SUnSAL-TV gives it an advantage over SUnSAL and CLSUnSAL. All the SGA methods have performance comparable to SUnSAL-TV, and BSOMP and RSFoBa always obtain the better estimations of the abundances: RSFoBa incorporates the spatial-contextual coherence regularizer, and BSOMP incorporates a backtracking process over the whole hyperspectral data, so both select the actual endmembers more accurately than SOMP and SMP.

For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps of endmembers obtained by SUnSAL-TV contain fewer noise points. However, the abundance maps estimated by SUnSAL-TV may

[Figure 5 (plot, SRE (dB) versus SNR (dB) for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP): Results on Simulated Data 1 with correlated noise when the endmember number is eleven; SRE as a function of SNR.]

[Figure 6 (plot, number of retained endmembers versus endmember number, 4-14, for SMP, SOMP, RSFoBa, and BSOMP under white and correlated noise): Results obtained by the four SGAs on Simulated Data 1 with 30 dB white noise or correlated noise; number of retained endmembers as a function of endmember number.]

exhibit an oversmoothed visual effect around some pixels, and some of the details and edges are not well preserved. Compared with SUnSAL-TV, the abundances estimated by BSOMP may contain more noise points but preserve more edge regions and are closer to the true abundances; moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the abundances estimated by BSOMP


Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube                          SUnSAL  CLSUnSAL  SUnSAL-TV  SOMP   SMP     RSFoBa  BSOMP
Simulated Data 1 (11 endmembers)  67.67   78.41     363.27     20.83  212.89  39.38   64.18
Simulated Data 2                  254.71  279.02    1271.12    96.18  354.44  282.77  1065.24

[Figure 7 (plot, SRE (dB) versus SNR (dB) for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP): Results on Simulated Data 2 with white noise; SRE as a function of SNR.]

[Figure 8 (plot, SRE (dB) versus SNR (dB) for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP): Results on Simulated Data 2 with correlated noise; SRE as a function of SNR.]

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Property   Backward step  Joint sparsity  Contextual information
SMP                       ·
SOMP                      ·
RSFoBa     ·              ·               ·
BSOMP      ·              ·
SUnSAL
CLSUnSAL                  ·
SUnSAL-TV                                 ·

contain fewer noise points and preserve more edge regions, so they are more accurate and have a better visual effect.

Table 1 shows the processing times measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also find that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

In order to make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare several of their properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing, which solves the ℓ1-norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the ℓ2,1 optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions, while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is a variant of SUnSAL which incorporates the spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.
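The contrast between the pixelwise ℓ1 penalty and the joint ℓ2,1 penalty can be made concrete: ℓ1 treats every entry of the abundance matrix independently, while ℓ2,1 sums the ℓ2 norms of the rows, so one library signature tends to be active in all pixels or in none. A small illustration with made-up matrices:

```python
import numpy as np

def l1_norm(X):
    """Pixelwise sparsity penalty: sum of |x_ij| over all entries."""
    return np.abs(X).sum()

def l21_norm(X):
    """Joint-sparsity penalty: sum of the l2 norms of the rows, so a
    whole row (one endmember across all pixels) is on or off."""
    return np.linalg.norm(X, axis=1).sum()

# two abundance matrices with the same l1 norm:
X_scattered = np.array([[1., 0., 0.],   # nonzeros spread over rows
                        [0., 1., 0.],
                        [0., 0., 1.]])
X_joint = np.array([[1., 1., 1.],       # nonzeros concentrated in one row
                    [0., 0., 0.],
                    [0., 0., 0.]])
print(l1_norm(X_scattered), l1_norm(X_joint))    # 3.0 3.0
print(l21_norm(X_scattered), l21_norm(X_joint))  # 3.0 vs sqrt(3) ~ 1.732
```

The ℓ1 penalty cannot distinguish the two patterns, whereas the ℓ2,1 penalty strictly prefers the jointly sparse one.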

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and


[Figure 9 (abundance maps, color scale 0-1): Comparison of abundance maps on Simulated Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.]

RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick some potential endmembers in each block, BSOMP adopts a forward step to pick some potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
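The forward selection step shared by these SGAs can be sketched in the SOMP style: pick the library column most correlated with the current residual aggregated over all pixels, refit the selected set by least squares, and update the residual. This is an illustration, not the authors' implementation; the orthonormal toy library makes recovery easy by construction.

```python
import numpy as np

def somp(A, Y, n_iter):
    """Bare-bones SOMP: greedily grow a support of library columns.
    A: library with unit-norm columns (L x m), Y: pixel matrix (L x K)."""
    R = Y.copy()
    support = []
    for _ in range(n_iter):
        # correlation of every signature with the residual, aggregated
        # (l1) over all pixels -> simultaneous selection
        scores = np.abs(A.T @ R).sum(axis=1)
        scores[support] = -np.inf          # do not re-select
        support.append(int(np.argmax(scores)))
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X          # residual after refitting
    return support, X

rng = np.random.default_rng(0)
A = np.linalg.qr(rng.standard_normal((60, 30)))[0]  # easy, low-coherence library
Y = A[:, [4, 9]] @ rng.random((2, 25))              # two active endmembers
support, X = somp(A, Y, n_iter=2)
print(sorted(support))  # -> [4, 9]
```

With noisy data and a coherent library, this forward pass tends to overshoot the true support, which is exactly what the backward step is meant to correct.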

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The hyperspectral scene is a 250 × 191 pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, whereas the AVIRIS Cuprite data used here were collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithms  SUnSAL   CLSUnSAL  SUnSAL-TV  SOMP    SMP     RSFoBa  BSOMP
Sparsity    17.5629  20.7037   20.472     13.684  15.102  12.702  11.810
RMSE        0.0051   0.0035    0.0028     0.0040  0.0034  0.0032  0.0033

[Figure 10 (plot, SRE (dB) versus block size, 8-18, for SOMP, SMP, RSFoBa, and BSOMP): Results on Simulated Data 1 with white noise when the endmember number is eleven; SRE as a function of block size.]

[Figure 11 (plot, number of retained endmembers versus block size, 8-18, for SOMP, SMP, RSFoBa, and BSOMP): Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven; number of retained endmembers as a function of block size.]

image are used to evaluate the performance of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm; as in [22], we define fractional abundances larger than 0.001 as nonzero to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{K} \sum_{j=1}^{K} \left( Y_j - \hat{Y}_j \right)^2 }, \tag{14} \]

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE indicates better unmixing performance.
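Both real-data metrics are straightforward to compute. The sketch below (with made-up arrays, and averaging over all image entries as one reading of (14)) shows the RMSE of (14) and the sparsity count with the 0.001 threshold:

```python
import numpy as np

def rmse(Y, Y_hat):
    """Root mean square error of (14), averaged over all entries."""
    return np.sqrt(np.mean((Y - Y_hat) ** 2))

def avg_sparsity(X, tol=1e-3):
    """Average number of abundances per pixel larger than tol
    (X: endmembers x pixels)."""
    return (X > tol).sum(axis=0).mean()

Y = np.array([[1.0, 2.0], [3.0, 4.0]])
Y_hat = Y + np.array([[0.1, -0.1], [0.1, -0.1]])
print(rmse(Y, Y_hat))           # -> 0.1 (up to float rounding)

X = np.array([[0.5, 0.0],       # abundance 0.0005 is counted as zero
              [0.0005, 0.5],
              [0.5, 0.5]])
print(avg_sparsity(X))          # -> 2.0 nonzero abundances per pixel
```

The 0.001 cutoff matters: without it, numerically negligible abundances would inflate the sparsity count.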

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λTV of SUnSAL-TV were set to 0.001 and 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01, the stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1, and the parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction errors obtained by the different algorithms. In Table 3, we can find that BSOMP, SOMP, SMP, and SUnSAL yield sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves a reconstruction error nearly as low as that of SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions while achieving one of the lower reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP and the SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted by the Tricorder 3.3 software are also presented in Figure 13. It can be observed that, with regard to the classification maps produced by the Tricorder 3.3 software, the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals in comparison with the other considered algorithms. We also analyzed the sparsity of the abundances estimated by BSOMP: the average


[Figure 12 (USGS Tetracorder 3.3 product, Cuprite, Nevada, AVIRIS 1995 data, Clark and Swayze; the legend lists sulfates (K-Alunite 150C/250C/450C, Na82-Alunite 100C, Na40-Alunite 400C, Jarosite, Alunite + Kaolinite and/or Muscovite), kaolinite group clays (Kaolinite wxl/pxl, Kaolinite + Smectite or Muscovite, Halloysite, Dickite), carbonates (Calcite, Calcite + Kaolinite, Calcite + Montmorillonite), clays (Na-Montmorillonite, Nontronite), and other minerals (Low/Med/High-Al Muscovite, Chlorite + Muscovite/Montmorillonite, Chlorite, Buddingtonite, Chalcedony OH Qtz, Pyrophyllite + Alunite)): USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada.]

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 per pixel. This means that the algorithm uses a minimal number of endmembers to explain the data. Overall, the qualitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The remaining question we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs

is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still over this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP selects the actual endmembers more accurately than the other SGAs, which is beneficial to the estimation effectiveness.


[Figure 13 (abundance maps with per-panel color scales): Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.]

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

(i) $A_t$ is a submatrix of the spectral library $A$, where $A_t$ contains the $k$ columns indexed in $S_t$ and $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

(ii) $A_{t\setminus\mathrm{opt}}$ is a submatrix of $A_t$ which contains the $(k-T)$ columns indexed in $S_t \setminus S_{\mathrm{opt}}$.

(iii) $X_{t\setminus\mathrm{opt}}$ is a submatrix of $X_t$ which contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:

\[ P_{\mathrm{opt}} = A_{\mathrm{opt}} \left( A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} \right)^{-1} A_{\mathrm{opt}}^{*}, \tag{A1} \]

where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ (with $R_t = Y - Y_t$), we have

\[ \left\| A^{*} R_t \right\|_{\infty,\infty} = \left\| \bar{A}_t^{*} R_t \right\|_{\infty,\infty}. \tag{A2} \]

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have

\[ \left\| \bar{A}_t^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \le \left\| A^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty}. \tag{A3} \]

Invoking (A2) and (A3) and applying the triangle inequality, we get

\[ \left\| A^{*} R_t \right\|_{\infty,\infty} = \left\| \bar{A}_t^{*} (Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t) \right\|_{\infty,\infty} \le \left\| A^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} + \left\| \bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty}. \tag{A4} \]

The matrix $Y_t - Y_{\mathrm{opt}}$ can then be expressed in the following manner:

\[ Y_t - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}}) Y_t = (I - P_{\mathrm{opt}}) A_t X_t = (I - P_{\mathrm{opt}}) A_{t\setminus\mathrm{opt}} X_{t\setminus\mathrm{opt}}. \tag{A5} \]

The last equality holds because $(I - P_{\mathrm{opt}})$ annihilates the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A4) can be bounded as

\[ \left\| \bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} = \left\| \bar{A}_t^{*} (I - P_{\mathrm{opt}}) A_{t\setminus\mathrm{opt}} X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \le \left[ \left\| \bar{A}_t^{*} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} + \left\| \bar{A}_t^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \right] \left\| X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A6} \]

We now apply the cumulative coherence function to estimate (A6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{t\setminus\mathrm{opt}}$ lists the inner products between one spectral endmember and $(k-T)$ distinct endmembers, so

\[ \left\| \bar{A}_t^{*} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \le \mu(k-T). \tag{A7} \]

(2) Using the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoking (A1),

\[ \left\| \bar{A}_t^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \le \left\| \bar{A}_t^{*} A_{\mathrm{opt}} \right\|_{\infty,\infty} \left\| \left( A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} \right)^{-1} \right\|_{\infty,\infty} \left\| A_{\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A8} \]

(3) By the same deduction as in step (1),

\[ \left\| \bar{A}_t^{*} A_{\mathrm{opt}} \right\|_{\infty,\infty} \le \mu(T), \qquad \left\| A_{\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \le \mu(k-T). \tag{A9} \]

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers and has a unit diagonal, so we can write

\[ A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A10} \]

where $\| X_{\mathrm{opt}} \|_{\infty,\infty} \le \mu(T)$. Expanding the inverse in a Neumann series gives

\[ \left\| \left( A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} \right)^{-1} \right\|_{\infty,\infty} = \left\| (I - X_{\mathrm{opt}})^{-1} \right\|_{\infty,\infty} \le \frac{1}{1 - \| X_{\mathrm{opt}} \|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A11} \]

(5) Substituting the bounds from steps (1)-(4) into (A6), we obtain

\[ \left\| \bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \le \left[ \mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1 - \mu(T)} \right] \left\| X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A12} \]

Simplifying (A12), we conclude that

\[ \left\| \bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \le \frac{\mu(k-T)}{1 - \mu(T)} \left\| X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A13} \]

To produce an upper bound on $\| X_{t\setminus\mathrm{opt}} \|_{\infty,\infty}$, we use the triangle inequality:

\[ \left\| A_t^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} = \left\| A_t^{*} (Y - Y_t + Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \le \left\| A_t^{*} (Y - Y_t) \right\|_{\infty,\infty} + \left\| A_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} = \left\| A_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty}, \tag{A14} \]

where the last step holds because $(Y - Y_t)$ is orthogonal to the range of $A_t$. Introducing (A5) into (A14), we get

\[ \left\| A_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} = \left\| A_{t\setminus\mathrm{opt}}^{*} (I - P_{\mathrm{opt}}) A_{t\setminus\mathrm{opt}} X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A15} \]

The equality holds because $(I - P_{\mathrm{opt}})$ annihilates the range of $A_{\mathrm{opt}}$, so we may replace $A_t$ by $A_{t\setminus\mathrm{opt}}$. Let $D = A_{t\setminus\mathrm{opt}}^{*} (I - P_{\mathrm{opt}}) A_{t\setminus\mathrm{opt}}$ and write $D = I - (I - D)$; expanding its inverse in a Neumann series yields

\[ \left\| D X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \ge \left( 1 - \left\| I - D \right\|_{\infty,\infty} \right) \left\| X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A16} \]

Recalling the definition of $D$ and applying the triangle inequality,

\[ \left\| I - D \right\|_{\infty,\infty} = \left\| I - A_{t\setminus\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}} + A_{t\setminus\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} \le \left\| I - A_{t\setminus\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty} + \left\| A_{t\setminus\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A17} \]

The right-hand side can be bounded completely analogously to steps (1)-(5), giving

\[ \left\| I - D \right\|_{\infty,\infty} \le \mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1 - \mu(T)} = \frac{\mu(k-T)}{1 - \mu(T)}. \tag{A18} \]

Introducing (A18) and (A16) into (A14), we have

\[ \left\| A_t^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \ge \left[ 1 - \frac{\mu(k-T)}{1 - \mu(T)} \right] \left\| X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A19} \]

Using the same deduction as (A3), it follows that $\| A_t^{*} (Y - Y_{\mathrm{opt}}) \|_{\infty,\infty} \le \| A^{*} (Y - Y_{\mathrm{opt}}) \|_{\infty,\infty}$, so (A19) can be rewritten as

\[ \left\| A^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \ge \left[ 1 - \frac{\mu(k-T)}{1 - \mu(T)} \right] \left\| X_{t\setminus\mathrm{opt}} \right\|_{\infty,\infty}. \tag{A20} \]

Introducing (A20) into (A13), we get

\[ \left\| \bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}}) \right\|_{\infty,\infty} \le \frac{\mu(k-T)}{1 - \mu(T) - \mu(k-T)} \left\| A^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty}. \tag{A21} \]

Introducing (A21) into (A4) gives

\[ \left\| A^{*} R_t \right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k-T)} \left\| A^{*} (Y - Y_{\mathrm{opt}}) \right\|_{\infty,\infty}. \tag{A22} \]

Finally, assuming the bound $\| A^{*} (Y - Y_{\mathrm{opt}}) \|_{\infty,\infty} \le \tau$, (A22) can be rewritten as

\[ \left\| A^{*} R_t \right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k-T)} \, \tau. \tag{A23} \]

This completes the argument.
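The bound is governed entirely by the cumulative coherence μ(·) of the spectral library. A small numerical sketch of the standard definition (μ(k) is the largest sum, over any column, of the k largest absolute inner products with other columns; this helper is illustrative, not from the paper):

```python
import numpy as np

def cumulative_coherence(A, k):
    """mu(k): for each column a_j, sum the k largest |<a_j, a_i>|,
    i != j, and take the maximum over j. A must have unit-norm columns."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)                 # exclude the column itself
    # sort each row descending and sum its k largest off-diagonal entries
    top_k = np.sort(G, axis=1)[:, ::-1][:, :k]
    return top_k.sum(axis=1).max()

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
A /= np.linalg.norm(A, axis=0)               # normalize columns
mu1, mu2 = cumulative_coherence(A, 1), cumulative_coherence(A, 2)
print(mu1 <= mu2)  # -> True: mu is nondecreasing in k
```

Note that the bound (A23) is only informative when μ(T) + μ(k − T) < 1, i.e., when the library is sufficiently incoherent.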

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would like to thank the anonymous reviewers for their helpful comments, which improved this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44-57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639-644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354-379, 2012.

[4] M. Grana, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10-12, pp. 2111-2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250-III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019-1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626-1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418-4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1-6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29-47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554-569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640-3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219-228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948-958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.

Mathematical Problems in Engineering 17

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise, et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.


Several sparse regression techniques, such as greedy algorithms (GAs) [16–19] and convex relaxation methods [20–23], are usually adopted to solve the sparse unmixing problem. Convex relaxation methods such as SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22] use the alternating direction method of multipliers to efficiently solve the $\ell_1$-norm sparse regression problem, which can decompose a complex problem into several simpler ones. Convex relaxation methods can obtain the global sparse optimization solution and are more sophisticated than the GAs; however, they are also far more computationally complex. The GAs, such as OMP [16], SP [17], and CGP [18, 19], select in each iteration one or more potential endmembers from the spectral library that have the largest correlation with the current residual of one input signal. The GAs can get an approximate solution of the $\ell_0$-norm problem without smoothing the penalty function and have low computational complexity. However, the endmember selection criterion of the GAs is not optimal in the sense of minimizing the residual of the new approximation; this means that a nonexisting endmember, once selected into the support set, will never be deleted. The GAs therefore tend to be trapped in local optima and are likely to miss some of the actual endmembers. To address this local-optimum problem, several representative simultaneous greedy algorithms (SGAs) have been presented, including simultaneous subspace pursuit (SSP) [24], subspace matching pursuit (SMP) [24], and simultaneous orthogonal matching pursuit (SOMP) [25]. These SGAs divide the whole hyperspectral image into several blocks and pick some potential endmembers from the spectral library in each block. The endmembers picked in the blocks are then combined into the endmember set of the whole hyperspectral data. Finally, the abundances are estimated using the whole hyperspectral data with the obtained endmember set. The SGAs have the same low computational complexity as the GAs and can find the actual endmembers far more accurately than the GAs. The SGAs adopt a block-processing strategy to efficiently alleviate the local-optimum problem, but the endmembers picked in all the blocks are not all actual endmembers, and the nonexisting endmembers will affect the estimation of the abundances corresponding to the actual endmembers. To overcome this drawback of the SGAs, RSFoBa [26] combines a forward greedy step and a backward greedy step, which can select the actual endmembers more accurately than the SGAs.

Inspired by the existing SGA methods, we propose in this paper a sparse unmixing algorithm termed backtracking-based simultaneous orthogonal matching pursuit (BSOMP). Similar to SOMP and SMP, it uses a block-processing strategy to select potential endmembers and add them to the estimated endmember set: the whole hyperspectral image is divided into several blocks, and some potential endmembers are picked from the spectral library in each block. Furthermore, BSOMP incorporates a backtracking process to check the reliability of the previously chosen endmembers and then deletes the unreliable endmembers from the estimated endmember set in each iteration. Through this modification, BSOMP can identify the true endmember set more accurately than the other considered SGA methods.

The remainder of the paper is organized as follows. Section 2 introduces the simultaneous sparse unmixing model. In Section 3, we present the proposed BSOMP algorithm and give a theoretical analysis of it. Section 4 provides a quantitative comparison between BSOMP and previous sparse unmixing algorithms using both simulated and real hyperspectral data. Finally, we conclude in Section 5.

2. Simultaneous Sparse Unmixing Model

The linear sparse unmixing model assumes that the observed spectrum of a mixed pixel is a linear combination of a few spectral signatures present in a known spectral library. Let $y \in R^L$ denote the measured spectrum vector of a mixed pixel with $L$ bands, and let $A \in R^{L \times m}$, where $m$ is the number of signatures in the library, denote an $L \times m$ spectral library; then the linear sparse unmixing model can be expressed as follows [23]:

$$ y = Ax + n, \quad (1) $$

where $x \in R^m$ denotes the fractional abundance vector with regard to the library $A$ and $n \in R^L$ is the noise. As physical constraints, the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) are imposed on the linear sparse model as follows:

$$ x \ge 0, \qquad \sum_{i=1}^{m} x_i = 1, \quad (2) $$

where $x_i$ is the $i$th element of $x$.

The sparse unmixing problem can then be expressed as follows:

$$ \min_{x} \|x\|_0 \quad \text{s.t.} \quad \|y - Ax\|_2^2 \le \delta, \; x \ge 0, \quad (3) $$

where $\|x\|_0$ (called the $\ell_0$ norm) denotes the number of nonzero atoms in $x$ and $\delta \ge 0$ is the tolerated error due to the noise and model error. It is worth mentioning that we do not explicitly add the ASC in (3), because hyperspectral libraries generally contain only nonnegative components and the nonnegativity of the sources automatically imposes a generalized ASC [23].
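As a toy illustration of model (1) and the $\ell_0$ sparsity measure in (3), the following numpy sketch builds a mixed pixel from a small random library; all sizes and values are hypothetical, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature "spectral library": L = 5 bands, m = 8 signatures.
A = rng.random((5, 8))

# A 2-sparse abundance vector obeying the ANC (x >= 0) and the ASC (sum = 1).
x = np.zeros(8)
x[[1, 4]] = [0.3, 0.7]

# Observed mixed pixel, model (1): y = A x + n.
n = 0.01 * rng.standard_normal(5)
y = A @ x + n

print(np.count_nonzero(x))             # the l0 "norm" ||x||_0 -> 2
print(bool(np.isclose(x.sum(), 1.0)))  # ASC holds -> True
# The model error ||y - A x||_2 equals the noise norm ||n||_2 here.
print(bool(np.isclose(np.linalg.norm(y - A @ x), np.linalg.norm(n))))
```

Sparse unmixing then amounts to recovering a vector like `x` from `y` and `A` alone.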

The simultaneous sparse unmixing model assumes that several input signals can be expressed as different linear combinations of the same elementary signals. This means that all the pixels in the hyperspectral image are constrained to share the same subset of endmembers selected from the spectral library. We can then use SGA methods for sparse unmixing, and the sparse unmixing model in (1) becomes

$$ Y = AX + N, \quad (4) $$

where $Y \in R^{L \times K}$ denotes the hyperspectral data matrix with $L$ bands and $K$ mixed pixels, $A \in R^{L \times m}$ denotes an $L \times m$ spectral library, $X \in R^{m \times K}$ denotes the fractional abundance matrix, each column of which corresponds to the abundance fractions of a mixed pixel, and $N \in R^{L \times K}$ is the noise matrix.

Under the simultaneous sparse unmixing model, the simultaneous sparse unmixing problem can be expressed as follows:

$$ \min_{X} \|X\|_{\text{row-}0} \quad \text{s.t.} \quad \|Y - AX\|_F^2 \le \delta, \; X \ge 0, \quad (5) $$

where $\|X\|_F$ denotes the Frobenius norm of $X$, $\|X\|_{\text{row-}0}$ is the number of nonzero rows in the matrix $X$ [25], and $\delta$ is the tolerated error due to the noise and model error.

It should be noted that the model in (5) is reasonable because there should be only a few rows with nonzero entries in the abundance matrix, in light of the small number of endmembers in the hyperspectral image compared with the dimensionality of the spectral library [22].

Part 1 (SOMP)
(1) Initialize the hyperspectral data $Y$ and the spectral library $A$.
(2) Divide the hyperspectral data into blocks $Y = [Y_1, Y_2, \ldots, Y_B]$ and initialize the index set $S_{\text{SOMP}} = \emptyset$.
(3) For each block $b$ do
(4)   Set the index set $S_b = \emptyset$ and the iteration counter $l = 1$. Initialize the residual of block $b$: $R^0 = Y_b$.
(5)   While stopping criterion 1 has not been met do
(6)     Compute the index of the member of $A$ best correlated with the current residual: $j = \arg\max_i \|(R^{l-1})^T A_i\|_2$, where $A_i$ is the $i$th column of $A$.
(7)     Update the support set $S_b = S_b \cup \{j\}$.
(8)     Compute $X^l = (A_{S_b}^* A_{S_b})^{-1} A_{S_b}^* Y_b$, where $A_{S_b}$ is the matrix containing the columns of $A$ indexed by $S_b$.
(9)     Update the residual $R^l = Y_b - A_{S_b} X^l$.
(10)    $l \leftarrow l + 1$.
(11)  End while
(12)  Set $S_{\text{SOMP}} = S_{\text{SOMP}} \cup S_b$.
(13) End for
Part 2 (Backtracking process)
(14) Initialize the hyperspectral data $Y$, the index set $S_t = S_{\text{SOMP}}$, and $A_t = A_{S_{\text{SOMP}}}$, where $A_t$ is the matrix containing the columns of $A$ indexed by $S_t$.
(15) While stopping criterion 2 has not been met do
(16)   Compute the solution $X_t = (A_t^* A_t)^{-1} A_t^* Y$.
(17)   Find the member of $A_t$ having the lowest abundance: $\text{index} \leftarrow \arg\min_j \|X_t^j\|$, where $X_t^j$ is the $j$th row of $X_t$.
(18)   Remove the member having the lowest fractional abundance from the index set, $S_t \leftarrow S_t \setminus \{\text{index}\}$, and from the endmember set, $A_t \leftarrow A_t \setminus \{A_{\text{index}}\}$.
(19)   $t \leftarrow t + 1$.
(20) End while
Part 3 (Abundance estimation)
(21) Estimate the abundances using the original hyperspectral data matrix and the retained endmember set under the nonnegativity constraint: $X \leftarrow \arg\min_X \|A_t X - Y\|_F$ subject to $X \ge 0$.

Algorithm 1: BSOMP for hyperspectral sparse unmixing.

3. Backtracking-Based SOMP

In this section, we first present our new algorithm, BSOMP, for sparse unmixing of hyperspectral data. We then give a theoretical analysis of the algorithm, including a convergence theorem.

3.1. Statement of Algorithm. The whole process of using BSOMP for sparse unmixing of hyperspectral data is summarized in Algorithm 1. The algorithm comprises three main parts: SOMP for endmember selection, backtracking processing, and abundance estimation. In the first part, we adopt a block-processing strategy for SOMP to efficiently select endmembers. This strategy divides the whole hyperspectral image into several blocks; in each block, SOMP picks several potential endmembers from the spectral library and adds them to the estimated endmember set. In the second part, we utilize a backtracking strategy to remove endmembers chosen wrongly in the previous processing from the estimated endmember set and thus identify the true endmember set more accurately. The backtracking processing is halted when the maximum total correlation between an endmember in the spectral library and the residual drops below the threshold $\delta$. Finally, the abundances are estimated using the obtained endmember set under the nonnegativity constraint.
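A compact Python sketch of Algorithm 1 is given below. It is an illustrative reimplementation, not the authors' code: the helper name `bsomp`, the block count, and the default thresholds are assumptions; the least-squares steps use `numpy.linalg.lstsq` instead of the explicit pseudoinverse; and Part 3 uses `scipy.optimize.nnls` for the nonnegativity-constrained estimate.

```python
import numpy as np
from scipy.optimize import nnls

def bsomp(Y, A, n_blocks=4, sigma=1e-2, delta=1e-1, max_pick=5):
    """Sketch of BSOMP (Algorithm 1). Y: L x K data, A: L x m library."""
    L, K = Y.shape

    # Part 1: SOMP endmember selection on each block of pixels.
    S = set()
    for Yb in np.array_split(Y, n_blocks, axis=1):
        Sb, R = [], Yb
        while True:
            corr = np.linalg.norm(R.T @ A, axis=0)   # ||(R^{l-1})^T A_i||_2
            Sb.append(int(np.argmax(corr)))
            cols = sorted(set(Sb))
            Xl, *_ = np.linalg.lstsq(A[:, cols], Yb, rcond=None)
            R_prev, R = R, Yb - A[:, cols] @ Xl
            # Stopping criterion 1: relative change of the residual norm (8).
            rel = abs(np.linalg.norm(R) - np.linalg.norm(R_prev)) \
                / np.linalg.norm(R_prev)
            if rel <= sigma or len(Sb) >= max_pick:
                break
        S |= set(Sb)

    # Part 2: backtracking -- repeatedly drop the retained endmember with the
    # lowest abundance until criterion (9) on ||A* R_t||_{inf,inf} is met.
    S = sorted(S)
    c_prev = None
    while len(S) > 1:
        Xt, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
        R = Y - A[:, S] @ Xt
        c = np.abs(A.T @ R).sum(axis=1).max()        # ~ ||A* R_t||_{inf,inf}
        if c_prev is not None and abs(c - c_prev) / max(c, 1e-12) <= delta:
            break
        c_prev = c
        S.pop(int(np.argmin(np.linalg.norm(Xt, axis=1))))

    # Part 3: nonnegative abundance estimation on the retained endmembers.
    X = np.column_stack([nnls(A[:, S], Y[:, k])[0] for k in range(K)])
    return S, X
```

On noise-free data generated from a known support, the sketch typically collects a superset of the true endmembers in Part 1 and prunes it in Part 2; `sigma` and `delta` play the roles of the stopping thresholds in (8) and (9).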

In the following paragraphs, we provide some definitions and then give the two stopping conditions used by the BSOMP algorithm.


Definition 1. If $x$ is a vector whose $i$th element is $x_i$, the $\ell_p$ norm of $x$ is defined as

$$ \|x\|_p = \Big(\sum_i |x_i|^p\Big)^{1/p} \quad \text{for } 1 \le p < \infty, \qquad \|x\|_\infty = \max_i |x_i|. \quad (6) $$

Definition 2. The $(p, q)$ operator norm of a matrix $A$ is defined as

$$ \|A\|_{p,q} = \max_{x \neq 0} \frac{\|Ax\|_q}{\|x\|_p}. \quad (7) $$

Several of the $(p, q)$ operator norms can be computed easily using the following lemma [25, 27].

Lemma 3. (1) The $(1, q)$ operator norm is the maximum $\ell_q$ norm of any column of $A$. (2) The $(2, 2)$ operator norm is the maximum singular value of $A$. (3) The $(p, \infty)$ operator norm is the maximum $\ell_{p'}$ norm of any row of $A$, where $p'$ is the conjugate exponent of $p$.

We give a stopping condition for the first part of the BSOMP algorithm:

$$ \frac{\big|\,\|R^l\|_F - \|R^{l-1}\|_F\,\big|}{\|R^{l-1}\|_F} \le \sigma, \quad (8) $$

where $\|R^l\|_F$ is the Frobenius norm of the residual corresponding to the $l$th iteration and $\sigma$ is a threshold.

For the second part of the BSOMP algorithm, we give a stopping condition for deciding when to halt the iteration:

$$ \frac{\big|\,\|A^* R_t\|_{\infty,\infty} - \|A^* R_{t-1}\|_{\infty,\infty}\,\big|}{\|A^* R_t\|_{\infty,\infty}} \le \delta, \quad (9) $$

where $R_t$ is the residual corresponding to the $t$th iteration, $A^*$ denotes the conjugate transpose of $A$, $\|A^* R_t\|_{\infty,\infty}$ is the maximum total correlation between an endmember in the spectral library $A$ and the residual $R_t$, and $\delta$ is a threshold.

3.2. Theoretical Analysis. In this section, we introduce some definitions and then give a theoretical analysis of the BSOMP algorithm.

Definition 4. The coherence $\mu$ of a spectral library $A$ equals the maximum correlation between two distinct spectral signatures [28]:

$$ \mu = \max_{k \neq h} |\langle A_k, A_h \rangle|, \quad (10) $$

where $A_k$ is the $k$th column of $A$. If the coherence parameter is small, each pair of spectral signatures is nearly orthogonal.

Definition 5. The cumulative coherence [29, 30] is defined as

$$ \mu(k) = \max_{|\Lambda| \le k} \; \max_{j \notin \Lambda} \; \sum_{i \in \Lambda} |\langle A_i, A_j \rangle|, \quad (11) $$

where the index set $\Lambda \subseteq S_{\text{all}}$ and $S_{\text{all}}$ is the support of all the spectral signatures in the spectral library $A$. The cumulative coherence measures the maximum total correlation between a fixed spectral signature and $k$ distinct spectral signatures. In particular, $\mu(0) = 0$.
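Both quantities are easy to compute from the Gram matrix of a column-normalized library: for a fixed signature $j$, the maximizing set $\Lambda$ in (11) is simply the $k$ signatures most correlated with $j$. A sketch with a toy library (not the USGS library used later):

```python
import numpy as np

def cumulative_coherence(A, k):
    """mu(k) from Definition 5; mu(1) is the coherence mu of Definition 4."""
    An = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = np.abs(An.T @ An)                # |<A_i, A_j>| for all pairs
    np.fill_diagonal(G, 0.0)             # exclude i = j
    topk = np.sort(G, axis=0)[::-1][:k]  # k largest correlations per column
    return topk.sum(axis=0).max() if k > 0 else 0.0

# Toy 2-band library with three signatures.
A = np.array([[1.0, 0.0, 0.6],
              [0.0, 1.0, 0.8]])
print(round(cumulative_coherence(A, 1), 3))  # coherence mu -> 0.8
print(round(cumulative_coherence(A, 2), 3))  # mu(2) = 0.6 + 0.8 -> 1.4
```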

Definition 6. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

$S_{\text{opt}}$ is an index set which lists the at most $T$ endmembers present in the hyperspectral image scene.
$A_{\text{opt}}$ is the optimal submatrix of the spectral library $A$, containing the $T$ columns indexed by $S_{\text{opt}}$.
$X_{\text{opt}}$ is the optimal abundance matrix corresponding to $A_{\text{opt}}$.
$Y_{\text{opt}}$ is a $T$-term best approximation of $Y$, with $Y_{\text{opt}} = A_{\text{opt}} X_{\text{opt}}$.
$S_t$ is an index set which lists $k$ endmembers after $t$ iterations, and it is assumed that $S_{\text{opt}} \subseteq S_t$.
$A_t$ is a submatrix of the spectral library $A$, containing the $k$ columns indexed by $S_t$.
$X_t$ is the abundance matrix corresponding to $A_t$.
$Y_t$ is the reconstruction of $Y$ by the BSOMP algorithm after $t$ iterations (assuming that the BSOMP algorithm removes a nonexisting endmember from the index set $S_t$ in each iteration), with $Y_t = A_t X_t$.
$R_t$ is the residual of $Y$ after $t$ iterations, $R_t = Y - Y_t$.

Theorem 7. Suppose that the best approximation $Y_{\text{opt}}$ of the hyperspectral image data matrix $Y$ over the index set $S_{\text{opt}}$ satisfies the error bound $\|A^*(Y - Y_{\text{opt}})\|_{\infty,\infty} \le \tau$, that the first part of BSOMP has selected $k$ potential endmembers from the spectral library to constitute the initial index set $S_t$, and that $S_{\text{opt}} \subseteq S_t$. After iteration $t$ of the second part of BSOMP, halt the second part of the algorithm if

$$ \|A^* R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)} \, \tau, \quad (12) $$

where $k \ge T$. If this inequality fails, then BSOMP must remove another nonexisting endmember in iteration $(t + 1)$.

The proof of Theorem 7 is given in the Appendix. If the second part of the BSOMP algorithm terminates at the end of iteration $t$, we may conclude that the BSOMP algorithm has removed $t$ nonexisting endmembers from $S_t$ and has chosen $k$ optimal endmembers.

4. Experimental Results

In this section, we use both simulated hyperspectral data and real hyperspectral data to demonstrate the performance of the proposed algorithm. The BSOMP algorithm is compared with three SGAs (SOMP [25], SMP [24], and RSFoBa [26]) and three convex relaxation methods (i.e., SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22]). All the considered algorithms take the abundance nonnegativity constraint into account. The TV regularizer used in SUnSAL-TV is a nonisotropic one. For the RSFoBa algorithm, the $\ell_\infty$ norm is considered in our experiment. As mentioned above, when the spectral library is overcomplete, the GAs behave worse than the SGAs and the convex relaxation methods; thus we do not include the GAs in our experiment.

[Figure 1: Five spectral signatures from the USGS library used in our simulated data experiments: Axinite HS342.3B, Samarium oxide GDS36, Chrysocolla HS297.3B, Niter GDS43 (K-saltpeter), and Anthophyllite HS286.3B. Each subimage plots reflectance against wavelength (0.5–2.5 μm); the title of each subimage denotes the mineral corresponding to the signature.]

In the simulated data experiments, we evaluated the performance of the sparse unmixing algorithms under different SNR levels of noise. The signal-to-reconstruction error (SRE) [21] is used to measure the quality of the reconstructed abundances of the spectral endmembers:

$$ \text{SRE} = \frac{E[\|X\|_2^2]}{E[\|X - \hat{X}\|_2^2]}, \qquad \text{SRE(dB)} = 10 \log_{10}(\text{SRE}), \quad (13) $$

where $X$ denotes the true abundances of the spectral endmembers and $\hat{X}$ denotes the reconstructed abundances.
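Dropping the expectations for a single realization, (13) is one line of numpy; the abundance values below are hypothetical:

```python
import numpy as np

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error of (13), in dB, for one realization."""
    sre = np.sum(X_true ** 2) / np.sum((X_true - X_est) ** 2)
    return 10.0 * np.log10(sre)

# Toy abundances for m = 2 endmembers and K = 2 pixels.
X_true = np.array([[0.5, 0.2],
                   [0.5, 0.8]])
X_est = np.array([[0.45, 0.25],
                  [0.55, 0.75]])
print(round(sre_db(X_true, X_est), 2))  # -> 20.72
```

Higher SRE (dB) means the estimated abundance matrix is closer to the true one.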

4.1. Evaluation with Simulated Data. In the simulated experiments, the United States Geological Survey (USGS) digital spectral library (splib06a) [31] is used to build the spectral library $A$. The reflectance values of 498 spectral signatures are measured over 224 spectral bands distributed uniformly in the interval 0.4–2.5 μm ($A \in R^{224 \times 498}$). Fifteen spectral signatures are chosen from the spectral library to generate our simulated data. Figure 1 shows five spectral signatures used for all the simulated data experiments. The other ten spectral signatures, not displayed in the figure, are Monazite HS255.3B, Zoisite HS347.3B, Rhodochrosite HS67, Neodymium Oxide GDS34, Pigeonite HS199.3B, Meionite WS700.Hlsep, Spodumene HS210.3B, Labradorite HS17.3B, Grossular WS484, and Wollastonite HS348.3B.

In the Simulated Data 1 experiment, we generated seven datacubes of 50 × 50 pixels with 224 bands per pixel, each containing a different number of endmembers: 3, 5, 7, 9, 11, 13, and 15. In each simulated pixel, the fractional abundances of the endmembers were randomly generated following a Dirichlet distribution [23]. Note that there was no pure pixel in Simulated Data 1, and the fractional abundances of the endmembers were less than 0.8. After Simulated Data 1 was generated, Gaussian white noise or correlated noise was added at different levels of the signal-to-noise ratio (SNR), that is, 20, 25, 30, 35, 40, 45, and 50 dB. The Gaussian white noise was generated using the awgn function in MATLAB. The correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.

[Figure 2: Results on Simulated Data 1 with 30 dB white noise: SRE (dB) as a function of the endmember number for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.]

Simulated Data 2 was generated using nine randomly selected spectral signatures from $A$, with 100 × 100 pixels and 224 bands per pixel, and was provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth; the true abundances of three endmembers are illustrated in Figure 2. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added as well.

In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to obtain better performance. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter $\sigma$ of SOMP and SMP was set to $1\cdot10^{-2}$. The stopping threshold parameters $\sigma$ and $\delta$ of BSOMP were set to $1\cdot10^{-2}$ and 0.1, respectively. RSFoBa was tested using different values of the parameters $\lambda$ and $\sigma$: 0 and $5\cdot10^{-3}$ for the Simulated Data 1 experiment, and 1 and $1\cdot10^{-2}$ for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of the parameters $\lambda$ and $\lambda_{\text{TV}}$: $1\cdot10^{-2}$ and $1\cdot10^{-2}$, $1\cdot10^{-2}$ and $1\cdot10^{-2}$, $1\cdot10^{-2}$ and $5\cdot10^{-3}$, $5\cdot10^{-3}$ and $1\cdot10^{-3}$, $1\cdot10^{-3}$ and $5\cdot10^{-4}$, $5\cdot10^{-4}$ and $1\cdot10^{-4}$, and $5\cdot10^{-4}$ and $1\cdot10^{-4}$ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively. SUnSAL and CLSUnSAL were tested using different values of the parameter $\lambda$: $1\cdot10^{-2}$, $5\cdot10^{-3}$, $1\cdot10^{-3}$, $5\cdot10^{-4}$, $5\cdot10^{-4}$, $5\cdot10^{-4}$, and $5\cdot10^{-4}$ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively.

Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of the endmember number for the different tested methods. The SREs of all the algorithms decrease as the endmember number increases. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. All the SGA methods have performance comparable with SUnSAL-TV, and BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. We can find that all the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL; this result indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers. Among the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of endmember number and SNR, respectively. As shown in Figures 4 and 5, BSOMP gets an overall better performance than the other algorithms.

[Figure 3: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of SNR.]

Figure 6 shows the number of potential endmembers obtained from the spectral library by all the SGA methods, which adopt the block-processing strategy to effectively pick up all the actual endmembers. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and degrade its performance. BSOMP, by contrast, incorporates a backtracking process to remove endmembers chosen wrongly from the estimated endmember set, which allows it to select the actual endmembers more accurately than SOMP. Differing from RSFoBa, BSOMP applies the backtracking process to the whole hyperspectral data to remove endmembers chosen wrongly in the previous block processing, which allows it to identify the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.

[Figure 4: Results on Simulated Data 1 with 30 dB correlated noise: SRE (dB) as a function of the endmember number.]

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise, respectively, as a function of SNR. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This result indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers and that the spatial-contextual coherence regularizer of SUnSAL-TV gives it an advantage over SUnSAL and CLSUnSAL. All the SGA methods have performance comparable with SUnSAL-TV, and BSOMP and RSFoBa always obtain the better estimations of the abundances. RSFoBa incorporates the spatial-contextual coherence regularizer, while BSOMP incorporates a backtracking process over the whole hyperspectral data; it can be observed that both select the actual endmembers more accurately than SOMP and SMP.

For visual comparison Figure 9 shows the true abun-dance maps and the abundance maps estimated by the differ-ent algorithms on Simulated Data 2 with 20 dB white noiseIn Figure 9 the abundancemaps of endmembers which wereobtained by SUnSAL-TV contain fewer noise points How-ever the abundance maps estimated by SUnSAL-TV may

CLSUnSALSUnSALSMPSOMP

RSFoBaBSOMPSUnSAL_TV

30

25

20

15

10

5

0

SRE

(dB)

20 25 30 35 40 45 50

SNR (dB)

Figure 5 Results on Simulated Data 1 with correlated noise whenthe endmember number is eleven SRE as function of SNR

Endmember number

70

60

50

40

30

20

10

0

4 6 8 10 12 14

Num

ber o

f ret

aine

d en

dmem

bers

SMP (white)SOMP (white)RSFoBa (white)

SMP (correlated)SOMP (correlated)RSFoBa (correlated)

BSOMP (white) BSOMP (correlated)

Figure 6 Results obtained by the four SGAs on Simulated Data1 with 30 dB white noise or correlated noise number of retainedendmembers as function of endmember number

exhibit an oversmooth visual effect around some pixels someof the details and edges are not well preserved Comparedwith SUnSAL-TV the estimated abundances by BSOMPmay contain more noise points but preserve more edgeregions and are closer to the true abundances Moreover itis much easier to tell the different materials apart Comparedwith SOMP and SMP the estimated abundances by BSOMP

8 Mathematical Problems in Engineering

Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube                           SUnSAL   CLSUnSAL   SUnSAL-TV   SOMP    SMP      RSFoBa   BSOMP
Simulated Data 1 (11 endmembers)   67.67    78.41      363.27      20.83   212.89   39.38    64.18
Simulated Data 2                   254.71   279.02     1271.12     96.18   354.44   282.77   1065.24


Figure 7: Results on Simulated Data 2 with white noise. SRE as a function of SNR.


Figure 8: Results on Simulated Data 2 with correlated noise. SRE as a function of SNR.

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Property      Backward step   Joint sparsity   Contextual information
SMP                                 ⋅
SOMP                                ⋅
RSFoBa              ⋅               ⋅                    ⋅
BSOMP               ⋅               ⋅
SUnSAL
CLSUnSAL                            ⋅
SUnSAL-TV                                                ⋅

contain fewer noise points and preserve more edge regions; they are more accurate and have a better visual effect.

Table 1 shows the processing times measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also see that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

To make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, Table 2 compares them with respect to several properties.

SUnSAL is a convex relaxation method for sparse unmixing, which solves the ℓ1-norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the ℓ2,1 optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions, while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is a special case of SUnSAL which incorporates the spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.
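For reference, the three convex formulations just discussed can be written compactly (these are the standard forms from the sparse-regression unmixing literature [20–23]; $X \ge 0$ abbreviates the abundance nonnegativity constraint, and $\lambda$, $\lambda_{\mathrm{TV}}$ are the regularization parameters used in Section 4):

```latex
% SUnSAL: pixelwise l1 sparse regression
\min_{X \ge 0}\ \tfrac{1}{2}\|AX - Y\|_F^2 + \lambda \|X\|_{1,1}
% CLSUnSAL: joint sparsity via the l_{2,1} mixed norm (same support for all pixels)
\min_{X \ge 0}\ \tfrac{1}{2}\|AX - Y\|_F^2 + \lambda \|X\|_{2,1}
% SUnSAL-TV: adds a total-variation (spatial-contextual) regularizer
\min_{X \ge 0}\ \tfrac{1}{2}\|AX - Y\|_F^2 + \lambda \|X\|_{1,1} + \lambda_{\mathrm{TV}}\,\mathrm{TV}(X)
```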

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA, which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA, which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and



Figure 9: Comparison of abundance maps on Simulated Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

RSFoBa behave better than SMP and SOMP, which indicates that the backward step enables BSOMP and RSFoBa to select the actual endmembers more accurately. Differing from RSFoBa, which adopts a forward step and a backward step to pick potential endmembers in each block, BSOMP adopts a forward step to pick potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can see that BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa, which indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
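The forward/backward structure described above can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: the least-squares abundance fit and the pruning rule based on the relative residual increase are our simplifications, and the function names, `sigma`, and `delta` are ours (loosely corresponding to the stopping parameters σ and δ of Section 4).

```python
import numpy as np

def somp_forward(A, Y, max_iter, sigma):
    """Forward (SOMP) step: greedily add the library column whose joint
    correlation with the current residual, summed over all pixels, is largest."""
    support, R = [], Y.copy()
    for _ in range(max_iter):
        corr = np.sum(np.abs(A.T @ R), axis=1)   # joint correlation per atom
        corr[support] = -np.inf                  # never reselect an atom
        support.append(int(np.argmax(corr)))
        X = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
        R = Y - A[:, support] @ X                # residual after the LS abundance fit
        if np.linalg.norm(R) / np.linalg.norm(Y) < sigma:
            break
    return support

def backward_prune(A, Y, support, delta):
    """Backward step over the whole data: repeatedly drop any endmember whose
    removal increases the residual by less than delta (relative to ||Y||)."""
    support, pruned = list(support), True
    while pruned and len(support) > 1:
        pruned = False
        X = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
        base = np.linalg.norm(Y - A[:, support] @ X)
        for j in list(support):
            rest = [i for i in support if i != j]
            Xr = np.linalg.lstsq(A[:, rest], Y, rcond=None)[0]
            err = np.linalg.norm(Y - A[:, rest] @ Xr)
            if (err - base) / np.linalg.norm(Y) < delta:
                support.remove(j)                # j is unreliable: delete it
                pruned = True
                break
    return support
```

In BSOMP proper, the forward step runs per block and the candidate supports are merged before the single global backward step; here `somp_forward` stands in for one block and `backward_prune` for the global deletion stage.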

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The scene is a 250 × 191 pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data was collected in 1997. Thus we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

For the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm   SUnSAL    CLSUnSAL   SUnSAL-TV   SOMP     SMP      RSFoBa   BSOMP
Sparsity    17.5629   20.7037    20.472      13.684   15.102   12.702   11.810
RMSE        0.0051    0.0035     0.0028      0.0040   0.0034   0.0032   0.0033


Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven. SRE as a function of block size.


Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven. Number of retained endmembers as a function of block size.

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K} \sum_{j=1}^{K} \left(Y_j - \hat{Y}_j\right)^2}, \tag{14}$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE for a sparse unmixing algorithm also reflects a better unmixing performance of this algorithm.
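Both evaluation measures are straightforward to compute; a small sketch follows (the 0.001 threshold matches the description above; we assume `Y` holds one pixel per column, and the function names are ours):

```python
import numpy as np

def rmse(Y, Y_hat):
    """Root mean square error between the original image Y and its
    reconstruction Y_hat, averaged over the K pixels (equation (14))."""
    K = Y.shape[1]
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)

def sparsity(X, tol=1e-3):
    """Average number of abundances per pixel above the 0.001 threshold."""
    return np.mean(np.sum(np.abs(X) > tol, axis=0))
```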

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1. The parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction errors obtained by the different algorithms. In Table 3 we can find that BSOMP, SOMP, SMP, and SUnSAL obtain sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions while achieving a low reconstruction error, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals, in comparison to the other considered algorithms, with regard to the classification maps produced by the Tricorder 3.3 software. We also analyzed the sparsity of the abundances estimated by BSOMP; the average



Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada.

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the quantitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

A topic worth discussing is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs

is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still over the relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers will have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data, while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial for improving estimation effectiveness.



Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions that will be used in the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in \mathbb{R}^{L \times K}$ and the available spectral library matrix $A \in \mathbb{R}^{L \times m}$:

(i) $A_t$ is the submatrix of the spectral library $A$ that contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

(ii) $A_{\sim\mathrm{opt}}$ is the submatrix of $A_t$ that contains the remaining $(k - T)$ columns, indexed in $S_t \setminus S_{\mathrm{opt}}$.

(iii) $X_{\sim\mathrm{opt}}$ is the submatrix of $X_t$ that contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

We now prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}} \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} A_{\mathrm{opt}}^{*}, \tag{A1}$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ ($R_t = Y - Y_t$), we have
$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*} R_t\right\|_{\infty,\infty}. \tag{A2}$$

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so
$$\left\|\bar{A}_t^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \tag{A3}$$

Invoking (A2) and (A3) and applying the triangle inequality gives
$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*} \left(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t\right)\right\|_{\infty,\infty} \le \left\|A^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \tag{A4}$$

The matrix $Y_t - Y_{\mathrm{opt}}$ can then be expressed as
$$Y_t - Y_{\mathrm{opt}} = \left(I - P_{\mathrm{opt}}\right) Y_t = \left(I - P_{\mathrm{opt}}\right) A_t X_t = \left(I - P_{\mathrm{opt}}\right) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}. \tag{A5}$$
The last equality holds because $(I - P_{\mathrm{opt}})$ annihilates the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A4) becomes
$$\left\|\bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*} \left(I - P_{\mathrm{opt}}\right) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left[\left\|\bar{A}_t^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}\right] \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A6}$$

We apply the cumulative coherence function $\mu(\cdot)$ to estimate (A6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{\sim\mathrm{opt}}$ lists the inner products between one spectral endmember and $(k - T)$ distinct endmembers, so
$$\left\|\bar{A}_t^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A7}$$

(2) Using the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoking (A1),
$$\left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \left\|A_{\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A8}$$

(3) By the same deduction as in step (1),
$$\left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(T), \qquad \left\|A_{\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A9}$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so it has a unit diagonal and can be written as
$$A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A10}$$
where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. Expanding the inverse in a Neumann series gives
$$\left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} = \left\|\left(I - X_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \le \frac{1}{1 - \left\|X_{\mathrm{opt}}\right\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A11}$$

(5) Substituting the bounds from steps (1)-(4) into (A6), we obtain
$$\left\|\bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\right] \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A12}$$

Simplifying (A12) yields
$$\left\|\bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)} \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A13}$$

To produce an upper bound on $\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}$, we use the triangle inequality again:
$$\left\|A_t^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_t^{*} \left(Y - Y_t + Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A_t^{*} \left(Y - Y_t\right)\right\|_{\infty,\infty} + \left\|A_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}, \tag{A14}$$
where the last equality holds because $(Y - Y_t)$ is orthogonal to the range of $A_t$, so that $A_t^{*}(Y - Y_t) = 0$. Introducing (A5) into (A14), we get
$$\left\|A_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_{\sim\mathrm{opt}}^{*} \left(I - P_{\mathrm{opt}}\right) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A15}$$
The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so $A_t$ can be replaced by $A_{\sim\mathrm{opt}}$. Let $D = A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}$ and write $D = I - (I - D)$; expanding its inverse in a Neumann series, we obtain
$$\left\|D X_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \ge \left(1 - \left\|I - D\right\|_{\infty,\infty}\right) \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A16}$$

Recalling the definition of $D$ and applying the triangle inequality,
$$\left\|I - D\right\|_{\infty,\infty} = \left\|I - A_{\sim\mathrm{opt}}^{*} A_{\sim\mathrm{opt}} + A_{\sim\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|I - A_{\sim\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} + \left\|A_{\sim\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A17}$$

The right-hand side can be bounded exactly as in steps (1)-(5), giving
$$\left\|I - D\right\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \tag{A18}$$

Introducing (A18) and (A16) into (A14)-(A15), we have
$$\left\|A_t^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right] \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A19}$$

By the same deduction as (A3), $\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A19) can be rewritten as
$$\left\|A^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right] \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A20}$$

Introducing (A20) into (A13), and assuming $\mu(T) + \mu(k - T) < 1$, we get
$$\left\|\bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)} \left\|A^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \tag{A21}$$

Introducing (A21) into (A4) gives
$$\left\|A^{*} R_t\right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)} \left\|A^{*} \left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \tag{A22}$$

Finally, assuming the bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, (A22) can be rewritten as
$$\left\|A^{*} R_t\right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)} \, \tau. \tag{A23}$$

This completes the argument.
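The cumulative coherence μ(·) that drives the proof is Tropp's Babel function, and the Neumann-series bound (A11) is easy to verify numerically. The sketch below uses a random unit-norm dictionary; all names and the particular subset of atoms are our illustrative choices.

```python
import numpy as np

def cumulative_coherence(A, k):
    """mu(k): for each column, sum its k largest absolute inner products
    with the other columns; take the max over columns (Babel function)."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    topk = np.sort(G, axis=1)[:, -k:]   # k largest off-diagonal entries per row
    return float(np.max(np.sum(topk, axis=1)))

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
A /= np.linalg.norm(A, axis=0)           # normalized columns, as assumed in step (4)

T = 4
opt = [0, 5, 9, 13]                      # an arbitrary subset of T atoms
G_opt = A[:, opt].T @ A[:, opt]          # Gram matrix with unit diagonal
# ||.||_{inf,inf} is the maximum absolute row sum (operator norm on l_inf)
inv_norm = np.max(np.sum(np.abs(np.linalg.inv(G_opt)), axis=1))
mu_T = cumulative_coherence(A, T)
```

Whenever `mu_T < 1`, the bound `inv_norm <= 1 / (1 - mu_T)` of (A11) must hold, since the off-diagonal row sums of `G_opt` are dominated by μ(T).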

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors thank the anonymous reviewers for their helpful comments toward improving this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Grana, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10-12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Grana, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Grana, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martinez, and R. Perez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.


Mathematical Problems in Engineering 3

Part 1 (SOMP)
(1) Initialize the hyperspectral data Y and the spectral library A.
(2) Divide the hyperspectral data into blocks, Y = [Y_1, Y_2, ..., Y_B], and initialize the index set S_SOMP = ∅.
(3) For each block b do
(4)   Set the index set S_b = ∅ and the iteration counter l = 1. Initialize the residual of block b: R_0 = Y_b.
(5)   While stopping criterion 1 has not been met do
(6)     Compute the index of the member of A best correlated with the current residual: j = argmax_i ||(R_{l-1})^T A_i||_2, where A_i is the ith column of A.
(7)     Update the support set: S_b = S_b ∪ {j}.
(8)     Compute X_l = (A*_{S_b} A_{S_b})^{-1} A*_{S_b} Y_b, where A_{S_b} is the matrix containing the columns of A whose indexes are in S_b.
(9)     Update the residual: R_l = Y_b − A_{S_b} X_l.
(10)    l ← l + 1.
(11)  End while
(12)  Set S_SOMP = S_SOMP ∪ S_b.
(13) End for

Part 2 (Backtracking process)
(14) Initialize the hyperspectral data Y, the index set S_t = S_SOMP, and A_t = A_{S_SOMP}, where A_t is the matrix containing the columns of A whose indexes are in S_t.
(15) While stopping criterion 2 has not been met do
(16)   Compute the solution X_t = (A*_t A_t)^{-1} A*_t Y.
(17)   Find the member of A_t having the lowest abundance: index ← argmin_j (X_t^j).
(18)   Remove the member having the lowest fractional abundance from the index set, S_t ← S_t \ S_index, and from the endmember set, A_t ← A_t \ A_index.
(19)   t ← t + 1.
(20) End while

Part 3 (Abundance estimation)
(21) Estimate the abundances using the original hyperspectral data matrix and the selected endmember set under the nonnegativity constraint: X ← argmin_X ||A_t X − Y||_F subject to X ≥ 0.

Algorithm 1: BSOMP for hyperspectral sparse unmixing.
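Step (21) is a nonnegative least-squares problem. The paper does not name a solver for it; the sketch below is our own illustration of one standard choice, projected gradient descent with a fixed step size, not the authors' implementation.

```python
import numpy as np

def nnls_pg(A_t, Y, n_iter=500):
    """Sketch of Part 3: X = argmin_X ||A_t X - Y||_F  subject to X >= 0,
    via projected gradient descent (one common solver; the paper does not
    specify which one it uses)."""
    L = 1.01 * np.linalg.norm(A_t, 2) ** 2      # > largest eigenvalue of A_t^T A_t
    X = np.zeros((A_t.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = A_t.T @ (A_t @ X - Y)            # gradient of (1/2)||A_t X - Y||_F^2
        X = np.maximum(X - grad / L, 0.0)       # gradient step, then project onto X >= 0
    return X
```

Because the objective is convex and the feasible set X ≥ 0 is closed and convex, the iterates converge to a constrained minimizer for any step below 1/L.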

library, X ∈ R^{m×K} denotes the fractional abundance matrix, each column of which corresponds to the abundance fractions of a mixed pixel, and N ∈ R^{L×K} is the noise matrix.

Under the simultaneous sparse unmixing model, the unmixing problem can be expressed as follows:

min_X ||X||_{row-0}  s.t. ||Y − AX||_F^2 ≤ δ, X ≥ 0,  (5)

where ||X||_F denotes the Frobenius norm of X, ||X||_{row-0} is the number of nonzero rows in the matrix X [25], and δ is the tolerated error due to noise and model error.

It should be noted that the model in (5) is reasonable because there should be only a few rows with nonzero entries in the abundance matrix, given that the number of endmembers present in the hyperspectral image is small compared with the dimensionality of the spectral library [22].

3. Backtracking-Based SOMP

In this section, we first present our new algorithm, BSOMP, for sparse unmixing of hyperspectral data; we then give a theoretical analysis and a convergence theorem for the proposed algorithm.

3.1. Statement of Algorithm. The whole process of using BSOMP for sparse unmixing of hyperspectral data is summarized in Algorithm 1. The algorithm comprises three main parts: SOMP-based endmember selection, backtracking, and abundance estimation. In the first part, we adopt a block-processing strategy so that SOMP selects endmembers efficiently: the whole hyperspectral image is divided into several blocks, and in each block SOMP picks several potential endmembers from the spectral library and adds them to the estimated endmember set. In the second part, we utilize a backtracking strategy to remove wrongly chosen endmembers from the estimated endmember set, which identifies the true endmember set more accurately; the backtracking is halted when the maximum total correlation between an endmember in the spectral library and the residual drops below the threshold δ. Finally, the abundances are estimated using the obtained endmember set under the nonnegativity constraint.

In the following paragraphs, we provide some definitions and then give two stopping conditions for the BSOMP algorithm.
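The two selection stages of Algorithm 1 can be sketched in NumPy as follows. This is a hypothetical minimal implementation, not the authors' code: real-valued data are assumed (so the conjugate transpose is a plain transpose), the least-squares steps use `lstsq` instead of the explicit pseudoinverse, and the small numerical floors in the stopping tests are our own additions.

```python
import numpy as np

def somp_block(Y_b, A, sigma=1e-2, max_iter=30):
    """Part 1 inner loop (steps 4-11): SOMP on one block Y_b (L x K_b pixels),
    with A the L x m spectral library."""
    S, R = [], Y_b.copy()                       # support S_b, residual R_0 = Y_b
    prev = np.linalg.norm(R, 'fro')
    for _ in range(max_iter):
        corr = np.linalg.norm(R.T @ A, axis=0)  # ||R^T A_i||_2 for every column i
        corr[S] = -np.inf                       # never reselect a member
        S.append(int(np.argmax(corr)))          # step 6: best-correlated member
        X, *_ = np.linalg.lstsq(A[:, S], Y_b, rcond=None)   # step 8
        R = Y_b - A[:, S] @ X                   # step 9
        cur = np.linalg.norm(R, 'fro')
        if abs(cur - prev) / max(prev, 1e-12) <= sigma:     # criterion (8)
            break
        prev = cur
    return S

def backtrack(Y, A, S, delta=0.1):
    """Part 2 (steps 14-20): repeatedly drop the member with the lowest total
    abundance until the change in ||A^T R_t||_{inf,inf} is small, cf. (9)."""
    S, prev = list(S), None
    while len(S) > 1:
        X, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)     # step 16
        R = Y - A[:, S] @ X
        cur = np.max(np.abs(A.T @ R).sum(axis=1))           # ||A^T R_t||_{inf,inf}
        if prev is not None and abs(cur - prev) <= delta * max(prev, 1e-9):
            break
        prev = cur
        S.pop(int(np.argmin(np.abs(X).sum(axis=1))))        # steps 17-18
    return S
```

Part 3 (nonnegative abundance estimation over the final support) is left to a dedicated nonnegative least-squares solver.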


Definition 1. If x is a vector whose ith element is x_i, the l_p norm of x is defined as

||x||_p = (Σ_i |x_i|^p)^{1/p}  for 1 ≤ p < ∞,
||x||_∞ = max_i |x_i|.  (6)

Definition 2. The (p, q) operator norm of the matrix A is defined as

||A||_{p,q} = max_{x ≠ 0} ||Ax||_q / ||x||_p.  (7)

Several of the (p, q) operator norms can be computed easily using the following lemma [25, 27].

Lemma 3. (1) The (1, q) operator norm is the maximum l_q norm of any column of A.
(2) The (2, 2) operator norm is the maximum singular value of A.
(3) The (p, ∞) operator norm is the maximum l_{p'} norm of any row of A, where p' is the conjugate exponent of p; in particular, the (∞, ∞) operator norm is the maximum l_1 norm of any row.
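Lemma 3 can be checked numerically. The snippet below is our own illustration (not from the paper); it instantiates the three cases for q = 1 and p = ∞, using NumPy's induced matrix norms as the reference values.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# (1, 1) operator norm: maximum l_1 norm of any column of A
col_max = max(np.linalg.norm(A[:, j], 1) for j in range(A.shape[1]))

# (2, 2) operator norm: maximum singular value of A
sv_max = np.linalg.svd(A, compute_uv=False)[0]

# (inf, inf) operator norm: maximum l_1 norm of any row of A
row_max = max(np.linalg.norm(A[i, :], 1) for i in range(A.shape[0]))
```

Each quantity coincides with the corresponding induced norm `np.linalg.norm(A, ord)` for ord = 1, 2, and ∞, respectively.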

We give a stopping condition for the first part of the BSOMP algorithm:

| ||R_l||_F − ||R_{l−1}||_F | / ||R_{l−1}||_F ≤ σ,  (8)

where ||R_l||_F is the Frobenius norm of the residual corresponding to the lth iteration and σ is a threshold.

For the second part of the BSOMP algorithm, we give a stopping condition for deciding when to halt the iteration:

| ||A*R_t||_{∞,∞} − ||A*R_{t−1}||_{∞,∞} | / ||A*R_t||_{∞,∞} ≤ δ,  (9)

where R_t is the residual corresponding to the tth iteration, A* denotes the conjugate transpose of A, ||A*R_t||_{∞,∞} is the maximum total correlation between an endmember in the spectral library A and the residual R_t, and δ is a threshold.

3.2. Theoretical Analysis. In this section, we introduce some definitions and then give a theoretical analysis of the BSOMP algorithm.

Definition 4. The coherence μ of a spectral library A equals the maximum correlation between two distinct spectral signatures [28]:

μ = max_{k ≠ h} |⟨A_k, A_h⟩|,  (10)

where A_k is the kth column of A. If the coherence parameter is small, each pair of spectral signatures is nearly orthogonal.

Definition 5. The cumulative coherence [29, 30] is defined as

μ(k) = max_{|Λ| ≤ k} max_{j ∉ Λ} Σ_{i ∈ Λ} |⟨A_i, A_j⟩|,  (11)

where the index set Λ ⊆ S_all, and S_all is the support of all the spectral signatures in the spectral library A. The cumulative coherence measures the maximum total correlation between a fixed spectral signature and k distinct spectral signatures. In particular, μ(0) = 0.
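Definitions 4 and 5 translate directly into code. The sketch below is our own illustration for a column-normalized library; it exploits the fact that, for a fixed signature j, the inner maximum in (11) is attained by the k columns most correlated with A_j, so summing the k largest off-diagonal Gram entries per row is exact.

```python
import numpy as np

def coherence(A):
    """Definition 4: mu = max_{k != h} |<A_k, A_h>| (unit-norm columns assumed)."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)        # exclude the k = h terms
    return G.max()

def cumulative_coherence(A, k):
    """Definition 5: mu(k), the maximum total correlation between one fixed
    signature and k distinct signatures; mu(0) = 0 and mu(1) = mu."""
    if k <= 0:
        return 0.0
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    top_k = -np.sort(-G, axis=1)[:, :k]   # k largest correlations per column
    return top_k.sum(axis=1).max()
```

By construction μ(k) is nondecreasing in k, which is what makes the denominator 1 − μ(T) − μ(k − T) in Theorem 7 the binding quantity.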

Definition 6. Given the hyperspectral image data matrix Y ∈ R^{L×K} and the available spectral library matrix A ∈ R^{L×m}:

S_opt is an index set which lists the at most T endmembers present in the hyperspectral image scene.
A_opt is the optimal submatrix of the spectral library A, containing the T columns indexed by S_opt.
X_opt is the optimal abundance matrix corresponding to A_opt.
Y_opt is a T-term best approximation of Y, with Y_opt = A_opt X_opt.
S_t is an index set which lists k endmembers after t iterations; we assume S_opt ⊆ S_t.
A_t is the submatrix of the spectral library A containing the k columns indexed by S_t.
X_t is the abundance matrix corresponding to A_t.
Y_t is the reconstruction of Y by the BSOMP algorithm after t iterations (assuming that in each iteration BSOMP selects a nonexisting endmember to remove from the index set S_t), with Y_t = A_t X_t.
R_t is the residual error of Y after t iterations: R_t = Y − Y_t.

Theorem 7. Suppose that the best approximation Y_opt of the hyperspectral image data matrix Y over the index set S_opt satisfies the error bound ||A*(Y − Y_opt)||_{∞,∞} ≤ τ, that the first part of BSOMP has selected k potential endmembers from the spectral library to constitute the initial index set S_t, and that S_opt ⊆ S_t. After iteration t of the second part of BSOMP, halt the second part of the algorithm if

||A*R_t||_{∞,∞} ≤ [(1 − μ(T)) / (1 − μ(T) − μ(k − T))] τ,  (12)

where k ≥ T. If this inequality fails, then BSOMP must remove another nonexisting endmember in iteration (t + 1).

The proof of Theorem 7 is given in the Appendix. If the second part of the BSOMP algorithm terminates at the end of iteration t, we may conclude that the BSOMP algorithm has removed t nonexisting endmembers from S_t and has chosen k optimal endmembers.

4. Experimental Results

In this section, we use both simulated and real hyperspectral data to demonstrate the performance of the proposed algorithm. The BSOMP algorithm is compared with three SGAs (SOMP [25], SMP [24], and RSFoBa [26]) and three convex relaxation methods (SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22]). All the considered algorithms take the abundance nonnegativity constraint into account. The TV regularizer used in SUnSAL-TV is a nonisotropic one, and for the RSFoBa algorithm the l_∞ norm is considered in our experiments. As mentioned above, when the spectral library is overcomplete the GAs behave worse than the SGAs and the convex relaxation methods; thus we do not include the GAs in our experiments.

Figure 1: Five spectral signatures from the USGS library used in our simulated data experiments: Axinite HS3423B, Samarium oxide GDS36, Chrysocolla HS2973B, Niter GDS43 (K-saltpeter), and Anthophyllite HS2863B. Each subimage plots reflectance against wavelength (0.5–2.5 μm).

In the simulated data experiments, we evaluate the performance of the sparse unmixing algorithms under different SNR levels of noise. The signal-to-reconstruction error (SRE) [21] is used to measure the quality of the reconstructed abundances of the spectral endmembers:

SRE = E[||X||_2^2] / E[||X − X̂||_2^2],
SRE (dB) = 10 log_10(SRE),  (13)

where X denotes the true abundances of the spectral endmembers and X̂ denotes the reconstructed abundances.
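Since the expectations in (13) are taken over the same pixels, the ratio reduces to a ratio of squared Frobenius norms, which gives a one-line implementation (our own sketch, not the authors' code):

```python
import numpy as np

def sre_db(X_true, X_hat):
    """SRE (dB) per Eq. (13): the pixel-wise expectations cancel to a ratio
    of squared Frobenius norms of the abundance matrices."""
    sre = np.sum(X_true ** 2) / np.sum((X_true - X_hat) ** 2)
    return 10.0 * np.log10(sre)
```

For example, an estimate whose entries are all 10% below the truth yields SRE = 20 dB, independent of the abundance values themselves.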

4.1. Evaluation with Simulated Data. In the simulated experiments, the United States Geological Survey (USGS) digital spectral library (splib06a) [31] is used to build the spectral library A. The reflectance values of 498 spectral signatures are measured in 224 spectral bands distributed uniformly over the interval 0.4–2.5 μm (A ∈ R^{224×498}). Fifteen spectral signatures are chosen from the spectral library to generate our simulated data. Figure 1 shows five of the spectral signatures used in all the simulated data experiments. The other ten spectral signatures, not displayed in the figure, are Monazite HS2553B, Zoisite HS3473B, Rhodochrosite HS67, Neodymium Oxide GDS34, Pigeonite HS1993B, Meionite WS700Hlsep, Spodumene HS2103B, Labradorite HS173B, Grossular WS484, and Wollastonite HS3483B.

In the Simulated Data 1 experiment, we generated seven datacubes of 50 × 50 pixels with 224 bands per pixel, each containing a different number of endmembers: 3, 5, 7, 9, 11, 13, and 15. In each simulated pixel the fractional abundances of the endmembers were randomly generated following a Dirichlet distribution [23]. Note that there was no pure pixel in Simulated Data 1, and the fractional abundances of the endmembers were less than 0.8. After Simulated Data 1 was generated, Gaussian white noise or correlated noise was added at different levels of the signal-to-noise ratio (SNR), namely 20, 25, 30, 35, 40, 45, and 50 dB. The Gaussian white noise was generated using the awgn function in MATLAB; the correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.

Figure 2: Results on Simulated Data 1 with 30 dB white noise: SRE (dB) as a function of endmember number for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.
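The Simulated Data 1 protocol above can be sketched as follows. This is our own NumPy reconstruction under stated assumptions: the no-pure-pixel condition is enforced by resampling the Dirichlet draw until every fraction is below 0.8, and the noise is scaled from the measured signal power, similar to MATLAB's awgn with the 'measured' option.

```python
import numpy as np

def simulate_data1(A, idx, n_pixels, snr_db, rng):
    """Sketch of one Simulated Data 1 cube: Dirichlet abundances over the
    endmembers A[:, idx], plus white Gaussian noise at the requested SNR."""
    p = len(idx)
    X = np.empty((p, n_pixels))
    for k in range(n_pixels):
        a = rng.dirichlet(np.ones(p))
        while a.max() >= 0.8:             # no near-pure pixels
            a = rng.dirichlet(np.ones(p))
        X[:, k] = a
    Y = A[:, idx] @ X
    # white noise scaled so that 10*log10(signal power / noise power) = snr_db
    noise_power = np.mean(Y ** 2) / 10.0 ** (snr_db / 10.0)
    return Y + rng.standard_normal(Y.shape) * np.sqrt(noise_power), X
```

The correlated-noise variant would additionally low-pass filter the generated noise before adding it, as described above.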

Simulated Data 2 was generated using nine randomly selected spectral signatures from the spectral library A, with 100 × 100 pixels and 224 bands per pixel, and was provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth; the true abundance maps of three of the endmembers are shown in Figure 9. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added to Simulated Data 2 as well.

In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to obtain better performance. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter σ of SOMP and SMP was set to 1×10⁻². The stopping threshold parameters σ and δ of BSOMP were set to 1×10⁻² and 0.1, respectively. RSFoBa was tested using different values of the parameters λ and σ: 0 and 5×10⁻³ for the Simulated Data 1 experiment, and 1 and 1×10⁻² for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of the parameters λ and λ_TV: 1×10⁻² and 1×10⁻², 1×10⁻² and 1×10⁻², 1×10⁻² and 5×10⁻³, 5×10⁻³ and 1×10⁻³, 1×10⁻³ and 5×10⁻⁴, 5×10⁻⁴ and 1×10⁻⁴, and 5×10⁻⁴ and 1×10⁻⁴ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively. SUnSAL and CLSUnSAL were tested using different values of the parameter λ: 1×10⁻², 5×10⁻³, 1×10⁻³, 5×10⁻⁴, 5×10⁻⁴, 5×10⁻⁴, and 5×10⁻⁴ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively.

Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of the endmember number for the different tested methods. The SREs of all the algorithms decrease as the endmember number increases. We can see that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL; all the SGA methods perform comparably to SUnSAL-TV, and BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. Here all the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL, which indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers. Among the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of endmember number and SNR, respectively. As shown in Figures 4 and 5, BSOMP achieves an overall better performance than the other algorithms.

Figure 3: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE as a function of SNR.

Figure 6 shows the number of potential endmembers retained from the spectral library by each of the SGA methods, all of which adopt the block-processing strategy. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and degrade the performance of SOMP, whereas BSOMP incorporates a backtracking process to remove wrongly chosen endmembers from the estimated endmember set and can therefore select the actual endmembers more accurately than SOMP. Differing from RSFoBa, BSOMP applies its backtracking process to the whole hyperspectral data set to remove endmembers wrongly chosen during the previous block processing, which identifies the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.

Figure 4: Results on Simulated Data 1 with 30 dB correlated noise: SRE as a function of endmember number.

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise, respectively, as a function of SNR. BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV again perform better than SUnSAL and CLSUnSAL: the SGA methods benefit from the block-processing strategy, which effectively picks up the actual endmembers, while SUnSAL-TV benefits from its spatial-contextual coherence regularizer, which gives it better performance than SUnSAL and CLSUnSAL. All the SGA methods perform comparably to SUnSAL-TV, and BSOMP and RSFoBa always obtain the better estimates of the abundances: RSFoBa incorporates the spatial-contextual coherence regularizer, and BSOMP incorporates a backtracking process over the whole hyperspectral data, so both identify the true endmember set more accurately than SOMP and SMP.

For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps obtained by SUnSAL-TV contain fewer noise points; however, they may exhibit an oversmoothed visual effect around some pixels, so some details and edges are not well preserved. Compared with SUnSAL-TV, the abundances estimated by BSOMP may contain more noise points but preserve more edge regions and are closer to the true abundances; moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the abundances estimated by BSOMP contain fewer noise points and preserve more edge regions.

Figure 5: Results on Simulated Data 1 with correlated noise when the endmember number is eleven: SRE as a function of SNR.

Figure 6: Results obtained by the four SGAs (SOMP, SMP, RSFoBa, and BSOMP) on Simulated Data 1 with 30 dB white noise or correlated noise: number of retained endmembers as a function of endmember number.

Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube | SUnSAL | CLSUnSAL | SUnSAL-TV | SOMP | SMP | RSFoBa | BSOMP
Simulated Data 1 (11 endmembers) | 6.767 | 7.841 | 36.327 | 2.083 | 21.289 | 3.938 | 6.418
Simulated Data 2 | 25.471 | 27.902 | 127.112 | 9.618 | 35.444 | 28.277 | 106.524

Figure 7: Results on Simulated Data 2 with white noise: SRE as a function of SNR.

Figure 8: Results on Simulated Data 2 with correlated noise: SRE as a function of SNR.

Table 2: Comparison of the sparse unmixing algorithms in terms of whether the backward step, joint sparsity, and contextual information are considered.

Algorithm | Backward step | Joint sparsity | Contextual information
SMP | — | · | —
SOMP | — | · | —
RSFoBa | · | · | ·
BSOMP | · | · | —
SUnSAL | — | — | —
CLSUnSAL | — | · | —
SUnSAL-TV | — | — | ·

Overall, the abundance maps estimated by BSOMP are more accurate and have a better visual effect than those of SOMP and SMP.

Table 1 shows the processing times measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms, and that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

To make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare several of their properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing which solves the l_1 norm sparse regression problem, while CLSUnSAL is a convex relaxation method for solving the l_{2,1} optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions, whereas the latter enforces joint sparsity among all the pixels. SUnSAL-TV is an extension of SUnSAL that incorporates a spatial-contextual coherence regularizer. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm, while BSOMP is a forward-backward SGA which incorporates the backward step and joint sparsity. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number; BSOMP and RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes them more accurate in selecting the actual endmembers. Differing from RSFoBa, which applies a forward step and a backward step within each block, BSOMP applies a forward step to pick potential endmembers in each block and then applies a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size; BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers retained from the spectral library by the SGA methods; BSOMP selects the actual endmembers more accurately than RSFoBa. These results indicate that a backward step over the whole hyperspectral data makes BSOMP more effective.

Figure 9: Comparison of abundance maps on Simulated Data 2 with 20 dB white noise. From left to right, the columns show the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top to bottom, the rows show the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The scene used is a 250 × 191-pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The mineral map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, whereas the AVIRIS Cuprite data used here were collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral

Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm | SUnSAL | CLSUnSAL | SUnSAL-TV | SOMP | SMP | RSFoBa | BSOMP
Sparsity | 175.629 | 207.037 | 20.472 | 13.684 | 15.102 | 12.702 | 11.810
RMSE | 0.0051 | 0.0035 | 0.0028 | 0.0040 | 0.0034 | 0.0032 | 0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE as a function of block size (SOMP, SMP, RSFoBa, and BSOMP).

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size (SOMP, SMP, RSFoBa, and BSOMP).

image are used to evaluate the performance of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm; as in [22], we define fractional abundances larger than 0.001 as nonzero to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed using the selected endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

RMSE = sqrt( (1/K) Σ_{j=1}^{K} ||Y_j − Ŷ_j||_2^2 ),  (14)

where Y denotes the original hyperspectral image, Ŷ denotes the reconstructed hyperspectral image, and Y_j and Ŷ_j are their jth pixels. A smaller RMSE reflects better unmixing performance.
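Under our reading of (14), averaging the squared per-pixel spectral errors over the K pixels, the metric is a two-line function (our own sketch):

```python
import numpy as np

def rmse(Y, Y_hat):
    """RMSE per Eq. (14): mean over the K pixels (columns) of the squared
    spectral error, then a square root."""
    K = Y.shape[1]
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)
```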

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while that of SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10, the stopping threshold parameter σ of SOMP and SMP was set to 0.01, the stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1, and the parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. From Table 3, we find that the SGA methods obtain much sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV while obtaining much sparser solutions. In general, BSOMP obtains the sparsest solutions while achieving low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison Figure 13 shows the estimatedfractional abundance maps by applying the proposedBSOMP SMP RSFoBa SOMP SUnSAL and SUnSAL-TValgorithms to the AVIRIS Cuprite data for three differentminerals (Alunite Buddingtonite and Chalcedony) Forcomparison the classification maps of three materialsextracted from the Tricorder 33 software are also presentedin Figure 13 It can be observed that the abundances estimatedby BSOMP are comparable or higher in the regions assignedto the respective minerals in comparison to the otherconsidered algorithms with regard to the classification mapsproduced by the Tricorder 33 software We also analyzed thesparsity of the abundances estimated by BSOMP the average

12 Mathematical Problems in Engineering

Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS Tetracorder 3.3 product, Clark and Swayze). The legend groups the mapped minerals into sulfates (K-Alunite, Na-Alunite, Jarosite, Alunite + Kaolinite and/or Muscovite), kaolinite-group clays, carbonates (Calcite and its mixtures), clays (Na-Montmorillonite, Nontronite), and other minerals (Muscovites, Chlorite, Buddingtonite, Chalcedony, Pyrophyllite + Alunite).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimal number of endmembers to explain the data. Overall, the qualitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still using this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former applies the backward step to the whole hyperspectral data, while the latter applies it within each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which benefits the effectiveness of the abundance estimation.


Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

$A_t$ is a submatrix of the spectral library $A$, where $A_t$ contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, $A = [A_t, \bar{A}_t]$.

$A_{\sim\mathrm{opt}}$ is a submatrix of $A_t$ which contains the remaining $(k - T)$ columns indexed in $S_t \setminus S_{\mathrm{opt}}$.

$X_{\sim\mathrm{opt}}$ is a submatrix of $X_t$ and contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

We now prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}}\bigl(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\bigr)^{-1}A_{\mathrm{opt}}^{*}, \quad (\mathrm{A.1})$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ ($R_t = Y - Y_t$), we have
$$\|A^{*}R_t\|_{\infty,\infty} = \|\bar{A}_t^{*}R_t\|_{\infty,\infty}. \quad (\mathrm{A.2})$$

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
$$\|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (\mathrm{A.3})$$

Invoking (A.2) and (A.3) and applying the triangle inequality, we get
$$\|A^{*}R_t\|_{\infty,\infty} = \|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t)\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} + \|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (\mathrm{A.4})$$

The matrix $Y_t - Y_{\mathrm{opt}}$ can then be expressed in the following manner:
$$Y_t - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}})Y_t = (I - P_{\mathrm{opt}})A_t X_t = (I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}X_{\sim\mathrm{opt}}, \quad (\mathrm{A.5})$$
where the last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|\bar{A}_t^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}X_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \bigl[\|\bar{A}_t^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} + \|\bar{A}_t^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty}\bigr]\,\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.6})$$

We will apply the cumulative coherence function to estimate the terms of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*}A_{\sim\mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so we have
$$\|\bar{A}_t^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \mu(k - T). \quad (\mathrm{A.7})$$

(2) Using the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoking (A.1), we get
$$\|\bar{A}_t^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \|\bar{A}_t^{*}A_{\mathrm{opt}}\|_{\infty,\infty}\,\bigl\|\bigl(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\bigr)^{-1}\bigr\|_{\infty,\infty}\,\|A_{\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.8})$$

(3) Making use of the same deduction as in step (1), we have
$$\|\bar{A}_t^{*}A_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T), \qquad \|A_{\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \mu(k - T). \quad (\mathrm{A.9})$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so the matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ has a unit diagonal and we can write
$$A_{\mathrm{opt}}^{*}A_{\mathrm{opt}} = I - \Delta, \quad (\mathrm{A.10})$$
where $\Delta$ collects the off-diagonal inner products and $\|\Delta\|_{\infty,\infty} \le \mu(T)$. We can then expand the inverse in a Neumann series to get
$$\bigl\|\bigl(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\bigr)^{-1}\bigr\|_{\infty,\infty} = \bigl\|(I - \Delta)^{-1}\bigr\|_{\infty,\infty} \le \frac{1}{1 - \|\Delta\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \quad (\mathrm{A.11})$$

(5) Substituting the bounds from steps (1) to (4) into (A.6), we can obtain
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \Bigl[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\Bigr]\,\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.12})$$
Simplifying expression (A.12), we conclude that
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)}\,\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.13})$$

In order to produce an upper bound on $\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:
$$\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_t^{*}(Y - Y_t + Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A_t^{*}(Y - Y_t)\|_{\infty,\infty} + \|A_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (\mathrm{A.14})$$

Since $(Y - Y_t)$ is orthogonal to the range of $A_t$ (so that $A_t^{*}(Y - Y_t) = 0$), introducing (A.5) into (A.14) gives
$$\|A_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.15})$$

The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so $A_t^{*}$ can be replaced by $A_{\sim\mathrm{opt}}^{*}$. Let $D = A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}$ and rewrite $D = I - (I - D)$; expanding its inverse in a Neumann series, we obtain
$$\|DX_{\sim\mathrm{opt}}\|_{\infty,\infty} \ge \bigl(1 - \|I - D\|_{\infty,\infty}\bigr)\,\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.16})$$

Recalling the definition of $D$ and applying the triangle inequality, we obtain
$$\|I - D\|_{\infty,\infty} = \|I - A_{\sim\mathrm{opt}}^{*}A_{\sim\mathrm{opt}} + A_{\sim\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \|I - A_{\sim\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} + \|A_{\sim\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.17})$$

The right-hand side of this inequality can be bounded in complete analogy with steps (1)-(5), which yields
$$\|I - D\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \quad (\mathrm{A.18})$$

Introducing (A.18) and (A.16) into (A.14), we have
$$\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \Bigl[1 - \frac{\mu(k - T)}{1 - \mu(T)}\Bigr]\,\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.19})$$

Using the same deduction as (A.3), it follows that $\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as follows:
$$\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \Bigl[1 - \frac{\mu(k - T)}{1 - \mu(T)}\Bigr]\,\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (\mathrm{A.20})$$

Introducing (A.20) into (A.13), we get
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)}\,\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (\mathrm{A.21})$$

Introducing (A.21) into (A.4), we get
$$\|A^{*}R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\,\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (\mathrm{A.22})$$

Finally, assuming the bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, (A.22) can be rewritten as follows:
$$\|A^{*}R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\,\tau. \quad (\mathrm{A.23})$$
This completes the argument.
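The Neumann-series bound used in step (4) and in (A.16) is standard. As a quick numerical illustration (ours, not part of the original proof), with the $(\infty,\infty)$ norm computed as the maximum absolute row sum:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_inf_inf(M):
    """(inf, inf) operator norm: maximum l1 norm of the rows of M."""
    return np.abs(M).sum(axis=1).max()

n = 5
Delta = rng.uniform(-1, 1, (n, n))
Delta *= 0.4 / norm_inf_inf(Delta)  # rescale so that ||Delta|| = 0.4 < 1

# check ||(I - Delta)^{-1}|| <= 1 / (1 - ||Delta||), as in (A.11)
lhs = norm_inf_inf(np.linalg.inv(np.eye(n) - Delta))
rhs = 1.0 / (1.0 - norm_inf_inf(Delta))
print(lhs <= rhs + 1e-12)
```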

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44-57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639-644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354-379, 2012.

[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10-12, pp. 2111-2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250-III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019-1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626-1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418-4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1-6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29-47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554-569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640-3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219-228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948-958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230-2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266-273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336-344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681-695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484-4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488-7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1-4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256-3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572-588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271-5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634-4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57-98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231-2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197-2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291-293, 2003.



Definition 1. If $x$ is a vector, where $x_i$ is the $i$th element of $x$, the $\ell_p$ norm of $x$ is defined as
$$\|x\|_p = \Bigl(\sum_i |x_i|^p\Bigr)^{1/p} \quad \text{for } 1 \le p < \infty, \qquad \|x\|_\infty = \max_i |x_i|. \quad (6)$$

Definition 2. The $(p, q)$ operator norm of the matrix $A$ is defined as
$$\|A\|_{p,q} = \max_{x \neq 0} \frac{\|Ax\|_q}{\|x\|_p}. \quad (7)$$

Several of the $(p, q)$ operator norms can be computed easily using the following lemma [25, 27].

Lemma 3. (1) The $(1, q)$ operator norm is the maximum $\ell_q$ norm of any column of $A$.
(2) The $(2, 2)$ operator norm is the maximum singular value of $A$.
(3) The $(p, \infty)$ operator norm is the maximum $\ell_{p'}$ norm of any row of $A$, where $p'$ is the conjugate exponent of $p$ ($1/p + 1/p' = 1$).
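The three characterizations in Lemma 3 are easy to verify numerically; a small sketch (the variable names are ours):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])

# (1, q) norm: maximum l_q norm over the columns of A (here q = 2)
norm_1_2 = np.linalg.norm(A, axis=0).max()

# (2, 2) norm: maximum singular value of A
norm_2_2 = np.linalg.svd(A, compute_uv=False).max()

# (inf, inf) norm: maximum l1 norm over the rows of A (p' = 1)
norm_inf_inf = np.abs(A).sum(axis=1).max()

print(norm_1_2, norm_2_2, norm_inf_inf)
```

The `(inf, inf)` case is the one used throughout the theoretical analysis, since `||A* R_t||` measures the largest total correlation carried by a single library signature.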

We give a stopping condition for the first part of the BSOMP algorithm:
$$\frac{\bigl|\,\|R_l\|_F - \|R_{l-1}\|_F\,\bigr|}{\|R_{l-1}\|_F} \le \sigma, \quad (8)$$
where $\|R_l\|_F$ is the Frobenius norm of the residual corresponding to the $l$th iteration and $\sigma$ is a threshold. For the second part of the BSOMP algorithm, we give a

stopping condition for deciding when to halt the iteration:
$$\frac{\bigl|\,\|A^{*}R_t\|_{\infty,\infty} - \|A^{*}R_{t-1}\|_{\infty,\infty}\,\bigr|}{\|A^{*}R_t\|_{\infty,\infty}} \le \delta, \quad (9)$$
where $R_t$ is the residual corresponding to the $t$th iteration, $A^{*}$ denotes the conjugate transpose of $A$, $\|A^{*}R_t\|_{\infty,\infty}$ is the maximum total correlation between an endmember in the spectral library $A$ and the residual $R_t$, and $\delta$ is a threshold.
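Conditions (8) and (9) translate directly into code; a minimal sketch for real-valued data (function names are ours, not from the paper):

```python
import numpy as np

def stop_stage1(R_l, R_prev, sigma=0.01):
    """Condition (8): relative change of the Frobenius norm of the residual."""
    f_l = np.linalg.norm(R_l, 'fro')
    f_prev = np.linalg.norm(R_prev, 'fro')
    return abs(f_l - f_prev) / f_prev <= sigma

def stop_stage2(A, R_t, R_prev, delta=0.1):
    """Condition (9): relative change of ||A* R||_{inf,inf}, the maximum
    total correlation between a library signature and the residual."""
    def corr(R):
        # (inf, inf) norm of A* R: maximum l1 norm over rows (real A)
        return np.abs(A.T @ R).sum(axis=1).max()
    c_t, c_prev = corr(R_t), corr(R_prev)
    return abs(c_t - c_prev) / c_t <= delta
```

With identical residuals in consecutive iterations, both tests fire, which is the intended behavior: the algorithm stops once further iterations no longer change the residual appreciably.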

3.2. Theoretical Analysis. In this section, we introduce some definitions and then give a theoretical analysis of the BSOMP algorithm.

Definition 4. The coherence $\mu$ of a spectral library $A$ equals the maximum correlation between two distinct spectral signatures [28]:
$$\mu = \max_{k \neq h} |\langle A_k, A_h \rangle|, \quad (10)$$
where $A_k$ is the $k$th column of $A$. If the coherence parameter is small, each pair of spectral signatures is nearly orthogonal.

Definition 5. The cumulative coherence [29, 30] is defined as
$$\mu(k) = \max_{|\Lambda| \le k}\, \max_{j \notin \Lambda}\, \sum_{i \in \Lambda} |\langle A_i, A_j \rangle|, \quad (11)$$
where the index set $\Lambda \subseteq S_{\mathrm{all}}$ and $S_{\mathrm{all}}$ is the support of all the spectral signatures in the spectral library $A$. The cumulative coherence measures the maximum total correlation between a fixed spectral signature and $k$ distinct spectral signatures. In particular, $\mu(0) = 0$.
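With normalized library columns, both quantities reduce to simple operations on the Gram matrix; a sketch under that assumption (function names are ours):

```python
import numpy as np

def coherence(A):
    """mu of (10): maximum absolute inner product between two
    distinct (normalized) columns of A."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)  # exclude k == h
    return G.max()

def cumulative_coherence(A, k):
    """mu(k) of (11): for each fixed column j, sum the k largest
    |<A_i, A_j>| with i != j, then maximize over j; mu(0) = 0."""
    if k == 0:
        return 0.0
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    top_k = np.sort(G, axis=0)[-k:, :]  # k largest entries per column
    return top_k.sum(axis=0).max()

print(coherence(np.eye(3)))  # orthonormal library: 0.0
```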

Definition 6. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

$S_{\mathrm{opt}}$ is an index set which lists at most $T$ endmembers present in the hyperspectral image scene. $A_{\mathrm{opt}}$ is an optimal submatrix of the spectral library $A$ which contains the $T$ columns indexed in $S_{\mathrm{opt}}$. $X_{\mathrm{opt}}$ is the optimal abundance matrix corresponding to $A_{\mathrm{opt}}$. $Y_{\mathrm{opt}}$ is a $T$-term best approximation of $Y$, with $Y_{\mathrm{opt}} = A_{\mathrm{opt}} X_{\mathrm{opt}}$.

$S_t$ is an index set which lists $k$ endmembers after $t$ iterations, and it is assumed that $S_{\mathrm{opt}} \subseteq S_t$. $A_t$ is a submatrix of the spectral library $A$ which contains the $k$ columns indexed in $S_t$. $X_t$ is the abundance matrix corresponding to $A_t$. $Y_t$ is the reconstruction of $Y$ by the BSOMP algorithm after $t$ iterations (assuming that the BSOMP algorithm selects one nonexisting endmember to remove from the index set $S_t$ in each iteration), with $Y_t = A_t X_t$. $R_t$ is the residual error of $Y$ after $t$ iterations, $R_t = Y - Y_t$.

Theorem 7. Suppose that the best approximation $Y_{\mathrm{opt}}$ of the hyperspectral image data matrix $Y$ over the index set $S_{\mathrm{opt}}$ satisfies the error bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, that the first part of BSOMP has selected $k$ potential endmembers from the spectral library to constitute the initial index set $S_t$, and that $S_{\mathrm{opt}} \subseteq S_t$. After iteration $t$ of the second part of BSOMP, halt the second part of the algorithm if
$$\|A^{*}R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\,\tau, \quad (12)$$
where $k \ge T$. If this inequality fails, then BSOMP must remove another nonexisting endmember in iteration $(t + 1)$.

The proof of Theorem 7 is given in the Appendix. If the second part of the BSOMP algorithm terminates at the end of iteration $t$, we may conclude that the BSOMP algorithm has removed $t$ nonexisting endmembers from $S_t$ and has kept the optimal endmembers.
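To make the two-stage structure concrete, here is a highly simplified sketch of a forward selection followed by a backward pruning pass in the spirit described above. This is our illustration, not the authors' exact algorithm: it uses a plain least-squares abundance fit (no nonnegativity constraint) and prunes a fixed number of indices rather than testing condition (12).

```python
import numpy as np

def forward_select(Y, A, k):
    """SOMP-style forward stage: greedily pick k library columns whose
    total correlation with the current residual is largest."""
    S, R = [], Y.copy()
    for _ in range(k):
        scores = np.abs(A.T @ R).sum(axis=1)  # correlation per signature
        scores[S] = -np.inf                   # never re-pick an index
        S.append(int(np.argmax(scores)))
        X, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
        R = Y - A[:, S] @ X                   # refresh the residual
    return S

def backward_prune(Y, A, S, n_remove):
    """Backward stage: repeatedly drop the index whose removal increases
    the residual the least, i.e., an unreliable endmember."""
    S = list(S)
    for _ in range(n_remove):
        best_err, best_i = np.inf, None
        for i in range(len(S)):
            trial = S[:i] + S[i + 1:]
            X, *_ = np.linalg.lstsq(A[:, trial], Y, rcond=None)
            err = np.linalg.norm(Y - A[:, trial] @ X, 'fro')
            if err < best_err:
                best_err, best_i = err, i
        S.pop(best_i)
    return S

# toy check: Y mixes columns 0 and 2 of an identity "library"
A = np.eye(4)
Y = np.array([[2.0], [0.0], [1.0], [0.0]])
S = forward_select(Y, A, 3)        # selects one endmember too many
print(backward_prune(Y, A, S, 1))  # -> [0, 2]
```

The backward pass is what distinguishes this family of methods: the spurious third index is removed because dropping it leaves the reconstruction error essentially unchanged.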

4. Experimental Results

In this section, we use both simulated hyperspectral data and real hyperspectral data to demonstrate the performance of the proposed algorithm. The BSOMP algorithm is compared with three SGAs (SOMP [25], SMP [24], and RSFoBa [26]) and three convex relaxation methods (i.e., SUnSAL [20], SUnSAL-TV [21], and CLSUnSAL [22]). All the considered


Figure 1: Five spectral signatures from the USGS library used in our simulated data experiments (reflectance versus wavelength, 0.5-2.5 μm): Axinite HS342.3B, Samarium oxide GDS36, Chrysocolla HS297.3B, Niter GDS43 (K-saltpeter), and Anthophyllite HS286.3B. The title of each subimage denotes the mineral corresponding to the signature.

algorithms have taken into account the abundance nonnegativity constraint. The TV regularizer used in SUnSAL-TV is a nonisotropic one. For the RSFoBa algorithm, the $\ell_\infty$ norm is considered in our experiment. As mentioned above, when the spectral library is overcomplete, the GAs behave worse than the SGAs and the convex relaxation methods; thus we do not include the GAs in our experiment.

In the simulated data experiments, we evaluated the performance of the sparse unmixing algorithms under different SNR levels of noise; the signal-to-reconstruction error (SRE) [21] is used to measure the quality of the reconstructed abundances of the spectral endmembers:

$$\mathrm{SRE} = \frac{E\bigl[\|X\|_2^2\bigr]}{E\bigl[\|X - \hat{X}\|_2^2\bigr]}, \qquad \mathrm{SRE(dB)} = 10\log_{10}(\mathrm{SRE}), \quad (13)$$

where $X$ denotes the true abundances of the spectral endmembers and $\hat{X}$ denotes the reconstructed abundances.
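The SRE of (13) can be computed as follows (a sketch; the expectation is taken as the sample mean over the pixels, and the names are ours):

```python
import numpy as np

def sre_db(X_true, X_hat):
    """Signal-to-reconstruction error (13), in dB, between the true
    abundance matrix X_true (endmembers x pixels) and estimate X_hat."""
    num = np.mean(np.sum(X_true ** 2, axis=0))            # E[||x||_2^2]
    den = np.mean(np.sum((X_true - X_hat) ** 2, axis=0))  # E[||x - xhat||_2^2]
    return 10.0 * np.log10(num / den)

# estimate off by a uniform 10%: ratio = 1/0.01 = 100, i.e., 20 dB
X = np.ones((5, 10))
print(sre_db(X, 0.9 * X))
```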

4.1. Evaluation with Simulated Data. In the simulated experiments, the United States Geological Survey (USGS) digital spectral library (splib06a) [31] is used to build the spectral library $A$. The reflectance values of 498 spectral signatures are measured for 224 spectral bands distributed uniformly in the interval of 0.4–2.5 μm ($A \in R^{224\times 498}$). Fifteen spectral signatures are chosen from the spectral library to generate our simulated data. Figure 1 shows five spectral signatures used for all the simulated data experiments. The other ten spectral signatures, not displayed in the figure, are Monazite HS255.3B, Zoisite HS347.3B, Rhodochrosite HS67, Neodymium Oxide GDS34, Pigeonite HS199.3B, Meionite WS700.Hlsep, Spodumene HS210.3B, Labradorite HS17.3B, Grossular WS484, and Wollastonite HS348.3B.

In the Simulated Data 1 experiment, we generated seven datacubes of 50 × 50 pixels and 224 bands per pixel, each containing a different number of endmembers: 3, 5, 7, 9, 11, 13, and 15. In each simulated pixel, the fractional abundances of the endmembers were randomly generated following a Dirichlet distribution [23]. Note that there was no pure pixel in Simulated Data 1, and the fractional abundances of the endmembers were less than 0.8. After Simulated Data 1 was generated, Gaussian white noise or correlated noise was added to it at different levels of the signal-to-noise ratio (SNR), that is, 20, 25, 30, 35, 40, 45, and 50 dB. The Gaussian white noise was generated using


[Figure 2: Results on Simulated Data 1 with 30 dB white noise: SRE (dB) as a function of the endmember number, for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.]

the awgn function in MATLAB. The correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.
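The data-generation recipe described above can be sketched in Python (a stand-in for the MATLAB pipeline; the random library signatures, the moving-average low-pass filter, and all sizes are illustrative assumptions, and the paper's extra constraint that no abundance reach 0.8 is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes mirroring Simulated Data 1: q endmembers, n pixels, L bands.
q, n, L = 5, 50 * 50, 224
A_sel = rng.random((L, q))            # stand-in for the q selected USGS signatures

# Fractional abundances drawn from a Dirichlet distribution, so the
# nonnegativity (ANC) and sum-to-one (ASC) constraints hold by construction.
X = rng.dirichlet(np.ones(q), size=n).T   # q x n, columns sum to 1

Y_clean = A_sel @ X                   # noiseless linear mixtures, L x n

# Gaussian white noise at a target SNR in dB (analogous to MATLAB's awgn).
snr_db = 30
sigma = np.sqrt(np.mean(Y_clean ** 2) / 10 ** (snr_db / 10))
white = sigma * rng.standard_normal(Y_clean.shape)

# Correlated noise: low-pass filter i.i.d. Gaussian noise along the spectral
# axis (a simple moving average stands in for the paper's unspecified filter),
# then rescale to the same noise power as the white noise.
iid = rng.standard_normal(Y_clean.shape)
kernel = np.ones(9) / 9
correlated = np.apply_along_axis(
    lambda v: np.convolve(v, kernel, mode="same"), 0, iid)
correlated *= sigma / correlated.std()

Y_white = Y_clean + white
Y_corr = Y_clean + correlated
```

Low-pass filtering makes neighboring bands' noise correlated, which is the property that distinguishes the second noise model from the white one.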

Simulated Data 2 was generated using nine randomly selected spectral signatures from $A$, with 100 × 100 pixels and 224 bands per pixel, and was provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added to Simulated Data 2 as well.

In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to get better performances. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter σ of SOMP and SMP was set to 1·10⁻². The stopping threshold parameters σ and δ of BSOMP were set to 1·10⁻² and 0.1. RSFoBa was tested using different values of the parameters λ and σ: 0 and 5·10⁻³ for the Simulated Data 1 experiment, and 1 and 1·10⁻² for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of the parameters λ and λ_TV: 1·10⁻² and 1·10⁻², 1·10⁻² and 1·10⁻², 1·10⁻² and 5·10⁻³, 5·10⁻³ and 1·10⁻³, 1·10⁻³ and 5·10⁻⁴, 5·10⁻⁴ and 1·10⁻⁴, and 5·10⁻⁴ and 1·10⁻⁴ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively. SUnSAL and CLSUnSAL were tested using different values of the parameter λ: 1·10⁻², 5·10⁻³, 1·10⁻³, 5·10⁻⁴, 5·10⁻⁴, 5·10⁻⁴, and 5·10⁻⁴ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively.

Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of the endmember number for the different tested methods. The SREs of all the algorithms decrease as the endmember number increases.

[Figure 3: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of SNR, for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.]

We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. All the SGA methods have performances comparable to SUnSAL-TV. BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. We can find that all the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL. This result indicates that the block-processing strategy adopted by the SGA methods effectively picks up all the actual endmembers. Among all the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of endmember number and SNR, respectively. As shown in Figures 4 and 5, BSOMP achieves an overall better performance than the other algorithms.

Figure 6 shows the number of potential endmembers obtained from the spectral library by all the SGA methods, which adopt the block-processing strategy to effectively pick up all the actual endmembers. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and degrade the performance of SOMP. BSOMP, however, incorporates a backtracking process to remove wrongly chosen endmembers from the estimated endmember set, which allows it to select the actual endmembers more accurately than SOMP.
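To make the forward-selection/backward-deletion idea concrete, here is a minimal, hypothetical sketch of ours (not the authors' BSOMP implementation: BSOMP additionally works block-by-block and is controlled by the thresholds σ and δ rather than by fixed counts):

```python
import numpy as np

def residual(Y, A, S):
    """Residual of jointly regressing all pixels Y on the columns of A in S."""
    if not S:
        return Y
    As = A[:, sorted(S)]
    X, *_ = np.linalg.lstsq(As, Y, rcond=None)
    return Y - As @ X

def somp_with_backtracking(Y, A, n_forward, n_keep):
    """Toy forward/backward greedy selection.
    Forward: SOMP-style picks of the column most correlated with the
    residual summed across all pixels. Backward: repeatedly drop the
    selected column whose removal increases the residual the least,
    until n_keep endmembers remain."""
    S = set()
    for _ in range(n_forward):                     # forward (SOMP) steps
        corr = np.abs(A.T @ residual(Y, A, S)).sum(axis=1)
        corr[list(S)] = -np.inf                    # never re-pick a chosen column
        S.add(int(np.argmax(corr)))
    while len(S) > n_keep:                         # backward (backtracking) steps
        costs = {j: np.linalg.norm(residual(Y, A, S - {j})) for j in S}
        S.remove(min(costs, key=costs.get))        # least useful endmember goes
    return sorted(S)
```

On a toy dictionary with orthonormal columns, the forward pass over-selects and the backward pass prunes back to the truly active columns, mimicking how the backtracking step removes nonexisting endmembers.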


[Figure 4: Results on Simulated Data 1 with 30 dB correlated noise: SRE (dB) as a function of the endmember number.]

Differing from RSFoBa, BSOMP uses a backtracking process over the whole hyperspectral data to remove wrongly chosen endmembers from the endmember set estimated in the previous block-processing, which identifies the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise as a function of SNR, respectively. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This result indicates that all the SGA methods adopt the block-processing strategy to effectively pick up all the actual endmembers, while SUnSAL-TV incorporates a spatial-contextual coherence regularizer, which gives it better performance than SUnSAL and CLSUnSAL. All the SGA methods have performances comparable to SUnSAL-TV; BSOMP and RSFoBa always obtain the better estimation of the abundances. RSFoBa incorporates the spatial-contextual coherence regularizer to select the actual endmembers more accurately than SOMP and SMP, and BSOMP incorporates a backtracking process over the whole hyperspectral data to identify the true endmember set more accurately than SOMP and SMP.

For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps of endmembers obtained by SUnSAL-TV contain fewer noise points. However, the abundance maps estimated by SUnSAL-TV may

[Figure 5: Results on Simulated Data 1 with correlated noise when the endmember number is eleven: SRE (dB) as a function of SNR.]

[Figure 6: Results obtained by the four SGAs (SOMP, SMP, RSFoBa, and BSOMP) on Simulated Data 1 with 30 dB white noise or correlated noise: number of retained endmembers as a function of the endmember number.]

exhibit an oversmooth visual effect around some pixels; some of the details and edges are not well preserved. Compared with SUnSAL-TV, the abundances estimated by BSOMP may contain more noise points but preserve more edge regions and are closer to the true abundances. Moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the abundances estimated by BSOMP


Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube | SUnSAL | CLSUnSAL | SUnSAL-TV | SOMP | SMP | RSFoBa | BSOMP
Simulated Data 1 (11 endmembers) | 6.767 | 7.841 | 36.327 | 2.083 | 21.289 | 3.938 | 6.418
Simulated Data 2 | 25.471 | 27.902 | 127.112 | 9.618 | 35.444 | 28.277 | 106.524

[Figure 7: Results on Simulated Data 2 with white noise: SRE (dB) as a function of SNR.]

[Figure 8: Results on Simulated Data 2 with correlated noise: SRE (dB) as a function of SNR.]

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Algorithm | Backward step | Joint sparsity | Contextual information
SMP | | · |
SOMP | | · |
RSFoBa | · | · | ·
BSOMP | · | · |
SUnSAL | | |
CLSUnSAL | | · |
SUnSAL-TV | | | ·

contain fewer noise points and preserve more edge regions; they are more accurate and have a better visual effect.

Table 1 shows the processing time measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also find that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

In order to make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare several of their properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing, which solves the $\ell_1$-norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the $\ell_{2,1}$ optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is an extension of SUnSAL that incorporates the spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA, which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA, which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and


[Figure 9: Comparison of abundance maps on Simulated Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.]

RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick potential endmembers in each block, BSOMP adopts a forward step to pick potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The size of the hyperspectral scene is a 250 × 191 pixels subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data were collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Metric | SUnSAL | CLSUnSAL | SUnSAL-TV | SOMP | SMP | RSFoBa | BSOMP
Sparsity | 17.5629 | 20.7037 | 20.472 | 13.684 | 15.102 | 12.702 | 11.810
RMSE | 0.0051 | 0.0035 | 0.0028 | 0.0040 | 0.0034 | 0.0032 | 0.0033

[Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of block size, for SOMP, SMP, RSFoBa, and BSOMP.]

[Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size.]

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances, to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{j=1}^{K}\left(Y_j - \hat{Y}_j\right)^2}, \tag{14}$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE also reflects a better unmixing performance of the algorithm.
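As a hedged sketch (the function names and the per-entry interpretation of Eq. (14) are our own assumptions), the two real-data metrics can be computed as:

```python
import numpy as np

def rmse(Y, Y_hat):
    """Root mean square reconstruction error, Eq. (14): the average
    squared deviation between the original image Y and its reconstruction
    Y_hat, taken over all entries, then square-rooted."""
    return np.sqrt(np.mean((Y - Y_hat) ** 2))

def sparsity(X_est, tol=1e-3):
    """Average number of abundances per pixel above tol (0.001 in the
    paper), so negligible values are not counted as active endmembers."""
    return np.mean(np.sum(np.abs(X_est) > tol, axis=0))
```

A good unmixing result combines a small RMSE with a small sparsity value, i.e., the data are explained well by few active endmembers per pixel.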

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were set to 0.001 and 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1. The parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. In Table 3, we can find that BSOMP, SOMP, SMP, and SUnSAL yield sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions while achieving a low reconstruction error, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP as well as SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted by the Tricorder 3.3 software are also presented in Figure 13. It can be observed that, with regard to the classification maps produced by the Tricorder 3.3 software, the abundances estimated by BSOMP are comparable to or higher than those of the other considered algorithms in the regions assigned to the respective minerals. We also analyzed the sparsity of the abundances estimated by BSOMP; the average


[Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS, Clark and Swayze; Tetracorder 3.3 product). The legend groups sulfates (K-alunite 150°C, 250°C, and 450°C; Na82-alunite 100°C; Na40-alunite 400°C; jarosite; alunite + kaolinite and/or muscovite), kaolinite-group clays (kaolinite wxl; kaolinite pxl; kaolinite + smectite or muscovite; halloysite; dickite), carbonates (calcite; calcite + kaolinite; calcite + montmorillonite), clays (Na-montmorillonite; nontronite (Fe clay)), and other minerals (low-, medium-, and high-Al muscovite; chlorite + muscovite/montmorillonite; chlorite; buddingtonite; chalcedony/OH quartz; pyrophyllite + alunite).]

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the quantitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to check the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still over this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to check the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial to the estimation effectiveness.


[Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.]

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L\times K}$ and the available spectral library matrix $A \in R^{L\times m}$:

$A_t$ is a submatrix of the spectral library $A$, where $A_t$ contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so $A = [A_t, \bar{A}_t]$.

$A_{\sim\mathrm{opt}}$ is a submatrix of $A_t$ which contains the remaining $(k - T)$ columns, indexed in $S_t \setminus S_{\mathrm{opt}}$.

$X_{\sim\mathrm{opt}}$ is a submatrix of $X_t$ which contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we will prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:

$$P_{\mathrm{opt}} = A_{\mathrm{opt}} \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} A_{\mathrm{opt}}^{*}, \tag{A.1}$$

where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ ($R_t = Y - Y_t$), we have

$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*} R_t\right\|_{\infty,\infty}. \tag{A.2}$$
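As a quick numerical sanity check of (A.1) (a toy example of ours, with a random stand-in for $A_{\mathrm{opt}}$), the projector's defining properties can be verified in NumPy:

```python
import numpy as np

# P_opt = A_opt (A_opt^* A_opt)^{-1} A_opt^* is the orthogonal projector onto
# the range of A_opt: it is idempotent, symmetric (in the real case), and
# (I - P_opt) annihilates everything in that range -- the facts the proof uses.
rng = np.random.default_rng(1)
A_opt = rng.standard_normal((10, 3))   # hypothetical selected signatures
P = A_opt @ np.linalg.inv(A_opt.T @ A_opt) @ A_opt.T

assert np.allclose(P @ P, P)                      # idempotent
assert np.allclose(P, P.T)                        # symmetric
assert np.allclose((np.eye(10) - P) @ A_opt, 0.0) # kills the range of A_opt
```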

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have

$$\left\|\bar{A}_t^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \tag{A.3}$$

Invoking (A.2) and (A.3) and applying the triangle inequality, we get

$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}\left(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t\right)\right\|_{\infty,\infty} \le \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \tag{A.4}$$

Then the matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:

$$Y_t - Y_{\mathrm{opt}} = \left(I - P_{\mathrm{opt}}\right) Y_t = \left(I - P_{\mathrm{opt}}\right) A_t X_t = \left(I - P_{\mathrm{opt}}\right) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}. \tag{A.5}$$

The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:

$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}\left(I - P_{\mathrm{opt}}\right) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left[\left\|\bar{A}_t^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}\right] \times \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.6}$$

We will apply the cumulative coherence function to deduce the estimates of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{\sim\mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so we have

$$\left\|\bar{A}_t^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A.7}$$

(2) Use the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoke (A.1) to get

$$\left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \times \left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \times \left\|A_{\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.8}$$

(3) Making use of the same deduction as in step (1), we have

$$\left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(T), \qquad \left\|A_{\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A.9}$$
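The bounds above hinge on the cumulative coherence μ(·) of the library. As a hedged aside (this is our restatement of the standard Babel function; the paper's formal definition is not shown in this excerpt), it can be computed as:

```python
import numpy as np

def cumulative_coherence(A, T):
    """Cumulative coherence (Babel function) mu(T) of a column-normalized
    dictionary A: the largest total correlation between any one column and
    any T other columns. mu(1) is the ordinary mutual coherence."""
    A = A / np.linalg.norm(A, axis=0)   # normalize columns, as the proof assumes
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)            # exclude self-correlations
    G.sort(axis=1)                      # ascending per row
    return G[:, -T:].sum(axis=1).max()  # T largest correlations per column
```

For an orthonormal dictionary μ(T) = 0, while highly correlated library columns push μ toward and past 1; the final bound (A.23) is only meaningful when μ(T) + μ(k − T) < 1.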

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers; the matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ thus has a unit diagonal, and we can write

$$A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A.10}$$

where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. We can then expand the inverse in a Neumann series to get

$$\left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} = \left\|\left(I - X_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \le \frac{1}{1 - \left\|X_{\mathrm{opt}}\right\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A.11}$$

(5) Substituting the bounds from steps (1) to (4) into (A.6), we obtain

$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\right] \times \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.12}$$

Simplifying expression (A.12) (the bracketed factor equals $\mu(k-T)\,[1 - \mu(T) + \mu(T)]/(1 - \mu(T))$), we conclude that

$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)} \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.13}$$

In order to produce an upper bound on $\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:

$$\left\|A_t^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_t^{*}\left(Y - Y_t + Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A_t^{*}\left(Y - Y_t\right)\right\|_{\infty,\infty} + \left\|A_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}, \tag{A.14}$$

where the first term vanishes since $(Y - Y_t)$ is orthogonal to the range of $A_t$. Introducing (A.5) into (A.14), we can get

$$\left\|A_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_{\sim\mathrm{opt}}^{*}\left(I - P_{\mathrm{opt}}\right) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.15}$$

The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_t$ by $A_{\sim\mathrm{opt}}$. Let $D = A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}}) A_{\sim\mathrm{opt}}$ and rewrite $D = I - (I - D)$; we can expand its inverse in a Neumann series to obtain

$$\left\|D X_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \ge \left(1 - \left\|I - D\right\|_{\infty,\infty}\right) \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.16}$$

Recalling the definition of $D$ and applying the triangle inequality, we obtain

$$\left\|I - D\right\|_{\infty,\infty} = \left\|I - A_{\sim\mathrm{opt}}^{*} A_{\sim\mathrm{opt}} + A_{\sim\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|I - A_{\sim\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} + \left\|A_{\sim\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.17}$$

The right-hand side of this inequality can be bounded completely analogously to steps (1)–(5), so we obtain

$$\left\|I - D\right\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \tag{A.18}$$

Introducing (A.18) and (A.16) into (A.14), we have

$$\left\|A_t^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right] \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.19}$$

Using the same deduction as (A.3), it follows that $\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as follows:

$$\left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right] \left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.20}$$

Introduce (A20) into (A13) we can get10038171003817100381710038171003817119860lowast

119905(119884119905minus 119884opt)

10038171003817100381710038171003817infininfin

le120583 (119896 minus 119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)

10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

(A21)

Introduce (A21) into (A4) to get1003817100381710038171003817119860lowast119877119905

1003817100381710038171003817infininfin

le1 minus 120583 (119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)

10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

(A22)

Finally assume we have a bound 119860lowast(119884 minus 119884opt)infininfin le 120591 so(A22) can be rewritten as follows

1003817100381710038171003817119860lowast119877119905

1003817100381710038171003817infininfin le1 minus 120583 (119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)120591 (A23)

This completes the argument

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the algorithms of CLSUnSAL, SUnSAL, and SUnSAL-TV. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the algorithms of SMP and RSFoBa. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.
[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.
[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.
[4] M. Grana, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.
[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.
[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.
[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.
[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.
[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.
[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.
[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.
[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.
[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.
[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.
[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.
[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.


[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[18] I. Marques and M. Grana, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.
[19] I. Marques and M. Grana, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.
[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.
[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.
[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.
[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.
[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.
[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.
[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.
[28] G. Davis, S. Mallat, and M. Avellaneda, "Adaptive greedy approximations," Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.
[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.
[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.
[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.
[32] J. Plaza, A. Plaza, P. Martinez, and R. Perez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.


Figure 1: Five spectral signatures from the USGS library used in our simulated data experiments (reflectance versus wavelength): Axinite HS3423B, Samarium oxide GDS36, Chrysocolla HS2973B, Niter GDS43 (K-saltpeter), and Anthophyllite HS2863B. The title of each subimage denotes the mineral corresponding to the signature.

algorithms have taken into account the abundance nonnegativity constraint. The TV regularizer used in SUnSAL-TV is a nonisotropic one. For the RSFoBa algorithm, the $l_\infty$ norm is considered in our experiment. As mentioned above, when the spectral library is overcomplete, the GAs behave worse than the SGAs and the convex relaxation methods; thus we do not include the GAs in our experiment.

In the simulated data experiments, we evaluated the performances of the sparse unmixing algorithms under different SNR levels of noise. The signal-to-reconstruction error (SRE) [21] is used to measure the quality of the reconstructed abundances of the spectral endmembers:

$$\mathrm{SRE} = \frac{E\left[\|X\|_2^2\right]}{E\left[\|X - \hat{X}\|_2^2\right]}, \qquad \mathrm{SRE\,(dB)} = 10\log_{10}(\mathrm{SRE}), \tag{13}$$

where $X$ denotes the true abundances of the spectral endmembers and $\hat{X}$ denotes the reconstructed abundances of the spectral endmembers.
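As a concrete illustration, the SRE of (13) can be computed directly from the true and estimated abundance matrices. The sketch below is not from the paper; the matrix layout (endmembers × pixels) is an assumption:

```python
import numpy as np

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error in dB, following Eq. (13).
    Both arguments are abundance matrices of (assumed) shape
    (n_endmembers, n_pixels); each column is one pixel."""
    num = np.mean(np.sum(X_true ** 2, axis=0))            # E[||x||_2^2]
    den = np.mean(np.sum((X_true - X_est) ** 2, axis=0))  # E[||x - x_hat||_2^2]
    return 10.0 * np.log10(num / den)
```

For example, an estimate that is uniformly 10% below all-ones true abundances gives an SRE of exactly 20 dB, since the error energy is 1% of the signal energy.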

4.1. Evaluation with Simulated Data. In the simulated experiments, the United States Geological Survey (USGS) digital spectral library (splib06a) [31] is used to build the spectral library $A$. The reflectance values of 498 spectral signatures are measured for 224 spectral bands distributed uniformly in the interval of 0.4–2.5 μm ($A \in R^{224\times498}$). Fifteen spectral signatures are chosen from the spectral library to generate our simulated data. Figure 1 shows five spectral signatures used for all the simulated data experiments. The other ten spectral signatures that are not displayed in the figure include Monazite HS2553B, Zoisite HS3473B, Rhodochrosite HS67, Neodymium Oxide GDS34, Pigeonite HS1993B, Meionite WS700.Hlsep, Spodumene HS2103B, Labradorite HS173B, Grossular WS484, and Wollastonite HS3483B.

In the Simulated Data 1 experiment, we generated seven datacubes of 50 × 50 pixels and 224 bands per pixel, each containing a different number of endmembers: 3, 5, 7, 9, 11, 13, and 15. In each simulated pixel, the fractional abundances of the endmembers were randomly generated following a Dirichlet distribution [23]. Note that there was no pure pixel in Simulated Data 1, and the fractional abundances of the endmembers were less than 0.8. After Simulated Data 1 was generated, Gaussian white noise or correlated noise was added, with different levels of the signal-to-noise ratio (SNR), that is, 20, 25, 30, 35, 40, 45, and 50 dB. The Gaussian white noise was generated using


Figure 2: Results on Simulated Data 1 with 30 dB white noise: SRE (dB) as a function of endmember number, for CLSUnSAL, SUnSAL, SMP, SOMP, RSFoBa, BSOMP, and SUnSAL-TV.

the awgn function in MATLAB. The correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.
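The generation procedure described above (Dirichlet abundances under the linear mixture model, plus white noise at a target SNR) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the small library size and the random seed are arbitrary choices, and the noise step mimics MATLAB's awgn with the 'measured' option.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 20-signature stand-in library, 5 active endmembers,
# and a 50 x 50 pixel cube (the paper uses the 498-signature USGS library).
bands, n_lib, n_end, n_pix = 224, 20, 5, 50 * 50
A = rng.random((bands, n_lib))                  # stand-in spectral library
idx = rng.choice(n_lib, n_end, replace=False)   # active endmembers

# Fractional abundances drawn from a flat Dirichlet distribution, so each
# pixel's abundances are nonnegative and sum to one (ANC and ASC).
X = np.zeros((n_lib, n_pix))
X[idx, :] = rng.dirichlet(np.ones(n_end), size=n_pix).T

Y_clean = A @ X                                 # linear mixture model

# Add white Gaussian noise at a target SNR in dB, scaling the noise power
# to the measured signal power.
snr_db = 30.0
sig_power = np.mean(Y_clean ** 2)
noise_power = sig_power / 10 ** (snr_db / 10)
Y = Y_clean + rng.normal(0.0, np.sqrt(noise_power), Y_clean.shape)
```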

Simulated Data 2 was generated using nine randomly selected spectral signatures from $A_1$, with 100 × 100 pixels and 224 bands per pixel, provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth. The true abundances of three endmembers are illustrated in Figure 9. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added to Simulated Data 2 as well.

In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to get better performances. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter σ of SOMP and SMP was set to 1·10⁻². The stopping threshold parameters σ and δ of BSOMP were set to 1·10⁻² and 0.1. RSFoBa was tested using different values of the parameters λ and σ: 0 and 5·10⁻³ for the Simulated Data 1 experiment; 1 and 1·10⁻² for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of the parameters λ and λ_TV: 1·10⁻² and 1·10⁻², 1·10⁻² and 1·10⁻², 1·10⁻² and 5·10⁻³, 5·10⁻³ and 1·10⁻³, 1·10⁻³ and 5·10⁻⁴, 5·10⁻⁴ and 1·10⁻⁴, and 5·10⁻⁴ and 1·10⁻⁴ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB. SUnSAL and CLSUnSAL were tested using different values of the parameter λ: 1·10⁻², 5·10⁻³, 1·10⁻³, 5·10⁻⁴, 5·10⁻⁴, 5·10⁻⁴, and 5·10⁻⁴ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB.

Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of endmember number using the different tested methods. The SREs of all the algorithms decrease as the endmember number increases.

Figure 3: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of SNR.

We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. All the SGA methods have comparable performances with SUnSAL-TV, and BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. We can find that all the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL. This result indicates that the block-processing strategy adopted by all the SGA methods effectively picks up all the actual endmembers. Among the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of endmember number and SNR, respectively. As shown in Figures 4 and 5, BSOMP gets an overall better performance than the other algorithms.

Figure 6 shows the number of the potential endmembers obtained from the spectral library by all the SGA methods, which adopt the block-processing strategy to effectively pick up all the actual endmembers. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and affect the performance of SOMP. However, BSOMP incorporates a backtracking process to remove endmembers chosen wrongly from the estimated endmember set, which allows it to select the actual endmembers more accurately than SOMP.


Figure 4: Results on Simulated Data 1 with 30 dB correlated noise: SRE as a function of endmember number.

Differing from RSFoBa, BSOMP uses a backtracking process over the whole hyperspectral data to remove endmembers chosen wrongly from the estimated endmember set during previous block-processing, which can identify the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise as a function of SNR, respectively. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This result indicates that the block-processing strategy of the SGA methods effectively picks up all the actual endmembers, and that SUnSAL-TV's spatial-contextual coherence regularizer yields better performance than SUnSAL and CLSUnSAL. All the SGA methods have comparable performances with SUnSAL-TV, and BSOMP and RSFoBa always obtain the better estimation of the abundances: RSFoBa incorporates the spatial-contextual coherence regularizer, and BSOMP incorporates a backtracking process over the whole hyperspectral data, so both select the actual endmembers more accurately than SOMP and SMP.

For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps of endmembers obtained by SUnSAL-TV contain fewer noise points. However, the abundance maps estimated by SUnSAL-TV may

Figure 5: Results on Simulated Data 1 with correlated noise when the endmember number is eleven: SRE as a function of SNR.

Figure 5 Results on Simulated Data 1 with correlated noise whenthe endmember number is eleven SRE as function of SNR

Endmember number

70

60

50

40

30

20

10

0

4 6 8 10 12 14

Num

ber o

f ret

aine

d en

dmem

bers

SMP (white)SOMP (white)RSFoBa (white)

SMP (correlated)SOMP (correlated)RSFoBa (correlated)

BSOMP (white) BSOMP (correlated)

Figure 6 Results obtained by the four SGAs on Simulated Data1 with 30 dB white noise or correlated noise number of retainedendmembers as function of endmember number

exhibit an oversmooth visual effect around some pixels; some of the details and edges are not well preserved. Compared with SUnSAL-TV, the abundances estimated by BSOMP may contain more noise points, but they preserve more edge regions and are closer to the true abundances. Moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the abundances estimated by BSOMP


Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube                           SUnSAL   CLSUnSAL   SUnSAL-TV   SOMP    SMP      RSFoBa   BSOMP
Simulated Data 1 (11 endmembers)   67.67    78.41      363.27      20.83   212.89   39.38    64.18
Simulated Data 2                   254.71   279.02     1271.12     96.18   354.44   282.77   1065.24

Figure 7: Results on Simulated Data 2 with white noise: SRE as a function of SNR.

Figure 8: Results on Simulated Data 2 with correlated noise: SRE as a function of SNR.

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Algorithm    Backward step   Joint sparsity   Contextual information
SMP          —               ·                —
SOMP         —               ·                —
RSFoBa       ·               ·                ·
BSOMP        ·               ·                —
SUnSAL       —               —                —
CLSUnSAL     —               ·                —
SUnSAL-TV    —               —                ·

contain fewer noise points and preserve more edge regions, which makes them more accurate and gives a better visual effect.

Table 1 shows the processing time measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also find that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

In order to make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we display a comparison of several of their properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing, which solves the $l_1$ norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the $l_{2,1}$ optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is a special case of SUnSAL which incorporates the spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA, which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA, which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and

Figure 9: Comparison of abundance maps on Simulated Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick some potential endmembers in each block, BSOMP adopts a forward step to pick some potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove endmembers chosen wrongly from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of the potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The size of the hyperspectral scene is a 250 × 191 pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm   SUnSAL    CLSUnSAL   SUnSAL-TV   SOMP     SMP      RSFoBa   BSOMP
Sparsity    17.5629   20.7037    20.472      13.684   15.102   12.702   11.810
RMSE        0.0051    0.0035     0.0028      0.0040   0.0034   0.0032   0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE as a function of block size.

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size.

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances, to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{j=1}^{K}\left(Y_j - \hat{Y}_j\right)^2}, \tag{14}$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE reflects better unmixing performance of the algorithm.
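Both real-data metrics are straightforward to compute; a minimal sketch (not from the paper) is below. Note that (14) leaves the averaging convention implicit, so the RMSE here averages over all image entries, and the sparsity counter uses the 0.001 threshold mentioned above:

```python
import numpy as np

def rmse(Y, Y_hat):
    """Root mean square error of Eq. (14) between the original image Y and
    its reconstruction Y_hat, averaging over all entries of the
    (bands, pixels) arrays -- one common reading of the per-pixel sum."""
    return float(np.sqrt(np.mean((Y - Y_hat) ** 2)))

def sparsity(X, tol=1e-3):
    """Average number of abundances per pixel above the 0.001 threshold
    used in the paper, so negligible values are not counted as nonzero.
    X has (assumed) shape (n_endmembers, n_pixels)."""
    return float(np.mean(np.sum(X > tol, axis=0)))
```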

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were set to 0.001 and 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01, and the stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1. The parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction errors obtained by the different algorithms. In Table 3, we can find that BSOMP, SOMP, SMP, and SUnSAL obtain sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions and achieves low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals, in comparison to the other considered algorithms, with regard to the classification maps produced by the Tricorder 3.3 software. We also analyzed the sparsity of the abundances estimated by BSOMP; the average


Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS, Clark and Swayze; Tetracorder 3.3 product).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimal number of endmembers to explain the data. Overall, the quantitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still using this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to check the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former applies the backward step over the whole hyperspectral data, while the latter applies it within each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial to estimation effectiveness.
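The forward-selection-plus-global-backtracking structure described above can be sketched in a few lines. This is a simplified, single-block illustration under assumed conventions (the function name, the joint-correlation selection rule, and the abundance-energy pruning criterion are this sketch's assumptions, not the authors' exact implementation):

```python
import numpy as np

def bsomp_sketch(Y, A, n_forward, keep):
    """Simplified BSOMP-style sketch: SOMP-like forward selection over all
    pixels jointly, then a backward step that keeps only the atoms with the
    largest total abundance energy (an assumed pruning rule).
    Y: L x K data matrix, A: L x m library with normalized columns."""
    S, R = [], Y.copy()
    for _ in range(n_forward):                       # forward (SOMP) step
        corr = np.abs(A.T @ R).sum(axis=1)           # joint correlation over pixels
        corr[S] = -np.inf                            # never reselect an atom
        S.append(int(np.argmax(corr)))
        X, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
        R = Y - A[:, S] @ X                          # update joint residual
    # backward step over the whole data: drop low-contribution atoms
    X, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
    energy = np.abs(X).sum(axis=1)
    S = sorted(S[i] for i in np.argsort(energy)[::-1][:keep])
    X, *_ = np.linalg.lstsq(A[:, S], Y, rcond=None)
    return S, X

# Tiny noiseless demo: 50 pixels mixed from library atoms 1 and 4
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 6))
A /= np.linalg.norm(A, axis=0)
X_true = rng.uniform(0.2, 1.0, size=(2, 50))
Y = A[:, [1, 4]] @ X_true
S, X = bsomp_sketch(Y, A, n_forward=4, keep=2)
```

In this sketch the forward stage deliberately over-selects (four atoms for a two-member mixture), and the backward stage prunes back to the requested support, mirroring the paper's observation that forward-only SGAs retain nonexisting endmembers.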





Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software, for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

(i) $A_t$ is the submatrix of the spectral library $A$ which contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

(ii) $A_{t \setminus \mathrm{opt}}$ is the submatrix of $A_t$ which contains the remaining $(k - T)$ columns, indexed in $S_t \setminus S_{\mathrm{opt}}$.

(iii) $X_{t \setminus \mathrm{opt}}$ is the submatrix of $X_t$ which contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}} \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} A_{\mathrm{opt}}^{*}, \tag{A.1}$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ ($R_t = Y - Y_t$), we have
$$\left\| A^{*} R_t \right\|_{\infty,\infty} = \left\| \bar{A}_t^{*} R_t \right\|_{\infty,\infty}. \tag{A.2}$$
The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
$$\left\| \bar{A}_t^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} \le \left\| A^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty}. \tag{A.3}$$
Invoke (A.2) and (A.3) and apply the triangle inequality to get
$$\left\| A^{*} R_t \right\|_{\infty,\infty} = \left\| \bar{A}_t^{*} \left(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t\right) \right\|_{\infty,\infty} \le \left\| A^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} + \left\| \bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty}. \tag{A.4}$$
Then the matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:
$$Y_t - Y_{\mathrm{opt}} = \left(I - P_{\mathrm{opt}}\right) Y_t = \left(I - P_{\mathrm{opt}}\right) A_t X_t = \left(I - P_{\mathrm{opt}}\right) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}}. \tag{A.5}$$
The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be bounded as follows:
$$\left\| \bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} = \left\| \bar{A}_t^{*} \left(I - P_{\mathrm{opt}}\right) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \le \left[ \left\| \bar{A}_t^{*} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} + \left\| \bar{A}_t^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \right] \left\| X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.6}$$

We now apply the cumulative coherence function to estimate the terms of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{t \setminus \mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so we have
$$\left\| \bar{A}_t^{*} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \le \mu(k - T). \tag{A.7}$$
(2) Use the fact that $\| \cdot \|_{\infty,\infty}$ is submultiplicative and invoke (A.1) to get
$$\left\| \bar{A}_t^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \le \left\| \bar{A}_t^{*} A_{\mathrm{opt}} \right\|_{\infty,\infty} \left\| \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} \right\|_{\infty,\infty} \left\| A_{\mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.8}$$
(3) Making use of the same deduction as in step (1), we have
$$\left\| \bar{A}_t^{*} A_{\mathrm{opt}} \right\|_{\infty,\infty} \le \mu(T), \qquad \left\| A_{\mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \le \mu(k - T). \tag{A.9}$$
(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so the matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ has a unit diagonal and we can write
$$A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A.10}$$
where $\| X_{\mathrm{opt}} \|_{\infty,\infty} \le \mu(T)$. We can then expand the inverse in a Neumann series to get
$$\left\| \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} \right\|_{\infty,\infty} = \left\| \left(I - X_{\mathrm{opt}}\right)^{-1} \right\|_{\infty,\infty} \le \frac{1}{1 - \left\| X_{\mathrm{opt}} \right\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A.11}$$
(5) Substituting the bounds from steps (1) to (4) into (A.6), we obtain
$$\left\| \bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} \le \left[ \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} \right] \left\| X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.12}$$
Simplify expression (A.12) to conclude that
$$\left\| \bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)} \left\| X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.13}$$
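As a quick numerical sanity check (not part of the proof), the Neumann-series bound used in (A.11), $\|(I - X)^{-1}\|_{\infty,\infty} \le 1/(1 - \|X\|_{\infty,\infty})$ when $\|X\|_{\infty,\infty} < 1$, can be verified on a small hand-computable matrix; the entries below are illustrative values only:

```python
# Sanity check of the Neumann-series bound from (A.11) on a 2x2 example.
def inf_norm(M):
    # maximum absolute row sum, i.e. the (inf, inf) operator norm
    return max(sum(abs(v) for v in row) for row in M)

X = [[0.0, 0.3],
     [0.2, 0.0]]

# (I - X)^{-1} computed exactly via the 2x2 adjugate formula
a, b = 1 - X[0][0], -X[0][1]
c, d = -X[1][0], 1 - X[1][1]
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

lhs = inf_norm(inv)          # 1.3 / 0.94, about 1.383
rhs = 1 / (1 - inf_norm(X))  # 1 / 0.7,  about 1.429
print(lhs <= rhs)  # True
```

The bound is not tight here, which is expected: the Neumann estimate only uses $\|X\|_{\infty,\infty}$, not the full structure of $X$.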

In order to produce an upper bound on $\| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}$, we deduce the following relation. Since $(Y - Y_t)$ is orthogonal to the range of $A_t$, the term $A_t^{*}(Y - Y_t)$ vanishes, and therefore
$$\left\| A_t^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} = \left\| A_t^{*} \left(Y - Y_t + Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} = \left\| A_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty}. \tag{A.14}$$
Introducing (A.5) into (A.14), we get
$$\left\| A_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} = \left\| A_{t \setminus \mathrm{opt}}^{*} \left(I - P_{\mathrm{opt}}\right) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.15}$$
The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_t$ by $A_{t \setminus \mathrm{opt}}$. Let $D = A_{t \setminus \mathrm{opt}}^{*} (I - P_{\mathrm{opt}}) A_{t \setminus \mathrm{opt}}$ and rewrite $D = I - (I - D)$; expanding its inverse in a Neumann series, we obtain
$$\left\| D X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \ge \left(1 - \left\| I - D \right\|_{\infty,\infty}\right) \left\| X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.16}$$
Recall the definition of $D$ and apply the triangle inequality to obtain
$$\left\| I - D \right\|_{\infty,\infty} = \left\| I - A_{t \setminus \mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} + A_{t \setminus \mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} \le \left\| I - A_{t \setminus \mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty} + \left\| A_{t \setminus \mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.17}$$
The right-hand side of this inequality can be bounded completely analogously to steps (1)-(5), so we obtain
$$\left\| I - D \right\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \tag{A.18}$$
Introducing (A.18) and (A.16) into (A.14), we have
$$\left\| A_t^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} \ge \left[ 1 - \frac{\mu(k - T)}{1 - \mu(T)} \right] \left\| X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.19}$$
Using the same deduction as in (A.3), it follows that $\| A_t^{*}(Y - Y_{\mathrm{opt}}) \|_{\infty,\infty} \le \| A^{*}(Y - Y_{\mathrm{opt}}) \|_{\infty,\infty}$, so (A.19) can be rewritten as follows:
$$\left\| A^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} \ge \left[ 1 - \frac{\mu(k - T)}{1 - \mu(T)} \right] \left\| X_{t \setminus \mathrm{opt}} \right\|_{\infty,\infty}. \tag{A.20}$$
Introducing (A.20) into (A.13), we get
$$\left\| \bar{A}_t^{*} \left(Y_t - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)} \left\| A^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty}. \tag{A.21}$$
Introduce (A.21) into (A.4) to get
$$\left\| A^{*} R_t \right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)} \left\| A^{*} \left(Y - Y_{\mathrm{opt}}\right) \right\|_{\infty,\infty}. \tag{A.22}$$
Finally, assume we have a bound $\| A^{*}(Y - Y_{\mathrm{opt}}) \|_{\infty,\infty} \le \tau$; then (A.22) can be rewritten as follows:
$$\left\| A^{*} R_t \right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\, \tau. \tag{A.23}$$
This completes the argument.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would like to thank the anonymous reviewers for their helpful comments, which improved this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.
[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.
[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.
[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10-12, pp. 2111–2120, 2009.
[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.
[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.
[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.
[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.
[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.
[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.
[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.
[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.
[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.
[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.
[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.
[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.
[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.
[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.
[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.
[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.
[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.
[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.
[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.
[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.
[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.
[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.
[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.
[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.
[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the IEEE Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.



Figure 2: Results on Simulated Data 1 with 30 dB white noise: SRE (dB) as a function of the endmember number, for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.

the awgn function in MATLAB. The correlated noise was obtained by low-pass filtering i.i.d. Gaussian noise.

Simulated Data 2 was generated using nine randomly selected spectral signatures from $A_1$, with $100 \times 100$ pixels and 224 bands per pixel, provided by Dr. M. D. Iordache and Prof. J. M. Bioucas-Dias. The fractional abundances satisfy the ANC and the ASC and are piecewise smooth. The true abundances of three endmembers are illustrated in Figure 2. After Simulated Data 2 was generated, Gaussian white noise or correlated noise with seven levels of SNR (20, 25, 30, 35, 40, 45, and 50 dB) was added to Simulated Data 2 as well.

In the two simulated data experiments, all the SGA methods adopted the block-processing strategy to obtain better performance. The block sizes of the SGA methods were empirically set to 10, and the stopping threshold parameter $\sigma$ of SOMP and SMP was set to $1 \cdot 10^{-2}$. The stopping threshold parameters $\sigma$ and $\delta$ of BSOMP were set to $1 \cdot 10^{-2}$ and 0.1. RSFoBa was tested using different values of the parameters $\lambda$ and $\sigma$: 0 and $5 \cdot 10^{-3}$ for the Simulated Data 1 experiment, and 1 and $1 \cdot 10^{-2}$ for the Simulated Data 2 experiment. SUnSAL-TV was tested using different values of the parameters $\lambda$ and $\lambda_{\mathrm{TV}}$: $1 \cdot 10^{-2}$ and $1 \cdot 10^{-2}$, $1 \cdot 10^{-2}$ and $1 \cdot 10^{-2}$, $1 \cdot 10^{-2}$ and $5 \cdot 10^{-3}$, $5 \cdot 10^{-3}$ and $1 \cdot 10^{-3}$, $1 \cdot 10^{-3}$ and $5 \cdot 10^{-4}$, $5 \cdot 10^{-4}$ and $1 \cdot 10^{-4}$, and $5 \cdot 10^{-4}$ and $1 \cdot 10^{-4}$ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively. SUnSAL and CLSUnSAL were tested using different values of the parameter $\lambda$: $1 \cdot 10^{-2}$, $5 \cdot 10^{-3}$, $1 \cdot 10^{-3}$, $5 \cdot 10^{-4}$, $5 \cdot 10^{-4}$, $5 \cdot 10^{-4}$, and $5 \cdot 10^{-4}$ for SNR = 20, 25, 30, 35, 40, 45, and 50 dB, respectively.
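The text references MATLAB's awgn function for adding white noise at a prescribed SNR. A minimal Python sketch of the same idea (this is an assumed reimplementation for illustration, not MATLAB's actual awgn): measure the signal power, derive the noise power from the target SNR, and add zero-mean Gaussian noise with that variance.

```python
import math
import random

def add_white_noise(signal, snr_db, rng=random.Random(0)):
    """awgn-style sketch: add zero-mean Gaussian noise so that
    10*log10(P_signal / P_noise) equals snr_db."""
    p_signal = sum(v * v for v in signal) / len(signal)   # mean signal power
    p_noise = p_signal / (10 ** (snr_db / 10))            # target noise power
    sigma = math.sqrt(p_noise)
    return [v + rng.gauss(0.0, sigma) for v in signal]

clean = [math.sin(0.1 * i) for i in range(1000)]
noisy = add_white_noise(clean, snr_db=30)
```

Correlated noise, as described above, would then be obtained by low-pass filtering such an i.i.d. noise sequence before adding it to the signal.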

Figure 2 shows the SRE (dB) results obtained on Simulated Data 1 with white noise, as a function of the endmember number, using the different tested methods. The SREs of all the algorithms decrease as the endmember number increases.
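The SRE metric reported throughout these figures is commonly defined as $\mathrm{SRE(dB)} = 10 \log_{10}(\|X\|_F^2 / \|X - \hat{X}\|_F^2)$, the ratio of the true abundance energy to the reconstruction-error energy. A small sketch with illustrative matrices:

```python
import math

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error in dB (common definition:
    10*log10(||X||_F^2 / ||X - X_hat||_F^2))."""
    num = sum(v * v for row in X_true for v in row)
    den = sum((v - w) ** 2
              for row_t, row_e in zip(X_true, X_est)
              for v, w in zip(row_t, row_e))
    return 10 * math.log10(num / den)

# Hypothetical 2x2 true and estimated abundance matrices
X_true = [[0.60, 0.40], [0.40, 0.60]]
X_est  = [[0.58, 0.41], [0.42, 0.59]]
print(round(sre_db(X_true, X_est), 2))  # 30.17
```

Higher SRE means a better abundance estimate, which is why the curves in Figures 2-8 are read with larger values indicating better performance.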

Figure 3: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE as a function of SNR.

We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. All the SGA methods have performance comparable to SUnSAL-TV. BSOMP always obtains the best estimation of the abundances when the number of endmembers is less than 15. Figure 3 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of SNR when the endmember number is eleven. The SREs of all the algorithms decrease as the SNR decreases. We can find that all the SGA methods perform better than SUnSAL-TV, SUnSAL, and CLSUnSAL. This result indicates that all the SGA methods adopt the block-processing strategy to effectively pick up all the actual endmembers. Among the SGA methods, BSOMP and RSFoBa behave better than SOMP and SMP, and BSOMP always obtains the best estimation of the abundances. Figures 4 and 5 show the SRE results obtained on Simulated Data 1 with correlated noise as a function of the endmember number and of SNR, respectively. As shown in Figures 4 and 5, BSOMP achieves an overall better performance than the other algorithms.

Figure 6 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. The SGA methods adopt the block-processing strategy to effectively pick up all the actual endmembers. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP. It is worth mentioning that many endmembers in the estimated endmember set of SOMP are nonexisting and degrade the performance of SOMP. However, BSOMP incorporates a backtracking process to remove wrongly chosen endmembers from the estimated endmember set, so it can select the actual endmembers more accurately than SOMP.


Figure 4: Results on Simulated Data 1 with 30 dB correlated noise: SRE as a function of the endmember number.

Differing from RSFoBa, BSOMP uses a backtracking process over the whole hyperspectral data to remove wrongly chosen endmembers from the endmember sets estimated in the previous block-processing, which can identify the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise, respectively, as a function of SNR. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This result indicates that all the SGA methods adopt the block-processing strategy to effectively pick up all the actual endmembers, and that SUnSAL-TV combines a spatial-contextual coherence regularizer, which yields better performance than SUnSAL and CLSUnSAL. All the SGA methods have performance comparable to SUnSAL-TV. BSOMP and RSFoBa always obtain the better estimations of the abundances: RSFoBa incorporates the spatial-contextual coherence regularizer to select the actual endmembers more accurately than SOMP and SMP, and BSOMP incorporates a backtracking process over the whole hyperspectral data to identify the true endmember set more accurately than SOMP and SMP. It can be observed that BSOMP and RSFoBa select the actual endmembers more accurately than SOMP and SMP.

For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps of endmembers obtained by SUnSAL-TV contain fewer noise points. However, the abundance maps estimated by SUnSAL-TV may

Figure 5: Results on Simulated Data 1 with correlated noise when the endmember number is eleven: SRE as a function of SNR.

Figure 6: Results obtained by the four SGAs on Simulated Data 1 with 30 dB white noise or correlated noise: number of retained endmembers as a function of the endmember number.

exhibit an oversmoothed visual effect around some pixels; some of the details and edges are not well preserved. Compared with SUnSAL-TV, the abundances estimated by BSOMP may contain more noise points but preserve more edge regions and are closer to the true abundances. Moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the abundances estimated by BSOMP


Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube                          SUnSAL  CLSUnSAL  SUnSAL-TV  SOMP   SMP     RSFoBa  BSOMP
Simulated Data 1 (11 endmembers)  67.67   78.41     363.27     20.83  212.89  39.38   64.18
Simulated Data 2                  254.71  279.02    1271.12    96.18  354.44  282.77  1065.24

Figure 7: Results on Simulated Data 2 with white noise: SRE as a function of SNR.

Figure 8: Results on Simulated Data 2 with correlated noise: SRE as a function of SNR.

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Property           Backward step  Joint sparsity  Contextual information
SMP                               •
SOMP                              •
RSFoBa             •              •               •
BSOMP              •              •
SUnSAL/CLSUnSAL                   •
SUnSAL-TV                                         •

contain fewer noise points and preserve more edge regions, which is more accurate and gives a better visual effect.

Table 1 shows the processing times measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also find that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

In order to make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare several of their properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing, which solves the $\ell_1$-norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the $\ell_{2,1}$ optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is a variant of SUnSAL which incorporates the spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and

Mathematical Problems in Engineering 9


Figure 9: Comparison of abundance maps on Synthetic Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick some potential endmembers in each block, BSOMP adopts a forward step to pick some potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of the potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP can select the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
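To make the forward/backward interplay concrete, here is a toy sketch. It is not the authors' exact BSOMP or RSFoBa pseudocode (the block-processing, stopping thresholds, and the paper's deletion criterion are omitted): a SOMP-style forward step repeatedly adds the library column most jointly correlated with the residual, and a backward pass over the whole selection then deletes any column whose removal barely changes the residual.

```python
import numpy as np

def forward_backward_select(Y, A, n_select, drop_tol=1e-3):
    """Toy forward-backward greedy selection: Y (L x K) holds the pixels,
    A (L x m) the spectral library. Returns the pruned index set."""
    selected, R = [], Y.copy()
    for _ in range(n_select):                       # forward, SOMP-style
        corr = np.sum(np.abs(A.T @ R), axis=1)      # joint correlation over pixels
        corr[selected] = -np.inf                    # never re-pick a column
        selected.append(int(np.argmax(corr)))
        X = np.linalg.lstsq(A[:, selected], Y, rcond=None)[0]
        R = Y - A[:, selected] @ X                  # update residual
    base = np.linalg.norm(R)                        # residual with full selection
    kept = list(selected)
    for j in list(selected):                        # backward: prune unreliable picks
        trial = [i for i in kept if i != j]
        if not trial:
            continue
        res = np.linalg.norm(Y - A[:, trial] @ np.linalg.lstsq(A[:, trial], Y, rcond=None)[0])
        if res - base <= drop_tol:                  # removal barely hurts: delete j
            kept = trial
    return kept

# Hypothetical library with two true endmembers (columns 0 and 2) plus distractors.
A = np.zeros((5, 4))
A[0, 0] = A[1, 1] = A[2, 2] = 1.0
A[0, 3] = A[1, 3] = 1.0 / np.sqrt(2.0)
Y = np.array([[2.0], [0.0], [1.0], [0.0], [0.0]])   # 2*col0 + 1*col2
print(sorted(forward_backward_select(Y, A, 3)))
```

The forward pass overselects (three picks for two true endmembers), and the backward pass discards the spurious pick; in BSOMP the deletion is applied over the whole hyperspectral data after the block-wise forward passes, which is what makes the final endmember count insensitive to the block size.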

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The size of the hyperspectral scene is a 250 × 191 pixels subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data used here was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm   SUnSAL    CLSUnSAL   SUnSAL-TV   SOMP     SMP      RSFoBa   BSOMP
Sparsity    17.5629   20.7037    20.472      13.684   15.102   12.702   11.810
RMSE        0.0051    0.0035     0.0028      0.0040   0.0034   0.0032   0.0033


Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven. SRE as a function of block size.


Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven. Number of retained endmembers as a function of block size.

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K} \sum_{j=1}^{K} (Y_j - \hat{Y}_j)^2}, \quad (14)$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE for a sparse unmixing algorithm also reflects better unmixing performance.
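A minimal sketch of the two evaluation measures, assuming the image is stored as an $L \times K$ matrix of bands by pixels and interpreting the squared term in (14) as the squared error summed over all bands of pixel $j$ (the paper does not spell out this detail):

```python
import numpy as np

def rmse(Y, Y_hat):
    """RMSE of (14): squared reconstruction error summed over all bands,
    averaged over the K pixels, then square-rooted."""
    K = Y.shape[1]
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)

def avg_sparsity(X, tol=1e-3):
    """Average number of abundances per pixel above the 0.001 threshold
    used in the text to discard negligible values."""
    return float(np.mean(np.sum(np.abs(X) > tol, axis=0)))

Y = np.array([[1.0, 2.0], [3.0, 4.0]])
print(round(rmse(Y, Y + 0.1), 4))
print(avg_sparsity(np.array([[0.5, 0.0005], [0.0, 0.3]])))
```

These are the two columns of Table 3: a lower RMSE and a lower average sparsity both indicate a better, more parsimonious unmixing.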

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1. The parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. In Table 3, we can find that BSOMP, SOMP, SMP, and SUnSAL perform better than SUnSAL-TV in terms of the sparsity of the solutions, although SUnSAL-TV has the lowest reconstruction error. BSOMP can achieve nearly as low a reconstruction error as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions and achieves low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals, in comparison with the other considered algorithms, with regard to the classification maps produced by the Tricorder 3.3 software. We also analyzed the sparsity of the abundances estimated by BSOMP: the average


Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS, Clark and Swayze, Tetracorder 3.3 product). The map legend groups the materials into sulfates (K-Alunite at 150°C, 250°C, and 450°C, Na-Alunite, Jarosite, and Alunite + Kaolinite and/or Muscovite), kaolinite group clays (Kaolinite wxl, Kaolinite pxl, Kaolinite + Smectite or Muscovite, Halloysite, and Dickite), carbonates (Calcite, Calcite + Kaolinite, and Calcite + Montmorillonite), clays (Na-Montmorillonite and Nontronite), and other minerals (Low-, Med-, and High-Al Muscovite, Chlorite, Buddingtonite, Chalcedony, and Pyrophyllite + Alunite).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the qualitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

A topic worth discussing is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors even sparser over the relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa lies in the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data, while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial for improving estimation effectiveness.




Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

(i) $A_t$ is a submatrix of the spectral library $A$, where $A_t$ contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

(ii) $A_{t \setminus \mathrm{opt}}$ is a submatrix of $A_t$ which contains the remaining $(k - T)$ columns indexed in $S_t \setminus S_{\mathrm{opt}}$.

(iii) $X_{t \setminus \mathrm{opt}}$ is a submatrix of $X_t$ which contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:

$$P_{\mathrm{opt}} = A_{\mathrm{opt}} (A_{\mathrm{opt}}^{*} A_{\mathrm{opt}})^{-1} A_{\mathrm{opt}}^{*}, \quad (A.1)$$

where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ ($R_t = Y - Y_t$), we have

$$\|A^{*} R_t\|_{\infty,\infty} = \|\bar{A}_t^{*} R_t\|_{\infty,\infty}. \quad (A.2)$$

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have

$$\|\bar{A}_t^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.3)$$

Invoke (A.2) and (A.3) and apply the triangle inequality to get

$$\|A^{*} R_t\|_{\infty,\infty} = \|\bar{A}_t^{*} (Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t)\|_{\infty,\infty} \le \|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} + \|\bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.4)$$

Then the matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:

$$Y_t - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}}) Y_t = (I - P_{\mathrm{opt}}) A_t X_t = (I - P_{\mathrm{opt}}) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}}. \quad (A.5)$$

The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:

$$\|\bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|\bar{A}_t^{*} (I - P_{\mathrm{opt}}) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}}\|_{\infty,\infty} \le \left[\|\bar{A}_t^{*} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty} + \|\bar{A}_t^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty}\right] \times \|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.6)$$

We now apply the cumulative coherence function to deduce the estimates of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{t \setminus \mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so we have

$$\|\bar{A}_t^{*} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty} \le \mu(k - T). \quad (A.7)$$

(2) Use the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoke (A.1) to get

$$\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty} \le \|\bar{A}_t^{*} A_{\mathrm{opt}}\|_{\infty,\infty} \times \|(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}})^{-1}\|_{\infty,\infty} \times \|A_{\mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.8)$$

(3) Making use of the same deduction as in step (1), we have

$$\|\bar{A}_t^{*} A_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T), \qquad \|A_{\mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty} \le \mu(k - T). \quad (A.9)$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so the matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ has a unit diagonal, and we can write

$$A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \quad (A.10)$$

where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. Then we can expand the inverse in a Neumann series to get

$$\|(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}})^{-1}\|_{\infty,\infty} = \|(I - X_{\mathrm{opt}})^{-1}\|_{\infty,\infty} \le \frac{1}{1 - \|X_{\mathrm{opt}}\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \quad (A.11)$$

(5) Substituting the bounds from steps (1) to (4) into (A.6), we obtain

$$\|\bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \left[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\right] \times \|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.12)$$

Simplify expression (A.12) to conclude that

$$\|\bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)} \|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.13)$$

In order to produce an upper bound on $\|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:

$$\|A_t^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_t^{*} (Y - Y_t + Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A_t^{*} (Y - Y_t)\|_{\infty,\infty} + \|A_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.14)$$

The last equality holds since $(Y - Y_t)$ is orthogonal to the range of $A_t$, so $A_t^{*}(Y - Y_t) = 0$. Introducing (A.5) into (A.14), we get

$$\|A_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{t \setminus \mathrm{opt}}^{*} (I - P_{\mathrm{opt}}) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.15)$$

The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_t$ by $A_{t \setminus \mathrm{opt}}$. Let $D = A_{t \setminus \mathrm{opt}}^{*} (I - P_{\mathrm{opt}}) A_{t \setminus \mathrm{opt}}$ and rewrite $D = I - (I - D)$; we can expand its inverse in a Neumann series to obtain

$$\|D X_{t \setminus \mathrm{opt}}\|_{\infty,\infty} \ge (1 - \|I - D\|_{\infty,\infty}) \|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.16)$$

Recall the definition of $D$ and apply the triangle inequality to obtain

$$\|I - D\|_{\infty,\infty} = \|I - A_{t \setminus \mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} + A_{t \setminus \mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty} \le \|I - A_{t \setminus \mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty} + \|A_{t \setminus \mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.17)$$

The right-hand side of the inequality can be bounded completely analogously to steps (1)–(5), giving

$$\|I - D\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \quad (A.18)$$

Introducing (A.18) and (A.16) into (A.14), we have

$$\|A_t^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right] \|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.19)$$

Using the same deduction as in (A.3), it follows that $\|A_t^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as follows:

$$\|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right] \|X_{t \setminus \mathrm{opt}}\|_{\infty,\infty}. \quad (A.20)$$

Introducing (A.20) into (A.13), we get

$$\|\bar{A}_t^{*} (Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)} \|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.21)$$

Introducing (A.21) into (A.4), we get

$$\|A^{*} R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)} \|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.22)$$

Finally, assume we have a bound $\|A^{*} (Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$; then (A.22) can be rewritten as follows:

$$\|A^{*} R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)} \tau. \quad (A.23)$$

This completes the argument.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the algorithms of CLSUnSAL, SUnSAL, and SUnSAL-TV. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the algorithms of SMP and RSFoBa. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Grana, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Grana, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Grana, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise, et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martinez, and R. Perez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the IEEE Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.



Figure 4: Results on Simulated Data 1 with 30 dB correlated noise. SRE as a function of endmember number.

Differing from RSFoBa, BSOMP uses a backtracking process over the whole hyperspectral data to remove endmembers chosen wrongly in the previous block-processing from the estimated endmember set, which can identify the true endmember set more accurately than RSFoBa. This indicates that BSOMP behaves better than the other SGA methods.

Figures 7 and 8 show the SRE results obtained on Simulated Data 2 with white noise and correlated noise as a function of SNR, respectively. We can find that BSOMP, SOMP, SMP, RSFoBa, and SUnSAL-TV perform better than SUnSAL and CLSUnSAL. This result indicates that all the SGA methods adopt the block-processing strategy to effectively pick up all the actual endmembers, and that SUnSAL-TV incorporates a spatial-contextual coherence regularizer, which gives it better performance than SUnSAL and CLSUnSAL. All the SGA methods have performances comparable with SUnSAL-TV. BSOMP and RSFoBa can always obtain the better estimation of the abundances: RSFoBa incorporates the spatial-contextual coherence regularizer, and BSOMP incorporates a backtracking process in the whole hyperspectral data, so both can select the actual endmembers more accurately than SOMP and SMP.
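The SRE metric plotted in these comparisons is the signal-to-reconstruction error of the abundance estimates, defined earlier in the paper. Assuming the form standard in the sparse-unmixing literature, SRE(dB) = 10·log10(‖X‖_F² / ‖X − X̂‖_F²), it can be computed as in this illustrative sketch (the function name and matrix layout are assumptions, not from the paper):

```python
import numpy as np

def sre_db(X_true, X_est):
    # Signal-to-reconstruction error in dB: power of the true abundances
    # over the power of the estimation error. Higher values are better.
    err = X_true - X_est
    return 10.0 * np.log10(np.sum(X_true ** 2) / np.sum(err ** 2))
```

For example, an estimate equal to 0.9 times the true abundances has an error power of 1% of the signal power, giving an SRE of 20 dB.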

For visual comparison, Figure 9 shows the true abundance maps and the abundance maps estimated by the different algorithms on Simulated Data 2 with 20 dB white noise. In Figure 9, the abundance maps of endmembers obtained by SUnSAL-TV contain fewer noise points. However, the abundance maps estimated by SUnSAL-TV may

Figure 5: Results on Simulated Data 1 with correlated noise when the endmember number is eleven. SRE (dB) as a function of SNR (dB), for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.

Figure 6: Results obtained by the four SGAs (SMP, SOMP, RSFoBa, and BSOMP) on Simulated Data 1 with 30 dB white noise or correlated noise. Number of retained endmembers as a function of the endmember number.

exhibit an oversmooth visual effect around some pixels; some of the details and edges are not well preserved. Compared with SUnSAL-TV, the estimated abundances by BSOMP may contain more noise points but preserve more edge regions and are closer to the true abundances. Moreover, it is much easier to tell the different materials apart. Compared with SOMP and SMP, the estimated abundances by BSOMP


Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube                          SUnSAL  CLSUnSAL  SUnSAL-TV  SOMP  SMP    RSFoBa  BSOMP
Simulated Data 1 (11 endmembers)  6767    7841      36327      2083  21289  3938    6418
Simulated Data 2                  25471   27902     127112     9618  35444  28277   106524

Figure 7: Results on Simulated Data 2 with white noise. SRE (dB) as a function of SNR (dB), for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.

Figure 8: Results on Simulated Data 2 with correlated noise. SRE (dB) as a function of SNR (dB), for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Property           Backward step   Joint sparsity   Contextual information
SMP                                ·
SOMP                               ·
RSFoBa             ·               ·                ·
BSOMP              ·               ·
SUnSAL/CLSUnSAL                    ·
SUnSAL-TV                                           ·

contain fewer noise points and preserve more edge regions, which are more accurate and have a better visual effect.

Table 1 shows the processing time measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also find that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

In order to make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare them with respect to several properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing which solves the ℓ1-norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the ℓ2,1-norm optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is a special case of SUnSAL which incorporates the spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer can make SUnSAL-TV more effective.
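The two penalties, SUnSAL's pixelwise ℓ1 norm and CLSUnSAL's joint ℓ2,1 norm, can be contrasted in a few lines. This is an illustrative sketch of the norms themselves, not of the solvers; X is assumed to be a library-by-pixel abundance matrix:

```python
import numpy as np

def l1_norm(X):
    # Pixelwise penalty used by SUnSAL: sum of absolute values, i.e. the
    # l1 norm applied independently to every pixel's abundance vector.
    return np.abs(X).sum()

def l21_norm(X):
    # Joint-sparsity penalty used by CLSUnSAL: l2 norm across pixels
    # (columns) for each library signature (row), then summed, which
    # drives entire rows of X to zero simultaneously.
    return np.linalg.norm(X, axis=1).sum()

# Toy abundance matrix: m = 4 library signatures, K = 3 pixels.
X = np.array([[0.6, 0.5, 0.7],   # signature active in every pixel
              [0.4, 0.5, 0.3],
              [0.0, 0.0, 0.0],   # inactive signature: an all-zero row
              [0.0, 0.0, 0.0]])

print(l1_norm(X))   # 3.0
print(l21_norm(X))  # ~1.756
```

Because the ℓ2,1 norm sums one value per row, shrinking a row helps only when the whole row goes to zero, which is exactly the joint-sparsity behavior described above.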

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA, which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA, which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and





Figure 9: Comparison of abundance maps on Simulated Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

RSFoBa behave better than SMP and SOMP, which indicates that the backward step can make BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick some potential endmembers in each block, BSOMP adopts a forward step to pick some potential endmembers in each block and then adopts a backward step in the whole hyperspectral data to remove endmembers chosen wrongly from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of the potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP can select the actual endmembers more accurately than RSFoBa. This result indicates that a backward step in the whole hyperspectral data can make BSOMP more effective.
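The forward-then-backward idea can be sketched in code. This is a deliberately simplified, hypothetical illustration rather than the authors' BSOMP: it runs a single SOMP-style forward loop over the whole image (no block partitioning) and its backward step prunes by total abundance energy, a stand-in for the reliability test described in the paper:

```python
import numpy as np

def forward_backward_select(Y, A, n_forward, n_keep):
    """Toy endmember selection: greedily pick n_forward library columns by
    joint correlation with the residual (forward step), then refit on the
    whole data and keep only the n_keep most significant (backward step)."""
    selected = []
    R = Y.copy()
    for _ in range(n_forward):
        corr = np.abs(A.T @ R).sum(axis=1)          # joint correlation over all pixels
        corr[selected] = -np.inf                    # never pick a column twice
        selected.append(int(np.argmax(corr)))
        As = A[:, selected]
        X, *_ = np.linalg.lstsq(As, Y, rcond=None)  # unconstrained least-squares abundances
        R = Y - As @ X                              # residual on the whole image
    # Backward step: drop the endmembers whose abundance rows carry the
    # least energy, i.e. those that barely help explain the data.
    energy = np.linalg.norm(X, axis=1)
    keep = np.argsort(energy)[::-1][:n_keep]
    return sorted(int(j) for j in np.array(selected)[keep])
```

On noiseless synthetic data built from two library columns, the forward step over-selects and the backward step removes the spurious picks, mirroring the behavior reported in Figures 6 and 11.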

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The size of the hyperspectral scene is a 250 × 191 pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data used here was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithms  SUnSAL   CLSUnSAL  SUnSAL-TV  SOMP    SMP     RSFoBa  BSOMP
Sparsity    17.5629  20.7037   20.472     13.684  15.102  12.702  11.810
RMSE        0.0051   0.0035    0.0028     0.0040  0.0034  0.0032  0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven. SRE (dB) as a function of block size, for SMP, SOMP, RSFoBa, and BSOMP.

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven. Number of retained endmembers as a function of block size, for SMP, SOMP, RSFoBa, and BSOMP.

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances, to avoid counting negligible values. The reconstruction error is calculated between the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{K} \sum_{j=1}^{K} \left( Y_j - \hat{Y}_j \right)^2 }, \tag{14} \]

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE of a sparse unmixing algorithm also reflects a better unmixing performance of this algorithm.
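Both evaluation measures are straightforward to compute. A minimal sketch, assuming Y and Y_hat are bands-by-pixels matrices and X is the library-by-pixels abundance matrix (the function and argument names are illustrative):

```python
import numpy as np

def rmse(Y, Y_hat):
    # Equation (14): root mean square error over the K pixels,
    # where column j holds pixel j's spectrum.
    K = Y.shape[1]
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)

def avg_sparsity(X, tol=1e-3):
    # Average number of abundances per pixel above the 0.001 threshold
    # used in the paper, so negligible values are not counted.
    return (X > tol).sum() / X.shape[1]
```

For instance, a reconstruction off by 1 in every entry of a 2-band, 2-pixel image gives RMSE = sqrt(4/2) = sqrt(2).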

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1. The parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. In Table 3, we can find that BSOMP, SOMP, SMP, and SUnSAL perform better than SUnSAL-TV in terms of the sparsest solutions, although SUnSAL-TV has the lowest reconstruction errors. BSOMP can achieve nearly as low reconstruction errors as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP can obtain the sparsest solutions while achieving low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (alunite, buddingtonite, and chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals, in comparison to the other considered algorithms, with regard to the classification maps produced by the Tricorder 3.3 software. We also analyzed the sparsity of the abundances estimated by BSOMP; the average


Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS, Clark and Swayze, Tetracorder 3.3 product). The legend covers sulfates (K-alunite at 150°C, 250°C, and 450°C, Na-alunites, jarosite, alunite + kaolinite and/or muscovite), kaolinite-group clays (kaolinite wxl/pxl, kaolinite + smectite or muscovite, halloysite, dickite), carbonates (calcite and its mixtures), clays (Na-montmorillonite, nontronite), and other minerals (low-, medium-, and high-Al muscovite, chlorite and mixtures, buddingtonite, chalcedony/OH quartz, pyrophyllite + alunite).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the quantitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the previously chosen endmembers' reliability and then deletes the unreliable endmembers. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of the potential endmembers selected by the SGAs

is far smaller than the number of spectral signatures in the spectral library, and then the abundance estimation method makes the abundance vectors sparser still with the relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number. However, the nonexisting endmembers will have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the previously chosen endmembers' reliability and delete the nonexisting endmembers, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step in the whole hyperspectral data while the latter adopts a backward step in each block. As shown in Figures 10 and 11, we can observe that the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial for improving estimation effectiveness.





Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to alunite, buddingtonite, and chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we should introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

(i) $A_t$ is a submatrix of the spectral library $A$, where $A_t$ contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

(ii) $A_{t \setminus \mathrm{opt}}$ is a submatrix of $A_t$ which contains the remaining $(k - T)$ columns indexed in $S_t \setminus S_{\mathrm{opt}}$.

(iii) $X_{t \setminus \mathrm{opt}}$ is a submatrix of $X_t$ and contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.
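The proof is stated in terms of the cumulative coherence function $\mu(\cdot)$ of the library (defined earlier in the paper, following Tropp [29]). For a library with unit-norm columns it can be computed as in this illustrative helper (not part of the paper's algorithm):

```python
import numpy as np

def cumulative_coherence(A, k):
    """mu(k): the largest possible sum of |inner products| between one
    library column and k other distinct columns, maximized over all
    columns; the columns of A are assumed to have unit norm."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)                    # exclude each column's self product
    G.sort(axis=1)                              # sort each row ascending
    return float(G[:, -k:].sum(axis=1).max())   # best k off-diagonal entries per row
```

An orthonormal library has $\mu(k) = 0$ for every $k$, while two identical columns force $\mu(1) = 1$, which is why the bounds below degrade as the library becomes more coherent.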

Now we will prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
\[ P_{\mathrm{opt}} = A_{\mathrm{opt}} \left( A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} \right)^{-1} A_{\mathrm{opt}}^{*}, \tag{A.1} \]
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ ($R_t = Y - \hat{Y}_t$), we have
\[ \| A^{*} R_t \|_{\infty,\infty} = \| \bar{A}_t^{*} R_t \|_{\infty,\infty}. \tag{A.2} \]

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
\[ \| \bar{A}_t^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty}. \tag{A.3} \]

Invoke (A.2) and (A.3) and apply the triangle inequality to get
\[ \| A^{*} R_t \|_{\infty,\infty} = \| \bar{A}_t^{*} ( Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - \hat{Y}_t ) \|_{\infty,\infty} \le \| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} + \| \bar{A}_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty}. \tag{A.4} \]

Then the matrix $\hat{Y}_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:
\[ \hat{Y}_t - Y_{\mathrm{opt}} = ( I - P_{\mathrm{opt}} ) \hat{Y}_t = ( I - P_{\mathrm{opt}} ) A_t X_t = ( I - P_{\mathrm{opt}} ) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}}. \tag{A.5} \]

The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:
\[ \| \bar{A}_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} = \| \bar{A}_t^{*} ( I - P_{\mathrm{opt}} ) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \le \left[ \| \bar{A}_t^{*} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} + \| \bar{A}_t^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \right] \cdot \| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.6} \]

We will apply the cumulative coherence function to deduce the estimates of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{t \setminus \mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers; we have
\[ \| \bar{A}_t^{*} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \le \mu ( k - T ). \tag{A.7} \]

(2) Use the fact that $\| \cdot \|_{\infty,\infty}$ is submultiplicative and invoke (A.1) to get
\[ \| \bar{A}_t^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \le \| \bar{A}_t^{*} A_{\mathrm{opt}} \|_{\infty,\infty} \cdot \left\| \left( A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} \right)^{-1} \right\|_{\infty,\infty} \cdot \| A_{\mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.8} \]

(3) Making use of the same deduction as in step (1), we have
\[ \| \bar{A}_t^{*} A_{\mathrm{opt}} \|_{\infty,\infty} \le \mu ( T ), \qquad \| A_{\mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \le \mu ( k - T ). \tag{A.9} \]

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so the matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ has a unit diagonal, and we can write
\[ A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A.10} \]
where $\| X_{\mathrm{opt}} \|_{\infty,\infty} \le \mu ( T )$; then we can expand the inverse in a Neumann series to get
\[ \left\| \left( A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} \right)^{-1} \right\|_{\infty,\infty} = \left\| \left( I - X_{\mathrm{opt}} \right)^{-1} \right\|_{\infty,\infty} \le \frac{1}{1 - \| X_{\mathrm{opt}} \|_{\infty,\infty}} \le \frac{1}{1 - \mu ( T )}. \tag{A.11} \]
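Step (4) relies on the standard Neumann-series bound $\|(I - X)^{-1}\| \le 1/(1 - \|X\|)$, valid for any submultiplicative matrix norm when $\|X\| < 1$. A quick numerical sanity check using the induced $\infty$-norm (maximum absolute row sum):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-0.05, 0.05, size=(6, 6))   # small off-unit perturbation
norm_X = np.linalg.norm(X, np.inf)          # induced inf-norm: max absolute row sum
assert norm_X < 1.0                         # the Neumann series converges

inv_norm = np.linalg.norm(np.linalg.inv(np.eye(6) - X), np.inf)
bound = 1.0 / (1.0 - norm_X)
assert inv_norm <= bound + 1e-12            # the bound used in (A.11) holds
```

The same argument applied to the Gram matrix, with $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$, gives the $1/(1 - \mu(T))$ factor above.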

(5) Invoking the bounds from steps (1) to (4) into (A.6), we can obtain
\[ \| \bar{A}_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \left[ \mu ( k - T ) + \frac{\mu ( T ) \, \mu ( k - T )}{1 - \mu ( T )} \right] \| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.12} \]

Simplify expression (A.12) to conclude that
\[ \| \bar{A}_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \frac{\mu ( k - T )}{1 - \mu ( T )} \| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.13} \]

In order to produce an upper bound on $\| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:
\[ \| A_t^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} = \| A_t^{*} ( Y - \hat{Y}_t + \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \| A_t^{*} ( Y - \hat{Y}_t ) \|_{\infty,\infty} + \| A_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \| A_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty}. \tag{A.14} \]

The last step holds since $( Y - \hat{Y}_t )$ is orthogonal to the range of $A_t$. Introducing (A.5) into (A.14), we can get
\[ \| A_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} = \| A_{t \setminus \mathrm{opt}}^{*} ( I - P_{\mathrm{opt}} ) A_{t \setminus \mathrm{opt}} X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.15} \]

The equality holds because $( I - P_{\mathrm{opt}} )$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_t$ by $A_{t \setminus \mathrm{opt}}$. Let $D = A_{t \setminus \mathrm{opt}}^{*} ( I - P_{\mathrm{opt}} ) A_{t \setminus \mathrm{opt}}$ and rewrite $D = I - ( I - D )$; we can expand its inverse in a Neumann series to obtain
\[ \| D X_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \ge \left( 1 - \| I - D \|_{\infty,\infty} \right) \| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.16} \]

Recall the definition of $D$ and apply the triangle inequality to obtain
\[ \| I - D \|_{\infty,\infty} = \| I - A_{t \setminus \mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} + A_{t \setminus \mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} \le \| I - A_{t \setminus \mathrm{opt}}^{*} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty} + \| A_{t \setminus \mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.17} \]


The right-hand side of the inequality can be bounded completely analogously to steps (1)-(5); we can obtain
\[ \| I - D \|_{\infty,\infty} \le \mu ( k - T ) + \frac{\mu ( T ) \, \mu ( k - T )}{1 - \mu ( T )} = \frac{\mu ( k - T )}{1 - \mu ( T )}. \tag{A.18} \]

Introducing (A.18) and (A.16) into (A.14), we have
\[ \| A_t^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \ge \left[ 1 - \frac{\mu ( k - T )}{1 - \mu ( T )} \right] \| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.19} \]

Using the same deduction as (A.3), it follows that $\| A_t^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty}$, so (A.19) can be rewritten as follows:
\[ \| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \ge \left[ 1 - \frac{\mu ( k - T )}{1 - \mu ( T )} \right] \| X_{t \setminus \mathrm{opt}} \|_{\infty,\infty}. \tag{A.20} \]

Introducing (A.20) into (A.13), we can get
\[ \| \bar{A}_t^{*} ( \hat{Y}_t - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \frac{\mu ( k - T )}{1 - \mu ( T ) - \mu ( k - T )} \| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty}. \tag{A.21} \]

Introduce (A.21) into (A.4) to get
\[ \| A^{*} R_t \|_{\infty,\infty} \le \frac{1 - \mu ( T )}{1 - \mu ( T ) - \mu ( k - T )} \| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty}. \tag{A.22} \]

Finally, assume we have a bound $\| A^{*} ( Y - Y_{\mathrm{opt}} ) \|_{\infty,\infty} \le \tau$; then (A.22) can be rewritten as follows:
\[ \| A^{*} R_t \|_{\infty,\infty} \le \frac{1 - \mu ( T )}{1 - \mu ( T ) - \mu ( k - T )} \, \tau. \tag{A.23} \]
This completes the argument.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N Keshava and J F Mustard ldquoSpectral unmixingrdquo IEEE SignalProcessing Magazine vol 19 no 1 pp 44ndash57 2002

[2] Y H Hu H B Lee and F L Scarpace ldquoOptimal linear spectralunmixingrdquo IEEE Transactions on Geoscience and Remote Sens-ing vol 37 no 1 pp 639ndash644 1999

[3] J M Bioucas-Dias A Plaza N Dobigeon et al ldquoHyperspec-tral unmixing overview geometrical statistical and sparseregression-based approachesrdquo IEEE Journal of Selected Topics inApplied Earth Observations and Remote Sensing vol 5 no 2 pp354ndash379 2012

[4] M Grana I Villaverde J O Maldonado and C HernandezldquoTwo lattice computing approaches for the unsupervised seg-mentation of hyperspectral imagesrdquo Neurocomputing vol 72no 10-12 pp 2111ndash2120 2009

[5] J Li and J M Bioucas-Dias ldquoMinimum volume simplexanalysis a fast algorithm to unmix hyperspectral datardquo inProceedings of the IEEE International Geoscience and RemoteSensing Symposium (IGARSS rsquo08) pp III-250ndashIII-253 BostonMass USA July 2008

[6] H Pu B Wang and L Zhang ldquoSimplex geometry-basedabundance estimation algorithm for hyperspectral unmixingrdquoScientia Sinica Informationis vol 42 no 8 pp 1019ndash1033 2012

[7] G Zhou S Xie Z Yang J-M Yang and Z He ldquoMinimum-volume-constrained nonnegative matrix factorizationenhanced ability of learning partsrdquo IEEE Transactions onNeural Networks vol 22 no 10 pp 1626ndash1637 2011

[8] T-H Chan C-Y Chi Y-M Huang and W-K Ma ldquoA convexanalysis-based minimum-volume enclosing simplex algorithmfor hyperspectral unmixingrdquo IEEE Transactions on Signal Pro-cessing vol 57 no 11 pp 4418ndash4432 2009

[9] M Arngren M N Schmidt and J Larsen ldquoBayesian nonneg-ative matrix factorization with volume prior for unmixing ofhyperspectral imagesrdquo in Proceedings of the IEEE InternationalWorkshop onMachine Learning for Signal Processing (MLSP rsquo09)pp 1ndash6 IEEE Grenoble France September 2009

[10] V P Pauca J Piper and R J Plemmons ldquoNonnegative matrixfactorization for spectral data analysisrdquo Linear Algebra and ItsApplications vol 416 no 1 pp 29ndash47 2006

[11] N Wang B Du and L Zhang ldquoAn endmember dissimilar-ity constrained non-negative matrix factorization method forhyperspectral unmixingrdquo IEEE Journal of Selected Topics inApplied Earth Observations and Remote Sensing vol 6 no 2pp 554ndash569 2013

[12] Z Wu S Ye J Liu L Sun and Z Wei ldquoSparse non-negativematrix factorization on GPUs for hyperspectral unmixingrdquoIEEE Journal of Selected Topics in Applied Earth Observationsand Remote Sensing vol 7 no 8 pp 3640ndash3649 2014

[13] J B Greer ldquoSparse demixing of hyperspectral imagesrdquo IEEETransactions on Image Processing vol 21 no 1 pp 219ndash2282012

[14] J A Tropp and S JWright ldquoComputational methods for sparsesolution of linear inverse problemsrdquoProceedings of the IEEE vol98 no 6 pp 948ndash958 2010

[15] M Elad Sparse and Redundant Representations Springer NewYork NY USA 2010

[16] J A Tropp and A C Gilbert ldquoSignal recovery from randommeasurements via orthogonal matching pursuitrdquo IEEE Trans-actions on Information Theory vol 53 no 12 pp 4655ndash46662007

Mathematical Problems in Engineering 17

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.



Table 1: Processing times (s) measured after applying the tested methods to the two considered simulated data sets with 30 dB white noise.

Datacube | SUnSAL | CLSUnSAL | SUnSAL-TV | SOMP | SMP | RSFoBa | BSOMP
Simulated Data 1 (11 endmembers) | 67.67 | 78.41 | 363.27 | 20.83 | 212.89 | 39.38 | 64.18
Simulated Data 2 | 254.71 | 279.02 | 1271.12 | 96.18 | 354.44 | 282.77 | 1065.24

Figure 7: Results on Simulated Data 2 with white noise: SRE (dB) as a function of SNR (dB) for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.

Figure 8: Results on Simulated Data 2 with correlated noise: SRE (dB) as a function of SNR (dB) for SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP.

Table 2: Comparison of the sparse unmixing algorithms with respect to whether the backward step, joint sparsity, and contextual information are considered.

Algorithm | Backward step | Joint sparsity | Contextual information
SMP | — | — | ·
SOMP | — | · | —
RSFoBa | · | · | ·
BSOMP | · | · | —
SUnSAL | — | — | —
CLSUnSAL | — | · | —
SUnSAL-TV | — | — | ·

contain fewer noise points and preserve more edge regions, which makes them more accurate and gives them a better visual effect.

Table 1 shows the processing times measured after applying the tested algorithms to the two simulated data sets with 30 dB white noise. The algorithms were implemented in MATLAB 2009a and executed on a desktop PC equipped with an Intel Core 2 Duo CPU (2.93 GHz) and 2 GB of RAM. It can be observed that the SUnSAL-TV algorithm is far more time consuming than the other algorithms. We can also see that BSOMP, which incorporates a backtracking technique, is more time consuming than SOMP.

In order to make the differences between the BSOMP algorithm and the other sparse unmixing algorithms clearer, we compare them with respect to several properties in Table 2.

SUnSAL is a convex relaxation method for sparse unmixing which solves the ℓ1-norm sparse regression problem. CLSUnSAL is a convex relaxation method for solving the ℓ2,1-norm optimization problem. The main difference between SUnSAL and CLSUnSAL is that the former employs pixelwise independent regressions, while the latter enforces joint sparsity among all the pixels. SUnSAL-TV is a special case of SUnSAL which incorporates a spatial-contextual coherence regularizer into SUnSAL. As observed in Figures 7 and 8, SUnSAL-TV behaves better than SUnSAL and CLSUnSAL, which indicates that the spatial-contextual coherence regularizer makes SUnSAL-TV more effective.
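The contrast between the pixelwise ℓ1 penalty and the joint ℓ2,1 penalty can be made concrete with a small sketch (the abundance matrix below is hypothetical, not taken from the paper's data; rows index library signatures and columns index pixels):

```python
import numpy as np

# Hypothetical abundance matrix: rows = library signatures, columns = pixels.
X = np.array([
    [0.6, 0.5, 0.7],   # signature active in every pixel
    [0.0, 0.0, 0.0],   # signature active in no pixel
    [0.4, 0.5, 0.3],   # signature active in every pixel
])

l1_norm = np.abs(X).sum()                    # SUnSAL-style penalty: sum of |x_ij|
l21_norm = np.linalg.norm(X, axis=1).sum()   # CLSUnSAL-style: sum of row-wise l2 norms

# The l2,1 penalty charges each row (signature) only once, however many
# pixels use it, so minimizing it drives entire rows to zero jointly.
print(l1_norm, l21_norm)
```

Minimizing the ℓ2,1 norm therefore encourages all pixels to share the same small set of active signatures, which is exactly the joint-sparsity assumption.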

SOMP and SMP are forward SGAs, which select one or more potential endmembers from the spectral library in each iteration. RSFoBa is a forward-backward SGA which incorporates the backward step, spatial-contextual information, and joint sparsity into the greedy algorithm. BSOMP is a forward-backward SGA which incorporates the backward step and joint sparsity into the greedy algorithm. As observed in Figure 6, the numbers of endmembers selected by the SGAs are still larger than the actual number. BSOMP and


Figure 9: Comparison of abundance maps on Synthetic Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick potential endmembers in each block, BSOMP adopts a forward step to pick potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove the wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
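The forward/backward interplay described above can be sketched in a few lines of numpy. This is a toy illustration under stated assumptions, not the authors' implementation: the function name, the plain least-squares abundance update, and the fixed pruning target `n_keep` are simplifications of the actual BSOMP stopping rules.

```python
import numpy as np

def greedy_forward_backward(Y, A, n_forward, n_keep):
    """Toy SOMP-style selection with a backward (backtracking) pass.

    Forward: repeatedly add the library column whose correlation with the
    current residual, summed over all pixels, is largest (joint sparsity).
    Backward: over the whole data set, drop the selected columns whose
    removal increases the residual least, until n_keep columns remain.
    """
    support, R = [], Y.copy()
    for _ in range(n_forward):
        scores = np.abs(A.T @ R).sum(axis=1)      # joint correlation per atom
        scores[support] = -np.inf                 # never reselect an atom
        support.append(int(np.argmax(scores)))
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X                 # residual after refitting
    while len(support) > n_keep:                  # backward pruning step
        errs = [np.linalg.norm(Y - A[:, s] @ np.linalg.lstsq(A[:, s], Y, rcond=None)[0])
                for s in (support[:j] + support[j + 1:] for j in range(len(support)))]
        support.pop(int(np.argmin(errs)))         # delete the least reliable atom
    return sorted(support)

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
A /= np.linalg.norm(A, axis=0)                    # normalized synthetic library
X_true = np.zeros((10, 30))
X_true[[1, 4, 8], :] = 0.5 + rng.random((3, 30))  # three active endmembers
Y = A @ X_true
print(greedy_forward_backward(Y, A, n_forward=5, n_keep=3))
```

Note that the backward pass here inspects the selection over the whole data set at once, which mirrors the paper's distinction between BSOMP (global backward step) and RSFoBa (per-block backward step).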

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The hyperspectral scene is a 250 × 191-pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data used here was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm | SUnSAL | CLSUnSAL | SUnSAL-TV | SOMP | SMP | RSFoBa | BSOMP
Sparsity | 17.5629 | 20.7037 | 20.472 | 13.684 | 15.102 | 12.702 | 11.810
RMSE | 0.0051 | 0.0035 | 0.0028 | 0.0040 | 0.0034 | 0.0032 | 0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of block size for SMP, SOMP, RSFoBa, and BSOMP.

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size for SMP, SOMP, RSFoBa, and BSOMP.

image are used to evaluate the performance of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. The same as in [22], we define fractional abundances larger than 0.001 as nonzero abundances to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

\[
\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{j=1}^{K}\left(Y_{j}-\hat{Y}_{j}\right)^{2}},
\tag{14}
\]

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE for a sparse unmixing algorithm also reflects a better unmixing performance of that algorithm.
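Both evaluation metrics are easy to state in code. The sketch below is a minimal numpy version under assumed array shapes (bands × pixels for the images, signatures × pixels for the abundances), using the 0.001 threshold from the text:

```python
import numpy as np

def rmse(Y, Y_hat):
    """RMSE of (14): mean over the K pixels of the squared per-pixel error."""
    K = Y.shape[1]                               # number of pixels (columns)
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)

def sparsity(X, tol=1e-3):
    """Average number of abundances above the threshold, per pixel."""
    return np.count_nonzero(np.abs(X) > tol) / X.shape[1]

Y = np.zeros((2, 2))
print(rmse(Y, Y + 1.0))                                   # sqrt(4 / 2) = sqrt(2)
print(sparsity(np.array([[0.5, 0.0], [0.0005, 0.2]])))    # (1 + 1) / 2 = 1.0
```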

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λTV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1, respectively. The parameters λ and σ of RSFoBa were set to 1 and 0.01, respectively. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. In Table 3 we can find that BSOMP, SOMP, SMP, and SUnSAL yield sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions and achieves low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that, with regard to the classification maps produced by the Tricorder 3.3 software, the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals in comparison with the other considered algorithms. We also analyzed the sparsity of the abundances estimated by BSOMP; the average


Figure 12: USGS map (Clark and Swayze, Tetracorder 3.3 product, AVIRIS 1995 data) showing the distribution of different minerals (sulfates, kaolinite group clays, carbonates, other clays, and other minerals) in the Cuprite mining district in Nevada.

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the qualitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5 Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable endmembers. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors even sparser using this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and these nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting endmembers, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data, while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial for improving the estimation effectiveness.


Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in \mathbb{R}^{L \times K}$ and the available spectral library matrix $A \in \mathbb{R}^{L \times m}$:

$A_{t}$ is the submatrix of the spectral library $A$ that contains the $k$ columns indexed in $S_{t}$; $\bar{A}_{t}$ contains the remaining columns, so that $A = [A_{t}, \bar{A}_{t}]$.

$A_{t\setminus\mathrm{opt}}$ is the submatrix of $A_{t}$ that contains the $(k - T)$ columns indexed in $S_{t} \setminus S_{\mathrm{opt}}$.

$X_{t\setminus\mathrm{opt}}$ is the submatrix of $X_{t}$ that contains the rows listed in $S_{t} \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
\[
P_{\mathrm{opt}} = A_{\mathrm{opt}}\left(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\right)^{-1}A_{\mathrm{opt}}^{*},
\tag{A.1}
\]
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_{t}$ are orthogonal to the columns of $R_{t}$ ($R_{t} = Y - Y_{t}$), we have
\[
\left\|A^{*}R_{t}\right\|_{\infty,\infty} = \left\|\bar{A}_{t}^{*}R_{t}\right\|_{\infty,\infty}.
\tag{A.2}
\]
The maximum correlation between $\bar{A}_{t}$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
\[
\left\|\bar{A}_{t}^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}.
\tag{A.3}
\]
Invoking (A.2) and (A.3) and applying the triangle inequality, we get
\[
\left\|A^{*}R_{t}\right\|_{\infty,\infty} = \left\|\bar{A}_{t}^{*}\left(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_{t}\right)\right\|_{\infty,\infty} \le \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} + \left\|\bar{A}_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}.
\tag{A.4}
\]
The matrix $Y_{t} - Y_{\mathrm{opt}}$ can then be expressed in the following manner:
\[
Y_{t} - Y_{\mathrm{opt}} = \left(I - P_{\mathrm{opt}}\right)Y_{t} = \left(I - P_{\mathrm{opt}}\right)A_{t}X_{t} = \left(I - P_{\mathrm{opt}}\right)A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}.
\tag{A.5}
\]
The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as
\[
\left\|\bar{A}_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|\bar{A}_{t}^{*}\left(I - P_{\mathrm{opt}}\right)A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left[\left\|\bar{A}_{t}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} + \left\|\bar{A}_{t}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.6}
\]
We apply the cumulative coherence function to estimate (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_{t}^{*}A_{t\setminus\mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so we have
\[
\left\|\bar{A}_{t}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \mu\left(k - T\right).
\tag{A.7}
\]
(2) Using the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoking (A.1), we get
\[
\left\|\bar{A}_{t}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|\bar{A}_{t}^{*}A_{\mathrm{opt}}\right\|_{\infty,\infty}\left\|\left(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty}\left\|A_{\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.8}
\]
(3) By the same deduction as in step (1), we have
\[
\left\|\bar{A}_{t}^{*}A_{\mathrm{opt}}\right\|_{\infty,\infty} \le \mu\left(T\right), \qquad \left\|A_{\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \mu\left(k - T\right).
\tag{A.9}
\]
(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so the matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ has a unit diagonal, and we can write
\[
A_{\mathrm{opt}}^{*}A_{\mathrm{opt}} = I - X_{\mathrm{opt}},
\tag{A.10}
\]
where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. We can then expand the inverse in a Neumann series to get
\[
\left\|\left(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} = \left\|\left(I - X_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \le \frac{1}{1 - \left\|X_{\mathrm{opt}}\right\|_{\infty,\infty}} \le \frac{1}{1 - \mu\left(T\right)}.
\tag{A.11}
\]
(5) Introducing the bounds from steps (1)–(4) into (A.6), we obtain
\[
\left\|\bar{A}_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left[\mu\left(k - T\right) + \frac{\mu\left(T\right)\mu\left(k - T\right)}{1 - \mu\left(T\right)}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty},
\tag{A.12}
\]
which simplifies to
\[
\left\|\bar{A}_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.13}
\]
In order to produce an upper bound on $\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:
\[
\left\|A_{t}^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_{t}^{*}\left(Y - Y_{t} + Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A_{t}^{*}\left(Y - Y_{t}\right)\right\|_{\infty,\infty} + \left\|A_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty},
\tag{A.14}
\]
since $(Y - Y_{t})$ is orthogonal to the range of $A_{t}$. Introducing (A.5) into (A.14), we get
\[
\left\|A_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_{t\setminus\mathrm{opt}}^{*}\left(I - P_{\mathrm{opt}}\right)A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.15}
\]
The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_{t}$ by $A_{t\setminus\mathrm{opt}}$. Let $D = A_{t\setminus\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}$ and rewrite $D = I - (I - D)$; we can expand its inverse in a Neumann series to obtain
\[
\left\|DX_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \ge \left(1 - \left\|I - D\right\|_{\infty,\infty}\right)\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.16}
\]
Recalling the definition of $D$ and applying the triangle inequality, we obtain
\[
\left\|I - D\right\|_{\infty,\infty} = \left\|I - A_{t\setminus\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}} + A_{t\setminus\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|I - A_{t\setminus\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} + \left\|A_{t\setminus\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.17}
\]
The right-hand side of this inequality can be bounded completely analogously to steps (1)–(5), which gives
\[
\left\|I - D\right\|_{\infty,\infty} \le \mu\left(k - T\right) + \frac{\mu\left(T\right)\mu\left(k - T\right)}{1 - \mu\left(T\right)} = \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}.
\tag{A.18}
\]
Introducing (A.18) and (A.16) into (A.14), we have
\[
\left\|A_{t}^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.19}
\]
By the same deduction as in (A.3), it follows that $\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as
\[
\left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}.
\tag{A.20}
\]
Introducing (A.20) into (A.13), we get
\[
\left\|\bar{A}_{t}^{*}\left(Y_{t} - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right) - \mu\left(k - T\right)}\left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}.
\tag{A.21}
\]
Introducing (A.21) into (A.4) gives
\[
\left\|A^{*}R_{t}\right\|_{\infty,\infty} \le \frac{1 - \mu\left(T\right)}{1 - \mu\left(T\right) - \mu\left(k - T\right)}\left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}.
\tag{A.22}
\]
Finally, assuming we have a bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, (A.22) can be rewritten as
\[
\left\|A^{*}R_{t}\right\|_{\infty,\infty} \le \frac{1 - \mu\left(T\right)}{1 - \mu\left(T\right) - \mu\left(k - T\right)}\,\tau.
\tag{A.23}
\]
This completes the argument.
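The cumulative coherence (Babel) function $\mu(\cdot)$ that drives the bound can be computed directly for any normalized library; a small sketch (the libraries below are synthetic stand-ins for $A$, not the USGS library):

```python
import numpy as np

def cumulative_coherence(A, k):
    """mu(k): max over atoms of the sum of the k largest absolute inner
    products between that atom and the remaining (normalized) atoms."""
    G = np.abs(A.T @ A)              # absolute Gram matrix of inner products
    np.fill_diagonal(G, 0.0)         # drop each atom's self inner product
    G.sort(axis=1)                   # sort each row ascending
    return G[:, -k:].sum(axis=1).max()

print(cumulative_coherence(np.eye(5), 3))       # orthonormal atoms: 0.0

# Two unit-norm atoms with inner product 0.6: mu(1) is the mutual coherence.
A2 = np.array([[1.0, 0.6],
               [0.0, 0.8]])
print(cumulative_coherence(A2, 1))              # 0.6
```

Note that $\mu(1)$ reduces to the usual mutual coherence, and the bound (A.23) is informative only when $\mu(T) + \mu(k - T) < 1$.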

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M.-D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10-12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

Mathematical Problems in Engineering 17

[17] W Dai and O Milenkovic ldquoSubspace pursuit for compressivesensing signal reconstructionrdquo IEEE Transactions on Informa-tion Theory vol 55 no 5 pp 2230ndash2249 2009

[18] I Marques and M Grana ldquoHybrid sparse linear and latticemethod for hyperspectral image unmixingrdquo inHybrid ArtificialIntelligence Systems vol 8480 of Lecture Notes in ComputerScience pp 266ndash273 Springer 2014

[19] I Marques and M Grana ldquoGreedy sparsification WM algo-rithm for endmember induction in hyperspectral imagesrdquo inNatural and Artificial Computation in Engineering and MedicalApplications vol 7931 of Lecture Notes in Computer Science pp336ndash344 Springer Berlin Germany 2013

[20] M V Afonso J M Bioucas-Dias and M A Figueiredo ldquoAnaugmented Lagrangian approach to the constrained optimiza-tion formulation of imaging inverse problemsrdquo IEEE Transac-tions on Image Processing vol 20 no 3 pp 681ndash695 2011

[21] M-D Iordache J M Bioucas-Dias and A Plaza ldquoTotal varia-tion spatial regularization for sparse hyperspectral unmixingrdquoIEEE Transactions on Geoscience and Remote Sensing vol 50no 11 pp 4484ndash4502 2012

[22] M-D Iordache JM Bioucas-Dias andA Plaza ldquoCollaborativesparse unmixing of hyperspectral datardquo in Proceedings of theIEEE International Geoscience and Remote Sensing Symposium(IGARSS rsquo12) pp 7488ndash7491 IEEE Munich Germany July2012

[23] J M Bioucas-Dias and M A T Figueiredo ldquoAlternating direc-tion algorithms for constrained sparse regression Applicationto hyperspectral unmixingrdquo in Proceedings of the 2ndWorkshopon Hyperspectral Image and Signal Processing Evolution inRemote Sensing (WHISPERS rsquo10) pp 1ndash4 Reykjavik IcelandJune 2010

[24] Z Shi W Tang Z Duren and Z Jiang ldquoSubspace matchingpursuit for sparse unmixing of hyperspectral datardquo IEEE Trans-actions on Geoscience and Remote Sensing vol 52 no 6 pp3256ndash3274 2014

[25] J A Tropp A C Gilbert and M J Strauss ldquoAlgorithms forsimultaneous sparse approximation Part I greedy pursuitrdquoSignal Processing vol 86 no 3 pp 572ndash588 2006

[26] W Tang Z Shi andYWu ldquoRegularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspec-tral datardquo IEEE Transactions on Geoscience and Remote Sensingvol 52 no 9 pp 5271ndash5288 2014

[27] J Chen and X Huo ldquoTheoretical results on sparse representa-tions of multiple-measurement vectorsrdquo IEEE Transactions onSignal Processing vol 54 no 12 pp 4634ndash4643 2006

[28] G Davis S Mallat and M Avellaneda ldquoGreedy adaptiveapproximationrdquo Journal of Constructive Approximation vol 13no 1 pp 57ndash98 1997

[29] J A Tropp ldquoGreed is good algorithmic results for sparseapproximationrdquo IEEE Transactions on Information Theory vol50 no 10 pp 2231ndash2242 2004

[30] D L Donoho and M Elad ldquoMaximal sparsity representationvia l1 minimizationrdquo Proceedings of the National Academy ofSciences of the United States of America vol 100 pp 2197ndash22022003

[31] R N Clark G A Swayze R Wise et al USGS Digital SpectralLibrary splib06a US Geological Survey Denver Colo USA2007

[32] J Plaza A Plaza P Martınez and R Perez ldquoH-COMP atool for quantitative and comparative analysis of endmemberidentification algorithmsrdquo in Proceedings of the Geoscience andRemote Sensing Symposium (IGARSS rsquo03) vol 1 pp 291ndash2932003

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 9: Research Article Backtracking-Based Simultaneous ...downloads.hindawi.com/journals/mpe/2015/842017.pdf · Research Article Backtracking-Based Simultaneous Orthogonal Matching Pursuit

Figure 9: Comparison of abundance maps on Synthetic Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick potential endmembers in each block, BSOMP adopts a forward step to pick potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove wrongly chosen endmembers from the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
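The backward step can be sketched in a few lines. The following is an illustrative sketch only, not the authors' exact BSOMP rule: the function name `backward_prune`, the least-squares refit over the whole image, and the energy-ratio threshold `delta` are our assumptions.

```python
import numpy as np

def backward_prune(A, Y, support, delta=0.1):
    """Schematic backward step: refit least-squares abundances for the
    candidate endmember set over the whole image Y, then repeatedly drop
    the candidate whose abundance energy is below a fraction delta of the
    strongest candidate's energy. Illustrative sketch, not the paper's
    exact BSOMP deletion rule."""
    support = list(support)
    while len(support) > 1:
        # Refit abundances jointly over all pixels (whole-data backward step)
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        energy = np.sum(X ** 2, axis=1)      # per-endmember energy over all pixels
        weakest = np.argmin(energy)
        if energy[weakest] >= delta * energy.max():
            break                            # all remaining candidates look reliable
        support.pop(weakest)                 # delete the unreliable endmember
    return support
```

On a synthetic example where the candidate set contains one spurious library column, the prune removes exactly that column and keeps the true ones.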

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The hyperspectral scene is a 250 × 191 pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data used here was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm   SUnSAL    CLSUnSAL   SUnSAL-TV   SOMP     SMP      RSFoBa   BSOMP
Sparsity    17.5629   20.7037    20.472      13.684   15.102   12.702   11.810
RMSE        0.0051    0.0035     0.0028      0.0040   0.0034   0.0032   0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of block size.

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. The same as in [22], we define fractional abundances larger than 0.001 as nonzero abundances, to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{j=1}^{K}\left\|Y_{j}-\hat{Y}_{j}\right\|_{2}^{2}}, \tag{14}$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image, with $Y_{j}$ and $\hat{Y}_{j}$ their $j$th pixel spectra. A smaller RMSE reflects better unmixing performance of the algorithm.
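Both evaluation metrics are straightforward to compute. The following minimal sketch assumes the abundance matrix is arranged with one row per library signature and one column per pixel, reads (14) with the squared term as the squared Euclidean norm of the pixel residual, and uses function names of our choosing:

```python
import numpy as np

def average_sparsity(X, tol=1e-3):
    """Average number of nonzero abundances per pixel; as in the text,
    abundances larger than 0.001 count as nonzero. X is (m, K): one row
    per library signature, one column per pixel."""
    return np.mean(np.sum(X > tol, axis=0))

def rmse(Y, Y_hat):
    """Eq. (14): Y and Y_hat are L x K matrices whose columns are the
    original and reconstructed pixel spectra; K is the pixel count."""
    K = Y.shape[1]
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)

# Three pixels with 2, 1, and 2 abundances above the 0.001 threshold
X = np.array([[0.70, 0.0005, 0.20],
              [0.30, 0.9990, 0.00],
              [0.00, 0.0005, 0.80]])
print(average_sparsity(X))            # 2, 1, and 2 nonzeros -> 5/3 per pixel

Y = np.eye(2)
print(rmse(Y, np.zeros_like(Y)))      # sqrt(2/2) = 1.0
```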

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1, respectively. The parameters λ and σ of RSFoBa were set to 1 and 0.01, respectively. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. From Table 3, we can find that BSOMP, SOMP, SMP, and SUnSAL perform better than SUnSAL-TV in terms of the sparsity of the solutions, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains much sparser solutions. In general, BSOMP obtains the sparsest solutions while achieving a low reconstruction error, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that, with regard to the classification maps produced by the Tricorder 3.3 software, the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals in comparison to the other considered algorithms. We also analyzed the sparsity of the abundances estimated by BSOMP: the average number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimal number of endmembers to explain the data. Overall, the qualitative results and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS Tetracorder 3.3 product).

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable endmembers. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still, using the relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting endmembers, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data, while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial to the estimation effectiveness.


Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions that will be used throughout the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in \mathbb{R}^{L \times K}$ and the available spectral library matrix $A \in \mathbb{R}^{L \times m}$:

(i) $A_t$ is the submatrix of the spectral library $A$ containing the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

(ii) $A_{t\setminus\mathrm{opt}}$ is the submatrix of $A_t$ containing the remaining $(k - T)$ columns, indexed in $S_t \setminus S_{\mathrm{opt}}$.

(iii) $X_{t\setminus\mathrm{opt}}$ is the submatrix of $X_t$ containing the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}}\left(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\right)^{-1}A_{\mathrm{opt}}^{*}, \tag{A.1}$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ (with $R_t = Y - Y_t$), we have
$$\left\|A^{*}R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}R_t\right\|_{\infty,\infty}. \tag{A.2}$$

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
$$\left\|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.3}$$

Invoking (A.2) and (A.3) and applying the triangle inequality, we get
$$\left\|A^{*}R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t)\right\|_{\infty,\infty} \le \left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.4}$$

Then the matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:
$$Y_t - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}})\,Y_t = (I - P_{\mathrm{opt}})\,A_t X_t = (I - P_{\mathrm{opt}})\,A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}. \tag{A.5}$$
The last equality holds because $(I - P_{\mathrm{opt}})$ annihilates the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be bounded as
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left[\left\|\bar{A}_t^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.6}$$

We now apply the cumulative coherence function to estimate (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*}A_{t\setminus\mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so
$$\left\|\bar{A}_t^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A.7}$$

(2) Using the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoking (A.1), we get
$$\left\|\bar{A}_t^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|\bar{A}_t^{*}A_{\mathrm{opt}}\right\|_{\infty,\infty}\,\left\|\left(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty}\,\left\|A_{\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.8}$$

(3) By the same argument as in step (1), we have
$$\left\|\bar{A}_t^{*}A_{\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(T), \qquad \left\|A_{\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A.9}$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers and has a unit diagonal, so we can write
$$A_{\mathrm{opt}}^{*}A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A.10}$$
where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. Expanding the inverse in a Neumann series gives
$$\left\|\left(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} = \left\|(I - X_{\mathrm{opt}})^{-1}\right\|_{\infty,\infty} \le \frac{1}{1 - \|X_{\mathrm{opt}}\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A.11}$$

(5) Substituting the bounds from steps (1)–(4) into (A.6), we obtain
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \left[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.12}$$
Simplifying (A.12), we conclude that
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)}\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.13}$$

In order to produce an upper bound on $\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}$, note that
$$\left\|A_t^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|A_t^{*}(Y - Y_t) + A_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|A_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty}, \tag{A.14}$$
since $A_t^{*}(Y - Y_t) = 0$ because $(Y - Y_t)$ is orthogonal to the range of $A_t$. Introducing (A.5) into (A.14), we get
$$\left\|A_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|A_{t\setminus\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.15}$$
The equality holds because $(I - P_{\mathrm{opt}})$ annihilates the range of $A_{\mathrm{opt}}$, so we can replace $A_t$ by $A_{t\setminus\mathrm{opt}}$. Let $D = A_{t\setminus\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}$ and rewrite $D = I - (I - D)$; expanding its inverse in a Neumann series, we obtain
$$\left\|DX_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \ge \left(1 - \|I - D\|_{\infty,\infty}\right)\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.16}$$
Recalling the definition of $D$ and applying the triangle inequality,
$$\|I - D\|_{\infty,\infty} = \left\|I - A_{t\setminus\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}} + A_{t\setminus\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|I - A_{t\setminus\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} + \left\|A_{t\setminus\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.17}$$
The right-hand side can be bounded completely analogously to steps (1)–(5), giving
$$\|I - D\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \tag{A.18}$$
Introducing (A.18) and (A.16) into (A.14), we have
$$\left\|A_t^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.19}$$
By the same argument as in (A.3), it follows that $\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as
$$\left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right]\left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.20}$$
Introducing (A.20) into (A.13), we get
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)}\left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.21}$$
Introducing (A.21) into (A.4), we obtain
$$\left\|A^{*}R_t\right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.22}$$
Finally, assuming we have a bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, (A.22) can be rewritten as
$$\left\|A^{*}R_t\right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\,\tau. \tag{A.23}$$
This completes the argument.
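Every bound in the proof is expressed through the cumulative coherence function μ(·) of the library. For a column-normalized library, μ(T) can be computed directly; the sketch below (function name and array shapes are our assumptions) follows the standard definition: the maximum, over columns, of the sum of the T largest absolute inner products with the other columns.

```python
import numpy as np

def cumulative_coherence(A, T):
    """Cumulative coherence mu(T) of a column-normalized dictionary A:
    max over columns a_i of the sum of the T largest |<a_i, a_j>|, j != i."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)                  # exclude the unit self-correlation
    rows_sorted = np.sort(G, axis=1)[:, ::-1] # off-diagonal entries, descending
    return rows_sorted[:, :T].sum(axis=1).max()

print(cumulative_coherence(np.eye(4), 2))     # orthonormal columns -> 0.0
```

In particular, the theorem's denominator $1 - \mu(T) - \mu(k - T)$ must stay positive, so libraries with small cumulative coherence are exactly the ones for which the bound is informative.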

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M.-D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J Chen and X Huo ldquoTheoretical results on sparse representa-tions of multiple-measurement vectorsrdquo IEEE Transactions onSignal Processing vol 54 no 12 pp 4634ndash4643 2006

[28] G Davis S Mallat and M Avellaneda ldquoGreedy adaptiveapproximationrdquo Journal of Constructive Approximation vol 13no 1 pp 57ndash98 1997

[29] J A Tropp ldquoGreed is good algorithmic results for sparseapproximationrdquo IEEE Transactions on Information Theory vol50 no 10 pp 2231ndash2242 2004

[30] D L Donoho and M Elad ldquoMaximal sparsity representationvia l1 minimizationrdquo Proceedings of the National Academy ofSciences of the United States of America vol 100 pp 2197ndash22022003

[31] R N Clark G A Swayze R Wise et al USGS Digital SpectralLibrary splib06a US Geological Survey Denver Colo USA2007

[32] J Plaza A Plaza P Martınez and R Perez ldquoH-COMP atool for quantitative and comparative analysis of endmemberidentification algorithmsrdquo in Proceedings of the Geoscience andRemote Sensing Symposium (IGARSS rsquo03) vol 1 pp 291ndash2932003



10 Mathematical Problems in Engineering

Figure 9: Comparison of abundance maps on Synthetic Data 2 with 20 dB white noise. From left column to right column are the abundance maps corresponding to Endmember 1, Endmember 4, and Endmember 9, respectively. From top row to bottom row are the true abundance maps and the abundance maps obtained by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively.

BSOMP and RSFoBa behave better than SMP and SOMP, which indicates that the backward step makes BSOMP and RSFoBa more accurate in selecting the actual endmembers. Differing from RSFoBa, which adopts a forward step and a backward step to pick some potential endmembers in each block, BSOMP adopts a forward step to pick some potential endmembers in each block and then adopts a backward step over the whole hyperspectral data to remove endmembers wrongly chosen into the estimated endmember set. Figure 10 shows the SRE (dB) results obtained on Simulated Data 1 with white noise as a function of block size. We can find that BSOMP performs better than the other SGAs. Figure 11 shows the number of potential endmembers obtained from the spectral library by all the SGA methods. It can be observed that BSOMP selects the actual endmembers more accurately than RSFoBa. This result indicates that a backward step over the whole hyperspectral data makes BSOMP more effective.
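The forward/backward split described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the function names, the fixed number of forward picks `k_forward`, and the residual-growth threshold `delta` are our assumptions, and the backward pass uses a simple residual test over the whole data set as a stand-in for the paper's exact deletion rule.

```python
import numpy as np

def ls_residual(Y, A, support):
    """Residual of Y after a least-squares fit on the library columns in `support`."""
    if not support:
        return Y.copy()
    As = A[:, sorted(support)]
    coef, *_ = np.linalg.lstsq(As, Y, rcond=None)
    return Y - As @ coef

def bsomp_sketch(Y, A, k_forward, delta=0.05):
    """Forward SOMP-style selection, then one global backward (backtracking) pass.

    Forward: repeatedly pick the library column most correlated with the
    current residual, summed over all pixels (the SOMP criterion).
    Backward: drop any selected column whose removal barely changes the
    residual over the WHOLE data set -- the unreliable endmembers.
    """
    support, R = [], Y.copy()
    for _ in range(k_forward):
        scores = np.sum(np.abs(A.T @ R), axis=1)   # SOMP selection criterion
        scores[support] = -np.inf                  # never pick a column twice
        support.append(int(np.argmax(scores)))
        R = ls_residual(Y, A, support)
    base = np.linalg.norm(R)
    tol = base + delta * np.linalg.norm(Y)         # allowed residual growth
    for j in list(support):
        trial = [i for i in support if i != j]
        if np.linalg.norm(ls_residual(Y, A, trial)) <= tol:
            support = trial                        # j was unreliable; delete it
    return sorted(support)
```

On noiseless synthetic data with a small true support, the forward pass typically over-selects and the global backward pass prunes the selection back to the true endmembers.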

4.2. Evaluation with Real Data. The hyperspectral data set used in our real data experiment is the well-known AVIRIS Cuprite data set (http://aviris.jpl.nasa.gov/html/aviris.freedata.html). The size of the hyperspectral scene is a 250 × 191 pixel subset with 188 spectral bands (water absorption bands and low-SNR bands are removed). The spectral library for this data set is the USGS spectral library (containing 498 pure signatures) with the corresponding bands removed. The minerals map (http://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif), which was produced by the Tricorder 3.3 software (http://speclab.cr.usgs.gov/PAPERS/tetracorder), is shown in Figure 12. Note that the Tricorder map is only suitable for the hyperspectral data collected in 1995, while the AVIRIS Cuprite data used here was collected in 1997. Thus, we only use the mineral map as a reference indicator for qualitative analysis of the fractional abundance maps produced by the different sparse unmixing methods.

In this section, for the real data experiment, the sparsity of the solution and the reconstruction error of the hyperspectral


Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm    SUnSAL    CLSUnSAL   SUnSAL-TV   SOMP     SMP      RSFoBa   BSOMP
Sparsity     17.5629   20.7037    20.472      13.684   15.102   12.702   11.810
RMSE         0.0051    0.0035     0.0028      0.0040   0.0034   0.0032   0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of block size (curves for SMP, SOMP, RSFoBa, and BSOMP).

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size (curves for SMP, SOMP, RSFoBa, and BSOMP).

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances to avoid counting negligible values. The reconstruction error is calculated from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{j=1}^{K}\bigl(Y_{j}-\hat{Y}_{j}\bigr)^{2}}, \quad (14)$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. A smaller RMSE reflects better unmixing performance of the algorithm.
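Both evaluation measures are straightforward to compute. The snippet below is a sketch: reading $(Y_j-\hat{Y}_j)^2$ in (14) as the squared Euclidean norm of pixel $j$'s reconstruction error, and counting abundances above the 0.001 threshold for the sparsity, are our interpretations of the text.

```python
import numpy as np

def rmse(Y, Y_hat):
    """Eq. (14): RMSE over the K pixels, with (Y_j - Yhat_j)^2 read as the
    squared Euclidean norm of each pixel's reconstruction error."""
    K = Y.shape[1]                      # one column per pixel
    return np.sqrt(np.sum((Y - Y_hat) ** 2) / K)

def avg_sparsity(X, tol=1e-3):
    """Average number of abundances per pixel above the 0.001 threshold,
    the sparsity measure reported in Table 3."""
    return float(np.mean(np.sum(np.abs(X) > tol, axis=0)))
```

With the abundance matrix `X` holding one pixel per column, `avg_sparsity(X)` reproduces the per-pixel counts of Table 3, and `rmse(Y, A @ X)` gives the reconstruction error for a library `A`.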

The regularization parameter used for CLSUnSAL was empirically set to 0.01, while the one corresponding to SUnSAL was set to 0.001. The parameters λ and λ_TV of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold parameter σ of SOMP and SMP was set to 0.01. The stopping threshold parameters σ and δ of BSOMP were set to 0.01 and 0.1. The parameters λ and σ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction error obtained by the different algorithms. From Table 3 we can find that BSOMP, SOMP, SMP, and SUnSAL yield sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves nearly as low a reconstruction error as SUnSAL-TV but obtains a much sparser solution. In general, BSOMP obtains the sparsest solutions while achieving low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For comparison, the classification maps of the three materials extracted from the Tricorder 3.3 software are also presented in Figure 13. It can be observed that the abundances estimated by BSOMP are comparable or higher in the regions assigned to the respective minerals, in comparison with the other considered algorithms, with regard to the classification maps produced by the Tricorder 3.3 software. We also analyzed the sparsity of the abundances estimated by BSOMP: the average


Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS, Clark and Swayze, Tetracorder 3.3 product). The legend groups the mapped materials into sulfates (K-Alunite at 150°C, 250°C, and 450°C; Na82-Alunite 100°C; Na40-Alunite 400°C; Jarosite; Alunite + Kaolinite and/or Muscovite), kaolinite-group clays (Kaolinite wxl, Kaolinite pxl, Kaolinite + Smectite or Muscovite, Halloysite, Dickite), carbonates (Calcite, Calcite + Kaolinite, Calcite + Montmorillonite), clays (Na-Montmorillonite, Nontronite (Fe clay)), and other minerals (Low-, Med-, and High-Al Muscovite, Chlorite + Muscovite/Montmorillonite, Chlorite, Buddingtonite, Chalcedony (OH Qtz), Pyrophyllite + Alunite).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 11.810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the qualitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the previously chosen endmembers' reliability and then deletes the unreliable endmembers. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still over this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the previously chosen endmembers' reliability and delete the nonexisting endmembers, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former adopts a backward step over the whole hyperspectral data, while the latter adopts a backward step in each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial to the estimation effectiveness.


Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L\times K}$ and the available spectral library matrix $A \in R^{L\times m}$:

(i) $A_{t}$ is a submatrix of the spectral library $A$, where $A_{t}$ contains the $k$ columns indexed in $S_{t}$ and $\bar{A}_{t}$ contains the remaining columns, so that $A = [A_{t}, \bar{A}_{t}]$.

(ii) $A_{t\setminus\mathrm{opt}}$ is a submatrix of $A_{t}$ which contains the remaining $(k-T)$ columns indexed in $S_{t}\setminus S_{\mathrm{opt}}$.

(iii) $X_{t\setminus\mathrm{opt}}$ is a submatrix of $X_{t}$ and contains the rows listed in $S_{t}\setminus S_{\mathrm{opt}}$.

Now we will prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:

$$P_{\mathrm{opt}} = A_{\mathrm{opt}}\,(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}})^{-1}A_{\mathrm{opt}}^{*}, \quad (A.1)$$

where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_{t}$ are orthogonal to the columns of $R_{t}$ ($R_{t} = Y - Y_{t}$), we have

$$\|A^{*}R_{t}\|_{\infty,\infty} = \|\bar{A}_{t}^{*}R_{t}\|_{\infty,\infty}. \quad (A.2)$$

The maximum correlation between $\bar{A}_{t}$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have

$$\|\bar{A}_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.3)$$

Invoke (A.2) and (A.3) and apply the triangle inequality to get

$$\|A^{*}R_{t}\|_{\infty,\infty} = \|\bar{A}_{t}^{*}(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_{t})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} + \|\bar{A}_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.4)$$

Then the matrix $Y_{t} - Y_{\mathrm{opt}}$ can be expressed in the following manner:

$$Y_{t} - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}})\,Y_{t} = (I - P_{\mathrm{opt}})\,A_{t}X_{t} = (I - P_{\mathrm{opt}})\,A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}. \quad (A.5)$$

The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:

$$\|\bar{A}_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|\bar{A}_{t}^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}\|_{\infty,\infty} \le \bigl[\|\bar{A}_{t}^{*}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty} + \|\bar{A}_{t}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty}\bigr]\times\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.6)$$

We will apply the cumulative coherence function to deduce the estimates of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_{t}^{*}A_{t\setminus\mathrm{opt}}$ lists the inner products between a spectral endmember and $(k-T)$ distinct endmembers, so we have

$$\|\bar{A}_{t}^{*}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty} \le \mu(k-T). \quad (A.7)$$

(2) Use the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoke (A.1) to get

$$\|\bar{A}_{t}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty} \le \|\bar{A}_{t}^{*}A_{\mathrm{opt}}\|_{\infty,\infty}\times\|(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}})^{-1}\|_{\infty,\infty}\times\|A_{\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.8)$$

(3) Making use of the same deduction as in step (1), we have

$$\|\bar{A}_{t}^{*}A_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T), \qquad \|A_{\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty} \le \mu(k-T). \quad (A.9)$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$ has a unit diagonal and we can write

$$A_{\mathrm{opt}}^{*}A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \quad (A.10)$$

where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. Then we can expand the inverse in a Neumann series to get

$$\|(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}})^{-1}\|_{\infty,\infty} = \|(I - X_{\mathrm{opt}})^{-1}\|_{\infty,\infty} \le \frac{1}{1 - \|X_{\mathrm{opt}}\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \quad (A.11)$$

(5) Introducing the bounds from steps (1) to (4) into (A.6), we can obtain

$$\|\bar{A}_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \left[\mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1 - \mu(T)}\right]\times\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.12)$$

Simplify expression (A.12) to conclude that

$$\|\bar{A}_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k-T)}{1 - \mu(T)}\,\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.13)$$
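The simplification from (A.12) to (A.13) is a one-line algebraic step, made explicit here:

$$\mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1-\mu(T)} = \mu(k-T)\,\frac{(1-\mu(T))+\mu(T)}{1-\mu(T)} = \frac{\mu(k-T)}{1-\mu(T)}.$$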

In order to produce an upper bound on $\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:

$$\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{t}^{*}(Y - Y_{t} + Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A_{t}^{*}(Y - Y_{t})\|_{\infty,\infty} + \|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty}, \quad (A.14)$$

since $(Y - Y_{t})$ is orthogonal to the range of $A_{t}$. Introducing (A.5) into (A.14), we can get

$$\|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{t\setminus\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.15)$$

The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_{t}$ by $A_{t\setminus\mathrm{opt}}$. Let $D = A_{t\setminus\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{t\setminus\mathrm{opt}}$ and rewrite $D = I - (I - D)$; we can expand its inverse in a Neumann series to obtain

$$\|D\,X_{t\setminus\mathrm{opt}}\|_{\infty,\infty} \ge \bigl(1 - \|I - D\|_{\infty,\infty}\bigr)\,\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.16)$$

Recall the definition of $D$ and apply the triangle inequality to obtain

$$\|I - D\|_{\infty,\infty} = \|I - A_{t\setminus\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}} + A_{t\setminus\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty} \le \|I - A_{t\setminus\mathrm{opt}}^{*}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty} + \|A_{t\setminus\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.17)$$


The right-hand side of the inequality can be bounded completely analogously with steps (1)–(5); we can obtain

$$\|I - D\|_{\infty,\infty} \le \mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1 - \mu(T)} = \frac{\mu(k-T)}{1 - \mu(T)}. \quad (A.18)$$

Introducing (A.18) and (A.16) into (A.14), we have

$$\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k-T)}{1 - \mu(T)}\right]\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.19)$$

Using the same deduction as (A.3), it follows that $\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as follows:

$$\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k-T)}{1 - \mu(T)}\right]\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}. \quad (A.20)$$

Introducing (A.20) into (A.13), we can get

$$\|\bar{A}_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k-T)}{1 - \mu(T) - \mu(k-T)}\,\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.21)$$

Introduce (A.21) into (A.4) to get

$$\|A^{*}R_{t}\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k-T)}\,\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.22)$$

Finally, assume we have a bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, so (A.22) can be rewritten as follows:

$$\|A^{*}R_{t}\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k-T)}\,\tau. \quad (A.23)$$

This completes the argument.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.



Mathematical Problems in Engineering 11

Table 3: Sparsity of the abundance vectors and the reconstruction errors.

Algorithm   SUnSAL    CLSUnSAL  SUnSAL-TV  SOMP    SMP     RSFoBa  BSOMP
Sparsity    17.5629   20.7037   20.472     1.3684  1.5102  1.2702  1.1810
RMSE        0.0051    0.0035    0.0028     0.0040  0.0034  0.0032  0.0033

Figure 10: Results on Simulated Data 1 with white noise when the endmember number is eleven: SRE (dB) as a function of block size, for SMP, SOMP, RSFoBa, and BSOMP.

Figure 11: Results on Simulated Data 1 with 30 dB white noise when the endmember number is eleven: number of retained endmembers as a function of block size, for SMP, SOMP, RSFoBa, and BSOMP.

image are used to evaluate the performances of the sparse unmixing algorithms, since the true abundance maps of the real data are not available. The sparsity is obtained by counting the average number of nonzero abundances estimated by each algorithm. As in [22], we define fractional abundances larger than 0.001 as nonzero abundances, to avoid counting negligible values. The reconstruction error is computed from the original hyperspectral image and the one reconstructed by the actual endmembers and their corresponding abundances [32]. The root mean square error (RMSE) is used to evaluate the reconstruction error between the original hyperspectral image and the reconstructed one:

$$\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{j=1}^{K}\bigl(Y_j - \hat{Y}_j\bigr)^2}, \quad (14)$$

where $Y$ denotes the original hyperspectral image and $\hat{Y}$ denotes the reconstructed hyperspectral image. The smaller the RMSE of a sparse unmixing algorithm, the better its unmixing performance.
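Both evaluation metrics are straightforward to compute. The sketch below is illustrative (the function names are ours, and we read the squared difference in (14) as the squared residual summed over all bands of each pixel); it assumes Y and Y_hat are L×K band-by-pixel matrices and X is an m×K abundance matrix:

```python
import numpy as np

def sparsity(X, tol=1e-3):
    """Average number of per-pixel abundances above tol, matching the
    0.001 threshold used to ignore negligible values."""
    return float(np.mean(np.sum(X > tol, axis=0)))

def rmse(Y, Y_hat):
    """Root mean square error between the original image Y and the
    reconstruction Y_hat, averaged over the K pixels (columns)."""
    K = Y.shape[1]
    return float(np.sqrt(np.sum((Y - Y_hat) ** 2) / K))
```

A low RMSE together with a small sparsity value indicates that the data are explained well by only a few endmembers.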

The regularization parameter of CLSUnSAL was empirically set to 0.01, while that of SUnSAL was set to 0.001. The parameters $\lambda$ and $\lambda_{\mathrm{TV}}$ of SUnSAL-TV were both set to 0.001. The block sizes of the SGA methods were empirically set to 10. The stopping threshold $\sigma$ of SOMP and SMP was set to 0.01; the stopping thresholds $\sigma$ and $\delta$ of BSOMP were set to 0.01 and 0.1, respectively; and the parameters $\lambda$ and $\sigma$ of RSFoBa were set to 1 and 0.01. Table 3 shows the sparsity of the abundance vectors and the reconstruction errors obtained by the different algorithms. From Table 3 we can see that BSOMP, SOMP, SMP, and SUnSAL obtain sparser solutions than SUnSAL-TV, although SUnSAL-TV has the lowest reconstruction error. BSOMP achieves a reconstruction error nearly as low as that of SUnSAL-TV while obtaining much sparser solutions. In general, BSOMP obtains the sparsest solutions and achieves low reconstruction errors, which indicates the effectiveness of the proposed algorithm.

For visual comparison, Figure 13 shows the fractional abundance maps estimated by applying the proposed BSOMP, SMP, RSFoBa, SOMP, SUnSAL, and SUnSAL-TV algorithms to the AVIRIS Cuprite data for three different minerals (Alunite, Buddingtonite, and Chalcedony). For reference, the classification maps of the three materials extracted by the Tricorder 3.3 software are also presented in Figure 13. It can be observed that, with regard to the classification maps produced by the Tricorder 3.3 software, the abundances estimated by BSOMP are comparable to or higher than those of the other considered algorithms in the regions assigned to the respective minerals. We also analyzed the sparsity of the abundances estimated by BSOMP; the average


Figure 12: USGS map (Clark and Swayze, Tetracorder 3.3 product, AVIRIS 1995 data) showing the distribution of different minerals in the Cuprite mining district in Nevada. The legend covers sulfates (K-Alunite at 150°C, 250°C, and 450°C; Na-Alunite; Jarosite; Alunite + Kaolinite and/or Muscovite), kaolinite group clays (Kaolinite wxl, Kaolinite pxl, Kaolinite + Smectite or Muscovite, Halloysite, Dickite), carbonates (Calcite, Calcite + Kaolinite, Calcite + Montmorillonite), clays (Na-Montmorillonite, Nontronite), and other minerals (low-, medium-, and high-Al Muscovite, Chlorite, Buddingtonite, Chalcedony, Pyrophyllite + Alunite).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 1.1810 (per pixel). This means that the algorithm uses a minimal number of endmembers to explain the data. Overall, the quantitative and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.

5. Conclusions

In this paper, we presented a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is to incorporate a backtracking step that checks the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic and real data also demonstrate that BSOMP is more effective than the other SGAs for sparse unmixing.

It is worth discussing why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors sparser still over this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa lies in the endmember selection. The SGAs adopt a block-processing strategy to select endmembers efficiently, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to check the reliability of the previously chosen endmembers and to delete the nonexisting ones, so they can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former applies the backward step to the whole hyperspectral data, while the latter applies it within each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers retained by RSFoBa, whereas BSOMP is unaffected by it. In conclusion, BSOMP selects the actual endmembers more accurately than the other SGAs, which benefits the abundance estimation.
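To make the forward-plus-backward structure concrete, here is a minimal sketch of the idea in Python. This is not the authors' exact BSOMP (the correlation rule, the least-squares refit, and the deletion criterion based on δ are simplified, and the function name is ours), but it shows how a backtracking pass over the whole data can discard endmembers that barely contribute to the fit:

```python
import numpy as np

def greedy_unmix_with_backtracking(Y, A, sigma=0.01, delta=0.1, max_iter=20):
    """Sketch: SOMP-style forward selection over the whole image, then a
    backtracking pass deleting endmembers whose removal barely increases
    the residual. Illustrative only, not the exact BSOMP."""
    def fit(idx):
        X = np.linalg.lstsq(A[:, idx], Y, rcond=None)[0]
        return X, np.linalg.norm(Y - A[:, idx] @ X)

    S, R = [], Y.copy()
    for _ in range(max_iter):
        corr = np.sum(np.abs(A.T @ R), axis=1)   # correlation with residual
        corr[S] = -np.inf                        # do not reselect
        S.append(int(np.argmax(corr)))
        X, res = fit(S)
        R = Y - A[:, S] @ X
        if res <= sigma * np.linalg.norm(Y):     # stopping threshold sigma
            break
    # backward step: drop endmembers not needed to explain the data
    base = fit(S)[1]
    for j in list(S):
        trial = [i for i in S if i != j]
        if trial and fit(trial)[1] <= base + delta * np.linalg.norm(Y):
            S = trial
            base = fit(S)[1]
    S = sorted(S)
    return S, fit(S)[0]
```

Because the backward pass refits on the whole data, the retained support does not depend on any block partition, which mirrors the behavior observed in Figures 10 and 11.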



Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by the Tricorder 3.3 software for the AVIRIS Cuprite scene. From top to bottom, the rows show the maps produced or estimated by the Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left to right, the columns correspond to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

$A_t$ is the submatrix of the spectral library $A$ containing the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

$A_{\sim\mathrm{opt}}$ is the submatrix of $A_t$ containing the remaining $(k-T)$ columns, indexed in $S_t \setminus S_{\mathrm{opt}}$.

$X_{\sim\mathrm{opt}}$ is the submatrix of $X_t$ containing the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}} \bigl(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\bigr)^{-1} A_{\mathrm{opt}}^{*}, \quad (A.1)$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ (with $R_t = Y - Y_t$), we have
$$\|A^{*}R_t\|_{\infty,\infty} = \|\bar{A}_t^{*}R_t\|_{\infty,\infty}. \quad (A.2)$$

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the whole spectral library $A$ and $R_{\mathrm{opt}}$, so
$$\|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.3)$$

Invoking (A.2) and (A.3) and applying the triangle inequality gives
$$\|A^{*}R_t\|_{\infty,\infty} = \|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t)\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} + \|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.4)$$

The matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed as
$$Y_t - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}})Y_t = (I - P_{\mathrm{opt}})A_t X_t = (I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}. \quad (A.5)$$
The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|\bar{A}_t^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \bigl[\|\bar{A}_t^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} + \|\bar{A}_t^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty}\bigr] \|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.6)$$

We now apply the cumulative coherence function to estimate (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*}A_{\sim\mathrm{opt}}$ lists the inner products between one spectral signature and $(k-T)$ distinct endmembers, so
$$\|\bar{A}_t^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \mu(k-T). \quad (A.7)$$

(2) Using the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoking (A.1),
$$\|\bar{A}_t^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \|\bar{A}_t^{*}A_{\mathrm{opt}}\|_{\infty,\infty} \, \bigl\|\bigl(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\bigr)^{-1}\bigr\|_{\infty,\infty} \, \|A_{\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.8)$$

(3) By the same deduction as in step (1),
$$\|\bar{A}_t^{*}A_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T), \qquad \|A_{\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \mu(k-T). \quad (A.9)$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}$, which lists the inner products between $T$ different endmembers, has a unit diagonal, so we can write
$$A_{\mathrm{opt}}^{*}A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \quad (A.10)$$
where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. Expanding the inverse in a Neumann series then gives
$$\bigl\|\bigl(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\bigr)^{-1}\bigr\|_{\infty,\infty} = \bigl\|(I - X_{\mathrm{opt}})^{-1}\bigr\|_{\infty,\infty} \le \frac{1}{1 - \|X_{\mathrm{opt}}\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \quad (A.11)$$

(5) Substituting the bounds from steps (1)–(4) into (A.6), we obtain
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \left[\mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1 - \mu(T)}\right] \|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.12)$$

Simplifying (A.12) yields
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k-T)}{1 - \mu(T)} \|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.13)$$

To produce an upper bound on $\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}$, we use the triangle inequality:
$$\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_t^{*}(Y - Y_t + Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A_t^{*}(Y - Y_t)\|_{\infty,\infty} + \|A_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty}, \quad (A.14)$$
where the last equality holds because $(Y - Y_t)$ is orthogonal to the range of $A_t$. Introducing (A.5) into (A.14), we get
$$\|A_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.15)$$
The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so $A_t$ can be replaced by $A_{\sim\mathrm{opt}}$. Let $D = A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}$ and rewrite $D = I - (I - D)$; expanding its inverse in a Neumann series, we obtain
$$\|D X_{\sim\mathrm{opt}}\|_{\infty,\infty} \ge \bigl(1 - \|I - D\|_{\infty,\infty}\bigr) \|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.16)$$

Recalling the definition of $D$ and applying the triangle inequality,
$$\|I - D\|_{\infty,\infty} = \|I - A_{\sim\mathrm{opt}}^{*}A_{\sim\mathrm{opt}} + A_{\sim\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \|I - A_{\sim\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} + \|A_{\sim\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.17)$$

The right-hand side can be bounded completely analogously to steps (1)–(5):
$$\|I - D\|_{\infty,\infty} \le \mu(k-T) + \frac{\mu(T)\,\mu(k-T)}{1 - \mu(T)} = \frac{\mu(k-T)}{1 - \mu(T)}. \quad (A.18)$$

Introducing (A.18) and (A.16) into (A.14), we have
$$\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k-T)}{1 - \mu(T)}\right] \|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.19)$$

By the same deduction as (A.3), $\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as
$$\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k-T)}{1 - \mu(T)}\right] \|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \quad (A.20)$$

Introducing (A.20) into (A.13), we get
$$\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k-T)}{1 - \mu(T) - \mu(k-T)} \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.21)$$

Introducing (A.21) into (A.4) gives
$$\|A^{*}R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k-T)} \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \quad (A.22)$$

Finally, assuming the bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$, (A.22) can be rewritten as
$$\|A^{*}R_t\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k-T)} \, \tau. \quad (A.23)$$

This completes the argument.
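The cumulative coherence $\mu(\cdot)$ that drives these bounds is easy to evaluate for a given normalized library. A small illustrative sketch (the function name is ours; the columns of A are assumed unit-norm):

```python
import numpy as np

def cumulative_coherence(A, T):
    """mu(T): the largest sum of |<a_j, a_i>| over any T columns i != j,
    maximized over columns j (columns of A assumed unit-norm)."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)              # exclude each column itself
    top = -np.sort(-G, axis=1)[:, :T]     # T largest correlations per row
    return float(np.max(np.sum(top, axis=1)))
```

With mu_T = cumulative_coherence(A, T) and mu_kT = cumulative_coherence(A, k - T), the final bound (A.23) evaluates to (1 - mu_T) / (1 - mu_T - mu_kT) * tau whenever the denominator is positive.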

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the CLSUnSAL, SUnSAL, and SUnSAL-TV algorithms. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the SMP and RSFoBa algorithms. Moreover, the authors thank the anonymous reviewers for their helpful comments, which improved this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise, et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.


Page 12: Research Article Backtracking-Based Simultaneous ...downloads.hindawi.com/journals/mpe/2015/842017.pdf · Research Article Backtracking-Based Simultaneous Orthogonal Matching Pursuit

12 Mathematical Problems in Engineering

Figure 12: USGS map showing the distribution of different minerals in the Cuprite mining district in Nevada (Cuprite, Nevada, AVIRIS 1995 data; USGS Tetracorder 3.3 product, Clark and Swayze). The map legend covers sulfates (K-Alunite 150C, K-Alunite 250C, K-Alunite 450C, Na82-Alunite 100C, Na40-Alunite 400C, Jarosite, Alunite + Kaolinite and/or Muscovite), kaolinite group clays (Kaolinite wxl, Kaolinite pxl, Kaolinite + Smectite or Muscovite, Halloysite, Dickite), carbonates (Calcite, Calcite + Kaolinite, Calcite + Montmorillonite), clays (Na-Montmorillonite, Nontronite (Fe clay)), and other minerals (Low-Al Muscovite, Med-Al Muscovite, High-Al Muscovite, Chlorite + Musc/Mont, Chlorite, Buddingtonite, Chalcedony OH Qtz, Pyrophyllite + Alunite).

number of endmembers with abundances higher than 0.001 estimated by BSOMP is 1.1810 (per pixel). This means that the algorithm uses a minimum number of endmembers to explain the data. Overall, the quantitative results and visual results reported in this section indicate that the BSOMP algorithm is more effective for sparse unmixing than the other algorithms.
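The per-pixel endmember count used as a sparsity metric here can be computed directly from an estimated abundance matrix. A minimal NumPy sketch (the function and array names are illustrative, not from the paper's code; the 0.001 threshold follows the text):

```python
import numpy as np

def mean_endmembers_per_pixel(X, threshold=1e-3):
    """Average number of endmembers whose estimated abundance exceeds
    `threshold`, for an (m x K) abundance matrix X with m library
    signatures and K pixels."""
    return float(np.mean(np.sum(X > threshold, axis=0)))

# Toy abundance matrix: 4 library signatures, 3 pixels.
X = np.array([[0.9, 0.0,    0.5],
              [0.1, 0.0,    0.5],
              [0.0, 1.0,    0.0],
              [0.0, 0.0005, 0.0]])
print(mean_endmembers_per_pixel(X))  # per-pixel counts 2, 1, 2 -> mean 5/3
```

The 0.0005 entry falls below the threshold and is not counted, which mirrors how near-zero abundances of spurious endmembers are excluded from the metric.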

5 Conclusions

In this paper, we present a new SGA, termed BSOMP, for sparse unmixing of hyperspectral data. The main idea of the proposed algorithm is that BSOMP incorporates a backtracking step to detect the reliability of the previously chosen endmembers and then deletes the unreliable ones. It is proved that BSOMP can recover the optimal endmembers from the spectral library under certain conditions. Experiments on both synthetic data and real data also demonstrate that the BSOMP algorithm is more effective than the other SGAs for sparse unmixing.

The topic we want to discuss is why BSOMP is effective for sparse unmixing. Endmember selection and abundance estimation are the two main parts of the SGAs. The number of potential endmembers selected by the SGAs is far smaller than the number of spectral signatures in the spectral library, and the abundance estimation method then makes the abundance vectors even sparser over this relatively small endmember subset. The main difference among BSOMP, SOMP, SMP, and RSFoBa is the endmember selection. The SGAs adopt a block-processing strategy to efficiently select endmembers, but the numbers of endmembers selected are still larger than the actual number, and the nonexisting endmembers have a negative effect on the estimation of the abundances corresponding to the actual endmembers.

Compared with SOMP and SMP, BSOMP and RSFoBa adopt a backward step to detect the reliability of the previously chosen endmembers and delete the nonexisting ones, so BSOMP and RSFoBa can select the actual endmembers more accurately than SOMP and SMP. The main difference between BSOMP and RSFoBa is that the former applies the backward step to the whole hyperspectral data while the latter applies it within each block. As shown in Figures 10 and 11, the block size significantly affects the number of endmembers obtained by RSFoBa, while BSOMP is not affected by the block size. In conclusion, BSOMP can select the actual endmembers more accurately than the other SGAs, which is beneficial for improving estimation effectiveness.
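The backward step discussed above can be sketched in code. This is an illustrative reconstruction under simplifying assumptions (least-squares refit over the whole image, pruning by smallest abundance-row energy), not the authors' reference implementation of BSOMP:

```python
import numpy as np

def backtrack_prune(A, Y, support, max_size):
    """Illustrative backward step: while the current support holds more
    than `max_size` endmembers, refit abundances by least squares on the
    whole image Y and delete the endmember whose abundance-row energy is
    smallest (treated as 'unreliable'), then repeat."""
    support = list(support)
    while len(support) > max_size:
        X, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        energies = np.sum(X * X, axis=1)       # contribution of each kept endmember
        support.pop(int(np.argmin(energies)))  # drop the weakest one
    return support

# Toy example: Y is generated by library columns 0 and 2 only, but the
# forward stage has (wrongly) also selected column 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
A /= np.linalg.norm(A, axis=0)                     # unit-norm library columns
Y = A[:, [0, 2]] @ rng.uniform(0.2, 1.0, (2, 50))  # 50 mixed pixels
print(backtrack_prune(A, Y, [0, 1, 2], 2))         # [0, 2]
```

Because the refit is done over the whole image rather than per block, this sketch also illustrates why, unlike RSFoBa, such a global backward step is insensitive to the block size used in the forward stage.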


Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions which will be used during the proof.

Definition A.1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

$A_t$ is a submatrix of the spectral library $A$, where $A_t$ contains the $k$ columns indexed in $S_t$ and $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

$A_{\sim\mathrm{opt}}$ is a submatrix of $A_t$ which contains the remaining $(k - T)$ columns indexed in $S_t \setminus S_{\mathrm{opt}}$.

$X_{\sim\mathrm{opt}}$ is a submatrix of $X_t$ and contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we will prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}} \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} A_{\mathrm{opt}}^{*}, \tag{A.1}$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ (with $R_t = Y - Y_t$), we have
$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*} R_t\right\|_{\infty,\infty}. \tag{A.2}$$
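The projector defined in (A.1) can be verified numerically. A small NumPy check (real-valued data, so the conjugate transpose reduces to the transpose; the sizes chosen here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A_opt = rng.standard_normal((50, 4))   # 50 bands, T = 4 selected endmembers

# P_opt = A_opt (A_opt^T A_opt)^{-1} A_opt^T, as in (A.1)
P_opt = A_opt @ np.linalg.inv(A_opt.T @ A_opt) @ A_opt.T

# An orthogonal projector is symmetric and idempotent, and I - P_opt
# annihilates the range of A_opt (the facts used around (A.2) and (A.5)).
print(np.allclose(P_opt, P_opt.T))                     # True
print(np.allclose(P_opt @ P_opt, P_opt))               # True
print(np.allclose((np.eye(50) - P_opt) @ A_opt, 0.0))  # True
```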

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
$$\left\|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.3}$$

Invoke (A.2) and (A.3) and apply the triangle inequality to get
$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t)\right\|_{\infty,\infty} \le \left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.4}$$

Then the matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:
$$Y_t - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}})\,Y_t = (I - P_{\mathrm{opt}})\,A_t X_t = (I - P_{\mathrm{opt}})\,A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}. \tag{A.5}$$
The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A.4) can be rewritten as follows:
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}(I - P_{\mathrm{opt}}) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left[\left\|\bar{A}_t^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}\right]\left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.6}$$

We will apply the cumulative coherence function to deduce the estimates of (A.6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{\sim\mathrm{opt}}$ lists the inner products between a spectral endmember and $(k - T)$ distinct endmembers, so we have
$$\left\|\bar{A}_t^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A.7}$$
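For a library with unit-norm columns, the cumulative coherence $\mu(s)$ used in these bounds is the largest possible sum of $s$ absolute inner products between one column and $s$ other distinct columns. A direct NumPy sketch of this quantity (brute force over columns; assumes the columns are already normalized):

```python
import numpy as np

def cumulative_coherence(A, s):
    """mu(s): max over columns j of the sum of the s largest absolute
    inner products between column j and the remaining columns.
    Assumes the columns of A are unit-norm."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)   # exclude the trivial <a_j, a_j> = 1 terms
    G = np.sort(G, axis=0)     # ascending within each column
    return float(np.max(G[-s:].sum(axis=0)))

# Two unit-norm columns with inner product 0.6, so mu(1) = 0.6.
A = np.array([[1.0, 0.6],
              [0.0, 0.8]])
print(cumulative_coherence(A, 1))
```

Note that $\mu(1)$ coincides with the ordinary mutual coherence of the library and that $\mu(s)$ is nondecreasing in $s$, which is why the bounds above worsen as the selected support grows.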

(2) Use the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoke (A.1) to get
$$\left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \left\|A_{\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.8}$$

(3) Making use of the same deduction as in step (1), we have
$$\left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(T), \qquad \left\|A_{\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \mu(k - T). \tag{A.9}$$

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so it has a unit diagonal and can be written as
$$A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A.10}$$
where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. We can then expand the inverse in a Neumann series to get
$$\left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} = \left\|\left(I - X_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \le \frac{1}{1 - \left\|X_{\mathrm{opt}}\right\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A.11}$$
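The Neumann-series bound used in (A.11), $\|(I - X)^{-1}\| \le 1/(1 - \|X\|)$ for any submultiplicative matrix norm with $\|X\| < 1$, can be sanity-checked numerically with the $\infty$-norm (maximum absolute row sum):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 6))
X *= 0.5 / np.linalg.norm(X, np.inf)   # rescale so ||X||_inf = 0.5 < 1

lhs = np.linalg.norm(np.linalg.inv(np.eye(6) - X), np.inf)
rhs = 1.0 / (1.0 - np.linalg.norm(X, np.inf))
print(lhs <= rhs + 1e-12)  # True: the Neumann bound holds
```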

(5) Substituting the bounds from steps (1)-(4) into (A.6), we obtain
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \left[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\right]\left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.12}$$
Simplifying (A.12), we conclude that
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)}\left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.13}$$

In order to produce an upper bound on $\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:
$$\left\|A_{\sim\mathrm{opt}}^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|A_{\sim\mathrm{opt}}^{*}(Y - Y_t + Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \left\|A_{\sim\mathrm{opt}}^{*}(Y - Y_t)\right\|_{\infty,\infty} + \left\|A_{\sim\mathrm{opt}}^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \left\|A_{\sim\mathrm{opt}}^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.14}$$

The last inequality holds since $(Y - Y_t)$ is orthogonal to the range of $A_t$ and the columns of $A_{\sim\mathrm{opt}}$ belong to $A_t$, so the first term vanishes. Introducing (A.5) into (A.14), we can get
$$\left\|A_{\sim\mathrm{opt}}^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} = \left\|A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}}) A_{\sim\mathrm{opt}} X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.15}$$
The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $\bar{A}_t$ by $A_{\sim\mathrm{opt}}$. Let $D = A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}}) A_{\sim\mathrm{opt}}$ and rewrite $D = I - (I - D)$; expanding its inverse in a Neumann series, we obtain
$$\left\|D X_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \ge \left(1 - \left\|I - D\right\|_{\infty,\infty}\right)\left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.16}$$

Recall the definition of $D$ and apply the triangle inequality to obtain
$$\left\|I - D\right\|_{\infty,\infty} = \left\|I - A_{\sim\mathrm{opt}}^{*} A_{\sim\mathrm{opt}} + A_{\sim\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|I - A_{\sim\mathrm{opt}}^{*} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty} + \left\|A_{\sim\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.17}$$
The right-hand side of this inequality can be bounded in complete analogy with steps (1)-(5), which gives
$$\left\|I - D\right\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \tag{A.18}$$

Introducing (A.18) and (A.16) into (A.14), we have
$$\left\|A_{\sim\mathrm{opt}}^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right]\left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.19}$$

Using the same deduction as in (A.3), it follows that $\|A_{\sim\mathrm{opt}}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A.19) can be rewritten as follows:
$$\left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right]\left\|X_{\sim\mathrm{opt}}\right\|_{\infty,\infty}. \tag{A.20}$$

Introducing (A.20) into (A.13), we can get
$$\left\|\bar{A}_t^{*}(Y_t - Y_{\mathrm{opt}})\right\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)}\left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.21}$$

Introducing (A.21) into (A.4) yields
$$\left\|A^{*} R_t\right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\left\|A^{*}(Y - Y_{\mathrm{opt}})\right\|_{\infty,\infty}. \tag{A.22}$$

Finally, assume we have a bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$; then (A.22) can be rewritten as
$$\left\|A^{*} R_t\right\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\,\tau. \tag{A.23}$$
This completes the argument.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the algorithms of CLSUnSAL, SUnSAL, and SUnSAL-TV. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the algorithms of SMP and RSFoBa. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.
[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.
[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.
[4] M. Grana, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10-12, pp. 2111–2120, 2009.
[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.
[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.
[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.
[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.
[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, Grenoble, France, September 2009.
[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.
[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.
[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.
[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.
[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.
[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.
[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.
[18] I. Marques and M. Grana, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.
[19] I. Marques and M. Grana, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.
[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.
[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.
[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.
[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.
[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.
[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.
[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.
[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.
[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.
[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.
[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.
[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.
[32] J. Plaza, A. Plaza, P. Martinez, and R. Perez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.



(A17)

16 Mathematical Problems in Engineering

The right-hand side of inequality can be deduced completelyanalogous with steps (1)ndash(5) we can obtain

119868 minus 119863infininfin le 120583 (119896 minus 119879) +120583 (119879) 120583 (119896 minus 119879)

1 minus 120583 (119879)

=120583 (119896 minus 119879)

1 minus 120583 (119879)

(A18)

Introduce (A18) and (A16) into (A14) we have10038171003817100381710038171003817119860lowast

119905(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

ge [1 minus120583 (119896 minus 119879)

1 minus 120583 (119879)]10038171003817100381710038171003817119883simopt10038171003817100381710038171003817infininfin

(A19)

Using the same deduction as (A3) it follows that 119860lowast119905(119884 minus

119884opt)infininfin le 119860lowast(119884minus119884opt)infininfin So (A19) can be rewritten as

follows10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

ge [1 minus120583 (119896 minus 119879)

1 minus 120583 (119879)]10038171003817100381710038171003817119883simopt10038171003817100381710038171003817infininfin

(A20)

Introduce (A20) into (A13) we can get10038171003817100381710038171003817119860lowast

119905(119884119905minus 119884opt)

10038171003817100381710038171003817infininfin

le120583 (119896 minus 119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)

10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

(A21)

Introduce (A21) into (A4) to get1003817100381710038171003817119860lowast119877119905

1003817100381710038171003817infininfin

le1 minus 120583 (119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)

10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

(A22)

Finally assume we have a bound 119860lowast(119884 minus 119884opt)infininfin le 120591 so(A22) can be rewritten as follows

1003817100381710038171003817119860lowast119877119905

1003817100381710038171003817infininfin le1 minus 120583 (119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)120591 (A23)

This completes the argument



14 Mathematical Problems in Engineering


Figure 13: Abundance maps estimated by SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, and the distribution maps produced by Tricorder 3.3 software for the AVIRIS Cuprite scene. From top row to bottom row are the maps produced or estimated by Tricorder software, SUnSAL, CLSUnSAL, SUnSAL-TV, SOMP, SMP, RSFoBa, and BSOMP, respectively. From left column to right column are the maps corresponding to Alunite, Buddingtonite, and Chalcedony, respectively.

Appendix

Proof of Theorem 7

First, we introduce some definitions that will be used during the proof.

Definition A1. Given the hyperspectral image data matrix $Y \in R^{L \times K}$ and the available spectral library matrix $A \in R^{L \times m}$:

$A_t$ is the submatrix of the spectral library $A$ that contains the $k$ columns indexed in $S_t$; $\bar{A}_t$ contains the remaining columns, so that $A = [A_t, \bar{A}_t]$.

$A_{t\setminus\mathrm{opt}}$ is the submatrix of $A_t$ that contains the remaining $(k - T)$ columns, indexed in $S_t \setminus S_{\mathrm{opt}}$.

$X_{t\setminus\mathrm{opt}}$ is the submatrix of $X_t$ that contains the rows listed in $S_t \setminus S_{\mathrm{opt}}$.

Now we prove the theorem on the performance of the algorithm.

Proof. Let $P_{\mathrm{opt}}$ denote the orthogonal projector onto the range of $A_{\mathrm{opt}}$:
$$P_{\mathrm{opt}} = A_{\mathrm{opt}} \left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1} A_{\mathrm{opt}}^{*}, \qquad (A1)$$
where $A_{\mathrm{opt}}^{*}$ denotes the conjugate transpose of $A_{\mathrm{opt}}$. Since the columns of $A_t$ are orthogonal to the columns of $R_t$ (where $R_t = Y - Y_t$), we have
$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*} R_t\right\|_{\infty,\infty}. \qquad (A2)$$
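As a sanity check on (A1) and (A2) — not part of the paper itself — the following NumPy sketch with a randomly generated, hypothetical library builds the projector, verifies that the selected columns are orthogonal to the residual, and confirms that the maximum correlation is therefore attained on the remaining columns. All names and sizes are illustrative choices of our own.

```python
import numpy as np

rng = np.random.default_rng(0)
L, m, k = 50, 20, 5                      # bands, library size, selected atoms

A = rng.standard_normal((L, m))
A /= np.linalg.norm(A, axis=0)           # normalize the library columns

A_t = A[:, :k]                           # columns indexed in S_t
A_bar = A[:, k:]                         # the remaining columns

# Orthogonal projector onto range(A_t), cf. (A1)
P_t = A_t @ np.linalg.inv(A_t.T @ A_t) @ A_t.T

Y = rng.standard_normal((L, 3))          # a few mixed pixels
Y_t = P_t @ Y                            # best approximation over A_t
R_t = Y - Y_t                            # residual

# The selected atoms are orthogonal to the residual ...
assert np.allclose(A_t.T @ R_t, 0, atol=1e-10)

def norm_inf_inf(M):
    """The |.|_{inf,inf} norm used in the proof: maximum absolute row sum."""
    return np.abs(M).sum(axis=1).max()

# ... so the max correlation is attained on the remaining columns, cf. (A2)
assert np.isclose(norm_inf_inf(A.T @ R_t), norm_inf_inf(A_bar.T @ R_t))
```

The rows of $A^* R_t$ belonging to $A_t$ vanish, which is exactly why the $\infty,\infty$-norm over the whole library equals the norm over $\bar{A}_t$.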

The maximum correlation between $\bar{A}_t$ (a submatrix of the spectral library $A$) and the residual $R_{\mathrm{opt}} = Y - Y_{\mathrm{opt}}$ is smaller than the maximum correlation between the full spectral library $A$ and the residual $R_{\mathrm{opt}}$, so we have
$$\left\|\bar{A}_t^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \qquad (A3)$$
Invoke (A2) and (A3) and apply the triangle inequality to get
$$\left\|A^{*} R_t\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}\left(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_t\right)\right\|_{\infty,\infty} \le \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \qquad (A4)$$

Then the matrix $Y_t - Y_{\mathrm{opt}}$ can be expressed in the following manner:
$$Y_t - Y_{\mathrm{opt}} = \left(I - P_{\mathrm{opt}}\right) Y_t = \left(I - P_{\mathrm{opt}}\right) A_t X_t = \left(I - P_{\mathrm{opt}}\right) A_{t\setminus\mathrm{opt}} X_{t\setminus\mathrm{opt}}. \qquad (A5)$$
The last equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$. Applying the triangle inequality, the second term of (A4) can be rewritten as follows:
$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|\bar{A}_t^{*}\left(I - P_{\mathrm{opt}}\right) A_{t\setminus\mathrm{opt}} X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left[\left\|\bar{A}_t^{*} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} + \left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}\right] \times \left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A6)$$

We will apply the cumulative coherence function to estimate the two terms in (A6); the deduction has five steps.

(1) Each row of the matrix $\bar{A}_t^{*} A_{t\setminus\mathrm{opt}}$ lists the inner products between one spectral endmember and $(k - T)$ distinct endmembers, so we have
$$\left\|\bar{A}_t^{*} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \mu\left(k - T\right). \qquad (A7)$$

(2) Use the fact that $\|\cdot\|_{\infty,\infty}$ is submultiplicative and invoke (A1) to get
$$\left\|\bar{A}_t^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \times \left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \times \left\|A_{\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A8)$$

(3) Making use of the same deduction as in step (1), we have
$$\left\|\bar{A}_t^{*} A_{\mathrm{opt}}\right\|_{\infty,\infty} \le \mu\left(T\right), \qquad \left\|A_{\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \mu\left(k - T\right). \qquad (A9)$$
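Steps (1)–(3) bound $\infty,\infty$-norms of cross-Gram matrices by the cumulative coherence $\mu(\cdot)$. The sketch below is an illustrative check of bounds of the shape (A7) and (A9) on a random, hypothetical library (our own construction, not the paper's data); it computes $\mu(s)$ by brute force as the worst-case sum of the $s$ largest inner products of any atom with $s$ other atoms.

```python
import numpy as np

rng = np.random.default_rng(1)
L, m = 40, 15
A = rng.standard_normal((L, m))
A /= np.linalg.norm(A, axis=0)           # unit-norm atoms

G = np.abs(A.T @ A)                      # |inner products| between atoms
np.fill_diagonal(G, 0)                   # exclude self-correlations

def mu(s):
    """Cumulative coherence: worst-case sum of the s largest
    |inner products| between one atom and s other atoms."""
    if s == 0:
        return 0.0
    return np.sort(G, axis=1)[:, -s:].sum(axis=1).max()

def norm_inf_inf(M):
    return np.abs(M).sum(axis=1).max()   # maximum absolute row sum

idx = rng.permutation(m)
S_t, rest = idx[:6], idx[6:]             # k = 6 selected atoms
S_opt = S_t[:4]                          # T = 4 of them are "true" endmembers
S_diff = S_t[4:]                         # the k - T = 2 remaining ones

# cf. (A7): each row holds inner products with k - T distinct atoms
assert norm_inf_inf(A[:, rest].T @ A[:, S_diff]) <= mu(2) + 1e-12
# cf. (A9): the same argument applies to A_opt^* A_{t\opt}
assert norm_inf_inf(A[:, S_opt].T @ A[:, S_diff]) <= mu(2) + 1e-12
```

Every row sum of such a cross-Gram matrix is a sum of $(k-T)$ inner products with distinct other atoms, so it can never exceed $\mu(k-T)$, whatever the index sets are.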

(4) Since the columns of $A$ are normalized, the Gram matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ lists the inner products between $T$ different endmembers, so the matrix $A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}$ has a unit diagonal, and we can write
$$A_{\mathrm{opt}}^{*} A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \qquad (A10)$$
where $\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)$. We can then expand the inverse in a Neumann series to get
$$\left\|\left(A_{\mathrm{opt}}^{*} A_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} = \left\|\left(I - X_{\mathrm{opt}}\right)^{-1}\right\|_{\infty,\infty} \le \frac{1}{1 - \left\|X_{\mathrm{opt}}\right\|_{\infty,\infty}} \le \frac{1}{1 - \mu\left(T\right)}. \qquad (A11)$$
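The Neumann-series estimate (A11) holds whenever the perturbation of the identity has $\infty,\infty$-norm below one, since $\|(I-X)^{-1}\| \le \sum_n \|X\|^n = 1/(1-\|X\|)$ for any submultiplicative norm. A small self-contained check on a random unit-diagonal Gram matrix (illustrative only; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 5, 200
A_opt = rng.standard_normal((L, T))
A_opt /= np.linalg.norm(A_opt, axis=0)

G = A_opt.T @ A_opt                      # unit-diagonal Gram matrix, cf. (A10)
X = np.eye(T) - G                        # off-diagonal perturbation

def norm_inf_inf(M):
    return np.abs(M).sum(axis=1).max()   # maximum absolute row sum

x = norm_inf_inf(X)
assert x < 1                             # needed for the Neumann series to converge
# cf. (A11): |(I - X)^{-1}| <= 1 / (1 - |X|)
assert norm_inf_inf(np.linalg.inv(G)) <= 1.0 / (1.0 - x) + 1e-12
```

With $L \gg T$ the off-diagonal inner products are small, so the row sums of $X$ stay well below one and the bound applies.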

(5) Invoke the bounds from steps (1) to (4) into (A6) to obtain
$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left[\mu\left(k - T\right) + \frac{\mu\left(T\right)\mu\left(k - T\right)}{1 - \mu\left(T\right)}\right] \times \left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A12)$$
Simplify expression (A12) to conclude that
$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)} \left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A13)$$
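The step from (A12) to (A13) is the identity $\mu(k-T) + \mu(T)\mu(k-T)/(1-\mu(T)) = \mu(k-T)/(1-\mu(T))$, obtained by putting the bracket over the common denominator $1-\mu(T)$. It can be confirmed exactly with rational arithmetic (a throwaway check; the sample $\mu$ values below are arbitrary):

```python
from fractions import Fraction

def lhs(mu_T, mu_kT):
    # bracketed factor in (A12)
    return mu_kT + mu_T * mu_kT / (1 - mu_T)

def rhs(mu_T, mu_kT):
    # factor in (A13)
    return mu_kT / (1 - mu_T)

# exact equality for a grid of sample coherence values in (0, 1)
for mu_T in (Fraction(1, 10), Fraction(1, 4), Fraction(2, 5)):
    for mu_kT in (Fraction(1, 8), Fraction(3, 10)):
        assert lhs(mu_T, mu_kT) == rhs(mu_T, mu_kT)
```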

In order to produce an upper bound on $\|X_{t\setminus\mathrm{opt}}\|_{\infty,\infty}$, we deduce the following formula by using the triangle inequality:
$$\left\|A_t^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_t^{*}\left(Y - Y_t + Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \left\|A_t^{*}\left(Y - Y_t\right)\right\|_{\infty,\infty} + \left\|A_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}, \qquad (A14)$$
where the last equality holds because $(Y - Y_t)$ is orthogonal to the range of $A_t$, so $A_t^{*}(Y - Y_t) = 0$. Introduce (A5) into (A14) to get
$$\left\|A_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} = \left\|A_{t\setminus\mathrm{opt}}^{*}\left(I - P_{\mathrm{opt}}\right) A_{t\setminus\mathrm{opt}} X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A15)$$

The equality holds because $(I - P_{\mathrm{opt}})$ is orthogonal to the range of $A_{\mathrm{opt}}$, so we can replace $A_t$ by $A_{t\setminus\mathrm{opt}}$. Let $D = A_{t\setminus\mathrm{opt}}^{*}(I - P_{\mathrm{opt}}) A_{t\setminus\mathrm{opt}}$ and rewrite $D = I - (I - D)$; we can expand its inverse in a Neumann series to obtain
$$\left\|D X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \ge \left(1 - \left\|I - D\right\|_{\infty,\infty}\right) \left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A16)$$

Recall the definition of $D$ and apply the triangle inequality to obtain
$$\left\|I - D\right\|_{\infty,\infty} = \left\|I - A_{t\setminus\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}} + A_{t\setminus\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} \le \left\|I - A_{t\setminus\mathrm{opt}}^{*} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty} + \left\|A_{t\setminus\mathrm{opt}}^{*} P_{\mathrm{opt}} A_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A17)$$

The right-hand side of this inequality can be bounded in complete analogy with steps (1)–(5), which gives
$$\left\|I - D\right\|_{\infty,\infty} \le \mu\left(k - T\right) + \frac{\mu\left(T\right)\mu\left(k - T\right)}{1 - \mu\left(T\right)} = \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}. \qquad (A18)$$

Introduce (A18) and (A16) into (A14) to obtain
$$\left\|A_t^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}\right] \left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A19)$$

Using the same deduction as (A3), it follows that $\|A_t^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}$, so (A19) can be rewritten as follows:
$$\left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \ge \left[1 - \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right)}\right] \left\|X_{t\setminus\mathrm{opt}}\right\|_{\infty,\infty}. \qquad (A20)$$

Introduce (A20) into (A13) to get
$$\left\|\bar{A}_t^{*}\left(Y_t - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty} \le \frac{\mu\left(k - T\right)}{1 - \mu\left(T\right) - \mu\left(k - T\right)} \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \qquad (A21)$$

Introduce (A21) into (A4) to get
$$\left\|A^{*} R_t\right\|_{\infty,\infty} \le \frac{1 - \mu\left(T\right)}{1 - \mu\left(T\right) - \mu\left(k - T\right)} \left\|A^{*}\left(Y - Y_{\mathrm{opt}}\right)\right\|_{\infty,\infty}. \qquad (A22)$$

Finally, assume we have a bound $\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau$; then (A22) can be rewritten as follows:
$$\left\|A^{*} R_t\right\|_{\infty,\infty} \le \frac{1 - \mu\left(T\right)}{1 - \mu\left(T\right) - \mu\left(k - T\right)}\,\tau. \qquad (A23)$$

This completes the argument.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the algorithms of CLSUnSAL, SUnSAL, and SUnSAL-TV. The authors would also like to thank Shi Z., Tang W., Duren Z., and Jiang Z. for sharing their codes for the algorithms of SMP and RSFoBa. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, “Spectral unmixing,” IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, “Optimal linear spectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon et al., “Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Grana, I. Villaverde, J. O. Maldonado, and C. Hernandez, “Two lattice computing approaches for the unsupervised segmentation of hyperspectral images,” Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, “Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, “Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing,” Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, “Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts,” IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, “A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing,” IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, “Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images,” in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, “Nonnegative matrix factorization for spectral data analysis,” Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, “An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, “Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, “Sparse demixing of hyperspectral images,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, “Computational methods for sparse solution of linear inverse problems,” Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Grana, “Hybrid sparse linear and lattice method for hyperspectral image unmixing,” in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Grana, “Greedy sparsification WM algorithm for endmember induction in hyperspectral images,” in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, “An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems,” IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Total variation spatial regularization for sparse hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Collaborative sparse unmixing of hyperspectral data,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, “Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing,” in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, “Subspace matching pursuit for sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, “Algorithms for simultaneous sparse approximation. Part I: greedy pursuit,” Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, “Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, “Theoretical results on sparse representations of multiple-measurement vectors,” IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, “Greedy adaptive approximation,” Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, “Greed is good: algorithmic results for sparse approximation,” IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, “Maximal sparsity representation via l1 minimization,” Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martinez, and R. Perez, “H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms,” in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.


Mathematical Problems in Engineering 15

than the maximum correlation between the spectral library \(A\) and the residual \(R_{\mathrm{opt}}\), so we have

\[
\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \tag{A3}
\]

Invoke (A2) and (A3) and apply the triangle inequality to get

\[
\|A^{*}R_{t}\|_{\infty,\infty} = \|A^{*}(Y - Y_{\mathrm{opt}} + Y_{\mathrm{opt}} - Y_{t})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} + \|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty}. \tag{A4}
\]

Then the matrix \(Y_{t} - Y_{\mathrm{opt}}\) can be expressed in the following manner:

\[
Y_{t} - Y_{\mathrm{opt}} = (I - P_{\mathrm{opt}})Y_{t} = (I - P_{\mathrm{opt}})A_{t}X_{t} = (I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}X_{\sim\mathrm{opt}}. \tag{A5}
\]

The last equality holds because \((I - P_{\mathrm{opt}})\) is orthogonal to the range of \(A_{\mathrm{opt}}\). Applying the triangle inequality, the second term of (A4) can be rewritten as follows:

\[
\|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{t}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}X_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \left[\|A_{t}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} + \|A_{t}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty}\right]\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A6}
\]
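The projector identity used in (A5), namely that \((I - P_{\mathrm{opt}})\) annihilates every column of \(A_{\mathrm{opt}}\), is easy to verify numerically. Below is a minimal sketch on a random normalized dictionary; the dimensions are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical optimal subdictionary A_opt: 30 bands, 4 unit-norm columns
# (dimensions are illustrative assumptions only).
A_opt = rng.random((30, 4))
A_opt /= np.linalg.norm(A_opt, axis=0)

# Orthogonal projector onto range(A_opt): P_opt = A_opt (A_opt^* A_opt)^{-1} A_opt^*
P_opt = A_opt @ np.linalg.inv(A_opt.T @ A_opt) @ A_opt.T

# (I - P_opt) sends every column of A_opt to (numerically) zero,
# which is the fact used to drop the A_opt terms in (A5).
residual = (np.eye(30) - P_opt) @ A_opt
assert np.abs(residual).max() < 1e-10
```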

We will apply the cumulative coherence function to deduce the estimates of (A6); the deduction has five steps.

(1) Each row of the matrix \(A_{t}^{*}A_{\sim\mathrm{opt}}\) lists the inner products between a spectral endmember and \((k - T)\) distinct endmembers, so we have

\[
\|A_{t}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \mu(k - T). \tag{A7}
\]
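Step (1)'s estimate can be sanity-checked with a direct implementation of the cumulative coherence function \(\mu(m)\): the maximum, over atoms, of the sum of the \(m\) largest absolute inner products with other atoms. This is an illustrative sketch on a random normalized dictionary (library sizes are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative random "library": 50 bands, 20 unit-norm signatures
# (sizes are arbitrary assumptions, not values from the paper).
A = rng.random((50, 20))
A /= np.linalg.norm(A, axis=0)

G = np.abs(A.T @ A)          # absolute Gram matrix
np.fill_diagonal(G, 0.0)     # drop the unit self inner products

def cumulative_coherence(G, m):
    """mu(m): max over atoms of the sum of the m largest |<a_i, a_j>|, j != i."""
    if m <= 0:
        return 0.0
    largest = -np.sort(-G, axis=1)[:, :m]   # m largest off-diagonal entries per row
    return float(largest.sum(axis=1).max())

mus = [cumulative_coherence(G, m) for m in range(G.shape[0])]
assert all(a <= b + 1e-12 for a, b in zip(mus, mus[1:]))  # mu is nondecreasing

# Step (1): for any index set Lam of size m and any atom i outside Lam,
# the i-th absolute row sum of A_i^* A_Lam is at most mu(m).
Lam = [2, 5, 11, 17]
for i in range(G.shape[0]):
    if i not in Lam:
        assert G[i, Lam].sum() <= cumulative_coherence(G, len(Lam)) + 1e-12
```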

(2) Use the fact that \(\|\cdot\|_{\infty,\infty}\) is submultiplicative and invoke (A1) to get

\[
\|A_{t}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \|A_{t}^{*}A_{\mathrm{opt}}\|_{\infty,\infty}\,\|(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}})^{-1}\|_{\infty,\infty}\,\|A_{\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A8}
\]

(3) Making use of the same deduction as in step (1), we have

\[
\|A_{t}^{*}A_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T), \qquad \|A_{\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \mu(k - T). \tag{A9}
\]

(4) Since the columns of \(A\) are normalized, the Gram matrix \(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\) lists the inner products between \(T\) different endmembers, so the matrix \(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}}\) has a unit diagonal, and we can write

\[
A_{\mathrm{opt}}^{*}A_{\mathrm{opt}} = I - X_{\mathrm{opt}}, \tag{A10}
\]

where \(\|X_{\mathrm{opt}}\|_{\infty,\infty} \le \mu(T)\). We can then expand the inverse in a Neumann series to get

\[
\|(A_{\mathrm{opt}}^{*}A_{\mathrm{opt}})^{-1}\|_{\infty,\infty} = \|(I - X_{\mathrm{opt}})^{-1}\|_{\infty,\infty} \le \frac{1}{1 - \|X_{\mathrm{opt}}\|_{\infty,\infty}} \le \frac{1}{1 - \mu(T)}. \tag{A11}
\]
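The Neumann-series estimate in (A11) holds for any induced operator norm whenever the perturbation has norm below one; a quick numerical check with the \(\infty\)-norm (maximum absolute row sum), on an illustrative random matrix standing in for \(X_{\mathrm{opt}}\):

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_inf(M):
    """Operator infinity-norm: the maximum absolute row sum."""
    return np.abs(M).sum(axis=1).max()

# Random perturbation playing the role of X_opt, scaled so ||X|| = 0.5 < 1,
# mirroring the assumption mu(T) < 1 (values here are illustrative).
X = rng.uniform(-1.0, 1.0, size=(8, 8))
X *= 0.5 / norm_inf(X)

lhs = norm_inf(np.linalg.inv(np.eye(8) - X))
rhs = 1.0 / (1.0 - norm_inf(X))
assert lhs <= rhs + 1e-9   # the Neumann-series estimate of (A11)
```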

(5) Substituting the bounds from steps (1) to (4) into (A6), we obtain

\[
\|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \left[\mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)}\right]\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A12}
\]

Simplify expression (A12), using \(\mu(k-T) + \mu(T)\mu(k-T)/(1-\mu(T)) = \mu(k-T)/(1-\mu(T))\), to conclude that

\[
\|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T)}\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A13}
\]
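The simplification from (A12) to (A13) is a purely algebraic identity; exact rational arithmetic confirms it for several values (the function names below are ours, introduced only for this check):

```python
from fractions import Fraction

def combined(mu_T, mu_kT):
    # bracketed factor in (A12): mu(k-T) + mu(T)mu(k-T) / (1 - mu(T))
    return mu_kT + mu_T * mu_kT / (1 - mu_T)

def simplified(mu_T, mu_kT):
    # factor in (A13): mu(k-T) / (1 - mu(T))
    return mu_kT / (1 - mu_T)

# Exact rational arithmetic confirms the simplification for several values.
for mu_T in (Fraction(1, 10), Fraction(1, 4), Fraction(2, 5)):
    for mu_kT in (Fraction(1, 8), Fraction(1, 3)):
        assert combined(mu_T, mu_kT) == simplified(mu_T, mu_kT)
```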

In order to produce an upper bound on \(\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}\), we expand

\[
\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{t}^{*}(Y - Y_{t} + Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty}, \tag{A14}
\]

where the last equality holds because \(A_{t}^{*}(Y - Y_{t}) = 0\), since \((Y - Y_{t})\) is orthogonal to the range of \(A_{t}\). Introducing (A5) into (A14), we can get

\[
\|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} = \|A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A15}
\]

The equality holds because \((I - P_{\mathrm{opt}})\) is orthogonal to the range of \(A_{\mathrm{opt}}\), so we can replace \(A_{t}\) by \(A_{\sim\mathrm{opt}}\). Let \(D = A_{\sim\mathrm{opt}}^{*}(I - P_{\mathrm{opt}})A_{\sim\mathrm{opt}}\) and rewrite \(D = I - (I - D)\); applying the triangle inequality then yields

\[
\|DX_{\sim\mathrm{opt}}\|_{\infty,\infty} \ge \left(1 - \|I - D\|_{\infty,\infty}\right)\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A16}
\]

Recall the definition of \(D\) and apply the triangle inequality to obtain

\[
\|I - D\|_{\infty,\infty} = \|I - A_{\sim\mathrm{opt}}^{*}A_{\sim\mathrm{opt}} + A_{\sim\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty} \le \|I - A_{\sim\mathrm{opt}}^{*}A_{\sim\mathrm{opt}}\|_{\infty,\infty} + \|A_{\sim\mathrm{opt}}^{*}P_{\mathrm{opt}}A_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A17}
\]


The right-hand side of this inequality can be bounded completely analogously with steps (1)–(5); we obtain

\[
\|I - D\|_{\infty,\infty} \le \mu(k - T) + \frac{\mu(T)\,\mu(k - T)}{1 - \mu(T)} = \frac{\mu(k - T)}{1 - \mu(T)}. \tag{A18}
\]

Introducing (A18) and (A16) into (A14), we have

\[
\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right]\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A19}
\]

Using the same deduction as in (A3), it follows that \(\|A_{t}^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}\), so (A19) can be rewritten as follows:

\[
\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \ge \left[1 - \frac{\mu(k - T)}{1 - \mu(T)}\right]\|X_{\sim\mathrm{opt}}\|_{\infty,\infty}. \tag{A20}
\]

Introducing (A20) into (A13), we can get

\[
\|A_{t}^{*}(Y_{t} - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \frac{\mu(k - T)}{1 - \mu(T) - \mu(k - T)}\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \tag{A21}
\]

Introducing (A21) into (A4) gives

\[
\|A^{*}R_{t}\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty}. \tag{A22}
\]

Finally, assume we have a bound \(\|A^{*}(Y - Y_{\mathrm{opt}})\|_{\infty,\infty} \le \tau\); then (A22) can be rewritten as follows:

\[
\|A^{*}R_{t}\|_{\infty,\infty} \le \frac{1 - \mu(T)}{1 - \mu(T) - \mu(k - T)}\,\tau. \tag{A23}
\]

This completes the argument.
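To make the final constant concrete, here is a small hypothetical helper (the function name is ours, not from the paper) that evaluates the right-hand side of (A23) and enforces the implicit condition \(\mu(T) + \mu(k - T) < 1\) under which the bound is meaningful:

```python
def bsomp_residual_bound(mu_T, mu_kT, tau):
    """Evaluate (1 - mu(T)) / (1 - mu(T) - mu(k - T)) * tau, the (A23) bound."""
    denom = 1.0 - mu_T - mu_kT
    if denom <= 0:
        raise ValueError("the bound requires mu(T) + mu(k - T) < 1")
    return (1.0 - mu_T) * tau / denom

# For mu(T) = 0.2, mu(k - T) = 0.3, tau = 1: (1 - 0.2) / (1 - 0.2 - 0.3) = 1.6
print(bsomp_residual_bound(0.2, 0.3, 1.0))
```

Note how the bound degrades as \(\mu(T) + \mu(k - T)\) approaches one, which matches the intuition that a highly coherent library makes the selected support less trustworthy.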

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been supported by a grant from the NUAA Fundamental Research Funds (NS2013085). The authors would like to thank M. D. Iordache, J. Bioucas-Dias, and A. Plaza for sharing Simulated Data 2 and their codes for the algorithms of CLSUnSAL, SUnSAL, and SUnSAL-TV. The authors would also like to thank Z. Shi, W. Tang, Z. Duren, and Z. Jiang for sharing their codes for the algorithms of SMP and RSFoBa. Moreover, the authors would also like to thank the anonymous reviewers for their helpful comments to improve this paper.

References

[1] N. Keshava and J. F. Mustard, "Spectral unmixing," IEEE Signal Processing Magazine, vol. 19, no. 1, pp. 44–57, 2002.

[2] Y. H. Hu, H. B. Lee, and F. L. Scarpace, "Optimal linear spectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 1, pp. 639–644, 1999.

[3] J. M. Bioucas-Dias, A. Plaza, N. Dobigeon, et al., "Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 354–379, 2012.

[4] M. Graña, I. Villaverde, J. O. Maldonado, and C. Hernandez, "Two lattice computing approaches for the unsupervised segmentation of hyperspectral images," Neurocomputing, vol. 72, no. 10–12, pp. 2111–2120, 2009.

[5] J. Li and J. M. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '08), pp. III-250–III-253, Boston, Mass, USA, July 2008.

[6] H. Pu, B. Wang, and L. Zhang, "Simplex geometry-based abundance estimation algorithm for hyperspectral unmixing," Scientia Sinica Informationis, vol. 42, no. 8, pp. 1019–1033, 2012.

[7] G. Zhou, S. Xie, Z. Yang, J.-M. Yang, and Z. He, "Minimum-volume-constrained nonnegative matrix factorization: enhanced ability of learning parts," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1626–1637, 2011.

[8] T.-H. Chan, C.-Y. Chi, Y.-M. Huang, and W.-K. Ma, "A convex analysis-based minimum-volume enclosing simplex algorithm for hyperspectral unmixing," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4418–4432, 2009.

[9] M. Arngren, M. N. Schmidt, and J. Larsen, "Bayesian nonnegative matrix factorization with volume prior for unmixing of hyperspectral images," in Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing (MLSP '09), pp. 1–6, IEEE, Grenoble, France, September 2009.

[10] V. P. Pauca, J. Piper, and R. J. Plemmons, "Nonnegative matrix factorization for spectral data analysis," Linear Algebra and Its Applications, vol. 416, no. 1, pp. 29–47, 2006.

[11] N. Wang, B. Du, and L. Zhang, "An endmember dissimilarity constrained non-negative matrix factorization method for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, no. 2, pp. 554–569, 2013.

[12] Z. Wu, S. Ye, J. Liu, L. Sun, and Z. Wei, "Sparse non-negative matrix factorization on GPUs for hyperspectral unmixing," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 8, pp. 3640–3649, 2014.

[13] J. B. Greer, "Sparse demixing of hyperspectral images," IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 219–228, 2012.

[14] J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proceedings of the IEEE, vol. 98, no. 6, pp. 948–958, 2010.

[15] M. Elad, Sparse and Redundant Representations, Springer, New York, NY, USA, 2010.

[16] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise, et al., USGS Digital Spectral Library splib06a, U.S. Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 16: Research Article Backtracking-Based Simultaneous ...downloads.hindawi.com/journals/mpe/2015/842017.pdf · Research Article Backtracking-Based Simultaneous Orthogonal Matching Pursuit

16 Mathematical Problems in Engineering

The right-hand side of inequality can be deduced completelyanalogous with steps (1)ndash(5) we can obtain

119868 minus 119863infininfin le 120583 (119896 minus 119879) +120583 (119879) 120583 (119896 minus 119879)

1 minus 120583 (119879)

=120583 (119896 minus 119879)

1 minus 120583 (119879)

(A18)

Introduce (A18) and (A16) into (A14) we have10038171003817100381710038171003817119860lowast

119905(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

ge [1 minus120583 (119896 minus 119879)

1 minus 120583 (119879)]10038171003817100381710038171003817119883simopt10038171003817100381710038171003817infininfin

(A19)

Using the same deduction as (A3) it follows that 119860lowast119905(119884 minus

119884opt)infininfin le 119860lowast(119884minus119884opt)infininfin So (A19) can be rewritten as

follows10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

ge [1 minus120583 (119896 minus 119879)

1 minus 120583 (119879)]10038171003817100381710038171003817119883simopt10038171003817100381710038171003817infininfin

(A20)

Introduce (A20) into (A13) we can get10038171003817100381710038171003817119860lowast

119905(119884119905minus 119884opt)

10038171003817100381710038171003817infininfin

le120583 (119896 minus 119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)

10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

(A21)

Introduce (A21) into (A4) to get1003817100381710038171003817119860lowast119877119905

1003817100381710038171003817infininfin

le1 minus 120583 (119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)

10038171003817100381710038171003817119860lowast(119884 minus 119884opt)

10038171003817100381710038171003817infininfin

(A22)

Finally assume we have a bound 119860lowast(119884 minus 119884opt)infininfin le 120591 so(A22) can be rewritten as follows

1003817100381710038171003817119860lowast119877119905

1003817100381710038171003817infininfin le1 minus 120583 (119879)

1 minus 120583 (119879) minus 120583 (119896 minus 119879)120591 (A23)

This completes the argument

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work has been supported by a grant from the NUAAFundamental Research Funds (NS2013085) The authorswould like to thank M D Iordache J Bioucas-Dias and APlaza for sharing Simulated Data 2 and their codes for thealgorithms of CLSUnSAL SUnSAL and SUnSAL-TV Theauthors would also like to thank Shi Z TangW Duren Z andJiang Z for sharing their codes for the algorithms of SMP andRSFoBaMoreover the authors would also like to thank thoseanonymous reviewers for their helpful comments to improvethis paper

References

[1] N Keshava and J F Mustard ldquoSpectral unmixingrdquo IEEE SignalProcessing Magazine vol 19 no 1 pp 44ndash57 2002

[2] Y H Hu H B Lee and F L Scarpace ldquoOptimal linear spectralunmixingrdquo IEEE Transactions on Geoscience and Remote Sens-ing vol 37 no 1 pp 639ndash644 1999

[3] J M Bioucas-Dias A Plaza N Dobigeon et al ldquoHyperspec-tral unmixing overview geometrical statistical and sparseregression-based approachesrdquo IEEE Journal of Selected Topics inApplied Earth Observations and Remote Sensing vol 5 no 2 pp354ndash379 2012

[4] M Grana I Villaverde J O Maldonado and C HernandezldquoTwo lattice computing approaches for the unsupervised seg-mentation of hyperspectral imagesrdquo Neurocomputing vol 72no 10-12 pp 2111ndash2120 2009

[5] J Li and J M Bioucas-Dias ldquoMinimum volume simplexanalysis a fast algorithm to unmix hyperspectral datardquo inProceedings of the IEEE International Geoscience and RemoteSensing Symposium (IGARSS rsquo08) pp III-250ndashIII-253 BostonMass USA July 2008

[6] H Pu B Wang and L Zhang ldquoSimplex geometry-basedabundance estimation algorithm for hyperspectral unmixingrdquoScientia Sinica Informationis vol 42 no 8 pp 1019ndash1033 2012

[7] G Zhou S Xie Z Yang J-M Yang and Z He ldquoMinimum-volume-constrained nonnegative matrix factorizationenhanced ability of learning partsrdquo IEEE Transactions onNeural Networks vol 22 no 10 pp 1626ndash1637 2011

[8] T-H Chan C-Y Chi Y-M Huang and W-K Ma ldquoA convexanalysis-based minimum-volume enclosing simplex algorithmfor hyperspectral unmixingrdquo IEEE Transactions on Signal Pro-cessing vol 57 no 11 pp 4418ndash4432 2009

[9] M Arngren M N Schmidt and J Larsen ldquoBayesian nonneg-ative matrix factorization with volume prior for unmixing ofhyperspectral imagesrdquo in Proceedings of the IEEE InternationalWorkshop onMachine Learning for Signal Processing (MLSP rsquo09)pp 1ndash6 IEEE Grenoble France September 2009

[10] V P Pauca J Piper and R J Plemmons ldquoNonnegative matrixfactorization for spectral data analysisrdquo Linear Algebra and ItsApplications vol 416 no 1 pp 29ndash47 2006

[11] N Wang B Du and L Zhang ldquoAn endmember dissimilar-ity constrained non-negative matrix factorization method forhyperspectral unmixingrdquo IEEE Journal of Selected Topics inApplied Earth Observations and Remote Sensing vol 6 no 2pp 554ndash569 2013

[12] Z Wu S Ye J Liu L Sun and Z Wei ldquoSparse non-negativematrix factorization on GPUs for hyperspectral unmixingrdquoIEEE Journal of Selected Topics in Applied Earth Observationsand Remote Sensing vol 7 no 8 pp 3640ndash3649 2014

[13] J B Greer ldquoSparse demixing of hyperspectral imagesrdquo IEEETransactions on Image Processing vol 21 no 1 pp 219ndash2282012

[14] J A Tropp and S JWright ldquoComputational methods for sparsesolution of linear inverse problemsrdquoProceedings of the IEEE vol98 no 6 pp 948ndash958 2010

[15] M Elad Sparse and Redundant Representations Springer NewYork NY USA 2010

[16] J A Tropp and A C Gilbert ldquoSignal recovery from randommeasurements via orthogonal matching pursuitrdquo IEEE Trans-actions on Information Theory vol 53 no 12 pp 4655ndash46662007

Mathematical Problems in Engineering 17

[17] W Dai and O Milenkovic ldquoSubspace pursuit for compressivesensing signal reconstructionrdquo IEEE Transactions on Informa-tion Theory vol 55 no 5 pp 2230ndash2249 2009

[18] I Marques and M Grana ldquoHybrid sparse linear and latticemethod for hyperspectral image unmixingrdquo inHybrid ArtificialIntelligence Systems vol 8480 of Lecture Notes in ComputerScience pp 266ndash273 Springer 2014

[19] I Marques and M Grana ldquoGreedy sparsification WM algo-rithm for endmember induction in hyperspectral imagesrdquo inNatural and Artificial Computation in Engineering and MedicalApplications vol 7931 of Lecture Notes in Computer Science pp336ndash344 Springer Berlin Germany 2013

[20] M V Afonso J M Bioucas-Dias and M A Figueiredo ldquoAnaugmented Lagrangian approach to the constrained optimiza-tion formulation of imaging inverse problemsrdquo IEEE Transac-tions on Image Processing vol 20 no 3 pp 681ndash695 2011

[21] M-D Iordache J M Bioucas-Dias and A Plaza ldquoTotal varia-tion spatial regularization for sparse hyperspectral unmixingrdquoIEEE Transactions on Geoscience and Remote Sensing vol 50no 11 pp 4484ndash4502 2012

[22] M-D Iordache JM Bioucas-Dias andA Plaza ldquoCollaborativesparse unmixing of hyperspectral datardquo in Proceedings of theIEEE International Geoscience and Remote Sensing Symposium(IGARSS rsquo12) pp 7488ndash7491 IEEE Munich Germany July2012

[23] J M Bioucas-Dias and M A T Figueiredo ldquoAlternating direc-tion algorithms for constrained sparse regression Applicationto hyperspectral unmixingrdquo in Proceedings of the 2ndWorkshopon Hyperspectral Image and Signal Processing Evolution inRemote Sensing (WHISPERS rsquo10) pp 1ndash4 Reykjavik IcelandJune 2010

[24] Z Shi W Tang Z Duren and Z Jiang ldquoSubspace matchingpursuit for sparse unmixing of hyperspectral datardquo IEEE Trans-actions on Geoscience and Remote Sensing vol 52 no 6 pp3256ndash3274 2014

[25] J A Tropp A C Gilbert and M J Strauss ldquoAlgorithms forsimultaneous sparse approximation Part I greedy pursuitrdquoSignal Processing vol 86 no 3 pp 572ndash588 2006

[26] W Tang Z Shi andYWu ldquoRegularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspec-tral datardquo IEEE Transactions on Geoscience and Remote Sensingvol 52 no 9 pp 5271ndash5288 2014

[27] J Chen and X Huo ldquoTheoretical results on sparse representa-tions of multiple-measurement vectorsrdquo IEEE Transactions onSignal Processing vol 54 no 12 pp 4634ndash4643 2006

[28] G Davis S Mallat and M Avellaneda ldquoGreedy adaptiveapproximationrdquo Journal of Constructive Approximation vol 13no 1 pp 57ndash98 1997

[29] J A Tropp ldquoGreed is good algorithmic results for sparseapproximationrdquo IEEE Transactions on Information Theory vol50 no 10 pp 2231ndash2242 2004

[30] D L Donoho and M Elad ldquoMaximal sparsity representationvia l1 minimizationrdquo Proceedings of the National Academy ofSciences of the United States of America vol 100 pp 2197ndash22022003

[31] R N Clark G A Swayze R Wise et al USGS Digital SpectralLibrary splib06a US Geological Survey Denver Colo USA2007

[32] J Plaza A Plaza P Martınez and R Perez ldquoH-COMP atool for quantitative and comparative analysis of endmemberidentification algorithmsrdquo in Proceedings of the Geoscience andRemote Sensing Symposium (IGARSS rsquo03) vol 1 pp 291ndash2932003

Mathematical Problems in Engineering

[17] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230–2249, 2009.

[18] I. Marques and M. Graña, "Hybrid sparse linear and lattice method for hyperspectral image unmixing," in Hybrid Artificial Intelligence Systems, vol. 8480 of Lecture Notes in Computer Science, pp. 266–273, Springer, 2014.

[19] I. Marques and M. Graña, "Greedy sparsification WM algorithm for endmember induction in hyperspectral images," in Natural and Artificial Computation in Engineering and Medical Applications, vol. 7931 of Lecture Notes in Computer Science, pp. 336–344, Springer, Berlin, Germany, 2013.

[20] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, "An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems," IEEE Transactions on Image Processing, vol. 20, no. 3, pp. 681–695, 2011.

[21] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 11, pp. 4484–4502, 2012.

[22] M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, "Collaborative sparse unmixing of hyperspectral data," in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '12), pp. 7488–7491, IEEE, Munich, Germany, July 2012.

[23] J. M. Bioucas-Dias and M. A. T. Figueiredo, "Alternating direction algorithms for constrained sparse regression: application to hyperspectral unmixing," in Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS '10), pp. 1–4, Reykjavik, Iceland, June 2010.

[24] Z. Shi, W. Tang, Z. Duren, and Z. Jiang, "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3256–3274, 2014.

[25] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: greedy pursuit," Signal Processing, vol. 86, no. 3, pp. 572–588, 2006.

[26] W. Tang, Z. Shi, and Y. Wu, "Regularized simultaneous forward-backward greedy algorithm for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 9, pp. 5271–5288, 2014.

[27] J. Chen and X. Huo, "Theoretical results on sparse representations of multiple-measurement vectors," IEEE Transactions on Signal Processing, vol. 54, no. 12, pp. 4634–4643, 2006.

[28] G. Davis, S. Mallat, and M. Avellaneda, "Greedy adaptive approximation," Journal of Constructive Approximation, vol. 13, no. 1, pp. 57–98, 1997.

[29] J. A. Tropp, "Greed is good: algorithmic results for sparse approximation," IEEE Transactions on Information Theory, vol. 50, no. 10, pp. 2231–2242, 2004.

[30] D. L. Donoho and M. Elad, "Maximal sparsity representation via l1 minimization," Proceedings of the National Academy of Sciences of the United States of America, vol. 100, pp. 2197–2202, 2003.

[31] R. N. Clark, G. A. Swayze, R. Wise, et al., USGS Digital Spectral Library splib06a, US Geological Survey, Denver, Colo, USA, 2007.

[32] J. Plaza, A. Plaza, P. Martínez, and R. Pérez, "H-COMP: a tool for quantitative and comparative analysis of endmember identification algorithms," in Proceedings of the Geoscience and Remote Sensing Symposium (IGARSS '03), vol. 1, pp. 291–293, 2003.
