Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 528281, 8 pages
http://dx.doi.org/10.1155/2013/528281
Research Article
On the Low-Rank Approximation Arising in the Generalized Karhunen-Loeve Transform
Xue-Feng Duan,1 Qing-Wen Wang,2 and Jiao-Fen Li1
1 College of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China
2 Department of Mathematics, Shanghai University, Shanghai 200444, China
Correspondence should be addressed to Qing-Wen Wang; [email protected]
Received 11 March 2013; Accepted 25 April 2013
Academic Editor: Masoud Hajarian
Copyright © 2013 Xue-Feng Duan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We consider the low-rank approximation problem arising in the generalized Karhunen-Loeve transform. A sufficient condition for the existence of a solution is derived, and the analytical expression of the solution is given. A numerical algorithm is proposed to compute the solution. The new algorithm is illustrated by numerical experiments.
1. Introduction
Throughout this paper, we use $R^{m\times n}$ to denote the set of $m\times n$ real matrices. We use $A^{T}$ and $A^{+}$ to denote the transpose and the Moore-Penrose generalized inverse of the matrix $A$, respectively. The symbol $O^{n\times n}$ stands for the set of all $n\times n$ orthogonal matrices. The symbols $\operatorname{rank}(A)$ and $\|A\|_F$ stand for the rank and the Frobenius norm of the matrix $A$, respectively. For $a=(a_i)\in R^{n}$, the symbol $\|a\|$ stands for the $l_2$-norm of the vector $a$, that is, $\|a\|_2=(\sum_{i=1}^{n}a_i^{2})^{1/2}$. The symbol $A^{1/2}$ stands for the square root of the matrix $A$, that is, $(A^{1/2})^{2}=A$. For the random vector $x=(x_i)\in R^{n}$, we use $E\{x_i\}$ to stand for the expected value of the $i$th entry $x_i$, and we use $E\{xx^{T}\}=(r_{ij})_{n\times n}$ to stand for the covariance matrix of the random vector $x$, where $r_{ij}=E[(x_i-E\{x_i\})(x_j-E\{x_j\})]$, $i,j=1,2,\ldots,n$.

The generalized Karhunen-Loeve transform is a well-known signal processing technique for data compression and filtering (see [1-4] for more details). A simple description of the generalized Karhunen-Loeve transform is as follows. Given two random vectors $x\in R^{n}$, $s\in R^{m}$ and an integer $k$ ($1\le k<\min\{m,n\}$), the generalized Karhunen-Loeve transform is presented by a matrix $T^{*}$, which is a solution of the following minimization problem (see [1, 4]):

$$\min_{T\in R^{m\times n},\ \operatorname{rank}(T)=k} E\{\|s-Tx\|^{2}\}. \quad (1)$$

Here the vector $s$ depends on some prior knowledge about the data $x$.
Without the rank constraint on $T$, the solution of the minimization problem (1) is

$$T_0 = R_{sx}R_x^{+}, \quad (2)$$

where $R_{sx}=E\{sx^{T}\}$ and $R_x=E\{xx^{T}\}$. The minimization problem in this case is associated with the well-known concept of Wiener filtering (see [3]).
With the rank constraint on $T$, that is, $\operatorname{rank}(T)=k$, we first consider the cost function of the minimization problem (1). By using the fact $R_{sx}R_xR_x^{+}=R_{sx}$ and the four Moore-Penrose equations of $R_x^{+}$, it is easy to verify that (see also [1])

$$E\{\|s-Tx\|^{2}\} = \operatorname{tr}\{(T-T_0)R_x(T-T_0)^{T}\} + E\{\|s-T_0x\|^{2}\}. \quad (3)$$

Noting that the covariance matrix $R_x$ is symmetric nonnegative definite, it can be factorized as

$$R_x = R_x^{1/2}(R_x^{1/2})^{T}. \quad (4)$$
Substituting (4) into (3) gives rise to

$$
\begin{aligned}
E\{\|s-Tx\|^{2}\}
&= \operatorname{tr}\{(T-T_0)R_x^{1/2}(R_x^{1/2})^{T}(T-T_0)^{T}\} + E\{\|s-T_0x\|^{2}\}\\
&= \operatorname{tr}\{[(T-T_0)R_x^{1/2}][(T-T_0)R_x^{1/2}]^{T}\} + E\{\|s-T_0x\|^{2}\}\\
&= \|(T-T_0)R_x^{1/2}\|_F^{2} + E\{\|s-T_0x\|^{2}\}\\
&= \|TR_x^{1/2}-T_0R_x^{1/2}\|_F^{2} + E\{\|s-T_0x\|^{2}\}\\
&= \|T_0R_x^{1/2}-TR_x^{1/2}\|_F^{2} + E\{\|s-T_0x\|^{2}\}.
\end{aligned} \quad (5)
$$
Since $E\{\|s-T_0x\|^{2}\}$ is a constant,

$$\min_{T\in R^{m\times n},\ \operatorname{rank}(T)=k} E\{\|s-Tx\|^{2}\} = \min_{T\in R^{m\times n},\ \operatorname{rank}(T)=k} \|T_0R_x^{1/2}-TR_x^{1/2}\|_F^{2} + E\{\|s-T_0x\|^{2}\}, \quad (6)$$
that is to say, minimizing $E\{\|s-Tx\|^{2}\}$ is equivalent to minimizing $\|T_0R_x^{1/2}-TR_x^{1/2}\|_F^{2}$. Therefore, we can find the solution $T^{*}$ of (1) by solving the minimization problem

$$\min_{T\in R^{m\times n},\ \operatorname{rank}(T)=k} \|T_0R_x^{1/2}-TR_x^{1/2}\|_F, \quad (7)$$
which can be summarized as the following low-rank approximation problem.

Problem 1. Given two matrices $A\in R^{m\times n}$, $B\in R^{p\times n}$ and an integer $k$, $1\le k<\min\{m,p\}$, find a matrix $T\in R^{m\times p}$ of rank $k$ such that

$$\|A-TB\|_F = \min_{T\in R^{m\times p},\ \operatorname{rank}(T)=k} \|A-TB\|_F. \quad (8)$$
In the last few years there has been a constantly increasing interest in developing the theory and numerical approaches for the low-rank approximations of a matrix, due to their wide applications. A well-known method for the low-rank approximation is the singular value decomposition (SVD) [5, 6]. When the desired rank is relatively low and the matrix is large and sparse, a complete SVD becomes too expensive. Some less expensive alternatives for numerical computation, for example, the Lanczos bidiagonalization process [7] and the Monte Carlo algorithm [8], are available. To speed up the computation of the SVD, random sampling has been employed in [9]. Recently, Ye [10] proposed the generalized low rank approximations of matrices (GLRAM) method. This method is proved to have less computational time than the traditional singular value decomposition-based methods in practical applications. Later, the GLRAM method was revisited and extended by Liu et al. [11] and Liang and Shi [12]. In some applications, we need to emphasize important parts and deemphasize unimportant parts of the data matrix, so weighted low-rank approximations were considered by many authors. Some numerical methods, such as the Newton-like algorithm [13], the left versus right representations method [14], and the unconstrained optimization method [15], have been proposed. Recently, by using the hierarchical identification principle [16], which regards the known matrix as the system parameter matrix to be identified, Ding et al. and Xie et al. presented gradient-based iterative algorithms [16-21] and least-squares-based iterative algorithms [22, 23] for solving matrix equations. These methods are innovative and computationally efficient numerical algorithms.
The common and practical method to tackle the low-rank approximation Problem 1 is the singular value decomposition (SVD) (e.g., [1]). We briefly review the SVD method as follows. The rank-$k$ matrix $TB$ minimizing (8) is known [5, page 69] to satisfy

$$TB = A_k = \sum_{i=1}^{k}\sigma_i u_i v_i^{T}, \quad (9)$$

where $A_k$ denotes the rank-$k$ singular value decomposition truncation; that is, if the following SVD holds

$$A = \sum_{i=1}^{\operatorname{rank}(A)}\sigma_i u_i v_i^{T}, \quad (10)$$

then $A_k=\sum_{i=1}^{k}\sigma_i u_i v_i^{T}$. If the matrix $B$ is square and nonsingular, then by (9) we obtain that the solution of Problem 1 is

$$T = A_kB^{-1} = \left(\sum_{i=1}^{k}\sigma_i u_i v_i^{T}\right)B^{-1}. \quad (11)$$
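A minimal MATLAB sketch of this classical route, valid only under the stated assumption that $B$ is square and nonsingular, reads:

% Sketch of the SVD method (9)-(11); assumes B is square and nonsingular.
[Us, Ss, Vs] = svd(A);                   % SVD of A, cf. (10)
Ak = Us(:,1:k)*Ss(1:k,1:k)*Vs(:,1:k)';   % rank-k truncation A_k, cf. (9)
T  = Ak/B;                               % T = A_k B^{-1}, cf. (11)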
The SVD method has two disadvantages: (1) it requires the matrix $B$ to be square and nonsingular; (2) in order to derive the solution (11), we must compute the inverse matrix of $B$, which is computationally expensive.
In this paper, we develop a new method to solve the low-rank approximation Problem 1 which avoids the disadvantages of the SVD method. We first transform Problem 1 into the fixed rank solution of a matrix equation and then use the generalized singular value decomposition (GSVD) to solve it. Based on these, we derive a sufficient condition for the existence of a solution of Problem 1, and the analytical expression of the solution is given. A numerical algorithm is proposed to compute the solution. Numerical examples are used to illustrate the numerical algorithm. The first one is artificial and shows that the new algorithm is feasible for solving Problem 1; the second is a simulation, which shows that the new algorithm can be used to realize image compression.
2. Main Results
In this section, we give a sufficient condition and an analytical expression for the solution of Problem 1 by transforming Problem 1 into the fixed rank solution of a matrix equation. Finally, we establish an algorithm for solving Problem 1.
Lemma 2. A matrix $T\in R^{m\times p}$ is a solution of Problem 1 if and only if it is a solution of the following matrix equation:

$$TBB^{T} = AB^{T}, \quad \operatorname{rank}(T)=k. \quad (12)$$
Proof. It is easy to verify that a matrix $T\in R^{m\times p}$ is a solution of Problem 1 if and only if $T$ satisfies the following two equalities simultaneously:

$$\|A-TB\|_F = \min_{T\in R^{m\times p}} \|A-TB\|_F, \quad (13)$$

$$\operatorname{rank}(T)=k. \quad (14)$$

The normal equation of the least squares problem (13) is

$$TBB^{T} = AB^{T}, \quad (15)$$

and noting that the least squares problem (13) and its normal equation (15) have the same solution sets, (13) and (14) can be equivalently written as

$$TBB^{T} = AB^{T}, \quad \operatorname{rank}(T)=k, \quad (16)$$

which also implies that Problem 1 is equivalent to (12).
Remark 3. From Lemma 2 it follows that Problem 1 is equivalent to (12); hence we can solve Problem 1 by finding a fixed rank solution of the matrix equation $TBB^{T}=AB^{T}$.
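Lemma 2 is easy to probe numerically; the following MATLAB sketch, on hypothetical random data, confirms that the unconstrained least squares minimizer of (13) satisfies the normal equation (15):

% Sketch: numerical check of the normal equation (15) on random data.
A = randn(5,8);  B = randn(6,8);   % hypothetical test matrices
T = (A*B')/(B*B');                 % unconstrained minimizer of (13)
norm(T*(B*B') - A*B', 'fro')       % of the order of machine precision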
Now we will use the generalized singular value decomposition (GSVD) to solve (12). Set

$$C = BB^{T}\in R^{p\times p}, \quad D = AB^{T}\in R^{m\times p}. \quad (17)$$
The GSVD of the matrix pair $(C,D)$ is given by (see [24])

$$C = U\Sigma_1W, \quad D = V\Sigma_2W, \quad (18)$$

where $U\in O^{p\times p}$, $V\in O^{m\times m}$, $W\in R^{p\times p}$ is a nonsingular matrix, $r=\operatorname{rank}([C^{T},D^{T}])$, $s=\operatorname{rank}(C)$, $t=\operatorname{rank}(C)+\operatorname{rank}(D)-\operatorname{rank}([C^{T},D^{T}])$, and
$$\Sigma_1 = \begin{pmatrix} I_C & 0 & 0 & 0 \\ 0 & S_C & 0 & 0 \\ 0 & 0 & O_C & 0 \end{pmatrix}, \qquad \Sigma_2 = \begin{pmatrix} O_D & 0 & 0 & 0 \\ 0 & S_D & 0 & 0 \\ 0 & 0 & I_D & 0 \end{pmatrix} \quad (19)$$

are block matrices: in both, the column blocks have widths $s-t$, $t$, $r-s$, $p-r$; the row blocks of $\Sigma_1$ have heights $s-t$, $t$, $p-s$, and those of $\Sigma_2$ have heights $m-r+s-t$, $t$, $r-s$; $I_C$ and $I_D$ are identity matrices; $O_C$ and $O_D$ are zero matrices; and

$$S_C = \operatorname{diag}(\alpha_1,\alpha_2,\ldots,\alpha_t), \quad 1>\alpha_1\ge\alpha_2\ge\cdots\ge\alpha_t>0,$$
$$S_D = \operatorname{diag}(\beta_1,\beta_2,\ldots,\beta_t), \quad 0<\beta_1\le\beta_2\le\cdots\le\beta_t<1,$$
$$\alpha_i^{2}+\beta_i^{2}=1, \quad i=1,2,\ldots,t. \quad (20)$$
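In MATLAB, a factorization of this kind is provided by the built-in function gsvd; the sketch below shows the call, with the caveat that MATLAB's ordering of the diagonal blocks may differ from (19), so the blocks $I_C$, $S_C$, $S_D$, and $I_D$ must be identified inside its output before use:

% Sketch: gsvd factors the pair (C, D) as in (18), up to block ordering.
[Ug, Vg, Xg, S1, S2] = gsvd(C, D);   % C = Ug*S1*Xg', D = Vg*S2*Xg'
W = Xg';                             % so that C = Ug*S1*W, D = Vg*S2*W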
By (17) and (18), we have

$$TBB^{T}=AB^{T} \Longleftrightarrow TC=D \Longleftrightarrow TU\Sigma_1W = V\Sigma_2W \Longleftrightarrow TU\Sigma_1W - V\Sigma_2W = 0 \Longleftrightarrow V(V^{T}TU\Sigma_1-\Sigma_2)W = 0. \quad (21)$$
Set

$$M = V^{T}TU, \quad (22)$$

and partition $M$ as

$$M = \begin{pmatrix} M_{11} & M_{12} & M_{13}\\ M_{21} & M_{22} & M_{23}\\ M_{31} & M_{32} & M_{33} \end{pmatrix}, \quad (23)$$

whose column blocks have widths $s-t$, $t$, $p-s$ and whose row blocks have heights $m-r+s-t$, $t$, $r-s$;
then

$$
\begin{aligned}
V^{T}TU\Sigma_1-\Sigma_2 &= M\Sigma_1-\Sigma_2\\
&= \begin{pmatrix} M_{11} & M_{12} & M_{13}\\ M_{21} & M_{22} & M_{23}\\ M_{31} & M_{32} & M_{33} \end{pmatrix}\begin{pmatrix} I_C & 0 & 0 & 0 \\ 0 & S_C & 0 & 0 \\ 0 & 0 & O_C & 0 \end{pmatrix} - \begin{pmatrix} O_D & 0 & 0 & 0 \\ 0 & S_D & 0 & 0 \\ 0 & 0 & I_D & 0 \end{pmatrix}\\
&= \begin{pmatrix} M_{11} & M_{12}S_C & 0 & 0\\ M_{21} & M_{22}S_C-S_D & 0 & 0\\ M_{31} & M_{32}S_C & -I_D & 0 \end{pmatrix},
\end{aligned} \quad (24)
$$

whose column blocks have widths $s-t$, $t$, $r-s$, $p-r$ and whose row blocks have heights $m-r+s-t$, $t$, $r-s$.
Therefore, by (21) and (24), we have

$$
\begin{aligned}
TBB^{T}=AB^{T} &\Longleftrightarrow V\begin{pmatrix} M_{11} & M_{12}S_C & 0 & 0\\ M_{21} & M_{22}S_C-S_D & 0 & 0\\ M_{31} & M_{32}S_C & -I_D & 0 \end{pmatrix}W = 0\\
&\Longleftrightarrow \operatorname{rank}\left[V\begin{pmatrix} M_{11} & M_{12}S_C & 0 & 0\\ M_{21} & M_{22}S_C-S_D & 0 & 0\\ M_{31} & M_{32}S_C & -I_D & 0 \end{pmatrix}W\right] = 0\\
&\Longleftrightarrow \operatorname{rank}\left[\begin{pmatrix} M_{11} & M_{12}S_C & 0 & 0\\ M_{21} & M_{22}S_C-S_D & 0 & 0\\ M_{31} & M_{32}S_C & -I_D & 0 \end{pmatrix}\right] = 0\\
&\Longleftrightarrow r-s=0, \quad M_{11}=M_{21}=M_{31}=M_{12}=M_{32}=0, \quad M_{22}=S_DS_C^{-1}
\end{aligned} \quad (25)
$$

(the block $-I_D$ can vanish only if it is empty, that is, $r-s=0$, and $M_{12}=M_{32}=0$ follows since $S_C$ is nonsingular);
that is to say, the matrix equation $TBB^{T}=AB^{T}$ has a solution if and only if

$$r-s = \operatorname{rank}([C^{T},D^{T}]) - \operatorname{rank}(C) = \operatorname{rank}([BB^{T},BA^{T}]) - \operatorname{rank}(BB^{T}) = 0, \quad (26)$$
and according to (22), we know that the expression of the solution is

$$T = VMU^{T}, \quad (27)$$

where

$$M = \begin{pmatrix} 0 & 0 & M_{13}\\ 0 & S_DS_C^{-1} & M_{23} \end{pmatrix}, \quad M_{13}\in R^{(m-t)\times(p-s)}, \quad M_{23}\in R^{t\times(p-s)}, \quad (28)$$

whose column blocks have widths $s-t$, $t$, $p-s$ and whose row blocks have heights $m-t$, $t$.
By (26)-(28) and noting that $M_{13}$ and $M_{23}$ are arbitrary matrices, we have (the extrema being taken over the solutions $T$ of the matrix equation (15))

$$
\begin{aligned}
\min_{T\in R^{m\times p}} \operatorname{rank}(T) &= \min_{T\in R^{m\times p}} \operatorname{rank}(VMU^{T}) = \min_{M_{13}\in R^{(m-t)\times(p-s)},\ M_{23}\in R^{t\times(p-s)}} \operatorname{rank}(M)\\
&= t = \operatorname{rank}(D) = \operatorname{rank}(AB^{T}),\\
\max_{T\in R^{m\times p}} \operatorname{rank}(T) &= \max_{T\in R^{m\times p}} \operatorname{rank}(VMU^{T}) = \max_{M_{13}\in R^{(m-t)\times(p-s)},\ M_{23}\in R^{t\times(p-s)}} \operatorname{rank}(M)\\
&= t+\min\{m-t,\,p-s\}\\
&= \min\{m,\,p+t-s\}\\
&= \min\{m,\,p+\operatorname{rank}(C)+\operatorname{rank}(D)-\operatorname{rank}([C^{T},D^{T}])-\operatorname{rank}(C)\}\\
&= \min\{m,\,p+\operatorname{rank}(D)-\operatorname{rank}([C^{T},D^{T}])\}\\
&= \min\{m,\,p+\operatorname{rank}(D)-\operatorname{rank}(C)\}\\
&= \min\{m,\,p+\operatorname{rank}(AB^{T})-\operatorname{rank}(BB^{T})\}.
\end{aligned} \quad (29)
$$
Hence, if

$$\operatorname{rank}([BB^{T},BA^{T}]) - \operatorname{rank}(BB^{T}) = r-s = 0, \quad (30)$$

$$\operatorname{rank}(AB^{T}) \le k \le \min\{m,\,p+\operatorname{rank}(AB^{T})-\operatorname{rank}(BB^{T})\}, \quad (31)$$

then (12) has a solution, and the expressions of the solution are given by (26)-(28), that is,

$$T = VMU^{T} = V\begin{pmatrix} 0 & 0 & M_{13}\\ 0 & S_DS_C^{-1} & M_{23} \end{pmatrix}U^{T}, \quad (32)$$

where $M_{23}\in R^{t\times(p-s)}$ is an arbitrary matrix and $M_{13}\in R^{(m-t)\times(p-s)}$ is chosen such that

$$\operatorname{rank}(M_{13}) = k-t = k-\operatorname{rank}(D) = k-\operatorname{rank}(AB^{T}). \quad (33)$$
Noting that the low-rank approximation Problem 1 is equivalent to (12) (Lemma 2), we obtain the following.
Theorem 4. If

$$\operatorname{rank}([BB^{T},BA^{T}]) - \operatorname{rank}(BB^{T}) = r-s = 0,$$
$$\operatorname{rank}(AB^{T}) \le k \le \min\{m,\,p+\operatorname{rank}(AB^{T})-\operatorname{rank}(BB^{T})\}, \quad (34)$$

then Problem 1 has a solution, and the expressions of the solution are given by

$$T = VMU^{T} = V\begin{pmatrix} 0 & 0 & M_{13}\\ 0 & S_DS_C^{-1} & M_{23} \end{pmatrix}U^{T}, \quad (35)$$

where $M_{23}\in R^{t\times(p-s)}$ is an arbitrary matrix and $M_{13}\in R^{(m-t)\times(p-s)}$ is chosen such that

$$\operatorname{rank}(M_{13}) = k-t = k-\operatorname{rank}(D) = k-\operatorname{rank}(AB^{T}). \quad (36)$$
Remark 5. In contrast with (11), the solution expression (35) neither requires the matrix $B$ to be square and nonsingular nor needs the inverse of $B$ to be computed.
Based on Theorem 4, we can establish an algorithm forfinding the solution of Problem 1.
Algorithm 6. (1) Input the matrices $A$, $B$ and the integer $k$;
(2) compute the GSVD of the matrix pair $(C,D)$ according to (18);
(3) choose $M_{23}\in R^{t\times(p-s)}$ and $M_{13}\in R^{(m-t)\times(p-s)}$ such that $\operatorname{rank}(M_{13})=k-\operatorname{rank}(AB^{T})$;
(4) compute the solution $T$ according to (35).
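The following MATLAB sketch realizes Algorithm 6 under the hypotheses of Theorem 4. To stay independent of MATLAB's block ordering conventions for gsvd, it reaches the same solution set through the Moore-Penrose inverse: D*pinv(C) is the minimal-rank solution of (12), of rank t = rank(AB^T), and the added term, supported on the null space of C, raises the rank to k exactly as the free block M13 does in (35). The function name lowrank_kl and the random choice of the free factors are our own illustrative choices.

function T = lowrank_kl(A, B, k)
% Sketch of Algorithm 6 via the Moore-Penrose inverse; returns one member
% of the solution family (35) of Problem 1 under Theorem 4's hypotheses.
C = B*B';  D = A*B';                          % cf. (17)
m = size(A, 1);  p = size(B, 1);  t = rank(D);
assert(rank([C, D']) == rank(C), 'condition (30) fails');
assert(t <= k && k <= min(m, p + t - rank(C)), 'condition (31) fails');
T = D*pinv(C);                                % minimal-rank solution, rank t
if k > t
    N = null(C);                              % basis of null(C); C is symmetric
    Z = randn(m, k-t)*randn(k-t, size(N,2));  % rank k-t with probability one
    T = T + Z*N';                             % T*C = D still holds: N'*C = 0
end
end

For instance, T = lowrank_kl(A, B, 2) applied to the data of Example 7 below returns a rank-2 solution of Problem 1; since the blocks M13 and M23 in (35) are free, such solutions are not unique.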
3. Numerical Experiments
In this section, we first use a simple artificial example to illustrate that Algorithm 6 is feasible for solving Problem 1; then we use a simulation to show that Algorithm 6 can be used to realize image compression. The experiments were done with MATLAB 7.6 on a 64-bit Intel Pentium Xeon 2.66 GHz with $\epsilon_{\mathrm{mach}}\approx 2.0\times 10^{-16}$.
Figure 1: (a) Original image; (b) noisy image.
Table 1: Execution time for deriving Figures 2(a)-3(c).

Figure 2(a)   Figure 3(a)   Figure 2(b)   Figure 3(b)   Figure 2(c)   Figure 3(c)
3.5835 (s)    3.9216 (s)    2.8627 (s)    3.0721 (s)    2.0591 (s)    2.1433 (s)
Example 7. Consider Problem 1 with

A = ( 0.6884 0.5873 0.4236 0.4243 0.8483 1.0400 0.9778 0.1541
      1.0869 0.8243 0.5998 0.5993 1.0695 1.3080 1.5460 0.3538
      0.5709 0.3925 0.1344 0.3478 0.4318 0.5389 0.4866 0.1035
      1.1466 0.8938 0.9278 0.5209 1.3410 1.5086 1.4849 0.2599
      0.7752 0.5851 0.4972 0.3909 0.8055 0.9389 0.9832 0.1978 ),

B = ( 0.0554 0.6308 0.8468 0.9419 0.6891 0.2808 0.4558 0.7709
      0.1655 0.4783 0.3702 0.7526 1.0548 0.4358 0.7835 0.3309
      0.1150 0.3808 0.3316 0.5940 0.7677 0.3168 0.5645 0.2978
      0.1693 0.5032 0.4001 0.7903 1.0890 0.4499 0.8073 0.3580
      0.0869 0.4247 0.4608 0.6495 0.6779 0.2786 0.4829 0.4170
      0.1142 0.5772 0.6351 0.8815 0.9040 0.3715 0.6421 0.5951 ).   (37)
We compute the GSVD of the matrix pair $(C,D)=(BB^{T},AB^{T})$ as follows:

$$C = U\Sigma_1W, \quad D = V\Sigma_2W, \quad (38)$$
where

U = ( -0.7574  0.4215 -0.0590  0.0130 -0.2832 -0.4060
      -0.1509 -0.5893 -0.3965 -0.6437 -0.1562 -0.1845
      -0.1775 -0.3461  0.8997 -0.1454 -0.0810 -0.1071
      -0.1752 -0.5847 -0.1667  0.7506 -0.1161 -0.1510
      -0.3397 -0.0817 -0.0302 -0.0251  0.9311 -0.0970
      -0.4754 -0.0814 -0.0331 -0.0218 -0.0915  0.8703 ),   (39)

V = ( -0.2818  0.4322  0.0937  0.7418 -0.4180
      -0.1814 -0.6971 -0.4417  0.0795 -0.5503
       0.9419 -0.0182 -0.0330  0.2445 -0.2272
       0.0166  0.5179 -0.2279 -0.5906 -0.5751
      -0.0160 -0.2424  0.8753 -0.1863 -0.3743 ),   (40)
W = ( -6.2034 -5.7059 -4.4070 -5.9640 -4.5789 -6.1909
      -7.5279 -8.6034 -6.4574 -8.9389 -6.2317 -8.3756
       0.4175 -0.1314  0.2453  0.4166 -0.1876 -0.7345
       0.1928  0.6688  0.2243 -0.6061 -0.2199 -0.2228
      -0.0101  0.2451 -0.4669 -0.0248  0.7439 -0.4097
       0.1926  0.3245 -0.7213  0.2930 -0.4910  0.1022 ),   (41)
Sigma_1 = ( 0.9985 0      0 0 0 0
            0      0.4192 0 0 0 0
            0      0      0 0 0 0
            0      0      0 0 0 0
            0      0      0 0 0 0
            0      0      0 0 0 0 ),

Sigma_2 = ( 0      0      0 0 0 0
            0      0      0 0 0 0
            0      0      0 0 0 0
            0.0541 0      0 0 0 0
            0      0.9079 0 0 0 0 ).   (42)
It is easy to verify that

$$\operatorname{rank}([BB^{T},BA^{T}]) - \operatorname{rank}(BB^{T}) = 0, \quad \operatorname{rank}(AB^{T}) = 2, \quad \min\{m,\,p+\operatorname{rank}(AB^{T})-\operatorname{rank}(BB^{T})\} = 5, \quad (43)$$
that is, if $2\le k\le 5$, then the conditions of Theorem 4 are satisfied. Setting $k=2\in[2,5]$, according to (35), we obtain that the solution of Problem 1 is

T = V ( 0              0              0 0 0 0
        0              0              0 0 0 0
        0              0              0 0 0 0
        0.0541/0.9985  0              0 0 0 0
        0              0.9079/0.4192  0 0 0 0 ) U^T   (44)

  = ( -0.4121 0.5275 0.3062 0.5223 0.0603 0.0546
      -0.5057 0.7017 0.4117 0.6961 0.0959 0.0949
      -0.2175 0.2880 0.1680 0.2855 0.0357 0.0338
      -0.5008 0.7389 0.4367 0.7339 0.1126 0.1166
      -0.3341 0.4793 0.2823 0.4758 0.0696 0.0708 ).   (45)
Setting $k=4\in[2,5]$, according to (35) (here $M_{13}$ is chosen with $\operatorname{rank}(M_{13})=k-\operatorname{rank}(AB^{T})=2$), we obtain that the solution of Problem 1 is

T = V ( 0              0              1 0 0 0
        0              0              0 1 0 0
        0              0              0 0 0 0
        0.0541/0.9985  0              0 0 0 0
        0              0.9079/0.4192  0 0 0 0 ) U^T

  = ( -0.3898  0.3610 -0.0102 0.8937 0.0579 0.0549
      -0.5040  1.2224  0.3498 0.2032 0.1188 0.1155
      -0.2733 -0.0736  1.0181 0.1148 0.0077 0.0030
      -0.4950  0.3989  0.3764 1.1198 0.0991 0.1052
      -0.3363  0.6416  0.3032 0.2965 0.0762 0.0764 ).   (46)
Example 7 shows that Algorithm 6 is feasible for solving Problem 1. However, the SVD method in [1] cannot be used to solve Example 7, because $B$ is not a square matrix.
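The computed solutions are easily validated against Lemma 2; a MATLAB check for the rank-2 solution, with A and B as in (37) and T the numeric matrix in (45), would be:

% Sketch: validating the solution (45) of Example 7 via Lemma 2.
norm(T*(B*B') - A*B', 'fro')   % small (limited by the four displayed digits)
rank(T)                        % equals k = 2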
Example 8. We will use the generalized Karhunen-Loeve transform, based on Algorithm 6 and the SVD method in [1], respectively, to realize image compression. Figure 1(a) is the test image, which has $256\times 256$ pixels and 256 levels on each pixel. We separate it into $32\times 32$ blocks such that each block has $8\times 8$ pixels. Let $s^{(q,l)}_{i,j}$ and $n^{(q,l)}_{i,j}$ ($i,j=0,1,2,\ldots,7$; $q,l=0,1,2,\ldots,31$) be the values of the image and of a Gaussian noise (generated by the MATLAB function imnoise) at the $(i,j)$th pixel in the $(q,l)$th block, respectively. For convenience, let $g=i+8j$, $h=q+32l$, and let the $(i,j)$th pixel in the $(q,l)$th block be expressed as the $g$th pixel in the $h$th block ($g=0,1,2,\ldots,63$; $h=0,1,\ldots,1023$). We can also express $s^{(q,l)}_{i,j}$ and $n^{(q,l)}_{i,j}$ as $s^{(h)}_{g}$ and $n^{(h)}_{g}$, respectively.
The test image is processed on each block. Therefore, we can assume that the blocked image space is the 64-dimensional real vector space $R^{64}$. The $h$th block of the original image is expressed by the $h$th vector

$$s^{h} = (s^{h}_{0},s^{h}_{1},\ldots,s^{h}_{63})^{T}. \quad (47)$$

Hence the original image is expressed by 1024 64-dimensional vectors $\{s^{h}\}_{h=0}^{1023}$. The noise is similarly expressed by $\{n^{h}\}_{h=0}^{1023}$, where

$$n^{h} = (n^{h}_{0},n^{h}_{1},\ldots,n^{h}_{63})^{T}. \quad (48)$$
Figure 1(b) is the noisy image $\{x^{h}\}_{h=0}^{1023}$, where

$$x^{h} = s^{h}+n^{h}, \quad h=0,1,\ldots,1023. \quad (49)$$
By (47), (49), (2), (4), and the definition of the covariance matrix, we get $T_0$ and $R_x^{1/2}$ of (7). Then we use Algorithm 6 and the SVD method in [1] to realize the image compression, respectively; the experimental results are shown in Figures 2 and 3 and Table 1.
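The blocking described above takes only a few lines of MATLAB; in the following sketch, I is a hypothetical 256 x 256 image array, and the column index h + 1 compensates for MATLAB's 1-based indexing:

% Sketch: stacking the 8 x 8 blocks of a 256 x 256 image I into 64-D
% vectors, following the indexing g = i + 8j, h = q + 32l of Example 8.
X = zeros(64, 1024);
for q = 0:31
    for l = 0:31
        blk = I(8*q + (1:8), 8*l + (1:8));   % (q,l)th 8 x 8 block
        X(:, q + 32*l + 1) = blk(:);         % blk(:) puts entry (i,j) at g = i + 8j
    end
end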
Figure 2 illustrates that Algorithm 6 can be used to realize image compression. Although it is difficult to see the difference between Figure 2 and Figure 3, which is compressed by the SVD method in [1], from Table 1 we can see that the execution time of Algorithm 6 is less than that of the SVD method at the same rank. This shows that our algorithm outperforms the SVD method in execution time.
Figure 2: Image compression by Algorithm 6 with different rank $k$: (a) $k=40$; (b) $k=30$; and (c) $k=20$.

Figure 3: Image compression by the SVD method with different rank $k$: (a) $k=40$; (b) $k=30$; and (c) $k=20$.
4. Conclusion
The low-rank approximation Problem 1 arising in the generalized Karhunen-Loeve transform is studied in this paper. We first transform Problem 1 into the fixed rank solution of a matrix equation and then use the generalized singular value decomposition (GSVD) to solve it. Based on these, we derive a sufficient condition for the existence of a solution, and the analytical expression of the solution is also given. Finally, we use numerical experiments to show that the new algorithm is feasible and effective.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (11101100, 11226323, 11261014, and 11171205), the Natural Science Foundation of Guangxi Province (2012GXNSFBA053006, 2013GXNSFBA019009, and 2011GXNSFA018138), the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the Natural Science Foundation of Shanghai (11ZR1412500), the Ph.D. Programs Foundation of Ministry of Education of China (20093108110001), the Discipline Project at the corresponding level of Shanghai (A. 13-0101-12-005), and Shanghai Leading Academic Discipline Project (J50101).
References
[1] Y. Hua and W. Q. Liu, "Generalized Karhunen-Loeve transform," IEEE Signal Processing Letters, vol. 5, pp. 141-142, 1998.
[2] S. Kraut, R. H. Anderson, and J. L. Krolik, "A generalized Karhunen-Loeve basis for efficient estimation of tropospheric refractivity using radar clutter," IEEE Transactions on Signal Processing, vol. 52, no. 1, pp. 48-60, 2004.
[3] H. Ogawa and E. Oja, "Projection filter, Wiener filter, and Karhunen-Loève subspaces in digital image restoration," Journal of Mathematical Analysis and Applications, vol. 114, no. 1, pp. 37-51, 1986.
[4] Y. Yamashita and H. Ogawa, "Relative Karhunen-Loeve transform," IEEE Transactions on Signal Processing, vol. 44, pp. 371-378, 1996.
[5] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[6] P. C. Hansen, "The truncated SVD as a method for regularization," BIT Numerical Mathematics, vol. 27, no. 4, pp. 534-553, 1987.
[7] H. D. Simon and H. Zha, "Low-rank matrix approximation using the Lanczos bidiagonalization process with applications," SIAM Journal on Scientific Computing, vol. 21, no. 6, pp. 2257-2274, 2000.
[8] P. Drineas, R. Kannan, and M. W. Mahoney, "Fast Monte Carlo algorithms for matrices II: computing a low-rank approximation to a matrix," SIAM Journal on Computing, vol. 36, no. 1, pp. 158-183, 2006.
[9] A. Frieze, R. Kannan, and S. Vempala, "Fast Monte-Carlo algorithms for finding low-rank approximations," Journal of the ACM, vol. 51, no. 6, pp. 1025-1041, 2004.
[10] J. P. Ye, "Generalized low rank approximations of matrices," Machine Learning, vol. 61, pp. 167-191, 2005.
[11] J. Liu, S. C. Chen, Z. H. Zhou, and X. Y. Tan, "Generalized low rank approximations of matrices revisited," IEEE Transactions on Neural Networks, vol. 21, pp. 621-632, 2010.
[12] Z. Z. Liang and P. F. Shi, "An analytical algorithm for generalized low rank approximations of matrices," Pattern Recognition, vol. 38, pp. 2213-2216, 2005.
[13] J. H. Manton, R. Mahony, and Y. Hua, "The geometry of weighted low-rank approximations," IEEE Transactions on Signal Processing, vol. 51, no. 2, pp. 500-514, 2003.
[14] I. Markovsky and S. Van Huffel, "Left versus right representations for solving weighted low-rank approximation problems," Linear Algebra and its Applications, vol. 422, no. 2-3, pp. 540-552, 2007.
[15] M. Schuermans, P. Lemmerling, and S. Van Huffel, "Block-row Hankel weighted low rank approximation," Numerical Linear Algebra with Applications, vol. 13, no. 4, pp. 293-302, 2006.
[16] F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269-2284, 2006.
[17] F. Ding and T. Chen, "Gradient based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216-1221, 2005.
[18] F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle," Applied Mathematics and Computation, vol. 197, no. 1, pp. 41-50, 2008.
[19] J. Ding, Y. Liu, and F. Ding, "Iterative solutions to matrix equations of the form A_iXB_i = F_i," Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3500-3507, 2010.
[20] L. Xie, Y. Liu, and H. Yang, "Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^TD = F," Applied Mathematics and Computation, vol. 217, no. 5, pp. 2191-2199, 2010.
[21] L. Xie, J. Ding, and F. Ding, "Gradient based iterative solutions for general linear matrix equations," Computers & Mathematics with Applications, vol. 58, no. 7, pp. 1441-1448, 2009.
[22] F. Ding and T. Chen, "Iterative least-squares solutions of coupled Sylvester matrix equations," Systems & Control Letters, vol. 54, no. 2, pp. 95-107, 2005.
[23] W. Xiong, W. Fan, and R. Ding, "Least-squares parameter estimation algorithm for a class of input nonlinear systems," Journal of Applied Mathematics, vol. 2012, Article ID 684074, 14 pages, 2012.
[24] C. C. Paige and M. A. Saunders, "Towards a generalized singular value decomposition," SIAM Journal on Numerical Analysis, vol. 18, no. 3, pp. 398-405, 1981.