


Page 1: A Weighted Variational Model for Simultaneous Reflectance and … · 2016-05-16 · A weighted variational model for simultaneous reflectance and illumination estimation Xueyang

A weighted variational model for simultaneous reflectance and illumination estimation

Xueyang Fu1,2, Delu Zeng1,2, Yue Huang1,2, Xiao-Ping Zhang1,3, Xinghao Ding1,2∗

1Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, China
2School of Information Science and Engineering, Xiamen University, China
3Department of Electrical and Computer Engineering, Ryerson University, Canada

[email protected], {dltsang, yhuang2010}@xmu.edu.cn, [email protected], [email protected]

Abstract

We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image is not ideal for this task. Based on an analysis of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Unlike conventional variational models, the proposed model preserves more details in the estimated reflectance; moreover, it suppresses noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model and its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.

1. Introduction

Observed images can be decomposed as a product of reflectance and illumination using a simplified physical model of light reflection [14]. In such a decomposition, the illumination represents the light intensity on the objects in the image, and the reflectance represents the physical characteristics of the objects. Many applications derive from this decomposition, such as ordinary image enhancement [24, 33, 3, 9], high dynamic range image tone mapping [7, 29], remote sensing image correction [12], and target selection and tracking [18, 10].

Since estimating the reflectance and the illumination from a single observed image is an ill-posed inverse problem [14], a number of modeling approaches have been developed that incorporate prior structural assumptions. For example, [14, 13] propose path-based algorithms. These methods require intensive parameter tuning and have high computational complexity, which was later addressed using recursive matrix calculation [5, 21]. Another family of methods uses partial differential equations (PDEs) [8, 23], which can be solved efficiently using the fast Fourier transform (FFT). In [38], a nonlocal texture constraint is adopted for decomposing intrinsic images.

∗Corresponding author.

Recently, several variational methods have been presented. In [11], Kimmel et al. propose a variational model to estimate the illumination, which is assumed to vary smoothly; no assumption on the reflectance is considered in the model. In [20], a total variation (TV) and nonlocal TV regularized model, solved by Bregman iteration, is developed and studied. An L1-based variational model is further introduced in [19] to focus on estimating the reflectance. In [26], a TV model for image decomposition that considers both reflectance and illumination components in the objective function is proposed. The model is divided into two sub-problems and solved by the split Bregman method. However, this method may result in an over-smoothed reflectance due to a side effect of the logarithmic transformation. This method is complemented by using nonlocal bounded variation [35] to achieve an effective decomposition of illumination and reflectance. Following [26], a variational model with barriers for image decomposition is proposed in [34]; the model is defined as a constrained optimization problem associated with the deduced energy functional. In [16], an adaptive perceptually inspired variational model is developed to recover the reflectance and to adjust the uneven intensity distribution. However, the computational cost of this method is high; processing a 1000×1000 image requires about 10 minutes. A kernel-based variational model is introduced in [2], together with an analysis of its effect on contrast. This method can deal with color cast and enhance both under- and over-exposed images. A perception based



color correction of digital images in the variational framework is presented in [28, 1]. Both color enhancement and local contrast improvement can be achieved by this method, and a numerical implementation of the gradient descent technique is also given in these works. Another approach, based on a variational Bayesian method, is proposed in [32]; the variational Bayes approximation is adopted to estimate both the illumination and the reflectance. However, finer details of the estimated reflectance are lost due to the logarithmic transformation. In addition, the computational cost is high since it requires solving several linear systems. In [17], a higher-order TV model for estimating the reflectance and the illumination is proposed; the primal-dual splitting algorithm, which needs many iterations, is adopted to solve the model. Based on nonlocal differential operators, a unifying model focusing on the reflectance is proposed in [39].

Most of the above variational methods use the logarithmic transformation as pre-processing, not only to reduce the computational complexity [30] but also to simulate the human visual perception mechanism, such as Weber's law [31]. Using the logarithmic transformation to model the fidelity term is proper, since the process simulates human eye perception of light intensity. However, since the logarithmic transformation suppresses the variation of gradient magnitude in bright regions, using gradients in the logarithmic domain as regularization terms is not proper. In other words, solving the ill-posed problem with these regularization terms may lead to undesirable results, such as the loss of finer details in the estimated reflectance.

In this paper, a weighted variational model for simultaneously estimating reflectance and illumination is presented. First, by analyzing the characteristics of the logarithmic transformation, we show that log-domain gradients are not proper as regularization terms. Then, based on this analysis, a weighted variational model is introduced for better prior representation, and an alternating minimization scheme is adopted to solve the proposed model. Unlike existing variational methods that use complex techniques, such as nonlocal techniques [35] and dictionary learning [3], the proposed model achieves significant improvement by simply weighting the widely used regularization terms. Compared with classical variational models, the proposed model preserves more details in the estimated reflectance; moreover, it suppresses noise to some extent. Experimental results demonstrate the effectiveness of the proposed model and its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.

2. Background: Discussion on log-transform

The physical model of light reflection can be simply described as S = R · L, where S is the observed image, R is the reflectance of the image within the range (0, 1], and L is the illumination within the range (0, ∞). The dot "·" denotes pixel-wise multiplication and all images are vectorized. It follows that S ≤ L. The goal is to estimate the reflectance R and the illumination L from the observed image S.

To this end, most variational methods first transform S = R · L into the logarithmic domain, s = r + l, where s = log(S), r = log(R) and l = log(L). Using this logarithmic transformation, the first variational algorithm for this decomposition was proposed in [11]. This approach only models the illumination l and then estimates the reflectance R by exp(s − l) in post-processing. Another variational method [26] considers both illumination and reflectance in the objective function, which is arguably more appropriate. However, the directly estimated reflectance images are typically too smooth and lose much of the desired edges and texture details. In [26], the authors abandon the directly estimated reflectance and instead use exp(s − l).

Our goal is to develop an objective function that outputs a usable illumination and reflectance. To this end, we observe that conventional methods use an objective function along the following lines:

E(r, l) = ‖l + r − s‖₂² + λ₁‖∇l‖₂² + λ₂‖∇r‖₁,  s.t. r ≤ 0 and s ≤ l.  (1)

In this objective function, the logarithmic illumination l uses a squared penalty to enforce spatial smoothness, and the logarithmic reflectance r is encouraged to be piece-wise constant by the L1-norm. The fidelity term is the squared error between the log-transformed image and its breakdown into illumination and reflectance.

Since the logarithmic transform conforms with human perception of light intensity as described by Weber's law [31], it is appropriate to use the log-transformed image to define the fidelity term. However, we argue that the logarithmic transform is not appropriate in the penalty terms on l and r, since it magnifies errors in certain ranges more than others. More specifically, given a target stimulus signal x, its gradient variation is ∇x, while in the log-transformed domain it is ∇(log(x)) = (1/x)∇x. Therefore, when x is very small, the gradient variation in the log domain ∇(log(x)) is heavily weighted by 1/x. While this too agrees with Weber's law, when the variation ∇(log(x)) = (1/x)∇x is used in a norm penalty, the low magnitude areas inevitably dominate over the variation in the high magnitude areas. This is not desirable when we also want to estimate/recover the fine details in the high magnitude stimuli areas.
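This magnification can be checked numerically. In the illustrative sketch below (the signal and region levels are invented for illustration, not taken from the paper), the same fine ripple is added to a dark and a bright region; its gradient magnitude is the same in both regions in the linear domain, but the dark region dominates overwhelmingly in the log domain:

```python
import numpy as np

# A 1-D "image": identical fine ripple on a dark and a bright base level.
x = np.ones(200)
x[:100] *= 0.05           # dark region
x[100:] *= 200.0          # bright region
x = x + 0.01 * np.sin(np.linspace(0, 8 * np.pi, 200))

grad = np.abs(np.diff(x))               # |∇x|
grad_log = np.abs(np.diff(np.log(x)))   # |∇(log x)| = |∇x| / x

# Linear domain: the ripple is equally strong in both regions (ratio ≈ 1).
# Log domain: dark-region gradients dominate by roughly x_bright / x_dark.
print(grad[:99].mean() / grad[100:].mean())
print(grad_log[:99].mean() / grad_log[100:].mean())
```

Any norm penalty summed over `grad_log` is therefore governed almost entirely by the dark region, which is exactly the effect discussed above.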

Such a phenomenon is illustrated in Fig. 1.

(a) input and log-transformed (b) corresponding gradients of (a)
Figure 1. Example of logarithmic transformation.

As shown in Fig. 1(b), much structure is amplified in low magnitude stimuli areas in the log-transformed domain, indicating that these areas are heavily weighted by 1/x. We can see that, when minimizing a norm over this entire image, the intensity of fine edges and details in high illumination areas will be buried under irrelevant details in low illumination areas. This scaling of intensities makes estimating/recovering fine structures in those areas more challenging.

To preserve edges and details, most existing variational methods use the estimated illumination to compute the reflectance as R = S/L in post-processing. However, any error in the estimated illumination at a pixel clearly affects the estimated reflectance at the same location. Intuitively, we argue that a method that estimates illumination and reflectance simultaneously and accurately should be preferred. Based on the above discussion, we also argue that the gradient variation in the log-transformed domain is not proper as a regularization term. In the next section we present a weighted variational model to address this, and derive an alternating minimization algorithm to estimate the illumination and reflectance simultaneously.

3. Model and Algorithm

As described above, the loss of fine details is caused by the weight 1/x in ∇(log(x)) = (1/x)∇x when penalizing the log-transformed domain. For better prior representation, it is natural to modify the form of ∇(log(x)), i.e., the regularization terms, to eliminate the weight's impact. Based on the widely used assumptions, a new objective function is established by weighting the regularization terms:

E(r, l) = ‖r + l − s‖₂² + c₁‖e^r · ∇r‖₁ + c₂‖e^l · ∇l‖₂²,  s.t. r ≤ 0 and s ≤ l,  (2)

where c₁ and c₂ are positive parameters and ‖·‖_p denotes the p-norm. In E(r, l), the first term ‖r + l − s‖₂² is the L2 data fidelity, which minimizes the distance between the estimate r + l and the observed image s. The second term ‖e^r · ∇r‖₁ is a TV sparsity term that encourages the reflectance r to be piece-wise constant. The third term ‖e^l · ∇l‖₂² enforces spatial smoothness on the illumination l. Note that ∇r and ∇l are weighted by e^r and e^l, respectively, to eliminate the impact of the factors 1/R and 1/L.
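The effect of the weights can be sanity-checked: since ∇(e^r) = e^r · ∇r, weighting ∇r by e^r recovers, up to discretization error, the gradient of R itself, so the penalty is no longer dominated by dark regions. A minimal numerical check on a synthetic smooth signal (forward differences are an assumption of this sketch):

```python
import numpy as np

# Synthetic smooth reflectance in (0, 1] and its log-domain counterpart.
R = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, 1000))
r = np.log(R)

dr = np.diff(r)                  # discrete ∇r
weighted = np.exp(r)[:-1] * dr   # e^r · ∇r, the weighted regularizer argument
dR = np.diff(R)                  # discrete ∇R = ∇(e^r)

# The weighted log-domain gradient matches the linear-domain gradient
# up to the (small) discretization error.
print(np.max(np.abs(weighted - dR)))
```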

Since e^r and ∇r are coupled in the second term, and e^l and ∇l in the third, minimizing objective (2) directly is difficult. However, eliminating the impact of the weights 1/R and 1/L in the regularization terms better represents the prior knowledge. Thus, we build a new objective function that is easy to minimize while still effectively eliminating the impact of the weights. Note that e^r = R and e^l = L can be computed from the previous iteration's results and treated as constant vectors. Based on this strategy, the new objective function at the kth iteration is:

E(r^k, l^k) = ‖r^k + l^k − s‖₂² + c₁‖R^(k−1) · ∇r^k‖₁ + c₂‖L^(k−1) · ∇l^k‖₂²,  s.t. r^k ≤ 0 and s ≤ l^k.  (3)

Since there are two unknowns in the objective function, traditional gradient descent methods are not directly applicable. In this paper, the alternating direction method of multipliers (ADMM) [6] is adopted to solve the objective function. For the L1-norm, an auxiliary variable d and an error variable b are introduced, and function (3) is rewritten as:

E(r^k, l^k, d^k, b^k) = ‖r^k + l^k − s‖₂² + c₂‖L^(k−1) · ∇l^k‖₂² + c₁{‖d^k‖₁ + λ‖R^(k−1) · ∇r^k − d^k + b^k‖₂²},  s.t. r^k ≤ 0 and s ≤ l^k.  (4)

According to ADMM theory [6], this objective function admits local minima, and it is minimized by iteratively cycling through three separate sub-problems. In particular, for the kth iteration:

(P1)  d^k = argmin_d ‖d‖₁ + λ‖R^(k−1) · ∇r^(k−1) − d + b^(k−1)‖₂²,

(P2)  r^k = argmin_r ‖r + l^(k−1) − s‖₂² + c₁λ‖R^(k−1) · ∇r − d^k + b^(k−1)‖₂²,
      b^k = b^(k−1) + R^k · ∇r^k − d^k,

(P3)  l^k = argmin_l ‖l + r^k − s‖₂² + c₂‖L^(k−1) · ∇l‖₂².

The three sub-problems have closed-form globally optimal solutions, and the update for b^k follows from ADMM. The algorithm is detailed as follows:

1) Algorithm for P1: A shrinkage operation is adopted to update d^k at the kth iteration:

d^k_h = shrink(R^(k−1) · ∇_h r^(k−1) + b^(k−1)_h, 1/(2λ)),
d^k_v = shrink(R^(k−1) · ∇_v r^(k−1) + b^(k−1)_v, 1/(2λ)),  (5)


where shrink(x, λ) = (x/|x|) · max(|x| − λ, 0), with x/|x| set to 0 when |x| = 0, and h and v denote the horizontal and vertical directions, respectively.

2) Algorithm for P2: Since P2 is a least squares

problem, r^k has a closed-form solution, and the fast Fourier transform (FFT) is adopted to speed up the computation. Setting the first-order derivative to zero, r^k is updated by:

r^k = F⁻¹( (F(s − l^(k−1)) + c₁λΦ) / (F(U) + c₁λ R^(k−1) · (F*(∇_h) · F(∇_h) + F*(∇_v) · F(∇_v))) ),  (6)

where U is the identity matrix, Φ = F*(∇_h) · F(d^k_h − b^(k−1)_h) + F*(∇_v) · F(d^k_v − b^(k−1)_v), F is the FFT operator, F* is the conjugate transpose, and F⁻¹ is the inverse FFT operator. The derivative operators are diagonalized by the FFT so that matrix inversion is avoided; all calculations are performed pixel-wise. According to the prior r ≤ 0, a simple correction is made after r^k is updated: r^k = min(r^k, 0). Then b^k is updated by:

b^k_h = b^(k−1)_h + R^k · ∇_h r^k − d^k_h,
b^k_v = b^(k−1)_v + R^k · ∇_v r^k − d^k_v.  (7)

This operation is similar to "adding back the noise" used in TV denoising [27]. d, r and b are updated until ε_r = ‖r^k − r^(k−1)‖ / ‖r^(k−1)‖ ≤ ε₁.

3) Algorithm for P3: Since P3 is also a least squares problem, updating l^k is similar to updating r^k:

l^k = F⁻¹( F(s − r^k) / (F(U) + c₂ L^(k−1) · (F*(∇_h) · F(∇_h) + F*(∇_v) · F(∇_v))) ).  (8)

According to the prior s ≤ l, a simple correction is made after l^k is updated: l^k = max(l^k, s). l is updated until ε_l = ‖l^k − l^(k−1)‖ / ‖l^(k−1)‖ ≤ ε₂.

Since large-matrix inversion is avoided by the FFT, and the shrinkage operation requires only a few operations, r and l can be solved simultaneously and efficiently. The final estimated reflectance and illumination are obtained by R = e^r and L = e^l, respectively. In the next section, experimental results are presented to demonstrate the effectiveness of the proposed method.
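The alternating scheme of Eqs. (3)–(8) can be sketched compactly. The following is a simplified single-channel Python/NumPy sketch, not the authors' Matlab code: circular boundary conditions are assumed for the derivative operators, the initialization is our own guess, and the spatially varying weights R^(k−1), L^(k−1) inside the Fourier-domain denominators are replaced by their means so that the linear systems stay diagonal:

```python
import numpy as np

def shrink(x, xi):
    """Soft-thresholding (x/|x|) * max(|x| - xi, 0), with 0 at x = 0."""
    return np.sign(x) * np.maximum(np.abs(x) - xi, 0.0)

def grad_h(u):  # forward difference, circular boundary (an assumption)
    return np.roll(u, -1, axis=1) - u

def grad_v(u):
    return np.roll(u, -1, axis=0) - u

def decompose(S, c1=0.01, c2=0.1, lam=1.0, eps=1e-3, max_iter=50):
    """Single-channel sketch of the alternating scheme; S in (0, 1]."""
    s = np.log(S)
    M, N = S.shape
    # Transfer functions of the derivative operators.
    dh = np.zeros((M, N)); dh[0, 0] = -1.0; dh[0, 1] = 1.0
    dv = np.zeros((M, N)); dv[0, 0] = -1.0; dv[1, 0] = 1.0
    Fh, Fv = np.fft.fft2(dh), np.fft.fft2(dv)
    DtD = np.abs(Fh) ** 2 + np.abs(Fv) ** 2  # F*(∇h)F(∇h) + F*(∇v)F(∇v)

    l, r = s.copy(), np.zeros_like(s)        # init guess: L = S, R = 1
    bh, bv = np.zeros_like(s), np.zeros_like(s)
    for _ in range(max_iter):
        R_prev, L_prev = np.exp(r), np.exp(l)
        # (P1): shrinkage update of the auxiliary variable d (Eq. 5).
        dkh = shrink(R_prev * grad_h(r) + bh, 1.0 / (2.0 * lam))
        dkv = shrink(R_prev * grad_v(r) + bv, 1.0 / (2.0 * lam))
        # (P2): FFT least-squares update of r (Eq. 6); the spatially
        # varying weight is replaced by its mean (a simplification).
        Phi = (np.conj(Fh) * np.fft.fft2(dkh - bh)
               + np.conj(Fv) * np.fft.fft2(dkv - bv))
        r_new = np.real(np.fft.ifft2((np.fft.fft2(s - l) + c1 * lam * Phi)
                                     / (1.0 + c1 * lam * R_prev.mean() * DtD)))
        r_new = np.minimum(r_new, 0.0)       # prior: r <= 0
        # Bregman update of b (Eq. 7).
        Rk = np.exp(r_new)
        bh = bh + Rk * grad_h(r_new) - dkh
        bv = bv + Rk * grad_v(r_new) - dkv
        # (P3): FFT update of l (Eq. 8).
        l_new = np.real(np.fft.ifft2(np.fft.fft2(s - r_new)
                                     / (1.0 + c2 * L_prev.mean() * DtD)))
        l_new = np.maximum(l_new, s)         # prior: s <= l
        done = (np.linalg.norm(r_new - r) <= eps * (np.linalg.norm(r) + 1e-12)
                and np.linalg.norm(l_new - l) <= eps * (np.linalg.norm(l) + 1e-12))
        r, l = r_new, l_new
        if done:
            break
    return np.exp(r), np.exp(l)              # final R and L
```

The returned R satisfies R ≤ 1 and L satisfies L ≥ S by construction, matching the constraints r ≤ 0 and s ≤ l.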

4. Experimental Results

All experiments are performed using our unoptimized Matlab implementation¹ on a PC with an Intel Core i5-4460 CPU and 8GB RAM. We compare the proposed model with two other classical models: Kimmel's model [11] and Ng's model [26]. For a fair comparison, the parameters used in the two methods are set to be optimal according to [11] and [26]. In our experiments, the empirical parameters c₁, c₂ and λ are set to 0.01, 0.1 and 1, respectively. The stopping parameters ε₁ and ε₂ are set to 10⁻³, and the initial values are R⁰ = 0, b⁰ = 0, ε_r = 10 and ε_l = 10. In general, there are two approaches for dealing with color images [11, 26]: one processes each color channel (Red, Green, and Blue) separately to improve the colors or correct color distortion; the other transforms the color image into HSV (Hue, Saturation and Value) space, processes only the V-channel, and then transforms back to the RGB domain.

¹We share our Matlab implementation at: http://smartdsp.xmu.edu.cn/cvpr2016.html

Reflectance and Illumination Estimation  First, two comparisons of reflectance and illumination estimation are shown in Figs. 2 and 3: Fig. 2 shows the comparison in the RGB domain and Fig. 3 in the HSV domain. Since the reflectance is not considered in Kimmel's model [11], only Ng's estimated reflectance [26] is shown. The results in Fig. 2 include color information since the algorithms are applied to each RGB channel separately. As can be seen, the illumination estimated by our model is smoother than that of the other two methods; in other words, our results conform better to the illumination prior of spatial smoothness. Though Ng's method [26] incorporates the reflectance into the model to make the decomposition more appropriate and reasonable, the directly estimated reflectance images shown in Figs. 2(e) and 3(e) appear fuzzy and lose finer details. This is due to the side effect of the logarithmic transformation discussed in Section 2. As can be seen in Figs. 2(f) and 3(f), our reflectance images effectively preserve details and are clearer than those of Ng's method. Meanwhile, since the illumination is removed, details in dark areas are revealed in the reflectance. Since it is hard to obtain ground truth for both the reflectance and the illumination, quantitatively assessing the estimation is a difficult problem. In the following, different experiments are designed to demonstrate the effectiveness of the proposed model and algorithm.

(a) input (b) illumination by [11] (c) illumination by [26] (d) our illumination (e) reflectance by [26] (f) our reflectance
Figure 2. Comparison of reflectance and illumination estimation in the RGB domain.

(a) input (b) illumination by [11] (c) illumination by [26] (d) our illumination (e) reflectance by [26] (f) our reflectance
Figure 3. Comparison of reflectance and illumination estimation in the HSV domain. The results look gray because we only show the estimated illumination and reflectance of the V-channel without Hue and Saturation.

Illumination Adjustment  Since the illumination contains the lightness information, removing or adjusting the illumination can generate visually pleasing results for dark images. Therefore, the Gamma correction operation is adopted to modify the estimated illumination. As in [11, 26], the Gamma correction of L with an adjusting parameter γ is defined by:

L′ = W (L/W)^(1/γ),  (9)

where W is 255 for an 8-bit image and the empirical parameter γ is set to 2.2. Since the method of [11] only estimates the illumination and the method of [26] obtains a fuzzy reflectance, the enhanced images of these two methods are computed by:

S_enhanced = (S/L) · L′.  (10)

Since our estimated reflectance preserves finer details, the enhanced image of the proposed method is computed by:

S_enhanced = R · L′.  (11)
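A minimal sketch of the adjustment step (Eqs. (9)–(11)); the function names are ours, and an 8-bit scale W = 255 is assumed:

```python
import numpy as np

def gamma_correct_L(L, gamma=2.2, W=255.0):
    """Eq. (9): L' = W * (L/W)^(1/gamma); brightens dark illumination."""
    return W * (L / W) ** (1.0 / gamma)

def enhance_ours(R, L, gamma=2.2, W=255.0):
    """Eq. (11): recombine the detailed reflectance with corrected L."""
    return R * gamma_correct_L(L, gamma, W)

def enhance_ratio(S, L, gamma=2.2, W=255.0):
    """Eq. (10): for methods whose reflectance is taken as S/L."""
    return (S / np.maximum(L, 1e-6)) * gamma_correct_L(L, gamma, W)
```

Because γ > 1 and L ≤ W, the correction satisfies L ≤ L′ ≤ W, so dark regions are lifted while the white point is preserved.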

Fig. 4 shows some examples of illumination adjustment by different methods. A classical image enhancement algorithm based on the center/surround Retinex, multiscale Retinex with color restoration (MSRCR) [9], is also included in the comparison. To preserve color information, the Gamma correction is processed in the HSV domain. As can be seen, MSRCR can effectively enhance details and contrast with color correction. However, this method over-enhances to some degree, since the estimated illumination is removed entirely. The global visual effect of the proposed method is similar to that of the other two variational methods [11, 26], because adding the corrected illumination back to the reflectance is a complementary process. However, the proposed method produces more natural results, for example in the shadowed background areas in the first row and the clouds in the third row.

Since the ground truth of the enhanced image is unknown, a blind image quality assessment is used to evaluate the enhanced results. The metric is the natural image quality evaluator (NIQE) [22], which is based on statistical regularities of natural, undistorted images. A lower NIQE value represents higher image quality, and the best results are boldfaced. As shown in Table 1, the proposed method has a lower average than the other three methods, in agreement with our empirical assessment that it yields the highest quality enhanced results.

Table 1. Average NIQE values of Fig. 4
          inputs   [9]    [11]   [26]   ours
1st row    2.47    2.35   3.18   2.45   2.13
2nd row    2.52    2.32   2.70   2.42   2.38
3rd row    3.97    3.54   3.91   3.46   3.72
4th row    3.10    2.93   3.62   3.16   2.77
Average    3.02    2.79   3.35   2.87   2.75

Table 2. Average NIQE values on 300 images
             inputs   [9]    [11]   [26]   ours
Backlit       4.01    3.68   3.43   3.37   3.51
Non-uniform   3.47    3.28   3.25   3.33   3.17
Night-time    5.12    5.01   4.55   4.51   4.21

We also test 300 images with different illumination conditions to demonstrate the effectiveness of our model for Gamma correction. The 300 test images are selected from the non-uniform illumination image dataset [33], the night-time image dataset [4], the NASA image dataset [25] and Google image search, and are categorized into 16 backlit images, 123 non-uniform illumination images and 164 night-time images. Table 2 shows the corresponding average NIQE values for the three categories. As shown in Table 2, the proposed method has the lowest NIQE value on most images, which indicates that our model performs consistently well on different kinds of images.

(a) inputs (b) results by [9] (c) results by [11] (d) results by [26] (e) our results
Figure 4. Comparison of illumination adjustment.

(a) clean image (b) noisy image (c) enhanced results by [11] (d) enhanced results by [26] (e) our enhanced results
Figure 5. Comparison of noise suppression.

(a) clean image (b) noisy image (c) enhanced results by [11] (d) enhanced results by [26] (e) our enhanced results
Figure 6. Comparison of noise suppression.

Noise Suppression  An important problem in dark image enhancement is how to suppress noise in dark areas when applying the Gamma correction. Two dark images contaminated with slight additive white Gaussian noise n with σ = 5 are processed. Figs. 5 and 6 show the enhanced results with corresponding enlargements in red rectangles.

The Gamma correction is processed in the HSV domain. For the methods [11, 26], the Gamma correction relies only on the estimated illumination, i.e.,

S_enhanced = (S/L) · L′ = (R · L + n) · (L′/L).  (12)

It is clear that noise will be amplified after Gamma correction, since L ≤ L′ by Eq. (9). As can be seen in Figs. 5(c) and (d) and Figs. 6(c) and (d), both Kimmel's method [11] and Ng's method [26] amplify the noise after Gamma correction. Since the proposed model estimates reflectance and illumination simultaneously, the L1 regularization term in function (2) can effectively handle the noise.
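The amplification in Eq. (12) can be reproduced with a toy sketch (constant dark illumination and reflectance, invented for illustration): the noise standard deviation is scaled exactly by L′/L, which is large in dark regions:

```python
import numpy as np

rng = np.random.default_rng(0)
W, gamma = 255.0, 2.2
L = np.full(10000, 20.0)              # dark, constant illumination
R = np.full(10000, 0.5)
n = rng.normal(0.0, 5.0, 10000)       # additive noise, sigma = 5
S = R * L + n                         # noisy observation

L_adj = W * (L / W) ** (1.0 / gamma)  # Eq. (9); L' >= L since gamma > 1
enhanced = (S / L) * L_adj            # Eq. (12): (R*L + n) * L'/L

print(L_adj[0] / L[0])                # amplification factor L'/L (> 1)
print(enhanced.std() / n.std())       # noise std grows by the same factor
```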

Color Correction  As mentioned before, when each RGB channel is estimated separately, the estimated reflectance and illumination contain color information. By removing the illumination, the reflectance retains the original color information of the object, meaning that the proposed model


(a) input (b) ground truth

(c) results by [11] (d) results by [26] (e) our results

Figure 7. Comparison of color correction.

(a) input (b) ground truth

(c) results by [11] (d) results by [26] (e) our results

Figure 8. Comparison of color correction.

Figure 10. The relation between εr, εl and iteration numbers.

has the effect of color correction. In this experiment, the

color correction performance is shown in Figs. 7 and 8 to demonstrate the accuracy of the estimated reflectance and illumination. The S-CIELAB color metric [37], which measures color errors based on spatial processing, is adopted to verify the accuracy of the color correction. The second rows in Figs. 7 and 8 show the corrected results obtained by methods [11, 26] and the proposed one, respectively. The third rows in Figs. 7 and 8 show the spatial locations of the errors between the ground truth and the corrected results. The fourth rows in Figs. 7 and 8 show the histograms of S-CIELAB errors between the ground truth and the other three results, respectively. As can be seen from the spatial locations of the errors, the green areas of our result are smaller than those of the other two methods. This indicates that the difference between the ground truth and our result is the smallest, which can also be seen from the histogram distributions of the errors. This experiment demonstrates that the proposed model estimates more accurate results when dealing with color-distorted images.
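As a rough illustration of this evaluation protocol, the sketch below scores a corrected image against the ground truth with a plain per-pixel RGB distance. The actual experiment uses the S-CIELAB metric [37], which first applies spatial filtering and then measures ΔE in CIELAB, so this is only a simplified stand-in; all images and noise levels here are synthetic.

```python
import numpy as np

def mean_color_error(result, ground_truth):
    """Crude per-pixel color error: Euclidean distance in RGB.

    S-CIELAB instead spatially filters opponent-color channels and
    measures Delta-E in CIELAB; this RGB proxy only sketches the idea
    of scoring a corrected image against the ground truth.
    """
    diff = result.astype(float) - ground_truth.astype(float)
    return np.sqrt((diff ** 2).sum(axis=-1)).mean()

rng = np.random.default_rng(1)
gt = rng.uniform(0, 1, (8, 8, 3))                       # hypothetical ground truth
good = np.clip(gt + 0.01 * rng.normal(size=gt.shape), 0, 1)  # accurate correction
bad = np.clip(gt + 0.10 * rng.normal(size=gt.shape), 0, 1)   # poor correction

# The more accurate correction yields the smaller mean error.
print(mean_color_error(good, gt), mean_color_error(bad, gt))
```

A smaller mean error corresponds to the smaller green areas and tighter error histograms reported for the proposed method.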

Parameters Impact The impact of the regularization parameters c1 and c2 on the proposed model (2) is shown in Fig. 9. The default values are set to 0.01 and 0.1, respectively. To test the impact of each parameter, we change the value of one parameter while keeping the other unchanged. As can be seen in Fig. 9, the details of the estimated reflectance become blurred as c1 increases, since a larger c1 forces the L1 term to decrease. The estimated illumination becomes smoother as c2 increases, since c2 controls the smoothness. In most cases, the empirical setting of the regularization parameters generates satisfactory results.

Convergence Rate and Computational Time First, the convergence rate of the proposed algorithm is analyzed. The experiment is performed in the HSV domain on a 512 × 512 image. Fig. 10 shows the relationship between the errors εr, εl and the number of iterations. As shown by the error curves, the convergence rate is fast. This is because the approximation strategy and the alternating optimization algorithm split the non-convex objective function (2) into three sub-problems that have closed-form solutions.
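The loop structure and stopping rule described above can be sketched as follows. The updates inside the loop are illustrative damped stand-ins, not the paper's closed-form sub-problem solutions, and `tol` plays the role of the stopping thresholds on εr and εl.

```python
import numpy as np

def alternating_minimize(S, tol=1e-3, max_iter=100):
    """Skeleton of an alternating scheme: update R with L fixed, then L
    with R fixed, until the relative changes eps_r and eps_l fall below
    tol. The damped updates below are illustrative stand-ins only."""
    R = np.ones_like(S)                # reflectance estimate
    L = np.full_like(S, S.mean())      # initial illumination guess
    for k in range(max_iter):
        R_new = 0.5 * R + 0.5 * S / np.maximum(L, 1e-6)
        L_new = 0.5 * L + 0.5 * S / np.maximum(R_new, 1e-6)
        # Relative-change errors, as plotted against iterations in Fig. 10.
        eps_r = np.linalg.norm(R_new - R) / max(np.linalg.norm(R), 1e-12)
        eps_l = np.linalg.norm(L_new - L) / max(np.linalg.norm(L), 1e-12)
        R, L = R_new, L_new
        if eps_r < tol and eps_l < tol:
            break
    return R, L, k + 1

S = np.random.default_rng(2).uniform(0.1, 1.0, (32, 32))  # synthetic observation
R, L, iters = alternating_minimize(S)
```

Because each sub-problem in the actual model has a closed-form solution, each pass of such a loop is cheap, which is what makes the overall convergence fast.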

The computational time is also tested since it is an important factor of an algorithm.

(a) default c1 = 0.01, c2 = 0.1 (b) c1 = 0.1, c2 = 0.1 (c) c1 = 1, c2 = 0.1 (d) c1 = 0.01, c2 = 0.01 (e) c1 = 0.01, c2 = 1

Figure 9. Examples of parameters impact. First row: estimated illumination images. Second row: estimated reflectance images.

Table 3. Average computational times in seconds

        ε1 = ε2 = 0.1    ε1 = ε2 = 0.01    ε1 = ε2 = 0.001
time    0.41             1.79              13.51

One hundred images of size 512 × 512 are processed with different stopping parameters

ε1 and ε2. The operation is performed in the HSV domain, and the average computational times are shown in Table 3. As can be seen, the smaller the stopping parameters are, the higher the computational time required, because more iterations are needed to reach a smaller error. Note that the experiments are performed with an unoptimized Matlab implementation; the computational time can be further reduced with a C implementation and more advanced computing devices.

Extension: Illumination Invariant Facial Images Finally, the proposed model is used to produce illumination invariant facial images. In this experiment, the reflectance is estimated by the new model on four images [15, 36] taken under different illumination conditions. As can be seen in Fig. 11, though the person’s face is affected by different lighting conditions, the estimated reflectance images are similar, since the illumination is well estimated and eliminated from the observed images. This experiment shows that the proposed model has the potential to be used in other computer vision applications.

5. Conclusion

In this paper, a new weighted variational model for simultaneous estimation of reflectance and illumination is introduced. First, we show the side effect of the logarithmic transformation. Based on this analysis, a weighted variational model is introduced to refine the regularization terms for better prior representation. An alternating minimization algorithm with an approximation strategy is adopted to solve the model. We present a comprehensive experimental analysis of the proposed model using both subjective and objective assessments. Compared with the other tested methods, the proposed one shows similar or even better results with a satisfactory convergence rate and computational time.

(a) observed facial images (b) corresponding reflectance images

Figure 11. Processing illumination invariant facial images.

6. Acknowledgements

We would like to thank the anonymous reviewers for their helpful feedback. This work was supported in part by the National Natural Science Foundation of China under Grants 61571382, 61571005, 81301278, 61172179 and 61103121, in part by the Guangdong Natural Science Foundation under Grant 2015A030313007, in part by the Fundamental Research Funds for the Central Universities under Grants 20720160075, 20720150169 and 20720150093, and in part by the Research Fund for the Doctoral Program of Higher Education under Grant 20120121120043.

References

[1] M. Bertalmío, V. Caselles, E. Provenzi, and A. Rizzi. Perceptual color correction through variational techniques. IEEE Trans. Image Process., 16(4):1058–1072, 2007.

[2] M. Bertalmío, V. Caselles, and E. Provenzi. Issues about Retinex theory and contrast enhancement. Int. J. Comput. Vision, 83(1):101–119, 2009.

[3] H. Chang, M. K. Ng, W. Wang, and T. Zeng. Retinex image enhancement via a learned dictionary. Optical Engineering, 54(1):013107, 2015.

[4] Z. Chen, T. Jiang, and Y. Tian. Quality assessment for comparing image enhancement algorithms. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3003–3010, Ohio, June 2014.

[5] B. Funt, F. Ciurea, and J. McCann. Retinex in Matlab. J. Electron. Imaging, 13(1):48–57, 2004.

[6] T. Goldstein and S. Osher. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci., 2(2):323–343, 2009.

[7] B. Gu, W. Li, M. Zhu, and M. Wang. Local edge-preserving multiscale decomposition for high dynamic range image tone mapping. IEEE Trans. Image Process., 22(1):70–79, 2013.

[8] B. K. P. Horn. Determining lightness from an image. Comput. Graphics Image Process., 3(4):277–299, 1974.

[9] D. J. Jobson, Z. U. Rahman, and G. A. Woodell. A multiscale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process., 6(7):965–976, 1997.

[10] C. Jung, T. Sun, and L. Jiao. Eye detection under varying illumination using the Retinex theory. Neurocomput., 113:130–137, 2013.

[11] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel. A variational framework for Retinex. Int. J. Comput. Vision, 52(1):7–23, 2003.

[12] X. Lan, H. Shen, L. Zhang, and Q. Yuan. A spatially adaptive Retinex variational model for the uneven intensity correction of remote sensing images. Signal Process., 101:19–34, 2014.

[13] E. H. Land. The Retinex theory of color vision. Sci. Amer., 1977.

[14] E. H. Land and J. J. McCann. Lightness and Retinex theory. J. Opt. Soc. Am., 61(1):1–11, 1971.

[15] K. C. Lee, J. Ho, and D. J. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intell., 27(5):684–698, 2005.

[16] H. Li, L. Zhang, and H. Shen. A perceptually inspired variational method for the uneven intensity correction of remote sensing images. IEEE Trans. Geosci. Remote Sens., 50(8):3053–3065, 2012.

[17] J. Liang and X. Zhang. Retinex by higher order total variation decomposition. J. Math. Imaging Vis., 52(3):345–355, 2015.

[18] Y. Liu, R. R. Martin, L. D. Domincis, and B. Li. Using Retinex for point selection in 3D shape registration. Pattern Recogn., 47(6):2126–2142, 2014.

[19] W. Ma, J. M. Morel, S. Osher, and A. Chien. An L1-based variational model for Retinex theory and its applications to medical images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 153–160, 2011.

[20] W. Ma and S. Osher. A TV Bregman iterative model of Retinex theory. Inverse Problems and Imaging, 6(4):697–708, 2012.

[21] J. McCann. Lessons learned from Mondrians applied to real images and color gamuts. In Proc. IS&T/SID 7th Color Imag. Conf., pages 1–8, 1999.

[22] A. Mittal, R. Soundararajan, and A. C. Bovik. Making a completely blind image quality analyzer. IEEE Signal Process. Lett., 20(3):209–212, 2013.

[23] J. M. Morel, A. B. Petro, and C. Sbert. A PDE formalization of Retinex theory. IEEE Trans. Image Process., 19(11):2825–2837, 2010.

[24] Y. Nam, D. Choi, and B. Song. Power-constrained contrast enhancement algorithm using multi-scale Retinex for OLED display. IEEE Trans. Image Process., 23(8):3308–3320, 2014.

[25] NASA. Retinex image processing. 2001. http://dragon.larc.nasa.gov/retinex/pao/news/.

[26] M. K. Ng and W. Wang. A total variation model for Retinex. SIAM J. Imaging Sci., 4(1):345–365, 2011.

[27] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul., 4(2):460–489, 2005.

[28] R. Palma-Amestoy, E. Provenzi, M. Bertalmío, and V. Caselles. A perceptually inspired variational framework for color enhancement. IEEE Trans. Pattern Anal. Mach. Intell., 31(3):458–474, 2009.

[29] S. Pan, X. An, and H. He. Adapting iterative Retinex computation for high-dynamic-range tone mapping. J. Electron. Imaging, 22(2):023006, 2013.

[30] E. Provenzi, L. D. Carli, A. Rizzi, and D. Marini. Mathematical definition and analysis of the Retinex algorithm. J. Opt. Soc. Amer. A, 22(12):2613–2621, 2005.

[31] W. F. Schreiber. Fundamentals of electronic imaging systems. Springer, 1986.

[32] L. Wang, L. Xiao, H. Liu, and Z. Wei. Variational Bayesian method for Retinex. IEEE Trans. Image Process., 23(8):3381–3396, 2014.

[33] S. Wang, J. Zheng, H. M. Hu, and B. Li. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process., 22(9):3538–3548, 2013.

[34] W. Wang and C. He. A variational model with barrier functionals for Retinex. SIAM J. Imaging Sci., 8(3):1955–1980, 2015.

[35] W. Wang and M. K. Ng. A nonlocal total variation model for image decomposition: illumination and reflectance. Numerical Mathematics: Theory, Methods and Applications, 7(3):334–355, 2014.

[36] Yale. The extended Yale face database B. 2001. http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html.

[37] X. Zhang and B. A. Wandell. A spatial extension of CIELAB for digital color image reproduction. In SID International Symposium Digest of Technical Papers, volume 27, pages 731–734, 1996.

[38] Q. Zhao, P. Tan, Q. Dai, L. Shen, E. Wu, and S. Lin. A closed-form solution to Retinex with nonlocal texture constraints. IEEE Trans. Pattern Anal. Mach. Intell., 34(7):1437–1444, 2012.

[39] D. Zosso, G. Tran, and S. J. Osher. Non-local Retinex: a unifying framework and beyond. SIAM J. Imaging Sci., 8(2):787–826, 2015.
