
Nonlinear Camera Response Functions and Image Deblurring: Theoretical Analysis and Practice

Yu-Wing Tai, Member, IEEE, Xiaogang Chen, Student Member, IEEE, Sunyeong Kim, Student Member, IEEE, Seon Joo Kim, Member, IEEE, Feng Li, Member, IEEE, Jie Yang, Jingyi Yu, Yasuyuki Matsushita, Senior Member, IEEE, and Michael S. Brown

Abstract—This paper investigates the role that nonlinear camera response functions (CRFs) have on image deblurring. We present a comprehensive study to analyze the effects of CRFs on motion deblurring. In particular, we show how nonlinear CRFs can cause a spatially invariant blur to behave as a spatially varying blur. We prove that such nonlinearity can cause large errors around edges when directly applying deconvolution to a motion blurred image without CRF correction. These errors are inevitable even with a known point spread function (PSF) and with state-of-the-art regularization-based deconvolution algorithms. In addition, we show how CRFs can adversely affect PSF estimation algorithms in the case of blind deconvolution. To help counter these effects, we introduce two methods to estimate the CRF directly from one or more blurred images when the PSF is known or unknown. Our experimental results on synthetic and real images validate our analysis and demonstrate the robustness and accuracy of our approaches.

Index Terms—Nonlinear camera response functions (CRFs), motion deblurring, CRF estimation


1 INTRODUCTION

IMAGE deblurring is a long-standing computer vision problem for which the goal is to recover a sharp image from a blurred image. Mathematically, the problem is formulated as

$$B = I \otimes K + n, \qquad (1)$$

where $B$ is the captured blurred image, $I$ is the latent image, $K$ is the point spread function (PSF), $\otimes$ is the convolution operator, and $n$ represents image noise.

One common assumption in most previous algorithms that is often overlooked is that the image $B$ in (1) responds linearly with respect to irradiance, i.e., the final image intensity is proportional to the amount of light received by the sensor. This assumption is valid when we capture an image in a RAW format. However, when capturing an image using a common consumer-level camera or a camera on a mobile device, there is a nonlinear camera response function (CRF) that maps the scene irradiance to intensity. CRFs vary among camera manufacturers and models due to design factors such as compressing the scene's dynamic range or simulating the conventional irradiance responses of film [15], [33]. Taking this nonlinear response into account, the imaging process of (1) can be written as

$$B = f(I \otimes K + n), \qquad (2)$$

where $f(\cdot)$ is the CRF. To remove the effect of the nonlinear CRF from image deblurring, the image $B$ first has to be linearized by the inverse CRF, $f^{-1}$. After deconvolution, the CRF is applied again to restore the original intensity, which leads to the following process:

$$I = f(f^{-1}(B) \otimes K^{-1}), \qquad (3)$$

where $K^{-1}$ is the inverse filter, which denotes the deconvolution process.
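To make the pipeline in (1)-(3) concrete, the following sketch simulates the forward model and the linearize-deconvolve-remap steps. It is a minimal illustration under stated assumptions, not the paper's implementation: a gamma curve stands in for a real CRF, the PSF is a 1D box kernel, and deconvolution is plain Fourier division.

```python
import numpy as np

# Hypothetical CRF and its inverse; a real camera curve would replace these.
def f(x):
    return np.clip(x, 0.0, 1.0) ** (1.0 / 2.2)   # irradiance -> intensity

def f_inv(x):
    return np.clip(x, 0.0, 1.0) ** 2.2           # intensity -> irradiance

def blur(signal, k):
    # Circular convolution via the FFT; k is zero-padded to the signal length.
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(k, signal.size)))

def deconv(signal, k):
    # Naive inverse filtering (B * K^-1); the small constant avoids division by zero.
    K = np.fft.fft(k, signal.size)
    return np.real(np.fft.ifft(np.fft.fft(signal) / (K + 1e-8)))

I_tilde = np.zeros(256); I_tilde[128:] = 1.0     # latent irradiance: a step edge
k = np.ones(15) / 15.0                           # spatially invariant 1D box PSF

B     = f(blur(I_tilde, k))                      # observation, Eq. (2), with n ~ 0
I_rec = f(deconv(f_inv(B), k))                   # correct pipeline, Eq. (3)
I_bad = deconv(B, k)                             # skipping linearization: ringing at the edge
```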

Contributions. This paper offers two contributions with regard to CRFs and their role in image deblurring. First, we provide a systematic analysis of the effect that a CRF has on the blurring process and show how a nonlinear CRF can make a spatially invariant blur behave as a spatially varying blur around edges. We prove that such nonlinearity can cause abrupt ringing artifacts around edges which are nonuniform; in addition, these artifacts are difficult to eliminate by regularization.


. Y.-W. Tai and S. Kim are with the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 305-701, South Korea. E-mail: [email protected], [email protected].
. X. Chen and J. Yang are with the Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 800 DongChuan Road, Shanghai 200240, China. E-mail: {cxg, jieyang}@sjtu.edu.cn.
. S.J. Kim is with the Department of Computer Science, Engineering College, Yonsei University, 134 Shinchon-Dong, Seodaemun-Gu, Seoul 120-749, South Korea. E-mail: [email protected].
. F. Li is with Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA 02139. E-mail: [email protected].
. J. Yu is with the Department of CIS, College of Engineering, University of Delaware, 410 Smith Hall, Newark, DE 19716. E-mail: [email protected].
. Y. Matsushita is with Microsoft Research Asia, T2-13173, No. 49, Danling St., Haidian District, Beijing 100080, China. E-mail: [email protected].
. M.S. Brown is with the School of Computing, National University of Singapore, Computing 1, 13 Computing Drive, Singapore 117417, Singapore. E-mail: [email protected].

Manuscript received 30 Sept. 2012; revised 7 Jan. 2013; accepted 1 Feb. 2013; published online 13 Feb. 2013. Recommended for acceptance by C.-K. Tang. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TPAMI-2012-09-0798. Digital Object Identifier no. 10.1109/TPAMI.2013.40.

Our analysis also shows that PSF estimation in various blind deconvolution algorithms is adversely affected by the nonlinear CRF.

Along with the theoretical analysis, we further introduce two algorithms to estimate the CRF from one or more images: The first method is based on a least-squares formulation when the PSF is known; the second method is formulated as a rank minimization problem when the PSF is unknown. Both approaches exploit the relationship between the blur profile about edges in a linearized image and the PSF. While our estimation methods cannot compete with well-defined radiometric calibration methods based on calibration patterns or multiple exposures, they are useful for producing a sufficiently accurate CRF for improving deblurring results.

Shorter versions of this work appeared in [22] and [7]. This paper unifies these two concurrent, independent works with more in-depth discussion and analysis of the role of the CRF in deblurring, along with additional experiments. In addition, a new section that analyzes the effectiveness of gamma curve correction in traditional deblurring methods and the limitations of the proposed algorithms is presented in Section 7.1.

The remainder of our paper is organized as follows: In Section 2, we review related work in motion deblurring and CRF estimation. Our theoretical analysis of the blur inconsistency introduced by a nonlinear CRF is presented in Section 3, followed by the analysis of the deconvolution artifacts and PSF estimation errors in Section 4. Our two algorithms, which estimate the nonlinear CRF from motion blurred image(s) with known and unknown PSF, are presented in Section 5. In Section 6, we present our experimental results. Section 7.1 provides additional discussion about the effectiveness of gamma curve correction. Finally, we conclude our work in Section 8.

2 RELATED WORK

Image deblurring is a classic problem with well-studied approaches, including Richardson-Lucy [35], [40] and Wiener deconvolution [48]. Recently, several different directions were introduced to enhance the performance of deblurring. These include methods that use image statistics [14], [29], [18], [6], sparsity or sharp edge prediction [30], [20], [8], [2], [50], [26], [45], multiple images or hybrid imaging systems [1], [39], [9], [51], [5], [42], [43], and new blur models that account for camera motion [44], [46], [47], [19], [17]. The vast majority of these methods, however, do not consider the nonlinearity in the imaging process due to CRFs.

The goal of radiometric calibration is to compute a CRF from a given set of images or a single image. The most accurate radiometric calibration algorithms use multiple images with different exposures [12], [4], [37], [13], [15], [21], [24], [34], [28]. Our work is more related to single-image-based radiometric calibration techniques [32], [33], [36], [38], [49]. In [32], the CRF is computed by observing the color distributions of local edge regions: The CRF is computed as the mapping that transforms nonlinear distributions of edge colors into linear distributions. This idea is further extended to deal with a single grayscale image using histograms of edge regions in [33]. In [49], a CRF is estimated by temporally mixing a step edge within a single camera exposure via the linear motion blur of a calibration pattern. Unlike [49], however, our method deals with uncontrolled blurred images.

To the best of our knowledge, only a handful of previous works consider CRFs in the context of image deblurring. Examples include the work by Fergus et al. [14], where images are first linearized by an inverse gamma correction with $\gamma = 2.2$. Real-world CRFs, however, are often drastically different from gamma curves [15], [31]. Another example by Lu et al. [34] involves reconstructing a high dynamic range image from a set of differently exposed and possibly motion blurred images. Work by Cho et al. [10] discussed nonlinear CRFs as a cause of artifacts in deblurring, but provided little insight into why such artifacts arise. Their work suggested avoiding the issue by using a precalibrated CRF or the camera's RAW output. While a precalibrated CRF is undoubtedly the optimal solution, the CRF may not always be available. Moreover, work by Chakrabarti et al. [3] suggests that a CRF may be scene dependent when the camera is in "auto mode." Furthermore, work by Kim et al. [23] showed that the CRF for a given camera may vary across camera picture styles (e.g., landscape, portrait, etc.).

Our work aims to provide more insight into the effect that CRFs have on the image deblurring process. In addition, we seek to provide a method to estimate a CRF from a blurred input image in the face of a missing or unreliable precalibrated CRF.

3 BLUR INCONSISTENCY DUE TO NONLINEAR CRF

We first study the role of the CRF in the image blurring process. We denote by $f$ the nonlinear CRF, $f^{-1}$ the inverse of $f$, $I$ the blur-free intensity image, and $\tilde{I}$ the irradiance image of $I$, i.e., $I = f(\tilde{I})$; $B$ is the observed motion blurred image with $B = f(\tilde{I} \otimes K + n)$. To focus our analysis, we follow previous work by assuming that the PSF is spatially invariant and the image noise is negligible (i.e., $n \approx 0$).

We analyze the blur inconsistency introduced by a nonlinear CRF by measuring

$$\epsilon = B - \hat{B}, \qquad (4)$$

where $\hat{B} = I \otimes K$ denotes the conventional intensity-based blurring process. Note that $I$ is used instead of $\tilde{I}$ in $\hat{B}$. Our goal here is to understand where the intensity-based convolution model introduces errors when CRF correction is excluded.

Claim 1. In uniform intensity regions, $\epsilon = 0$.

Proof. Since pixels within the blur kernel region have uniform intensity, we have $f^{-1}(I) \otimes K = f^{-1}(I) = f^{-1}(I \otimes K)$. Therefore,

$$B = f(f^{-1}(I) \otimes K) = f(f^{-1}(I \otimes K)) = \hat{B}, \qquad (5)$$

thus $\epsilon = 0$. □

Claim 1 applies to any CRF (both linear and nonlinear). This implies that a nonlinear CRF will not affect deblurring quality in uniform intensity regions.

Claim 2. If the blur kernel $K$ is small and the CRF $f$ is smooth, $\epsilon \approx 0$ in low-frequency regions.


Proof. Let $I = \bar{I} + \Delta I$ be a local patch covered by the blur kernel $K$. The term $\bar{I}$ is the average intensity within the patch and $\Delta I$ is the deviation from $\bar{I}$. In low-frequency regions, $\Delta I$ is small.

Next, we apply the first-order Taylor series expansion to $f^{-1}(I) \otimes K$:

$$f^{-1}(\bar{I} + \Delta I) \otimes K \approx f^{-1}(\bar{I}) \otimes K + ((f^{-1})'(\bar{I}) \cdot \Delta I) \otimes K, \qquad (6)$$

where $(f^{-1})'$ is the first-order derivative of $f^{-1}$. Since $\bar{I}$ is uniform, we have $f^{-1}(\bar{I}) \otimes K = f^{-1}(\bar{I})$, and $(f^{-1})'(\bar{I})$ is constant in the local neighborhood. Thus, (6) can be approximated as

$$f^{-1}(\bar{I}) + (f^{-1})'(\bar{I}) \cdot \Delta I \otimes K. \qquad (7)$$

Similarly, by using the first-order Taylor series expansion, we have

$$f^{-1}(I \otimes K) = f^{-1}(\bar{I} \otimes K + \Delta I \otimes K) \approx f^{-1}(\bar{I} \otimes K) + (f^{-1})'(\bar{I} \otimes K) \cdot (\Delta I \otimes K) \qquad (8)$$

$$= f^{-1}(\bar{I}) + (f^{-1})'(\bar{I}) \cdot \Delta I \otimes K. \qquad (9)$$

Therefore,

$$B = f(f^{-1}(I) \otimes K) \approx f(f^{-1}(I \otimes K)) = \hat{B}, \qquad (10)$$

i.e., $\epsilon \approx 0$. □

Claim 2 holds only for small kernels. When the kernel size is large, for example $80 \times 80$, the first-order Taylor series expansion is not accurate. To illustrate this property, we simulate a 1D smooth signal in Fig. 1a. Uniform motion blur kernels of different sizes (10, 20, 30, and 80 pixels) are applied to the 1D signal. We compute $B$ and $\hat{B}$ with $f^{-1}$ being a gamma curve ($\gamma = 2.2$). The plots of $\epsilon$ along the 1D signal for the different-sized $K$s are shown in Fig. 1a. Note that $\epsilon \approx 0$ for the first three kernels.

Claim 3. $\epsilon$ can be large in high-frequency, high-contrast regions.

Proof. Let us consider a sharp edge in $I$ represented by a step edge function

$$I(x) = \begin{cases} 0, & \text{if } x < 0.5, \\ 1, & \text{otherwise.} \end{cases} \qquad (11)$$

Since $f^{-1}$ has boundary values $f^{-1}(0) = 0$ and $f^{-1}(1) = 1$ [15], we have $f^{-1}(I) = I$. Therefore,

$$B = f(f^{-1}(I) \otimes K) = f(I \otimes K) = f(\hat{B}). \qquad (12)$$

Hence, $\epsilon = f(\hat{B}) - \hat{B}$. □

In this example, $\epsilon(x)$ measures how $f(x)$ deviates from the linear function $y(x) = x$. In other words, $\epsilon(x) \to 0$ if $f(x) \to x$. However, practical CRFs $f$ are highly nonlinear [15], [23].

To validate Claim 3, we simulate a 1D signal of sharp edges with different gradient magnitudes and measure the blur inconsistency $\epsilon$, as shown in Fig. 1b. The first row is the original clear signal. The second row is the blurred signal $B$, where the blur kernel size is 20 pixels. The third row shows $\epsilon$, which is nonzero around edge regions and increases as the gradient magnitudes get larger. To further analyze the effect of kernel size on sharp signals, we plot $\epsilon$ for four different-sized kernels in Fig. 1c. Notice that the area of nonzero $\epsilon$ increases with kernel size, since the blurry edge gets larger. Also, the magnitude of $\epsilon$ depends on the edge magnitude, but it does not depend on the kernel size.

Theorem 1. Let $I_{min}$ and $I_{max}$ be the local minimum and maximum pixel intensities in a local neighborhood covered by kernel $K$ in image $I$. If $f^{-1}$ is convex, then the blur inconsistency is bounded by $0 \le \epsilon \le I_{max} - I_{min}$.

Proof. Consider $f^{-1}(I) \otimes K$ as a convex combination of pixels from $f^{-1}(I)$, since $K$ contains only zero or positive values. If $f^{-1}$ is convex, we can use Jensen's inequality to obtain

$$f^{-1}(I) \otimes K \ge f^{-1}(I \otimes K). \qquad (13)$$

Further, since the CRF $f$ is monotonically increasing [12], [37], we have

$$B = f(f^{-1}(I) \otimes K) \ge f(f^{-1}(I \otimes K)) = \hat{B}, \qquad (14)$$

i.e., $\epsilon \ge 0$.

Next, we derive the upper bound of $\epsilon$. Since $I \otimes K \le I_{max}$ and $f^{-1}$ is monotonically increasing (as $f^{-1}$ is the inverse of $f$ and $f$ is monotonically increasing), we have $f^{-1}(I) \otimes K \le f^{-1}(I_{max})$. Therefore,

$$B = f(f^{-1}(I) \otimes K) \le f(f^{-1}(I_{max})) = I_{max}. \qquad (15)$$


Fig. 1. Illustration of blur inconsistency $\epsilon$. A gamma curve with $\gamma = 2.2$ is used to simulate $f^{-1}$ across all subfigures. (a) $\epsilon$ is computed along a 1D smooth signal with four different uniform motion blur kernels of size 10, 20, 30, and 80 pixels. (b) The first two rows show the latent pattern and the corresponding irradiance-based motion blurred pattern, respectively. The bottom row shows the measured $\epsilon$, which varies across different edge strengths. (c) Plots of $\epsilon$ in (b) for different kernel sizes.

Likewise, we can also derive $\hat{B} = I \otimes K \ge I_{min}$. Combining this with (14) and (15), we have $I_{min} \le \hat{B} \le B \le I_{max}$. Therefore,

$$0 \le \epsilon \le I_{max} - \hat{B} \le I_{max} - I_{min}. \qquad (16)$$ □

Theorem 1 explains the phenomenon in Fig. 1: When the gradient magnitude of an edge is large, the upper bound of $\epsilon$ will be large. On the other hand, in low-contrast regions, the upper bound $I_{max} - I_{min}$ is small, and so is $\epsilon$. Thus, $B$ can be well approximated by $\hat{B}$.

It is important to note that Theorem 1 assumes a convex inverse CRF $f^{-1}$. This property has been observed in many previous results. For example, the widely used gamma curves $f^{-1}(x) = x^{\gamma}$, $\gamma > 1$, are convex. To better illustrate the convexity of $f^{-1}$ (or equally the concavity of $f$), we plot the 188 real camera CRF curves ($f$) collected in [15] in Fig. 2. We compute the discrete second-order derivatives of the 188 real camera CRF curves. Our experiment shows that the majority (84.4 percent) of sample points are negative, and therefore the inverse CRF $f^{-1}$ is largely convex.

Combining Claims 1, 2, and 3 and Theorem 1, we see that the blur inconsistency introduced by a nonlinear CRF mainly appears in high-contrast, high-frequency regions of the original signal. These regions, however, are exactly the regions that most often cause ringing artifacts in image deblurring. Next, we analyze how the CRF affects the quality of image deblurring.
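The claims above are easy to check numerically. The sketch below is a toy verification assuming a gamma CRF ($f(x) = x^{1/2.2}$, so $f^{-1}(x) = x^{2.2}$ is convex); it measures $\epsilon$ on a step edge and asserts the bound of Theorem 1. The kernel size and intensity values are arbitrary choices.

```python
import numpy as np

f     = lambda x: x ** (1.0 / 2.2)       # assumed CRF
f_inv = lambda x: x ** 2.2               # convex inverse CRF

I = np.concatenate([np.full(50, 0.2), np.full(50, 0.9)])  # step edge: I_min=0.2, I_max=0.9
k = np.ones(11) / 11.0
conv = lambda s: np.convolve(s, k, mode='same')

B     = f(conv(f_inv(I)))                # intensity observed through the CRF
B_hat = conv(I)                          # conventional intensity-domain blur
eps   = B - B_hat                        # blur inconsistency, Eq. (4)

assert eps.min() >= -1e-9                # eps >= 0 (Jensen's inequality + monotone f)
assert eps.max() <= (0.9 - 0.2) + 1e-9   # eps <= I_max - I_min (Theorem 1)
print(eps.max())                         # the largest values appear where contrast is highest
```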

4 EFFECTS OF CRF IN IMAGE DEBLURRING

4.1 Effects on PSF and Image Deconvolution

We begin our analysis using the synthetic examples shown in Fig. 3. An image with three different intensity levels (black, gray, and white) is blurred with 1D motion PSFs with uniform speed (kernel A) and nonuniform speed (kernel B) to generate the observations in Figs. 3a and 3b and Figs. 3c and 3d, respectively. A real CRF¹ is used to nonlinearly remap the image intensity in Figs. 3b and 3d after convolution.

In this example, we can observe how the blur profiles of an image become spatially varying after the nonlinear remapping of image intensities, even though the underlying motion PSF is spatially invariant. With a linear CRF, the blur profile from the black to gray region (first step edge) is the same as the blur profile from the gray to white region (second step edge), as shown in Figs. 3a and 3c. However, with a nonlinear CRF, the two blur profiles become different, as shown in Figs. 3b and 3d. This is because the slope and the curvature of the CRF are different for different ranges of intensities. Fig. 4 helps to illustrate this effect.

To examine the effect that these cases have on image deblurring, we performed nonblind deconvolution on the synthetic images, as shown in the second and third rows of Fig. 3. The second row shows the results of Wiener filtering [48], while the third row shows the results of [30] with sparsity regularization. Without any regularization, the deblurring results for both the linear and nonlinear CRF contain ringing artifacts due to the zero components of the PSF in the frequency domain. However, the magnitude of these ringing artifacts in the nonlinear CRF result is significantly larger than in the linear CRF case. The use of image regularization [30] helps to reduce the ringing artifacts, but the regularization is less effective in the case of a nonlinear CRF, even with a very large regularization weight.

In addition, if we assume $I$ is a step edge and $f$ is a gamma function, $f(x) = x^{\gamma}$ with $\gamma < 1$, we can rewrite $B = f(\hat{B})$ (see (12)) using a Taylor series expansion:

$$B = \hat{B}^{\gamma} = \hat{B} + P, \qquad (17)$$

where $P = \sum_{\kappa=1}^{\infty} \frac{(\gamma - 1)^{\kappa}}{\kappa!} \hat{B} (\ln \hat{B})^{\kappa}$. An important property of the function $x(\ln x)^{\kappa}$ is that it approaches zero as $x \to 0^{+}$ or $x \to 1$. Therefore, $P$ has nonzero values only in the blurred edge regions. Assuming an invertible filter $K$ that has nonzero entries in the frequency domain, we can represent the deconvolution of $B$ with $K$ by $B \otimes K^{-1}$, and thus

$$B \otimes K^{-1} = \hat{B} \otimes K^{-1} + P \otimes K^{-1}, \qquad (18)$$

where $\hat{B} \otimes K^{-1} = I$ (see (4)). Thus, $P \otimes K^{-1}$ is the deconvolution artifact introduced by the blur inconsistency $\epsilon$. Notice that even though the blur inconsistency appears only within the edge areas, the deconvolution process propagates the errors from edge regions to smooth regions. Fig. 5 illustrates the effect of the decomposition in (18). Again, the CRF plays a significant role in the quality of image deblurring.
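A small sketch of this decomposition, under the same gamma-curve assumption as before, shows that $P$ is confined to the blurred edge while its deconvolution $P \otimes K^{-1}$ rings across the whole signal:

```python
import numpy as np

gamma = 1.0 / 2.2
I = np.zeros(256); I[128:] = 1.0                  # latent step edge
k = np.ones(15) / 15.0
K = np.fft.fft(k, I.size)

B_hat = np.real(np.fft.ifft(np.fft.fft(I) * K))   # intensity-domain blur
B = np.clip(B_hat, 0.0, 1.0) ** gamma             # Eq. (17): B = B_hat^gamma = B_hat + P
P = B - B_hat                                     # nonzero only across the blurred edge

deconv = lambda s: np.real(np.fft.ifft(np.fft.fft(s) / (K + 1e-8)))
artifact  = deconv(P)                             # P * K^-1 term of Eq. (18): rings everywhere
recovered = deconv(B)                             # equals I + artifact, up to numerical error
```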

4.2 Effects on PSF Estimation

We also analyze the effects of a CRF on PSF estimation by comparing the estimated PSFs between a linear and a nonlinear CRF. We show the estimated PSFs from Fergus et al. [14], Shan et al. [41], Xu and Jia [50], and Cho et al. [11] in Fig. 6. The raw images from [11] were used for our testing purposes.

Ideally, the CRF should only affect the relative intensity of a PSF; the shape of a PSF should remain the same, since the shape of the PSF describes the trajectory of the motion causing the blur, and the intensity of the PSF describes the relative speed of the motion. However, as we can observe in Fig. 6, the estimated PSFs are noticeably different, especially for the results from [14] and [41]. This is because both [14] and [41] use alternating optimization for their nonblind deconvolution. As previously discussed, the nonlinear CRF causes errors during the deconvolution process, which in turn propagates the errors to the estimated PSF in an alternating optimization framework. The results from [50] and [11] contain fewer errors because they separate the process of PSF estimation and image deconvolution. However, the shape of the estimated PSF is still different. The method in [50] requires edge selection and sharpening, and the method in [11] requires edge profiles to be aligned. Since the nonlinear CRF alters the edge profiles in the blurred image, their estimated PSFs also contain errors inherited from nonuniform edge profiles. Note that small errors in the PSF estimation can cause significant artifacts in the subsequent deconvolution.

Fig. 2. The left panel shows 188 CRF curves of real cameras from DoRF [15]. Nearly all curves appear concave.

1. The precalibrated CRF of a Canon G5 camera in standard mode is used.

5 CRF ESTIMATION FROM A BLURRED IMAGE

In this section, we describe methods to estimate the CRF from one or more blurred images. To begin, we examine the relationship between the PSF and edge profiles in a blurred image under a linear CRF assumption. We then describe a method to estimate the CRF based on this relationship using least-squares fitting, assuming a known PSF. This method is then converted to a robust estimation of an unknown PSF and a CRF using rank minimization.

5.1 PSF and Edge Profiles

We begin with the observation that the shape of a blur profile under a linear CRF resembles the shape of the cumulative distribution of the 1D PSF, as shown in Figs. 3a and 3c. Our analysis is similar in fashion to that in [18], which used alpha mattes of a blurred object to estimate the PSF. Our approach, however, works directly from the image intensities and requires no matting or object extraction. Instead, we only need to identify step edges with homogeneous areas on both sides. For such edges, the shape of the blur profile is equal to the shape of the cumulative distribution of the 1D PSF.

Fig. 4. This figure illustrates the problem when image intensities are remapped according to different parts of the CRF. Two identical edge profiles at different intensity ranges are remapped to have different shapes.

Fig. 3. This figure shows a simple step edge image that has been blurred by two different PSF kernels: a 1D motion PSF with uniform speed (kernel A) and a 1D PSF with nonuniform speed (kernel B). These blurred images are transformed with a linear CRF (a) and (c), and with a nonlinear CRF (b) and (d). The 1D PSF and the 1D slice of intensity values are also shown. (e)-(h) Nonblind deconvolution results of (a)-(d) using a Wiener filter. (i)-(l) Nonblind deconvolution results of (a)-(d) using an iterative reweighting method [30] with sparsity regularization.

Fig. 5. Given an invertible filter, the ringing artifacts of a step edge are caused by the deconvolution of the blur inconsistency $P$ introduced by the nonlinear CRF in (18).

Theorem 2. Under a 1D motion blur, if the CRF is linear and the original edge in the clear image is a step edge with uniform intensity on both sides, the shape of the blur profile is equal to the shape of the cumulative distribution of the 1D PSF.

Proof. Consider a simple case where the original step edge has values $[0, \ldots, 0, 1, \ldots, 1]$ and the values of the PSF are $[k_1, k_2, \ldots, k_M]$. If the numbers of 0s and 1s are both larger than $M$, the blur profile after the motion blur is equal to $[k_1, k_1 + k_2, \ldots, \sum_{i=1}^{M} k_i]$, which is the cumulative distribution of the 1D PSF.

For any edge with intensities $[I_1, I_2]$, the value of the blur profile at $m \in [1, \ldots, M]$ after the blur is equal to $I_1 + \sum_{i=1}^{m} k_i (I_2 - I_1)$. □

Theorem 2 is valid under the assumptions that the motion blurred image does not contain any noise or quantization error.

In the case of a 2D PSF, Theorem 2 still holds when the edge is a straight line. In this case, the 1D PSF becomes the marginal probability of the 2D PSF projected onto the line perpendicular to the edge direction, as illustrated in Fig. 7.
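Theorem 2 is easy to verify numerically. The following sketch, with an arbitrary hypothetical 1D PSF, shows the blur profile of a step edge matching the cumulative sum of the PSF:

```python
import numpy as np

psf  = np.array([0.10, 0.30, 0.40, 0.15, 0.05])     # hypothetical 1D PSF, sums to 1
edge = np.concatenate([np.zeros(20), np.ones(20)])  # ideal step edge

blurred = np.convolve(edge, psf, mode='same')
M = psf.size
profile = blurred[20 - M // 2 : 20 - M // 2 + M]    # transition region of the blur profile

print(profile)             # [0.1, 0.4, 0.8, 0.95, 1.0]
print(np.cumsum(psf))      # identical: the profile is the CDF of the PSF
```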

5.2 CRF Approximation with a Known PSF

Consider that the shapes of blurred edge profiles are equal to the shapes of the cumulative distribution of the PSF² if the CRF is linear. Given that we know the PSF, we can compute the CRF as follows:

$$\arg\min_{g(\cdot)} \sum_{j=1}^{E_1}\sum_{m=1}^{M} w_j \left( \frac{g(I_j(m)) - l_j}{w_j} - \sum_{i=1}^{m} k_i \right)^{2} + \sum_{j=1}^{E_2}\sum_{m=1}^{M} w_j \left( \frac{g(I_j(m)) - l_j}{w_j} - \sum_{i=m}^{M} k_i \right)^{2}, \qquad (19)$$

where $g(\cdot) = f^{-1}(\cdot)$ is the inverse CRF, and $E_1$ and $E_2$ are the numbers of selected blurred edge profiles from dark to bright regions and from bright to dark regions, respectively.

The variables $l_j$ and $w_j$ are the minimum intensity value and the intensity range (the difference between the maximum and minimum intensity values) of the blurred edge profile after applying the inverse CRF. Blur profiles that span a wider intensity range are weighted more because their wider dynamic range covers a larger portion of $g(\cdot)$ and therefore provides more information about the shape of $g(\cdot)$. We filter out edges with $w_j < 0.1$ since, according to our blur inconsistency analysis in Section 3, these low-contrast edges do not provide much information for CRF estimation.

We follow the method in [49] and model the inverse CRF $g(\cdot)$ using a polynomial of degree $d = 5$ with coefficients $a_p$, i.e., $g(I) = \sum_{p=0}^{d} a_p I^p$. The optimization is subject to the boundary constraints $g(0) = 0$ and $g(1) = 1$, and a monotonicity constraint that enforces the first derivative of $g(\cdot)$ to be nonnegative. Our goal is to find the coefficients $a_p$ such that the following objective function is minimized:

$$\begin{aligned} \arg\min_{a_p} \; & \sum_{j=1}^{E_1}\sum_{m=1}^{M} w_j \left( \frac{\sum_{p=0}^{d} a_p I_j(m)^p - l_j}{w_j} - \sum_{i=1}^{m} k_i \right)^{2} + \sum_{j=1}^{E_2}\sum_{m=1}^{M} w_j \left( \frac{\sum_{p=0}^{d} a_p I_j(m)^p - l_j}{w_j} - \sum_{i=m}^{M} k_i \right)^{2} \\ & + \lambda_1 \left( a_0^2 + \Big( \sum_{p=0}^{d} a_p - 1 \Big)^{2} \right) + \lambda_2 \sum_{r=1}^{L} H\!\left( \sum_{p=0}^{d} a_p \left( \Big(\frac{r-1}{L}\Big)^{p} - \Big(\frac{r}{L}\Big)^{p} \right) \right), \end{aligned} \qquad (20)$$

where $H$ is the Heaviside step function enforcing the monotonicity constraint, i.e., $H = 1$ if $g(r) < g(r-1)$ and $H = 0$ otherwise, and $L$ is the maximum intensity level, for example, 255 for 8-bit color depth. The weights are fixed to $\lambda_1 = 100$ and $\lambda_2 = 10$, which control the boundary constraint and the monotonicity constraint, respectively. The solution of (20) can be obtained by the simplex search method of Lagarias et al. [27].³


Fig. 7. This figure shows how the edge profile of a step edge in a motion blurred image is equal to the cumulative distribution function of the marginal probability of the motion blur kernel along the direction perpendicular to the edge.

Fig. 6. This figure shows examples of PSF kernels from several images with a linear CRF (left columns) or a nonlinear CRF (right columns). PSF kernels in the same row are from the same image with different estimation methods: Fergus et al. [14], Shan et al. [41], Xu and Jia [50], and Cho et al. [11].

2. For simplicity, we assume that the PSF is 1D and well aligned with the edge orientation. If the PSF is 2D, we can compute the marginal probability of the PSF.
3. The fminsearch function in Matlab.
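The sketch below illustrates the structure of (20) under simplifying assumptions: only dark-to-bright profiles are used ($E_2 = 0$), and scipy's Nelder-Mead simplex search stands in for Matlab's fminsearch. The names psf_1d, profiles, l, and w are assumed inputs (the known 1D PSF and the selected edge profiles, resampled to the PSF length, with their $l_j$, $w_j$ values); they are not part of the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

d, L = 5, 255
lam1, lam2 = 100.0, 10.0
cdf = np.cumsum(psf_1d)                   # assumed known 1D PSF -> target profile shape

def g(a, x):                              # polynomial inverse CRF: g(I) = sum_p a_p I^p
    return sum(a[p] * x ** p for p in range(d + 1))

def objective(a):
    # Data term of Eq. (20) for dark-to-bright profiles (E2 = 0 assumed).
    data = sum(w[j] * np.sum(((g(a, profiles[j]) - l[j]) / w[j] - cdf) ** 2)
               for j in range(len(profiles)))
    # Boundary constraints g(0) = 0 and g(1) = 1.
    boundary = lam1 * (g(a, 0.0) ** 2 + (g(a, 1.0) - 1.0) ** 2)
    # Monotonicity: count the intensity levels where g decreases (Heaviside penalty).
    r = np.arange(1, L + 1) / L
    mono = lam2 * np.sum(g(a, r - 1.0 / L) > g(a, r))
    return data + boundary + mono

a0 = np.zeros(d + 1); a0[1] = 1.0         # start from the identity (linear) response
res = minimize(objective, a0, method='Nelder-Mead')   # simplex search, as in [27]
```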

5.3 CRF Estimation with Unknown PSF

Using the cumulative distribution of the PSF, we can reliably estimate the CRF under ideal conditions. However, the PSF is usually unknown in practice. As we studied in Section 4.2, a nonlinear CRF affects the accuracy of the PSF estimation, which in turn affects our CRF estimation described in Section 5.2. In this section, we introduce a CRF estimation method that does not explicitly compute the PSF.

As previously discussed, we want to find an inverse response function $g(\cdot)$ that makes the blurred edge profiles have the same shape after applying the inverse CRF. This can be achieved by minimizing the distance between each blur profile and the average blur profile:

$$\arg\min_{g(\cdot)} \sum_{j=1}^{E_1}\sum_{m=1}^{M} w_j \left( \frac{g(I_j(m)) - l_j}{w_j} - A_1(m) \right)^{2} + \sum_{j=1}^{E_2}\sum_{m=1}^{M} w_j \left( \frac{g(I_j(m)) - l_j}{w_j} - A_2(m) \right)^{2}, \qquad (21)$$

where $A_1(m) = \sum_{k=1}^{E_1} \frac{w_k}{W} g(I_k(m))$ is the weighted average blur profile, and $W = \sum_{l=1}^{E_1} w_l$ is a normalization factor.

Using the constraint in (21), we can compute the CRF; however, this approach is unreliable, not only because the constraint in (21) is weaker than the constraint in (19), but also because least-squares fitting is inherently sensitive to outliers. To avoid these problems, we generalize our method to a robust estimation via rank minimization.

Recall that the edge profiles should have the same shape after applying the inverse CRF. This means that if the CRF is linear, the edge profiles are linearly dependent on one another, and hence the observation matrix of edge profiles forms a rank-1 matrix for each group of edge profiles:

$$g(\mathbf{M}) = \begin{pmatrix} g(I_1(1)) - l_1 & \cdots & g(I_1(M)) - l_1 \\ \vdots & \ddots & \vdots \\ g(I_{E_1}(1)) - l_{E_1} & \cdots & g(I_{E_1}(M)) - l_{E_1} \end{pmatrix}, \qquad (22)$$

where $M$ is the length of the edge profiles and $E_1$ is the number of observed edge profiles grouped according to the orientation of the edges. We now transform the problem into a rank minimization problem that finds a function $g(\cdot)$ minimizing the rank of the observation matrix $\mathbf{M}$ of edge profiles. Since the CRF is the same for the whole image, we define our objective function for rank minimization as follows:

$$\arg\min_{g(\cdot)} \sum_{k=1}^{K} w_k \, \mathrm{rank}(g(\mathbf{M}_k)), \qquad (23)$$

where $K$ is the total number of observation matrices (the total number of groups of edge profiles) and $w_k$ is a weight given to each observation matrix. We assign a larger weight to observation matrices that contain more edge profiles. Note that (23) is also applicable to multiple images, since the observation matrix is built individually for each edge orientation and for each input image.

We evaluate the rank of the matrix $\mathbf{M}$ by measuring the ratios of its singular values:

$$\arg\min_{g(\cdot)} \sum_{k=1}^{K} w_k \sum_{j=2}^{E_k} \frac{\sigma_j^k}{\sigma_1^k}, \qquad (24)$$

where $\sigma_j^k$ are the singular values of $g(\mathbf{M}_k)$. If the observation matrix is rank-1, only the first singular value is nonzero, which minimizes (24). In our experiments, we found that (24) can be simplified to measuring only the ratio of the first two singular values:

$$\arg\min_{g(\cdot)} \sum_{k=1}^{K} w_k \frac{\sigma_2^k}{\sigma_1^k}. \qquad (25)$$

Combining the monotonicity constraint, the boundary constraint, and the polynomial function constraint from (20), we obtain our final objective function:

$$\arg\min_{a_p} \sum_{k=1}^{K} w_k \frac{\sigma_2^k}{\sigma_1^k} + \lambda_1 \left( a_0^2 + \Big( \sum_{p=0}^{d} a_p - 1 \Big)^{2} \right) + \lambda_2 \sum_{r=1}^{L} H\!\left( \sum_{p=0}^{d} a_p \left( \Big(\frac{r-1}{L}\Big)^{p} - \Big(\frac{r}{L}\Big)^{p} \right) \right). \qquad (26)$$

Equation (26) can be solved effectively using nonlinear least-squares fitting.⁴
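A minimal sketch of the data term of (26) follows; it reuses the polynomial model g(a, x) from the known-PSF sketch above, and boundary_and_mono is a hypothetical helper standing in for the $\lambda_1$/$\lambda_2$ penalties of (20). Subtracting each row's first sample is an assumed stand-in for subtracting $l_j$ on dark-to-bright profiles.

```python
import numpy as np

def rank_objective(a, Mk_list, wk):
    # Mk_list: one matrix of aligned edge-profile intensities per orientation group.
    cost = 0.0
    for w, Mk in zip(wk, Mk_list):
        G = g(a, Mk)                 # linearize with the candidate inverse CRF
        G = G - G[:, :1]             # subtract each row's first sample (~ l_j)
        s = np.linalg.svd(G, compute_uv=False)
        cost += w * s[1] / s[0]      # Eq. (25): rank-1 matrices drive this ratio to zero
    return cost + boundary_and_mono(a)   # hypothetical helper: penalties of Eq. (20)
```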

5.4 Implementation Issues for 2D PSF

Our two proposed methods for CRF estimation are based on the 1D blur profile analysis. Since, in practice, PSFs are 2D in nature, we need to group image edges with similar orientations and select valid edge samples for building the observation matrix. We use the method in [11] to select the blurred edges. The work in [11] filtered edge candidates by keeping only high-contrast straight-line edges. A user parameter controls the minimum length of straight-line edges, which depends on the size of the PSF. We refer to [11] for details on selecting high-contrast straight-line edges in blurry images.

After selecting valid edge profiles, they are grouped according to edge orientation. Fig. 9 shows examples of the selected edges and the grouping results. The selected edges were grouped by selecting partitioning thresholds such that the orientation variation within each group is less than 2 degrees. To increase the number of candidate edges within each group, edges in opposite directions were grouped together. When building the observation matrix in (22), we reverse the CDF of the opposite edges by computing $\{1 - [g(I_j(1)) - l_j], \ldots, 1 - [g(I_j(M)) - l_j]\}$. To make the selected edge profiles more robust against noise and outliers, we apply a 1D directional filter in the direction orthogonal to the edge orientation to obtain a local average of edge profiles. We found that this directional filtering improved the robustness of our edge selection algorithm even when the candidate edges were slightly curved.

After edge selection and grouping, the edge profiles are aligned to build the observation matrix. In [11], edge profiles were aligned according to the center of mass. In our case, however, due to the nonlinear CRF effects, alignment based on the center of mass is not reliable. We instead align the two end points of the edges. In cases where the projected 1D PSF contains discontinuities, the starting and end points of the discontinuities are also considered in the alignment. Since the amount of blur in different directions varies according to the shape of the PSF, we give a larger weight to the directions with longer edge profiles, as they provide more information about the CRF than sharp edges. When dealing with multiple images, the same weighting scheme applies, with a larger weight given to blurrier edge profiles.

4. The lsqnonlin function in Matlab.
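The following sketch shows one way such an observation matrix could be assembled for a single orientation group, including the reversal of opposite-direction edges; profiles, opposite, and the candidate inverse CRF g_a are assumed inputs, and subtracting the row minimum stands in for $l_j$.

```python
import numpy as np

def observation_matrix(g_a, profiles, opposite, M):
    # profiles: length-M intensity rows for one orientation group;
    # opposite[j] marks edges running in the reverse direction.
    rows = []
    for Ij, rev in zip(profiles, opposite):
        p = g_a(np.asarray(Ij[:M], dtype=float))   # linearize with the candidate inverse CRF
        p = p - p.min()                            # subtract l_j
        if rev:
            p = 1.0 - p                            # reverse the CDF of opposite edges
        rows.append(p)
    return np.vstack(rows)                         # one row per edge profile, as in (22)
```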

Our rank minimization using nonlinear least-squares fitting is sensitive to the initial estimate of the CRF. In our implementation, we use the average CRF profile from the database of response functions (DoRF) created by Grossberg and Nayar [16] as our initial guess. The DoRF database contains 201 measured functions, which allows us to obtain a good local minimum in practice.

6 EXPERIMENTAL RESULTS

In this section, we evaluate the performance of our CRF estimation methods using both synthetic and real examples. In the synthetic examples, we test our algorithm under different conditions to better understand the behavior and the limitations of our method. In the real examples, we evaluate our method by comparing the amount of ringing artifacts with and without CRF intensity correction to demonstrate the effectiveness of our algorithm and the importance of the CRF in the context of image deblurring.

6.1 Synthetic Examples

Fig. 8 shows the performance of our CRF estimation under different conditions. We first test the effect of the intensity range of the blur profiles in Fig. 8a. In a real application, it is uncommon for all edges to have a similar intensity range. These intensity range variations can potentially affect the estimated CRF, as low dynamic range edges usually contain larger quantization errors. As shown in Fig. 8f, our method is reasonably robust to these intensity range variations.

Our method assumes that the original edges are step edges. In practice, there may be a color mixing effect even for an edge that is considered sharp. In our experiments, we find that the performance of our approach degrades quickly if the step edge assumption is violated. However, as shown in Fig. 8g, our approach is still effective if the color mixing effect is less than 3 pixels wide for a PSF of size 15. The robustness of our method in the presence of edge color mixing depends on the size of the PSF, with our approach being more effective for larger PSFs.

Noise is inevitable even when the ISO of a camera is low. We test the robustness of our method against image noise in Fig. 8c. We add Gaussian noise according to (2), where the noise is added after the convolution process but before the CRF mapping. As can be observed in Fig. 8h, the noise affects the accuracy of our method. In fact, under the model in (2), we can observe that the noise also captures some characteristics of the CRF: The magnitude of the noise in darker regions is larger than in brighter regions. Such information may even be useful and could be combined into our framework to improve the performance of our algorithm, as discussed in [36].

We test the sensitivity of our algorithm when the union of the blur profiles does not cover the whole range of the CRF. As can be seen in Fig. 8i, our method still gives reasonable estimates. This is because the polynomial and monotonicity constraints assist in maintaining the shape of the CRF. Note that having a limited range of intensities will degrade the performance of all radiometric calibration methods.

Finally, we show a result where we use all input images to estimate the CRF. As expected, more input images give a more accurate CRF estimation.


Fig. 8. We test the robustness of our CRF estimation method under different configurations. (a) Blur profiles with different intensity ranges. (b) Edges in the original image contain mixed intensities (edge width equal to 3 pixels). (c) Gaussian noise ($\sigma = 0.02$) is added according to (2). (d) The union of the intensity ranges of all blur profiles does not cover the whole CRF curve. (e) Blur profiles of (a), (b), (c), and (d). The original image (black lines), the blurred image with a linear CRF (red lines), and the blurred image with a nonlinear CRF (blue lines) are shown on top of each figure in (a)-(d). (f)-(i) The corresponding estimated inverse CRFs using our methods with (a)-(d). (j) The corresponding estimated inverse CRF with the multiple images in (e).

Fig. 9. Our selected edges for CRF estimation. Edges are grouped according to the edge orientation (color coded). The estimated CRF and the deblurred images are shown in Fig. 12.

The synthetic experiments show that the performance of our method depends not only on the quantity of the observed blur profiles but also on their quality. For instance, a blur profile with a large motion covering a wide intensity range is better than a combination of blur profiles that each cover only a portion of the intensity range. When combining the observations from multiple images, our estimated CRF is more accurate, as the mutual information from different images increases the robustness of our algorithm. The rank minimization also makes our algorithm less sensitive to outliers.

6.2 Real Examples

6.2.1 With Known PSF

To test our algorithm with a known PSF on real-world examples, we implemented the method from [51], which uses a blurry/sharp image pair to estimate a reliable motion PSF. Although we could use previous single-image methods, for example, [14], [41], [50], [11], to get the PSF, we found that the PSF from [51] is more reliable, especially when the CRF is nonlinear (see Section 4.2). We validated our approach on three different camera models: Canon 60D, Canon 400D, and Nikon D3100.

Figs. 10a, 10b, and 10c show the captured sharp/blurry image pairs. The ISO value and the relative exposure setting for each captured image pair are also shown. The sharp images were captured with a tripod, whereas the blurry images were captured by holding the camera by hand with a long exposure period. To obtain the ground truth CRFs for comparison, we used the Macbeth color checkerboard and applied the PCA-based method [15] to estimate $f^{-1}$ via curve fitting [38]. Figs. 10d, 10e, and 10f plot our estimated CRFs against the ground truth CRFs. The green dots are the sampled chart values from the Macbeth color checkerboard.

Finally, we tested the effectiveness of different regularization-based deconvolution algorithms in Fig. 11. In particular, we compare the methods from Krishnan and Fergus [25], Levin et al. [30], and Xu and Jia [50]. The work of Krishnan and Fergus uses a hyper-Laplacian prior to regularize the deconvolved image, Levin et al. use sparsity regularization, and Xu and Jia use L1-norm regularization. Among all the deconvolution results, gamma correction consistently shows better results than linear CRF correction. However, with our estimated CRF correction, the deconvolution results for all three methods are further improved.

6.2.2 With Unknown PSF

When the PSF is not available, we estimate the inverse CRF using the method in Section 5.3. Note that since the CRF estimation step is separated from the PSF estimation step, the accuracy of our estimated CRF for the real examples does not depend on the PSF estimation algorithm. In our implementation, we choose the method in [50] to estimate the PSF from a single image and the method in [30] with sparsity regularization for deconvolution.

Fig. 12 shows the results for the input images in (a), which were captured by a Canon EOS 400D (first row) and a Nikon D90 DSLR (second and third rows) with a nonlinear CRF. The results in (b) are the estimated CRFs using single and multiple images. The ground truth CRFs were computed by using the method in [28] with a calibration pattern. For reference, we also show the inverse gamma correction curve with $\gamma$ equal to 2.2. This is a common method suggested by the previous deblurring algorithm [14] when the CRF is unknown. Note that the shape of the inverse gamma correction curve is very different from the shapes of the ground truth and our estimated CRF. We compare the results with gamma correction ($\gamma = 2.2$) and with our estimated CRF corrections in (c) and (d), respectively. As expected, our results with CRF correction are better: Not only do the deblurring results contain fewer artifacts, but the estimated PSFs are also more accurate after the CRF correction.

Fig. 10. CRF estimation on real images. The top row shows the sharp/blurred pairs. The bottom row shows our recovered CRF (in blue) and the ground truth CRF (in red) obtained by acquiring the Macbeth chart (in green).

7 DISCUSSION

7.1 Effectiveness of Gamma Correction

When the ground truth CRF is unknown, a common approach in previous deblurring algorithms [14], [50] is to use gamma curve correction with $\gamma = 2.2$. In this section, we provide additional analysis of the effectiveness of this approach. Let the blurred irradiance be $\tilde{B} = \tilde{I} \otimes K$, and the observed intensity be $B = f(\tilde{B})$. When a gamma curve is used, we have

$$f_g^{-1}(f(\tilde{B})) = f(\tilde{B})^{2.2}, \qquad (27)$$

where $f_g(x) = x^{1/2.2}$ denotes the gamma curve CRF. Thus, the absolute error introduced by the curve can be measured by

$$\epsilon(\tilde{B}) = |f(\tilde{B})^{2.2} - \tilde{B}|. \qquad (28)$$

As discussed previously in Sections 3 and 4, errors in the CRF estimation are directly transferred to the deblurred image, leading to large ringing artifacts that cannot be fully suppressed by regularization.
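The measure in (28) is straightforward to evaluate; the sketch below does so for a hypothetical concave CRF satisfying $f(0) = 0$ and $f(1) = 1$ (any curve from DoRF could be substituted):

```python
import numpy as np

f = lambda x: np.sin(np.pi * x / 2.0) ** 0.8   # stand-in concave CRF with f(0)=0, f(1)=1
B_tilde = np.linspace(0.0, 1.0, 256)           # blurred irradiance samples
eps = np.abs(f(B_tilde) ** 2.2 - B_tilde)      # Eq. (28)
print(eps.max(), eps.mean())                   # error left after gamma-2.2 linearization
```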

To show the magnitude of the approximation errors, we measure the differences between the gamma curve and the 188 real CRFs in the database [15] presented in Fig. 2. Fig. 13a shows the approximation errors of the gamma curve. The central dark curve in the figure represents the mean of the error $\epsilon(\tilde{B})$ over all 188 real CRF curves. Note that most of the error curves have errors larger than 0.1, which is 10 percent of the intensity range. Some curves even have errors as large as 0.6. For reference, we also show the approximation errors when a linear CRF is used in Fig. 13b. Although the gamma curve has smaller errors compared to the linear CRF, it is insufficient to represent real-world CRFs, as noted in previous work [15], [31], [23].

Next, we use the synthetic examples in Fig. 3 to analyze the effectiveness of gamma curve correction in nonblind deconvolution. Fig. 14 shows the result. The gamma curve correction can reduce errors compared to the results using the linear correction, and the ringing artifacts are further reduced when a regularization-based deconvolution is used. Yet, as illustrated in Fig. 14, it cannot completely remove the ringing artifacts caused by the blur inconsistency from the nonlinear CRF.

Fig. 11. Qualitative comparison of nonblind deconvolution results with linear CRF, gamma curve, and our estimated CRF correction. We show the results for three different state-of-the-art regularization-based deconvolution methods: Krishnan and Fergus [25], Levin et al. [30], and Xu and Jia [50]. The estimated CRF is shown in Fig. 10. Among all of the compared results, our estimated CRF correction consistently performs better than gamma curve correction.

Finally, we show an example in Fig. 15 where the gamma curve correction is "successful." We use the mean CRF of the 188 real CRFs in the database [15] as the ground truth CRF. As illustrated in Fig. 15b, the gamma curve with $\gamma = 2.2$ is close to the inverse of the mean CRF. We use the method in Section 5.2 to estimate the CRF with a known PSF. This example is considered successful in deblurring, since the deconvolution artifacts are small and unnoticeable after regularization is used. However, if we compare the error maps between the gamma curve correction and our estimated CRF correction, our approach further reduces errors with the same deconvolution algorithm [25].

7.2 Assumptions and Limitations

Our proposed algorithms work under several assumptions. In the following, we analyze each of the assumptions and discuss the limitations of our approach in practice.

7.2.1 Robustness against Image Noise

Theorem 2 for our CRF estimation algorithms assumes noise-free conditions. To analyze the robustness of our algorithms against image noise, we generate a set of synthetic blurry and noisy images with different amounts of Gaussian noise. The ground truth image in Fig. 15 and the CRF curves of the Canon 400D and Nikon D3100 were used to produce the synthetic inputs. Fig. 16 shows the plot of the RMSE of the estimated CRF (with unknown PSF) against different amounts of noise. Our CRF estimation algorithm is robust to image noise thanks to the use of rank minimization and the directional filtering in the edge selection step.

Fig. 13. The error curves of DoRF CRFs when (a) a gamma curve ($\gamma = 2.2$) and (b) a linear CRF are used to approximate the ground truth CRFs. The horizontal axis denotes the $\tilde{B}$ values and the vertical axis denotes the error $\epsilon(\tilde{B})$. The colors of the curves represent different underlying CRFs. The central dark curve shows the mean $\epsilon$ over all 188 real CRFs.

Fig. 12. Our results on real examples: (a) input images, (b) estimated CRFs, (c) deblurred images with gamma correction, and (d) deblurred images with CRF correction.

7.2.2 Spatially Varying Blur

Our CRF estimation algorithms assume spatially invariant motion blur. When facing spatially varying motion blur or defocus blur with different depth layers, the blind CRF estimation algorithm with unknown PSF will break down, since the observation matrix is no longer a rank-1 matrix. In such a scenario, only the CRF estimation algorithm with a known PSF can be used, provided that the local PSF can be obtained accurately. The blur inconsistency analysis, however, should still hold: The irradiance of the blurry image should be linearized before deblurring to avoid ringing artifacts caused by the nonlinear CRF.

7.2.3 Failure Cases

We finally provide a failure case in Fig. 17, where the CRF estimation fails due to the narrow intensity range of the input image. The estimated CRF deviates from the ground truth CRF, especially in the range not covered by the intensities of the input image. However, when compared with deblurring results using a gamma curve, our result is still better, with fewer ringing artifacts. As discussed earlier, the goal of this paper is to provide a method for handling a nonlinear CRF in the deblurring problem when the ground truth CRF is not accessible. In some cases, even an imperfectly estimated CRF may produce better deblurring results than not performing linearization at all.

The blur inconsistency analysis in Section 3 assumes a convex inverse CRF. In practice, there are camera models where the inverse CRF is nonconvex in shape, especially for the blue channel.


Fig. 15. (a) Input blurred image. (b) The inverse CRFs. Note that the gamma curve with $\gamma = 2.2$ is close to the inverse of the mean CRF. (c) Deblurred image with gamma curve correction. (d) Deblurred image with our estimated inverse CRF correction. (e) Error map using gamma correction. (f) Error map using our estimated CRF correction.

Fig. 14. We tested the effectiveness of gamma correction on a synthetic example. (a) Input synthetic signal. Deconvolution with gamma curve correction using (b) a Wiener filter and (c) sparsity regularization [30]. As discussed in Section 4, the ringing artifacts are mainly caused by the blur inconsistency. The gamma curve correction cannot fully eliminate the blur inconsistency, especially for edges with high contrast.

Fig. 16. The estimated CRF under different noise levels. (a) Canon 400D. (b) Nikon D3100. (c) Plot of the RMSE of the estimated CRF against different amounts of noise.

Fig. 17. Failure case. Our algorithm fails to estimate an accurate CRF when the intensity range of the input image is narrow. Yet the deblurred result is still better than with the gamma curve, with fewer ringing artifacts.

We tested our algorithms on nonconvex inverse CRFs using synthetic examples. However, we found that our estimated CRFs for an unknown PSF are not as accurate as in the convex cases. One possible explanation is that the nonconvex CRFs deviate enough from the mean CRF to cause the nonlinear least-squares fitting in the rank minimization to converge to an incorrect local minimum.

We have also tested our algorithms on highly textured images where straight-line edges are very limited. Both algorithms failed, since the edge detection algorithm fails to detect straight-line edges. We consider this a fundamental limitation of our algorithms.

8 CONCLUSION

This paper offers two contributions targeting image deblurring in the face of nonlinear CRFs. First, we have presented an analysis of the role that nonlinear CRFs play in image deblurring. We proved that the blur inconsistency introduced by a nonlinear CRF can cause notable ringing artifacts in the deconvolution process that cannot be completely ameliorated by image regularization. Such blur inconsistency is spatially varying, and it depends on edge sharpness, edge intensity range, and image brightness. We have also demonstrated how a nonlinear CRF adversely affects PSF estimation for several state-of-the-art techniques.

In Section 5, we proved that the shape of edge projections resembles the cumulative distribution function of the 1D PSF for linear CRFs. This theorem was used to formulate two CRF estimation strategies for when the motion blur PSF kernel is known or unknown. In the latter case, we showed how rank minimization can be used to provide a robust estimation. We do note that our approach is sensitive to the quality of the selected blur profiles. The requirement of straight-line sharp edges also places a limitation on our current solution. To this end, we have also provided a multiple-image solution in case a single motion blurred image does not contain sufficient edge profiles for CRF estimation.

Experimental results on real examples demonstrated the importance of intensity linearization in the context of image deblurring. In addition, we have analyzed the effectiveness of gamma curve correction, which is commonly used in previous deblurring algorithms [14], [50]. The shape of the gamma curve with $\gamma = 2.2$ is close to the inverse of the mean CRF. Therefore, in most cases, gamma curve correction is more effective than assuming a linear CRF. However, compared to the results with our estimated CRF corrections, gamma curve correction is still insufficient, especially when the CRF deviates substantially from the mean CRF.

ACKNOWLEDGMENTS

This work was funded in part by the National Research Foundation of Korea (2012-0003359), Microsoft Research Asia under the KAIST-Microsoft Research Collaboration Center, the Natural Science Foundation of China (Nos. 61273258, 61105001), the Committee of Science and Technology, Shanghai (No. 11530700200), the US National Science Foundation under grants IIS-CAREER-0845268 and IIS-RI-1016395, the US Air Force Office of Scientific Research under the YIP Award, and the Singapore A*STAR PSF grant (Project No. 1121202020).

REFERENCES

[1] M. Ben-Ezra and S. Nayar, "Motion-Based Motion Deblurring," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 689-698, June 2004.

[2] J. Cai, H. Ji, C. Liu, and Z. Shen, "Blind Motion Deblurring from a Single Image Using Sparse Approximation," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2009.

[3] A. Chakrabarti, D. Scharstein, and T. Zickler, "An Empirical Camera Model for Internet Color Vision," Proc. British Machine Vision Conf., 2009.

[4] Y. Chang and J. Reid, "RGB Calibration for Color Image Analysis in Machine Vision," IEEE Trans. Image Processing, vol. 5, no. 10, pp. 1414-1422, Oct. 1996.

[5] J. Chen and C.K. Tang, "Robust Dual Motion Deblurring," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.

[6] X. Chen, X. He, J. Yang, and Q. Wu, "An Effective Document Image Deblurring Algorithm," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2011.

[7] X. Chen, F. Li, J. Yang, and J. Yu, "A Theoretical Analysis of Camera Response Functions in Image Deblurring," Proc. 12th European Conf. Computer Vision, 2012.

[8] S. Cho and S. Lee, "Fast Motion Deblurring," Proc. ACM Siggraph Asia, 2009.

[9] S. Cho, Y. Matsushita, and S. Lee, "Removing Non-Uniform Motion Blur from Images," Proc. 11th IEEE Int'l Conf. Computer Vision, 2007.

[10] S. Cho, J. Wang, and S. Lee, "Handling Outliers in Non-Blind Image Deconvolution," Proc. IEEE Int'l Conf. Computer Vision, 2011.

[11] T.-S. Cho, S. Paris, B. Freeman, and B. Horn, "Blur Kernel Estimation Using the Radon Transform," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2011.

[12] P. Debevec and J. Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proc. ACM Siggraph, 1997.

[13] H. Farid, "Blind Inverse Gamma Correction," IEEE Trans. Image Processing, vol. 10, no. 10, pp. 1428-1433, Oct. 2001.

[14] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, and W.T. Freeman, "Removing Camera Shake from a Single Photograph," ACM Trans. Graphics, vol. 25, no. 3, pp. 787-794, 2006.

[15] M. Grossberg and S. Nayar, "Modeling the Space of Camera Response Functions," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1272-1282, Oct. 2004.

[16] M.D. Grossberg and S.K. Nayar, "What Is the Space of Camera Response Functions?" Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2003.

[17] M. Hirsch, C. Schuler, S. Harmeling, and B. Scholkopf, "Fast Removal of Non-Uniform Camera Shake," Proc. IEEE Int'l Conf. Computer Vision, 2011.

[18] J. Jia, "Single Image Motion Deblurring Using Transparency," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2007.

[19] N. Joshi, S. Kang, L. Zitnick, and R. Szeliski, "Image Deblurring with Inertial Measurement Sensors," ACM Trans. Graphics, vol. 29, no. 3, 2010.

[20] N. Joshi, R. Szeliski, and D. Kriegman, "PSF Estimation Using Sharp Edge Prediction," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.

[21] S. Kim, J. Frahm, and M. Pollefeys, "Radiometric Calibration with Illumination Change for Outdoor Scene Analysis," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.

[22] S. Kim, Y.-W. Tai, S. Kim, M.S. Brown, and Y. Matsushita, "Nonlinear Camera Response Functions and Image Deblurring," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2012.

[23] S.J. Kim, H.T. Lin, Z. Lu, S. Susstrunk, S. Lin, and M.S. Brown, "A New In-Camera Imaging Model for Color Computer Vision and Its Application," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2289-2302, Dec. 2012.

[24] S.J. Kim and M. Pollefeys, "Robust Radiometric Calibration and Vignetting Correction," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 562-576, Apr. 2008.

[25] D. Krishnan and R. Fergus, "Fast Image Deconvolution Using Hyper-Laplacian Priors," Proc. Conf. Neural Information Processing Systems, 2009.

[26] D. Krishnan, T. Tay, and R. Fergus, "Blind Deconvolution Using a Normalized Sparsity Measure," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2011.


[27] J. Lagarias, J. Reeds, M. Wright, and P. Wright, "Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions," SIAM J. Optimization, vol. 9, no. 1, pp. 112-147, 1998.

[28] J.-Y. Lee, B. Shi, Y. Matsushita, I. Kweon, and K. Ikeuchi, "Radiometric Calibration by Transform Invariant Low-Rank Structure," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2011.

[29] A. Levin, "Blind Motion Deblurring Using Image Statistics," Proc. Conf. Neural Information Processing Systems, 2006.

[30] A. Levin, R. Fergus, F. Durand, and W.T. Freeman, "Image and Depth from a Conventional Camera with a Coded Aperture," ACM Trans. Graphics, vol. 26, no. 3, 2007.

[31] H. Lin, S.J. Kim, S. Susstrunk, and M.S. Brown, "Revisiting Radiometric Calibration for Color Computer Vision," Proc. IEEE Int'l Conf. Computer Vision, 2011.

[32] S. Lin, J. Gu, S. Yamazaki, and H.-Y. Shum, "Radiometric Calibration from a Single Image," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2004.

[33] S. Lin and L. Zhang, "Determining the Radiometric Response Function from a Single Grayscale Image," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005.

[34] P.-Y. Lu, T.-H. Huang, M.-S. Wu, Y.-T. Cheng, and Y.-Y. Chuang, "High Dynamic Range Image Reconstruction from Hand-Held Cameras," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2009.

[35] L. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astronomical J., vol. 79, p. 754, 1974.

[36] Y. Matsushita and S. Lin, "Radiometric Calibration from Noise Distributions," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2007.

[37] T. Mitsunaga and S. Nayar, "Radiometric Self Calibration," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1999.

[38] T.-T. Ng, S.-F. Chang, and M.-P. Tsui, "Using Geometry Invariants for Camera Response Function Estimation," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2007.

[39] A. Rav-Acha and S. Peleg, "Two Motion Blurred Images Are Better Than One," Pattern Recognition Letters, vol. 26, pp. 311-317, 2005.

[40] W. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Optical Soc. Am., vol. 62, no. 1, 1972.

[41] Q. Shan, J. Jia, and A. Agarwala, "High-Quality Motion Deblurring from a Single Image," ACM Trans. Graphics, vol. 27, 2008.

[42] Y.-W. Tai, H. Du, M. Brown, and S. Lin, "Image/Video Deblurring Using a Hybrid Camera," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.

[43] Y.-W. Tai, H. Du, M.S. Brown, and S. Lin, "Correction of Spatially Varying Image and Video Motion Blur Using a Hybrid Camera," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1012-1028, June 2010.

[44] Y.-W. Tai, N. Kong, S. Lin, and S. Shin, "Coded Exposure Imaging for Projective Motion Deblurring," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2010.

[45] Y.-W. Tai and S. Lin, "Motion-Aware Noise Filtering for Deblurring of Noisy and Blurry Images," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2012.

[46] Y.-W. Tai, P. Tan, and M. Brown, "Richardson-Lucy Deblurring for Scenes under Projective Motion Path," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1603-1618, Aug. 2011.

[47] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, "Non-Uniform Deblurring for Shaken Images," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2010.

[48] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series. Wiley, 1949.

[49] B. Wilburn, H. Xu, and Y. Matsushita, "Radiometric Calibration Using Temporal Irradiance Mixtures," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2008.

[50] L. Xu and J. Jia, "Two-Phase Kernel Estimation for Robust Motion Deblurring," Proc. 11th European Conf. Computer Vision, 2010.

[51] L. Yuan, J. Sun, L. Quan, and H. Shum, "Image Deblurring with Blurred/Noisy Image Pairs," ACM Trans. Graphics, vol. 26, no. 3, 2007.

Yu-Wing Tai received the BEng (first class honors) and MS degrees in computer science from the Hong Kong University of Science and Technology in 2003 and 2005, respectively, and the PhD degree from the National University of Singapore in June 2009. He joined the Korea Advanced Institute of Science and Technology as an assistant professor in Fall 2009. He regularly serves on the program committees for the major computer vision conferences (ICCV, CVPR, and ECCV). His research interests include computer vision and image/video processing. He is a member of the IEEE.

Xiaogang Chen is currently working toward the PhD degree in the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China. His research interests include image and video processing, and computer vision. He is a student member of the IEEE.

Sunyeong Kim received the BS and MS degrees in computer science from the Korea Advanced Institute of Science and Technology (KAIST) in 2010 and 2012, respectively. She is currently working toward the PhD degree at KAIST. Her research interests include computer vision and image/video processing, especially in computational photography. She is a student member of the IEEE.

Seon Joo Kim received the BS and MS degrees from Yonsei University, Seoul, Korea, in 1997 and 2001, respectively, and the PhD degree in computer science from the University of North Carolina at Chapel Hill in 2008. He has been an assistant professor in the Department of Computer Science, Yonsei University, since March 2013. His research interests include computer vision, computer graphics/computational photography, and HCI/visualization. He is a member of the IEEE.

Feng Li received the BE degree from the Department of Electrical Engineering, Fuzhou University, in 2003, the MS degree from the Institute of Pattern Recognition, Shanghai Jiaotong University, in 2006, and the PhD degree from the Department of Computer and Information Sciences, University of Delaware, in 2011. He joined Mitsubishi Electric Research Labs as a visiting research scientist in November 2011. His research interests include multicamera system design and applications, fluid surface reconstruction, and medical imaging. He is a member of the IEEE.


Jie Yang received the PhD degree in computer science from the University of Hamburg, Germany. He is now a professor and the director of the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, China. He has led more than 30 national and ministry scientific research projects in image processing, pattern recognition, data amalgamation, data mining, and artificial intelligence.

Jingyi Yu received the BS degree from the California Institute of Technology in 2000 and the PhD degree from the Massachusetts Institute of Technology in 2005. He is an associate professor in the Department of Computer and Information Sciences and the Department of Electrical and Computer Engineering at the University of Delaware. His research interests include a range of topics in computer vision and computer graphics, especially computational cameras and displays. He served as a program chair of OMNIVIS '11, a general chair of Projector-Camera Systems '08, and an area and session chair of ICCV '11. He is a recipient of both the US National Science Foundation CAREER Award and the US Air Force Office of Scientific Research YIP Award.

Yasuyuki Matsushita received the BS, MS, and PhD degrees in electrical engineering and computer science from the University of Tokyo in 1998, 2000, and 2003, respectively. He joined Microsoft Research Asia in April 2003, where he is a lead researcher in the Visual Computing Group. His research interests include photometric methods in computer vision. He is on the editorial boards of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), the International Journal of Computer Vision (IJCV), the IPSJ Journal of Computer Vision and Applications (CVA), The Visual Computer Journal, and the Encyclopedia of Computer Vision. He served/is serving as a program cochair of PSIVT '10, 3DIMPVT '11, and ACCV '12. He is appointed as a guest associate professor at Osaka University (since April 2010) and a visiting associate professor at the National Institute of Informatics (since April 2011) and Tohoku University (since April 2012), Japan. He is a senior member of the IEEE.

Michael S. Brown received the BS and PhD degrees in computer science from the University of Kentucky in 1995 and 2001, respectively. He is currently an associate professor and assistant dean (external relations) in the School of Computing at the National University of Singapore. His research interests include computer vision, image processing, and computer graphics. He has served as an area chair for CVPR, ICCV, ECCV, and ACCV and is currently an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

