

Super High Dynamic Range Video
Yuka Ogino∗, Masayuki Tanaka∗, Takashi Shibata∗†, and Masatoshi Okutomi∗

∗Department of Mechanical and Control Engineering, Tokyo Institute of Technology

†Data Science Research Labs, NEC Corporation

Abstract—High dynamic range (HDR) imaging is in high demand for computer vision algorithms. An HDR image is composed from several low dynamic range (LDR) images, which usually have some disparities. In many HDR imaging algorithms, the disparities are estimated based on the texture information of the LDR images. However, the texture information is often lost completely if scenes include extremely bright and dark regions simultaneously. Recently, a super high dynamic range (SHDR) imaging algorithm has been proposed in which the disparities are estimated based on segment shapes instead of textures to handle such extreme scenes. In this paper, we extend the SHDR imaging algorithm to SHDR video generation by introducing temporal smoothness terms. The temporal smoothness terms improve the temporal stability and the precision of the disparity estimation. Quantitative and qualitative evaluations demonstrate that the proposed algorithm outperforms existing algorithms.

I. INTRODUCTION

High dynamic range (HDR) imaging is widely used. Several consumer digital cameras already have HDR imaging functionality. Nevertheless, the achievable improvement of the dynamic range is limited. A very high dynamic range is often necessary to capture real scenes that include strong backlighting and/or car headlights. Existing HDR imaging algorithms cannot manage such scenes properly, which remains a challenge for HDR imaging.

An HDR image is reconstructed from multiple low dynamic range images taken under different exposure settings. Multiple images are usually taken sequentially by a single camera while changing the exposure setting. Image alignment between the different exposure images is necessary to reconstruct an HDR image of a dynamic scene or of a scene that includes moving objects. Another approach to taking multiple exposure images is to use a multiple-camera system. Even if the multiple cameras are temporally synchronized, there are disparities among the images because the viewpoint of each camera is spatially different. Therefore, image alignment is also required for HDR imaging with a multiple-camera system. Optical-flow-based algorithms have been proposed to reconstruct an HDR image while handling the disparity between multiple exposure images [1]–[10]. In these algorithms, the optical flows between the different exposure images are estimated based on texture similarities. Therefore, the optical-flow-based algorithms implicitly assume that texture information is available in all exposure images. However, this assumption is not always true, especially for scenes that require a very high dynamic range. In this case, some exposure images include regions in which texture information is completely lost because of saturation or crushed-black. The optical-flow-based algorithms produce severe artifacts in those regions. A weighting approach [11]–[13] is an alternative way to handle displacement between different exposure images. It assigns a lower weight to ghost pixels or dynamic regions to reduce ghost artifacts, where the weight is used to compose the HDR image. However, the weighting approach suffers from the same problem as the optical-flow-based algorithms because the weight is also calculated based on texture similarities.

To manage the problems of saturation and crushed-black, Hayami et al. proposed so-called super HDR imaging for scenes with an extremely wide range of scene radiance [14]. A key idea of their algorithm is to align images based on the shapes of segmented regions instead of texture information, so that extremely underexposed or overexposed regions can be aligned without being affected by saturation or crushed-black.

HDR video generation is another challenge related to HDR imaging. A specialized HDR camera system that includes multiple cameras and beam splitters has been proposed for HDR video generation [2]. That camera system can capture images under different exposures simultaneously and without disparity. However, such systems are bulky and expensive. In addition, the number of cameras is usually limited to two or three. Since more than five cameras are usually necessary to capture real scenes with an extremely wide range of scene radiance, such camera systems cannot be used for those scenes.

In this paper, we propose a super high dynamic range (SHDR) video generation algorithm. Different exposure image sequences are captured by a synchronized multiple-camera system. The number of cameras can be increased easily for scenes with an extremely wide range of scene radiance. We extend SHDR imaging [14] by considering temporal smoothness. The proposed SHDR video generation algorithm consists of three steps: a segmentation step, a segment-based alignment step, and an image composition step. The segmentation step and the segment-based alignment step are each formulated as an optimization problem, and we design each cost function with a smoothness term between frames. Experimental comparisons with synthesized and real image sequences demonstrate that the proposed algorithm outperforms existing algorithms.

2016 23rd International Conference on Pattern Recognition (ICPR), Cancún Center, Cancún, México, December 4-8, 2016

978-1-5090-4846-5/16/$31.00 ©2016 IEEE 4203


Fig. 1. Overview of the SHDR imaging algorithm.

Fig. 2. Overview of the proposed SHDR video generation algorithm.

II. SHDR IMAGING

Here, we briefly review SHDR imaging [14]. Figure 1 shows an overview of SHDR imaging. First, each LDR image, captured under a different exposure setting, is segmented based on three intensity ranges: the saturated, the appropriate, and the blacked-out ranges. The thresholds of the intensity ranges of each LDR image are designed so that same-shaped regions appear in the different exposure LDR images. Then, the appropriate-intensity-range regions are aligned based on their shapes, similarly to a jigsaw puzzle. Detailed algorithms for the segmentation and the shape-based alignment are described in [14]. After alignment, the SHDR image is composed using a Poisson image reconstruction [15].
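As a concrete illustration, the per-image segmentation into the three intensity ranges can be sketched as a double threshold. This is a minimal sketch: the threshold values below are illustrative placeholders, not the exposure-dependent thresholds derived in [14].

```python
import numpy as np

def segment_ldr(img, theta1=0.05, theta2=0.95):
    """Label each pixel of an LDR image (values in [0, 1]) as
    0 = blacked-out, 1 = appropriate, or 2 = saturated."""
    labels = np.ones_like(img, dtype=np.int64)   # appropriate by default
    labels[img <= theta1] = 0                    # blacked-out range
    labels[img >= theta2] = 2                    # saturated range
    return labels

frame = np.array([[0.01, 0.50], [0.97, 0.30]])
print(segment_ldr(frame))
```

In [14] the thresholds are coupled across exposures so that the same-shaped regions appear in neighboring exposure images; here they are simply fixed constants.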

III. PROPOSED SHDR VIDEO

We could generate an SHDR video by applying the SHDR imaging algorithm [14] frame by frame. However, this naive extension of the existing SHDR imaging algorithm produces jitter artifacts because it does not incorporate temporal relations among frames. Here, we propose an SHDR video generation algorithm that considers temporal smoothness. Figure 2 presents an overview of the proposed algorithm, which consists of Graph-Cut segmentation of a data cube, disparity estimation, and image composition. Each exposure image sequence can be regarded as a data cube whose axes are x, y, and frame. These data cubes are segmented into three intensity ranges: the blacked-out, the appropriate, and the saturated ranges. The appropriate regions of the different exposure image sequences are aligned based on the estimated disparities. The SHDR video is reconstructed by composing the aligned appropriate regions. As in SHDR imaging, we use Poisson image reconstruction [15] for the image composition. In the following subsections, we describe the Graph-Cut segmentation of the data cube and the disparity estimation.

A. Segmentation

We use a Graph-Cut algorithm to segment the data cube of each exposure image sequence. Figure 3 schematically shows the nodes and the edges of the cost function of the Graph-Cut algorithm for the segmentation of the data cube. The black dots represent the pixels, i.e., the nodes. The blue and red lines respectively represent spatial and temporal edges, which represent the smoothness terms. The cost function of the segmentation can be expressed as

$$
J(L_m) = \sum_{f=1}^{F} \sum_{v \in V} J_d(l_v^f; x_v^f)
+ \alpha \sum_{f=1}^{F} \sum_{(u,v) \in A} J_s(l_v^f, l_u^f)
+ \beta \sum_{f=2}^{F} \sum_{v \in V} J_f(l_v^f, l_v^{f-1})
+ \beta \sum_{f=1}^{F-1} \sum_{v \in V} J_f(l_v^f, l_v^{f+1}),
\qquad (1)
$$

where L_m stands for the set of labels of the m-th exposure image sequence, l_v^f represents the label of the v-th pixel of the f-th frame, x_v^f represents the scene radiance of the v-th pixel of the f-th frame, F denotes the number of frames, V represents the set of pixels, A represents the set of adjacent pixel pairs, α and β are tuning parameters, and J_d, J_s, and J_f represent the data term, the spatial smoothness term, and the temporal smoothness term, respectively. Labels 0, 1, and 2 respectively represent the blacked-out, the appropriate, and the saturated ranges.
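For intuition, the energy of Eq. (1) for one exposure sequence can be evaluated as below. This is a hedged sketch: `data_cost` is assumed to hold the precomputed J_d values for the chosen labels (the exact form of J_d follows Fig. 4 and [14]), and only the energy evaluation is shown, not its Graph-Cut minimization. The defaults α = 2 and β = 4 follow the values set later in this section.

```python
import numpy as np

def segmentation_energy(labels, data_cost, alpha=2.0, beta=4.0):
    """Evaluate Eq. (1) for one exposure sequence.

    labels:    int array of shape (F, H, W) with values in {0, 1, 2}
    data_cost: float array of shape (F, H, W) with J_d for the chosen labels
    """
    J = data_cost.sum()
    # spatial smoothness J_s = |l_v - l_u| over 4-connected neighbours
    J += alpha * np.abs(np.diff(labels, axis=1)).sum()
    J += alpha * np.abs(np.diff(labels, axis=2)).sum()
    # temporal smoothness J_f = |l_v^f - l_v^{f-1}|; Eq. (1) counts each
    # temporal pair in both directions, hence the factor of 2
    J += 2.0 * beta * np.abs(np.diff(labels, axis=0)).sum()
    return J
```

A Graph-Cut solver would then search for the label volume minimizing this energy; since the smoothness terms are absolute label differences, the energy is submodular and amenable to such minimization.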



Fig. 3. Schematic graph structure for the segmentation of the data cube, where the black dots, the blue lines, and the red lines represent the nodes, the spatial edges, and the temporal edges, respectively.

Fig. 4. The data cost of each label for the segmentation of the data cube.

Fig. 5. The disparity prediction process.

The data terms for the saturated, the appropriate, and the blacked-out labels are shown in Fig. 4. The thresholds θ1 and θ2 are designed so that same-shaped regions appear in the different exposure images. This relation between the thresholds and the shapes of the regions is the key to the segment-based disparity estimation. Detailed threshold settings are described in [14].

The spatial and temporal smoothness terms can be expressed as simple differences of labels: J_s(l_v^f, l_u^f) = |l_v^f − l_u^f| and J_f(l_v^f, l_v^{f−1}) = |l_v^f − l_v^{f−1}|. These smoothness terms also contribute to noise reduction. The temporal smoothness is more effective than the spatial smoothness because the same location in adjacent frames tends to have the same label when the motion of the target object is small. We empirically set the parameters to α = 2 and β = 4.

B. Segment-based disparity estimation

Different exposure image sequences are taken from the different viewpoints of the cameras. We must align the appropriate regions of the different exposure image sequences to reconstruct the SHDR video. For this alignment, the disparities between the appropriate regions must be estimated. However, the textures of some regions in a scene that includes extremely bright and dark regions are lost completely. Therefore, we estimate the disparities based on the shapes of the regions, like a jigsaw puzzle [14].

In segment-based disparity estimation, the image sequence that has the largest area of appropriate regions is first selected as the reference image sequence I_ref. The merged region of the appropriate and saturated regions in the shorter exposure image I_{ref−1}^f has the same shape as the saturated region in the reference image I_ref^f. As discussed in the previous section, the thresholds are designed to satisfy this relation. Therefore, we can estimate the disparity between the saturated region in the reference image and the merged region of the appropriate and saturated regions in the shorter exposure image based on the region shape instead of the region texture. This is a strong advantage for a scene that requires an extremely high dynamic range. Once the disparity is obtained, the alignment of the regions is an easy task. Blacked-out regions in the reference image are also aligned in the same manner. This alignment algorithm is applied iteratively to align all appropriate regions in the different exposure images.
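The shape-based matching underlying this step can be illustrated with a brute-force search that shifts one binary region mask over the other and keeps the shift with the greatest overlap. This is only a sketch of the idea; the actual shape similarity cost E_d and search strategy are those of [14], and `max_shift` is an assumed search radius.

```python
import numpy as np

def shape_disparity(mask_ref, mask_src, max_shift=5):
    """Estimate the integer disparity (dy, dx) between two same-shaped
    binary regions by exhaustively maximizing the mask overlap."""
    best, best_shift = -1, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mask_src, dy, axis=0), dx, axis=1)
            overlap = np.logical_and(mask_ref, shifted).sum()
            if overlap > best:
                best, best_shift = overlap, (dy, dx)
    return best_shift
```

Because only the region shape is compared, the match is unaffected by the complete loss of texture inside saturated or blacked-out regions.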

The segment-based disparity estimation can be formulated as an optimization problem. The cost function of this optimization problem is

$$
E(T) = \sum_{f=2}^{F-1} \Biggl[\, \sum_{k=1}^{N_f} |\Omega_k^f| \, E_d(t_k^f)
+ \lambda_s E_s(t^f)
+ \lambda_f E_f(\Omega^f, t^f, t^{f-1})
+ \lambda_f E_f(\Omega^f, t^f, t^{f+1}) \Biggr],
\qquad (2)
$$

where N_f stands for the number of segments in the f-th frame, Ω_k^f represents the k-th segment of the f-th frame, |Ω_k^f| represents the area of segment Ω_k^f, t_k^f denotes the disparity vector of segment Ω_k^f, T represents the set of disparity vectors {t_k^f : 1 ≤ k ≤ N_f}, E_d(t_k^f) represents the shape similarity cost of segment Ω_k^f, E_s denotes the smoothness cost of spatially neighboring regions, E_f stands for the temporal smoothness cost, and λ_s and λ_f are tuning parameters. Details of the shape similarity cost are presented in [14]. The smoothness cost of the spatially neighboring regions, E_s, can be expressed as

$$
E_s(t^f) = \sum_{(i,j) \in B_f} w_d(\Omega_i^f, \Omega_j^f) \, \bigl\| t_j^f - t_i^f \bigr\|^2,
\qquad (3)
$$

where B_f represents the set of neighboring region pairs in the f-th frame, and w_d stands for a Gaussian weight based on the distance between the centers of gravity of the two regions. The temporal smoothness cost, E_f, can be expressed as
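The spatial smoothness cost of Eq. (3) can be sketched directly. The Gaussian bandwidth `sigma` below is an assumed parameter, not a value given in the paper.

```python
import numpy as np

def spatial_smoothness(disparities, centroids, pairs, sigma=10.0):
    """Evaluate E_s of Eq. (3): a Gaussian-weighted sum of squared
    disparity differences over neighbouring region pairs.

    disparities: mapping region index -> 2-vector t_k^f
    centroids:   mapping region index -> 2-vector centre of gravity
    pairs:       iterable of (i, j) neighbouring region index pairs B_f
    """
    cost = 0.0
    for i, j in pairs:
        d = np.linalg.norm(centroids[i] - centroids[j])
        w = np.exp(-d**2 / (2.0 * sigma**2))      # Gaussian distance weight w_d
        cost += w * np.sum((disparities[j] - disparities[i])**2)
    return cost
```

Nearby regions thus receive a large weight and are encouraged to move together, while distant regions are left almost unconstrained.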

$$
E_f(\Omega^f, t^f, t^{f-1}) = \sum_{k=1}^{N_f} \bigl\| t_k^f - P_k(t^{f-1}, \Omega_k^f) \bigr\|^2,
\qquad (4)
$$



Fig. 6. Overview of the disparity estimation for frame f.

where P_k is the disparity predicted from the adjacent frame.

A segment region moves and deforms when its object moves. Therefore, the disparity comparison of the same segment region between different frames is not straightforward. In the proposed algorithm, the disparity of a segmented region in the current frame is predicted from the disparity map of the adjacent frame, as shown in Fig. 5. The segment mask of the current frame is overlaid on the disparity map of the adjacent frame; the disparity is then predicted by averaging the disparity map over the masked region. The disparity predicted in this manner is used to evaluate the temporal smoothness cost. Figure 6 presents a schematic diagram of the disparity estimation considering the temporal smoothness. First, the disparities of each frame are estimated in a frame-by-frame manner. Then, the disparity of the current frame is refined with the disparities predicted from the disparity maps of the previous and subsequent frames. This refinement is applied iteratively. In the example illustrated in Fig. 6, the car moves from right to left. The viewpoint difference causes a vertical disparity, which depends on the depth of the object (the car). The disparity of the object is unchanged even if the object moves at a constant depth. For this reason, we can assign a large weight to the temporal smoothness cost even for a dynamic scene, which makes the disparity estimation very stable.
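The prediction P_k and the resulting temporal cost of Eq. (4) can be sketched as follows, assuming a dense per-pixel disparity map of shape (H, W, 2) is available for the adjacent frame.

```python
import numpy as np

def predict_disparity(mask, disparity_map):
    """P_k: overlay the current-frame segment mask on the adjacent
    frame's dense disparity map and average over the masked pixels."""
    return disparity_map[mask].mean(axis=0)

def temporal_smoothness(seg_disps, masks, prev_disparity_map):
    """Evaluate E_f of Eq. (4) for one frame: squared deviation of each
    segment's disparity from its prediction out of the adjacent frame."""
    return sum(
        np.sum((t - predict_disparity(m, prev_disparity_map))**2)
        for t, m in zip(seg_disps, masks)
    )
```

Averaging over the overlaid mask makes the prediction robust to the deformation of the segment between frames, since no pixel-to-pixel correspondence of the segment is needed.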

C. Image composition

Once the disparity of each appropriate segment region is obtained, we can simply align the appropriate segment regions. Some artifacts are present on the boundaries of the aligned segments. We therefore compose the HDR video using Poisson image reconstruction to reduce the artifacts on the segment boundaries [15].
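To make the gradient-domain composition step concrete, here is a generic iterative Poisson reconstruction. It is a stand-in sketch only: the paper uses the closed-form solution of [15], whereas this version solves the discrete Poisson equation by plain Jacobi iteration with Dirichlet boundary values.

```python
import numpy as np

def poisson_reconstruct(gx, gy, boundary, iters=2000):
    """Reconstruct an image whose forward differences match (gx, gy)
    and whose border matches `boundary`.
    gx = np.diff(I, axis=1) has shape (H, W-1);
    gy = np.diff(I, axis=0) has shape (H-1, W)."""
    u = boundary.astype(float).copy()
    # divergence of the target gradient field at interior pixels
    div = (gx[1:-1, 1:] - gx[1:-1, :-1]) + (gy[1:, 1:-1] - gy[:-1, 1:-1])
    for _ in range(iters):
        # Jacobi update; numpy evaluates the whole right-hand side first
        u[1:-1, 1:-1] = (u[1:-1, 2:] + u[1:-1, :-2] +
                         u[2:, 1:-1] + u[:-2, 1:-1] - div) / 4.0
    return u
```

In the composition step, the target gradients would be assembled from the aligned appropriate segments, so that seams along segment boundaries are absorbed into the smooth reconstruction.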

IV. EXPERIMENTAL RESULTS

For evaluation, we show results generated using the proposed algorithm and compare them with those of state-of-the-art algorithms. Video sequences are available on our project page¹. First, we create a 3D computer graphics (CG) simulation dataset² for a more precise quantitative assessment, obtaining the ground truth using Blender, a 3D CG software package. Blender can generate HDR videos and can set camera positions and parameters to simulate videos taken in real scenes (Fig. 7). We render LDR images at the exposure of each camera. We assume that the input videos are taken at six positions by Camera n (1 ≤ n ≤ 6). These camera positions are presented in Fig. 9. The distance between two adjacent cameras, aligned vertically or horizontally, is 30 mm. The exposure ratio between Camera n and Camera n−1 is set to four. Figure 7 shows a sequence of input images from one of the input video frames. In Fig. 8, we present a comparison of two consecutive resultant frames of our algorithm and of state-of-the-art algorithms: Sen's algorithm [1] and the SHDR imaging algorithm [14] applied frame by frame. In the result of Sen's algorithm [1], saturated regions still remain. Although SHDR imaging [14] produces a fine HDR image without saturated regions, it suffers from jitter artifacts that are observable in the video. In contrast, the proposed SHDR video generation algorithm yields a temporally smooth SHDR video.

¹ http://www.ok.ctrl.titech.ac.jp/res/SHDR/
² We use a 3D model from http://www.blendswap.com/blends/view/26649 and an HDR background from http://noemotionhdrs.net/

For quantitative evaluation, we follow the SSIM-based approach proposed in [16]. The original SSIM [17] has three terms: the contrast, the structure, and the luminance terms. The range of a reconstructed HDR image strongly depends on the reconstruction algorithm; in other words, the reconstructed HDR image has a scale ambiguity. To handle this ambiguity in the quality assessment, Ma et al. [16] proposed to evaluate the quality of the HDR image with the contrast and the structure terms, without the luminance term, because the luminance is not regarded as a highly relevant component for the evaluation of the HDR image. We evaluate this two-term SSIM for each frame. Figure 10 shows the two-term SSIM of Sen's algorithm [1], the SHDR imaging algorithm [14], and the proposed algorithm. These results show that the proposed algorithm outperforms the existing algorithms except for the first few frames.
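The two-term index can be sketched as the contrast-structure product of SSIM with the luminance term dropped. This is a simplified, whole-image version: [16], [17] compute the terms over local windows, and the constant C2 below is the conventional SSIM stabilizer, assumed here for 8-bit data.

```python
import numpy as np

def two_term_ssim(x, y, C2=(0.03 * 255) ** 2):
    """Contrast-structure product of SSIM without the luminance term,
    computed globally over the image for brevity."""
    sx2, sy2 = x.var(), y.var()
    sxy = ((x - x.mean()) * (y - y.mean())).mean()
    return (2.0 * sxy + C2) / (sx2 + sy2 + C2)
```

Because the luminance term is dropped, a global intensity offset between the reconstruction and the reference leaves the score unchanged, which is exactly the kind of ambiguity the evaluation is designed to tolerate.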

We then take multi-exposure LDR videos simultaneously using an actual multiple-camera system. The camera positions and exposures are the same as in the simulation setting (Fig. 9). We set the exposures by changing gains and F-numbers and/or by attaching ND filters, without changing the exposure time, to keep the motion-blur effect unchanged for each image. Figure 11 presents the sequence of input images from one of the input video



Camera 1  Camera 2  Camera 3  Camera 4  Camera 5  Camera 6
Fig. 7. Example frames of the synthesized image sequence assuming a six-camera system.

Ground truth  Sen et al. [1]  SHDR imaging [14]  Proposed
Fig. 8. Example frames of the reconstructed HDR video with the synthesized image sequence.

Fig. 9. Camera positions. The distance between two adjacent cameras is 30 mm.

Fig. 10. Evaluation of the two-term SSIM with the synthesized image sequence.

frames. In Fig. 12, we present a comparison of two consecutive resultant frames of our algorithm with the results of state-of-the-art algorithms: Sen's algorithm [1], Heo's algorithm [11], and SHDR imaging [14] applied frame by frame. The results of Sen's algorithm [1] and Heo's algorithm [11] remain over- and under-saturated. SHDR imaging misaligned the LED light behind the car. The proposed SHDR video generation algorithm generates the SHDR video correctly. For quantitative evaluation, we use the SSIM for multi-exposure fusion (SSIM-MEF) [16], because the ground-truth HDR image cannot be obtained. The SSIM-MEF is a quality assessment between a tone-mapped HDR image and LDR images with different exposures; it lets us quantitatively evaluate the quality of the tone-mapped HDR image without the ground truth of the HDR image. Figure 13 shows the evaluated SSIM-MEF for four algorithms: Sen's algorithm [1], Heo's algorithm [11], SHDR imaging [14], and the proposed algorithm. This quantitative comparison demonstrates that the proposed algorithm outperforms the existing algorithms.

V. CONCLUSION

We have proposed a super high dynamic range (SHDR) video generation algorithm. The proposed algorithm consists of region segmentation, disparity estimation, and image composition. We have introduced temporal smoothness terms into the region segmentation and the disparity estimation. These temporal smoothness terms improve the temporal stability and the precision of the disparity estimation. Quantitative and qualitative evaluations conducted with a synthetic image sequence and a real image sequence demonstrate that the proposed algorithm outperforms existing algorithms.

Camera 1  Camera 2  Camera 3  Camera 4  Camera 5  Camera 6
Fig. 11. Example frames of the real image sequence captured by the six-camera system.

Heo et al. [11]  Sen et al. [1]  SHDR imaging [14]  Proposed
Fig. 12. Example frames of the reconstructed HDR video with the real image sequence.

Fig. 13. Evaluation of the SSIM-MEF with the real image sequence.

REFERENCES

[1] P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, "Robust Patch-Based HDR Reconstruction of Dynamic Scenes," ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2012), vol. 31, no. 6, pp. 203:1–203:11, 2012.

[2] N. K. Kalantari, E. Shechtman, C. Barnes, S. Darabi, D. B. Goldman, and P. Sen, "Patch-based High Dynamic Range Video," ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2013), vol. 32, no. 6, article 202, 2013.

[3] S. Mangiat and J. Gibson, "High dynamic range video with ghost removal," Proc. SPIE, vol. 7798, article 779812, 2010.

[4] A. Tomaszewska and R. Mantiuk, "Image Registration for Multi-exposure High Dynamic Range Image Acquisition," Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), pp. 49–55, 2007.

[5] C. Aguerrebere, J. Delon, Y. Gousseau, and P. Muse, "Simultaneous HDR image reconstruction and denoising for dynamic scenes," IEEE International Conference on Computational Photography, pp. 31–41, 2013.

[6] J. Hu, O. Gallo, K. Pulli, and X. Sun, "HDR deghosting: How to deal with saturation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1163–1170, 2013.

[7] M. Gupta, D. Iso, and S. Nayar, "Fibonacci Exposure Bracketing for High Dynamic Range Imaging," IEEE International Conference on Computer Vision (ICCV), pp. 1–8, 2013.

[8] X. Shen, L. Xu, Q. Zhang, and J. Jia, "Multi-modal and multi-spectral registration for natural images," European Conference on Computer Vision (ECCV), pp. 309–324, 2014.

[9] T.-H. Oh, J.-Y. Lee, Y. W. Tai, and I.-S. Kweon, "Robust high dynamic range imaging by rank minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.

[10] J. Hu, O. Gallo, and K. Pulli, "Exposure stacks of live scenes with hand-held cameras," European Conference on Computer Vision (ECCV), pp. 499–512, 2012.

[11] Y. S. Heo, K. M. Lee, S. U. Lee, Y. Moon, and J. Cha, "Ghost-free High Dynamic Range Imaging," Asian Conference on Computer Vision (ACCV), pp. 486–500, 2010.

[12] K. Jacobs, C. Loscos, and G. Ward, "Automatic high-dynamic range image generation for dynamic scenes," IEEE Computer Graphics and Applications, vol. 28, no. 2, pp. 84–93, 2008.

[13] O. Gallo, N. Gelfand, W.-C. Chen, M. Tico, and K. Pulli, "Artifact-free High Dynamic Range Imaging," IEEE International Conference on Computational Photography (ICCP), pp. 1–7, 2009.

[14] T. Hayami, M. Tanaka, M. Okutomi, T. Shibata, and S. Senda, "Super High Dynamic Range Imaging," 22nd International Conference on Pattern Recognition (ICPR), pp. 720–725, 2014.

[15] M. Tanaka, R. Kamio, and M. Okutomi, "Seamless image cloning by a closed form solution of a modified Poisson problem," SIGGRAPH Asia Posters, no. 12, 2012.

[16] K. Ma, K. Zeng, and Z. Wang, "Perceptual Quality Assessment for Multi-Exposure Image Fusion," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3345–3356, 2015.

[17] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image Quality Assessment: From Error Visibility to Structural Similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
