
A CONTRAST ENHANCEMENT BASED ALGORITHM TO IMPROVE VISIBILITY OF COLORED FOGGY IMAGES

MANOJ ALWANI and ANIL KUMAR TIWARI

Communication and Computer Engineering The LNM Institute of Information Technology

Rupa ki Nangal, Post-Sumel, Via-Jamdoli Jaipur-302031,

(Rajasthan) INDIA [email protected], [email protected]

Abstract: Images captured in foggy weather conditions are highly degraded, suffering from poor contrast and loss of color characteristics. This paper presents a contrast enhancement algorithm for such degraded color images. To restore both contrast and color we propose the following four steps: 1) the RGB components of the input image are converted into Hue, Intensity, Saturation (HIS) components; 2) a novel contrast enhancement technique is applied to the Intensity (I) component, since it is well known that the human visual system is more sensitive to this component than to the others (i.e. H, S); 3) a gamma correction method is applied to the S component for color restoration; and finally 4) the H, I, S components are combined to get back the RGB components. Besides being simple, the proposed method is, as the experimental results show, very effective in contrast and color enhancement compared with other competitive methods.

Key-Words: Contrast enhancement, histogram equalization, sky region.

1 Introduction
Images taken under foggy weather conditions suffer from severe contrast loss and also from loss of color characteristics. The degree of degradation increases exponentially with the distance of the scene points from the sensor [7], so outdoor image acquisition depends strongly on the weather. Foggy conditions reduce atmospheric visibility and bring a whitening effect to the images, causing poor contrast. Hence the basic challenge is to nullify this whitening effect, thereby improving the contrast of the degraded image. At present there are basically two classes of methods for the enhancement of such images, described as follows.

The first class of methods is based on an estimate of an atmospheric degradation model. These methods use physical models to predict the pattern of image degradation and then restore image contrast with appropriate compensations [1]-[5]. A model for the degraded image can be given as Y = HX + N, where X is the original image, Y is the degraded image, H is the degradation function, and N is the additive noise. A good estimate of the original image X can be obtained by a model-based restoration system, provided the degradation function can be estimated to a good accuracy. However, the image pollution process and the mechanism created by foggy weather are very complex, so it is difficult to express the different foggy weather processes by a unified degradation function. This demands a specific model (depending on the foggy condition) for restoring each degraded image, which puts a practical limitation on model-based restoration systems; it also requires extra information about the imaging system or the imaging environment. Narasimhan and Nayar [1]-[2] use two or more different bad-weather images taken from the same viewpoint to restore scene structure and contrast based on an atmospheric scattering model. They assumed that the atmospheric scattering properties are invariable, which limits the use of this method because in complex conditions the scattering properties are time-varying.

Researchers have also shown interest in the restoration of degraded images when information about scene depth is known [3]-[5]. In [3]-[4], Oakley et al. use a physics-based method to restore scene contrast by approximating the distribution of radiance in the scene by a single Gaussian with known variance. Narasimhan et al. [5] presented an interactive scene depth estimation method, in which the largest and smallest scene depths are assigned beforehand. Both of these methods


seem impractical in real-time conditions, as the scene depth may not be known or may be difficult to estimate; special apparatus such as radar is also needed to measure scene depth. These methods give good results only to the extent that the atmospheric model is accurate. From the above discussion it may be noted that all methods based on physical models need some information to be known beforehand; hence such methods are difficult to use in real-time applications.

The second class of methods is based on contrast enhancement techniques which do not require any prior knowledge of the atmospheric scattering conditions [6]-[7]. Histogram equalization and linear stretching are among the most popular global enhancement algorithms. Because such techniques use only global information about the image, they do not give satisfactory results when there are depth changes in the scene. In [7] the authors compute the histogram of the degraded image and identify sky regions; thereafter they move a mask over the image and set the intensity value of the sky pixels to 255, while the other pixels in the mask are subjected to contrast enhancement. This process improves the contrast level of the degraded image. However, the major drawback of the method is that objects in the sky region lose their identity, as their intensity value is also set to 255. Moreover, the algorithm performs worst on degraded images which do not contain any sky region: it falsely identifies some region as sky and moves all of its pixel values to 255, leaving no way to identify the objects in that region. Color image enhancement algorithms [8]-[9] usually apply a gray-level enhancement algorithm to the brightness component of the color image, which amplifies the image details and thus enhances the input image to a certain extent. These methods ignore the color information of the image, so the output image looks blackish. Therefore, we need a color image enhancement algorithm that makes the output image livelier.

From the literature survey, it is observed that color images are usually enhanced with model-based algorithms, while gray-scale images are enhanced with non-model-based methods. In our work, we propose a non-model-based method for enhancing color foggy images. Because the human visual system is highly sensitive to changes in brightness, and less sensitive to changes in hue and saturation, we put forward a foggy color image enhancement algorithm in HIS space. The algorithm changes only brightness and saturation, keeping the hue as it is. Since brightness and saturation contain different information, we propose different enhancement methods for these two components: for the brightness, we use an algorithm that stretches the I component of the image, and for the saturation, we use an algorithm that controls and restores the colors in the output image.

The rest of the paper is structured as follows. In the next section we describe the algorithm used to increase the contrast of the I component. Section 3 describes the proposed processing of the saturation component. We present simulation results and give concluding remarks in Section 4.
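As an illustrative sketch (ours, not part of the original algorithm description), the intensity and saturation components of an RGB image can be computed with one common definition of the HIS transform; the hue component, which the method leaves unchanged, is omitted here:

```python
import numpy as np

def rgb_to_intensity_saturation(rgb):
    """Compute the I and S components of the HIS model for an RGB image.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns (I, S), each of shape (H, W). Hue is not needed here because
    the algorithm keeps it unchanged.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b
    intensity = total / 3.0
    # S = 1 - 3*min(R,G,B)/(R+G+B); S is defined as 0 for pure black pixels.
    min_rgb = np.minimum(np.minimum(r, g), b)
    saturation = np.where(total > 0,
                          1.0 - 3.0 * min_rgb / np.maximum(total, 1e-12),
                          0.0)
    return intensity, saturation
```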

2. Proposed Image Enhancement Algorithm for Vector I in HIS Space
As stated before, the human visual system is more sensitive to brightness, so we propose to apply the enhancement algorithm to this component. For this, the RGB components of the input image are first converted into HIS space to obtain the brightness component. Because the scene depth varies across the image, global enhancement methods do not reflect depth changes. To take care of local changes in scene depth, we process the image on a block-by-block basis, assuming that the pixels within a block are of the same scene depth, and enhance each block according to the pixel intensities in it. Basically this means that if the given image has many objects at varying scene depths, global enhancement techniques are expected to produce only an average kind of enhancement of the various objects, whereas processing on a block-by-block basis enhances each object effectively. The process is described as follows. If we observe that a given block has only high intensity values (greater than or equal to a particular value, to be discussed later), we declare the block a High Region (HR) block. Pixels of this block are stretched from the minimum intensity value of the block to 255; had we stretched them from 0 to 255, many black and white spots would have appeared as salt-and-pepper noise. On the other hand, if a block has mixed intensity, i.e. consists of almost all the intensity values, we apply linear contrast stretching from 0 to 255 to improve the contrast level of the block. Since foggy images are very unlikely to have blocks with very low intensity values, we were not required to control salt-and-pepper noise in this case. A brief description of the algorithm is as follows:


Step 1: Let the input degraded foggy image I(x, y) be of size N × M. Convert this input image to HIS space and take the intensity component as I1(x, y). Pixels are processed in a block (mask) Bl(x, y) of size m × n.

Step 2: Let the output image be O(x, y). Take two matrices, SUM and COUNT, of size N × M and initialize them to zero. SUM stores the sum of the altered values of the pixel at location (x, y) every time a block passes through it due to overlapping; similarly, COUNT stores the number of times a block passes through pixel location (x, y). Take three variables A, B and Stepsize and set A = 1, B = 1 and Stepsize = 1.

Step 3: Find the highest gray level in the I component: P = max(I1(x, y)), x ∈ [1, ..., M], y ∈ [1, ..., N].

Step 4: Get the higher-region limit of the image: HR = P − TH, where TH is a threshold. The criterion for selecting the threshold is as follows: if 220 < P < 255, then TH = 25; else if 150 < P ≤ 220, then TH = 15; else TH = 10. These thresholds were arrived at after extensive experimentation with a large set of test images.

Step 5: Get a block of pixels Bl, with x = A and y = B, from the image I1(x, y).

Step 6: Find the minimum intensity value in the block: L = min(Bl(x, y)).

Step 7: Based on L, classify the block and apply contrast enhancement accordingly:
if L ≥ HR // all pixels are in the higher region
    apply contrast stretching on the block from L to 255;
else // mixed-intensity block
    apply contrast stretching on the block from 0 to 255.

Step 8: SUM(K, L) = SUM(K, L) + Bl; COUNT(K, L) = COUNT(K, L) + 1; where K ∈ [A, ..., A + m − 1] and L ∈ [B, ..., B + n − 1].

Step 9: Move the block in raster order (left to right and top to bottom) to cover the whole image:
if (B < N), then B = B + Stepsize and go to Step 5;
else if (A < M), then A = A + Stepsize, B = 1 and go to Step 5;
else END.
For Stepsize > 1 the complexity of the algorithm is reduced without any significant loss in the enhancement of visibility, but increasing it too much produces blocking effects.

Step 10: Take the average of the values accumulated at each location (x, y) to obtain the enhanced output image: O(x, y) = SUM(x, y) / COUNT(x, y).

In this way we obtain O as the output image for the I component of the input image.
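A minimal Python/NumPy sketch of Steps 1-10 is given below. It is an illustrative paraphrase rather than the authors' implementation: the helper names (stretch, enhance_intensity) are ours, the intensity channel is assumed to be an 8-bit array, and pixels never covered by a block (possible at the image border when the dimensions and step size do not align) are simply left unchanged.

```python
import numpy as np

def stretch(block, low, high=255.0):
    """Linearly map the block's values from [block.min(), block.max()] to [low, high]."""
    bmin, bmax = block.min(), block.max()
    if bmax == bmin:                        # flat block: nothing to stretch
        return np.full_like(block, low)
    return low + (block - bmin) * (high - low) / (bmax - bmin)

def enhance_intensity(I, block=(64, 64), stepsize=1):
    """Block-wise contrast stretching of the intensity channel I (uint8, N x M)."""
    I = I.astype(np.float64)
    N, M = I.shape
    m, n = block
    SUM = np.zeros_like(I)
    COUNT = np.zeros_like(I)

    # Steps 3-4: global peak P and threshold-dependent higher-region limit HR.
    P = I.max()
    if 220 < P < 255:
        TH = 25
    elif 150 < P <= 220:
        TH = 15
    else:
        TH = 10
    HR = P - TH

    # Steps 5-9: slide the m x n block in raster order with the given step size.
    for a in range(0, N - m + 1, stepsize):
        for b in range(0, M - n + 1, stepsize):
            blk = I[a:a + m, b:b + n]
            L = blk.min()
            if L >= HR:                     # high-region block: stretch from L to 255
                out = stretch(blk, low=L)
            else:                           # mixed-intensity block: stretch from 0 to 255
                out = stretch(blk, low=0.0)
            SUM[a:a + m, b:b + n] += out    # Step 8: accumulate overlapping results
            COUNT[a:a + m, b:b + n] += 1

    # Step 10: average the overlapping contributions; uncovered pixels stay as-is.
    O = np.where(COUNT > 0, SUM / np.maximum(COUNT, 1), I)
    return np.clip(O, 0, 255).astype(np.uint8)
```

With stepsize = 1 the blocks overlap maximally, as in Step 2; larger step sizes trade enhancement accuracy for speed, as noted in Step 9.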

3. The Saturation Adjustment Algorithm
For the saturation component, the main processing is the selection of a dynamic range that preserves the color fidelity of the image. We adjust the S component using the well-known gamma correction method:

S' = S^λ;

where S is the saturation of the input image, S' is the adjusted (extended) saturation, and λ controls the saturation range of the image. We adjusted the color of the S component by iteratively changing the value of λ and found it to be a fairly constant value (λ = 0.8 produces a perceptually good quality for foggy images). This is encouraging in view of the computational cost of finding λ for every image. To enhance the color component of the image we could instead have used histogram equalization or linear stretching, but we did not do so because it is not known a priori how much stretching is appropriate; our experiments to arrive at a stretching range showed that it was highly image dependent, while the value of λ was, as stated before, constant. After applying the different algorithms to the Intensity and Saturation components of the image, we transform the image from the HIS plane back to the RGB plane to get the output image.
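As a minimal sketch (ours, assuming the saturation channel is scaled to [0, 1]), this adjustment amounts to a single power-law operation:

```python
import numpy as np

def adjust_saturation(S, gamma=0.8):
    """Gamma-correct the saturation channel: S' = S ** gamma.

    S is assumed to be a float array in [0, 1]; gamma = 0.8 is the value
    reported in this paper as perceptually good for foggy images.
    """
    return np.clip(S, 0.0, 1.0) ** gamma
```

Since λ < 1, the power law raises saturation values in [0, 1], which counteracts the washed-out colors caused by fog.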


4. Results and Conclusion
The authors of [7] used their algorithm for the enhancement of gray-scale images. In order to compare the performance of our algorithm with theirs, we compared the enhanced version of the I component with their enhanced images. In our simulation results we set the following parameters:
a) block size of 64 × 64;
b) Stepsize used in moving the blocks set to 10;
c) gamma correction factor λ set to 0.8.
These parameters were found through extensive experiments with a large number of fog-degraded images; a brief usage sketch with these settings is given below. It can be seen in Figs. 1, 2 and 3 that the visual quality of the images obtained by our algorithm is superior to that of the algorithm in [7] (compare the enhancement quality of the I component with the enhancement quality of the gray-scale images). We have also applied global histogram equalization and include those results in the figures. Finally, the input and output color images are compared. In order to show the effectiveness of our algorithm in poor visibility conditions, we have included some example images in Fig. 1, and images of other atmospheric conditions as well (Fig. 2, Fig. 3).
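The hypothetical usage sketch below assumes the helper functions from the earlier sketches (rgb_to_intensity_saturation, enhance_intensity, adjust_saturation) are in scope; the hue channel and the HIS-to-RGB back-conversion are omitted for brevity.

```python
import numpy as np

# Stand-in for a fog-degraded color image with values in [0, 1].
rgb = np.random.rand(256, 256, 3)

# Decompose (hue omitted; the method keeps it unchanged).
intensity, saturation = rgb_to_intensity_saturation(rgb)

# a) block size 64 x 64 and b) step size 10 for the intensity enhancement.
I_enhanced = enhance_intensity((intensity * 255).astype(np.uint8),
                               block=(64, 64), stepsize=10)

# c) gamma correction factor 0.8 for the saturation channel.
S_enhanced = adjust_saturation(saturation, gamma=0.8)
```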

Fig. 1 (experimental results): a) fog-degraded image; b) gray image; c) histogram equalization; d) result by method [5]; e) our result on vector I; f) our color result.

Fig. 2 (experimental results): a) fog-degraded image; b) gray image; c) histogram equalization; d) result by method [5]; e) our result on vector I; f) our color result.

Fig. 3 (experimental results): a) input image; b) gray image; c) histogram equalization; d) result by method [5]; e) our result on vector I; f) our color result.

Fig. 4 (experimental results): a) fog-degraded image; b) gray image; c) histogram equalization; d) result by method [5]; e) our result on vector I; f) our color result.

References:
[1] S. G. Narasimhan and S. K. Nayar, "Chromatic Framework for Vision in Bad Weather," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, 2000, pp. 1598-1605.
[2] Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, "Polarization-Based Vision through Haze," Applied Optics, Special Issue on Light and Color in the Open Air, Vol. 42, No. 3, 2003, pp. 511-525.
[3] J. P. Oakley and B. L. Satherley, "Improving Image Quality in Poor Visibility Conditions Using a Physical Model for Contrast Degradation," IEEE Transactions on Image Processing, Vol. 7, No. 2, 1998, pp. 167-179.
[4] K. K. Tan and J. P. Oakley, "Enhancement of Color Images in Poor Visibility Conditions," Proceedings of the 2000 International Conference on Image Processing, Vol. 2, 2000, pp. 788-791.
[5] S. G. Narasimhan and S. K. Nayar, "Contrast Restoration of Weather Degraded Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 6, 2003, pp. 713-724.
[6] Zhu Pei, "An Image Clearness Method for Fog," Xi'an University of Technology, 2004.
[7] Y. S. Zhai and X. M. Liu, "An Improved Fog-Degraded Image Enhancement Algorithm," Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR '07), Vol. 2, 2007.
[8] J. L. Starck, F. Murtagh, E. J. Candes et al., "Gray and Color Image Contrast Enhancement by the Curvelet Transform," IEEE Transactions on Image Processing, Vol. 12, No. 6, June 2003, pp. 706-717.
[9] B. Tang and G. Sapiro, "Color Image Enhancement via Chromaticity Diffusion," IEEE Transactions on Image Processing, Vol. 10, No. 5, 2001, pp. 701-707.
