
Illumination Normalization with Time-Dependent Intrinsic Images for Video Surveillance

Yasuyuki Matsushita, Member, IEEE, Ko Nishino, Member, IEEE, Katsushi Ikeuchi, Fellow, IEEE, and Masao Sakauchi, Member, IEEE

Outline

- Introduction
- Proposed method overview
- Intrinsic image estimation
- Shadow removal
- Illumination eigenspace for direct estimation of illumination
- Experimental results
- Conclusions

Introduction (1/2)

- Video surveillance systems involving object detection and tracking require robustness against illumination changes.
- Illumination changes are caused by:
  - Weather conditions
  - Large cast shadows of surrounding structures (large buildings and trees)

Introduction (2/2)

- Goal: Normalize the input image sequence in terms of the distribution of incident lighting, removing illumination effects including shadow effects.
- The proposed approach is based on intrinsic images.

Proposed method overview

Our method is composed of two parts:

- Estimation of intrinsic images: using a background image sequence, we derive intrinsic images with our estimation method (extended from Weiss's ML estimation method).
- Direct estimation of illumination images: using the preconstructed illumination eigenspace, we estimate an illumination image directly from an input image.

Intrinsic image estimation

- Goal: Estimate intrinsic images under varying illumination (inspired by the ML estimation method [21]).
- The ML method is effective for extracting the scene texture under the Lambertian assumption.
- In real scenes, however, the Lambertian assumption often does not hold.
- Lambertian model: diffuse reflection intensity = [surface normal vector] · [incident light vector].
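As a minimal numeric sketch of the Lambertian model above (illustrative values only, not from the paper):

import numpy as np

# Lambertian shading: intensity = albedo * (surface normal . light direction).
albedo = 0.8
normal = np.array([0.0, 0.0, 1.0])   # unit surface normal (facing up)
light = np.array([0.0, 0.6, 0.8])    # unit incident light direction
intensity = albedo * max(0.0, float(normal @ light))
print(intensity)                      # 0.64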

Intrinsic image estimation

- This paper proposes a set of time-varying reflectance images R(x, y, t) instead of a time-invariant reflectance image R(x, y).
- Start with the ML estimation method [21]: applying it, a single reflectance image Rw(x, y) and a set of illumination images Lw(x, y, t) are estimated.
- The scene texture image is the reflectance image.
- Our goal: decompose I(x, y, t) = R(x, y, t) L(x, y, t), deriving R(x, y, t) and L(x, y, t).

Intrinsic image estimation

- With nth-derivative filters fn, a filtered reflectance image is computed by taking the median along the time axis.
- With those filters, the input images are decomposed into intrinsic images.
- The filtered illumination images are then computed using the estimated filtered reflectance image (see the sketch below).
- We take a straightforward approach to remove texture edges from lw and derive the illumination images l(x, y, t).
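A minimal sketch of this decomposition step, assuming log-domain images and simple finite-difference filters (function and variable names are illustrative, not from the paper):

import numpy as np
from scipy.ndimage import convolve

# Derivative filters f_n (horizontal and vertical finite differences).
FILTERS = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]

def decompose(images):
    # images: (T, H, W) stack of positive background images.
    i = np.log(images)  # in the log domain, I = R * L becomes i = r + l
    r_hat, l_hat = [], []
    for f in FILTERS:
        fi = np.stack([convolve(frame, f) for frame in i])
        r_n = np.median(fi, axis=0)   # ML (median) estimate of the filtered reflectance
        l_n = fi - r_n                # filtered illumination for each frame
        r_hat.append(r_n)
        l_hat.append(l_n)
    return r_hat, l_hat

Recovering r and l themselves from these filtered estimates requires the pseudo-inverse filtering step of Weiss's method, which is omitted here.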

Shadow removal

- Using the scene illumination images obtained by our method, the input image sequence can be normalized in terms of illumination.
- Create background images over each short time range ΔT, assuming:
  - Illumination does not vary within ΔT.
  - Moving objects in the scene are not observed at the same point longer than the background within ΔT.
- Using the estimation method, decompose each image in the background image sequence into corresponding reflectance images R(x, y, t) and illumination images L(x, y, t).

Shadow removal

The resulting illumination-invariant image N(x, y, t) can be derived by the following equation (a code sketch follows):

N(x, y, t) = I(x, y, t) / L(x, y, t)
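A one-function sketch of this normalization, assuming I and L are arrays of the same shape (names are illustrative):

import numpy as np

def normalize(I, L, eps=1e-6):
    # N = I / L: dividing out the illumination leaves the reflectance,
    # so cast shadows in I are removed.
    return I / np.maximum(L, eps)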

Illumination eigenspace for direct estimation of illumination

- We propose the illumination eigenspace to model variations of the illumination images of the scene.
- Principal component analysis (PCA) is used to construct the illumination eigenspace of a target scene.
- The basic idea of PCA is to find the basis components [s_1, s_2, ..., s_n] that explain the maximum amount of variance possible with n linearly transformed components.

Illumination eigenspace for direct estimation of illumination

- We map L(x, y, t) into the illumination eigenspace.
- An illumination space matrix P is constructed by subtracting the mean image (the average of all Lw) from each Lw.
- P is an N×M matrix, where N is the number of pixels in an illumination image and M is the number of illumination images Lw.
- The covariance matrix Q of P is formed as Q = P P^T.
- Finally, the eigenvectors e_i and the corresponding eigenvalues λ_i of Q are determined by solving Q e_i = λ_i e_i (see the sketch below).
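A sketch of the eigenspace construction, assuming the M stored illumination images are flattened into rows; since the pixel count N is much larger than M, this uses the standard M×M Gram-matrix trick rather than forming the N×N covariance directly (an implementation choice, not necessarily the paper's):

import numpy as np

def build_eigenspace(L_images, k):
    # L_images: (M, N) matrix, one flattened illumination image per row.
    mean = L_images.mean(axis=0)
    P = (L_images - mean).T                 # N x M illumination space matrix
    G = P.T @ P                             # M x M Gram matrix, same nonzero spectrum as Q
    w, v = np.linalg.eigh(G)                # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]         # keep the k largest
    basis = P @ v[:, order]                 # lift eigenvectors back to pixel space
    basis /= np.linalg.norm(basis, axis=0)  # normalize each e_i
    ratio = w[order].cumsum() / w.sum()     # cumulative contribution ratio
    return mean, basis, ratio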

Illumination eigenspace for direct estimation of illumination

- Using the illumination eigenspace, direct estimation of an illumination image can be done given an input image that contains moving objects.
- We first divide the input image by a reflectance image to get a pseudo-illumination image L*.
- Using this pseudo-illumination image as a query, the best approximation of the corresponding illumination image is estimated (see the sketch below).
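A sketch of this query step, assuming the stored illumination images have already been projected onto the basis (helper names are hypothetical):

import numpy as np

def estimate_illumination(I, R, mean, basis, coeffs, stored_L, eps=1e-6):
    # coeffs: (M, k) projections of the M stored illumination images.
    L_star = (I / np.maximum(R, eps)).ravel()        # pseudo-illumination image L*
    q = basis.T @ (L_star - mean)                    # project the query into the subspace
    nn = np.argmin(((coeffs - q) ** 2).sum(axis=1))  # nearest-neighbor search in k dims
    return stored_L[nn]                              # best-matching illumination image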

Illumination eigenspace for direct estimation of illumination

- The number of stored images for this experiment was 2,048; the contribution ratio was 84.5 percent at 13 dimensions, 90.0 percent at 23 dimensions, and 99.0 percent at 120 dimensions.
- The disk space needed to store the subspace was about 32 MBytes for an image size of 320×243.

Illumination eigenspace for direct estimation of illumination

- The average time of the NN search is shown in Table 1 (MIPS R12000, 300 MHz) for 2,048 stored illumination images and an image size of 360×243.
- The estimation time is fast enough for real-time processing.

Experimental results

We evaluated our shadow elimination method via object tracking based on block matching, using two matching costs (both sketched below):

- Sum of squared differences (SSD)
- Normalized correlation function (NCF)
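For reference, minimal implementations of the two matching costs; this is one common formulation, and the paper's exact normalization may differ:

import numpy as np

def ssd(a, b):
    # Sum of squared differences between two image blocks.
    return float(((a.astype(float) - b.astype(float)) ** 2).sum())

def ncf(a, b):
    # Normalized correlation between two image blocks.
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))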


Conclusions

- We have described a framework for normalizing the illumination effects of real-world scenes.
- We extend the current method to properly handle surfaces with nonrigid reflectance properties.
- We utilize the illumination eigenspace, a preconstructed database which captures the illumination variation of the target scene.
- The method effectively handles the appearance variation caused by illumination.
- Disadvantage: the current implementation in research code is not fast enough for real-time processing.