


Adaptive Spatiotemporal Video Demosaicking Using Bidirectional Multistage Spectral Filters

Rastislav Lukac, Member, IEEE, and Konstantinos N. Plataniotis, Senior Member, IEEE

Abstract — This paper introduces a new adaptive spatiotemporal video demosaicking solution suitable for single-sensor digital video cameras. The proposed solution uses bidirectional multistage filters operating on spectrally-generated inputs to follow the varying spatiotemporal characteristics of the captured video, reduce demosaicking errors, and restore the full-color information in a cost-effective way. Experimentation reported in the paper suggests that the proposed solution produces visually pleasing videos while outperforming previously proposed computationally efficient video demosaicking methods.¹

Index Terms — Digital color imaging, single-sensor digital video cameras, video demosaicking, spatiotemporal processing.

I. INTRODUCTION

Typical consumer digital cameras acquire the color information of the visual scene using a color filter array (CFA) [1] placed on top of a single image sensor, such as a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor [2]. Since the sensor is essentially a monochromatic device, it records a single reading at each spatial location. However, three numerical components are necessary to describe a color [3], as human vision is based on three types of color photoreceptor cone cells [4]. Therefore, two missing values per spatial location have to be obtained by demosaicking [5]-[7], an image processing solution employed in the single-sensor processing pipeline to restore the color information from the acquired CFA data.

Various single-sensor consumer electronic devices with video capturing capabilities are currently in use. Examples include, but are not limited to, digital still and video cameras, personal digital assistants, mobile phones, and visual sensors for surveillance and automotive applications. Since video is spatiotemporal in nature, representing a time sequence of two-dimensional image frames, multi-frame or spatiotemporal information should be utilized in video demosaicking to avoid motion artifacts in the demosaicked video [5]-[7].

Existing spatiotemporal video demosaicking solutions are constructed using different signal processing paradigms to match specific design, algorithmic and application constraints.

¹ The authors are with The Edward S. Rogers Sr. Department of ECE, University of Toronto, Toronto, Canada.

Corresponding Author: Dr. Rastislav Lukac, Multimedia Laboratory, BA 4157, The Edward S. Rogers Sr. Department of ECE, University of Toronto, 10 King's College Road, Toronto, Ontario, M5S 3G4, Canada (e-mail: [email protected], web: http://www.dsp.utoronto.ca/~lukacr)

Part of this paper was presented at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2006 in Toulouse, France.

Namely, the solution proposed in [5] accommodates the cost-effective considerations inherent in mobile phone imaging by operating over three subsequent frames using unidirectional processing windows to compensate for local motion translations. Built on efficient data-adaptive and spectral modeling concepts, the solution achieves an excellent trade-off between performance and complexity, a feature suggesting its suitability for real-time, in-camera video demosaicking. Another processing paradigm is based on motion compensation. However, estimating motion in the acquired CFA video is a rather complicated task due to the underlying mosaic layout. Therefore, the video demosaicking solution proposed in [6] is suitable for high-end applications such as digital cinema, with most processing performed off-camera using companion personal computers (PCs). Finally, the solution presented in [7] performs joint multi-frame demosaicking and color super-resolution reconstruction to compensate for the shortcomings of single-sensor imaging systems with inexpensive optics. High-quality images and video are produced by fusing tens of low-resolution frames at the expense of computational complexity, thus requiring off-camera demosaicking of the acquired CFA video.

In this paper, we introduce a new spatiotemporal solution for in-camera video demosaicking. The proposed solution extends the unidirectional processing concepts of [5] by producing spectral sub-estimates using higher-order, bidirectional windows in a refined multistage processing architecture. In this way, additional spatial information is utilized, increasing the accuracy of the demosaicked outputs while keeping the implementation complexity at a level acceptable for cost-effective consumer electronic devices.

The rest of this paper is organized as follows. Section II introduces the proposed solution. Section III provides the performance evaluations and computational complexity analysis, and Section IV summarizes the main ideas.

II. PROPOSED VIDEO DEMOSAICKING

Among existing CFA layouts, with an overview presented in [1], we will consider a Red-Green-Blue (RGB) Bayer CFA due to its widespread use in the open literature. Using an array with a GRGR phase in the first row, the acquired sensor readings $z_{(r,s,t)}$ constitute a $K_1 \times K_2 \times K_3$ gray-scale, mosaic-like sequence, where $r = 1, 2, \ldots, K_1$ (row) and $s = 1, 2, \ldots, K_2$ (column) denote the spatial position and $t = 1, 2, \ldots, K_3$ indicates the temporal position (frame index).


In the equivalent color RGB representation [5], $z_{(r,s,t)}$ corresponds to an R-like vector $\mathbf{x}_{(r,s,t)} = [z_{(r,s,t)}, 0, 0]$ for (odd $r$, even $s$), a B-like vector $\mathbf{x}_{(r,s,t)} = [0, 0, z_{(r,s,t)}]$ for (even $r$, odd $s$), and a G-like vector $\mathbf{x}_{(r,s,t)} = [0, z_{(r,s,t)}, 0]$ for (odd $r$, odd $s$) and (even $r$, even $s$). Thus, each triplet $\mathbf{x}_{(r,s,t)} = [x_{(r,s,t)1}, x_{(r,s,t)2}, x_{(r,s,t)3}]$ consists of a single CFA entry $x_{(r,s,t)k}$ indicating the R ($k=1$), G ($k=2$), or B ($k=3$) component, whereas two zero values denote the missing components. The purpose of video demosaicking is to generate these two values using the available CFA entries, thus producing a $K_1 \times K_2 \times K_3$ full-color image sequence $\mathbf{x}: Z^3 \to Z^3$ with the pixels $\mathbf{x}_{(r,s,t)}$.
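To illustrate this representation, the following short sketch (Python/NumPy; our illustration, not code from the paper) embeds one CFA frame into the vector field. Note that the paper's indices are 1-based, so the parity conditions flip under 0-based array indexing:

```python
import numpy as np

def cfa_to_vector_frame(z):
    """Embed a Bayer CFA frame z (K1 x K2, GRGR phase in the first row)
    into a K1 x K2 x 3 field x whose missing components are zero.
    Arrays are 0-indexed, so the paper's (odd r, even s) R-like
    locations become (even row, odd column) here."""
    K1, K2 = z.shape
    x = np.zeros((K1, K2, 3), dtype=z.dtype)
    x[0::2, 1::2, 0] = z[0::2, 1::2]  # R-like: paper's (odd r, even s)
    x[1::2, 0::2, 2] = z[1::2, 0::2]  # B-like: paper's (even r, odd s)
    x[0::2, 0::2, 1] = z[0::2, 0::2]  # G-like: paper's (odd r, odd s)
    x[1::2, 1::2, 1] = z[1::2, 1::2]  # G-like: paper's (even r, even s)
    return x
```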

A. Step 1: Demosaicking Green Components

Following the double density of the acquired G-like entries compared to the acquired R-like and B-like entries, the proposed video demosaicking solution starts by populating the missing G components $x_{(r,s,t)2}$ as follows:

$$x_{(r,s,t)2} = x_{(r,s,t)k} + \left( \sum_{i=1}^{4} w_i c_i \right) \Big/ \sum_{i=1}^{4} w_i \qquad (1)$$

where $w_i$ is the edge-sensing weight, $c_i$ is the spectrally-generated input, and $x_{(r,s,t)k}$ is the spectrally-normalizing component. Both $w_i$ and $c_i$ are associated with one of the four bidirectional spatiotemporal windows shown in Figs. 1(a-d) through the following definitions:

$$
\begin{aligned}
w_1 &= \left(1 + \left|c_{(r-1,s,t-1)} - c_{(r+1,s,t+1)}\right| + \left|c_{(r-1,s,t)} - c_{(r+1,s,t)}\right|\right)^{-1} \\
w_2 &= \left(1 + \left|c_{(r,s-1,t-1)} - c_{(r,s+1,t+1)}\right| + \left|c_{(r,s-1,t)} - c_{(r,s+1,t)}\right|\right)^{-1} \\
w_3 &= \left(1 + \left|c_{(r,s+1,t-1)} - c_{(r,s-1,t+1)}\right| + \left|c_{(r,s+1,t)} - c_{(r,s-1,t)}\right|\right)^{-1} \\
w_4 &= \left(1 + \left|c_{(r+1,s,t-1)} - c_{(r-1,s,t+1)}\right| + \left|c_{(r+1,s,t)} - c_{(r-1,s,t)}\right|\right)^{-1}
\end{aligned}
\qquad (2)
$$

$$
\begin{aligned}
c_1 &= \left(c_{(r-1,s,t-1)} + c_{(r+1,s,t+1)} + c_{(r-1,s,t)} + c_{(r+1,s,t)}\right)/4 \\
c_2 &= \left(c_{(r,s-1,t-1)} + c_{(r,s+1,t+1)} + c_{(r,s-1,t)} + c_{(r,s+1,t)}\right)/4 \\
c_3 &= \left(c_{(r,s+1,t-1)} + c_{(r,s-1,t+1)} + c_{(r,s+1,t)} + c_{(r,s-1,t)}\right)/4 \\
c_4 &= \left(c_{(r+1,s,t-1)} + c_{(r-1,s,t+1)} + c_{(r+1,s,t)} + c_{(r-1,s,t)}\right)/4
\end{aligned}
\qquad (3)
$$

where $c_{(r-1,s,\alpha)}$, $c_{(r,s-1,\alpha)}$, $c_{(r,s+1,\alpha)}$, and $c_{(r+1,s,\alpha)}$, for $\alpha = t-1, t, t+1$, are the color-difference values, as known from [5]:

$$
\begin{aligned}
c_{(r-1,s,\alpha)} &= x_{(r-1,s,\alpha)2} - \left(x_{(r,s,\alpha)k} + x_{(r-2,s,\alpha)k}\right)/2 \\
c_{(r,s-1,\alpha)} &= x_{(r,s-1,\alpha)2} - \left(x_{(r,s,\alpha)k} + x_{(r,s-2,\alpha)k}\right)/2 \\
c_{(r,s+1,\alpha)} &= x_{(r,s+1,\alpha)2} - \left(x_{(r,s,\alpha)k} + x_{(r,s+2,\alpha)k}\right)/2 \\
c_{(r+1,s,\alpha)} &= x_{(r+1,s,\alpha)2} - \left(x_{(r,s,\alpha)k} + x_{(r+2,s,\alpha)k}\right)/2
\end{aligned}
\qquad (4)
$$

If the demosaicked location $(r,s,t)$ in (1) corresponds to odd $r$ and even $s$, i.e. R-like CFA locations, then both (1) and (4) should be used with $k=1$. Otherwise, (1) is used at the B-like CFA locations, i.e. for even $r$ and odd $s$, suggesting the setting $k=3$ in (1) and (4).

Fig. 1. Proposed bidirectional processing windows (a)-(h) for spatiotemporal video demosaicking of Bayer CFA-captured image sequences.
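To make the mechanics of Step 1 concrete, the following sketch (our Python/NumPy illustration, not code from the paper) evaluates (1)-(4) at a single interior location. The sequence is assumed stored as a 0-indexed array x of shape (K1, K2, K3, 3), so the paper's channel indices k = 1, 2, 3 become 0, 1, 2 and the row/column parity conditions flip; frame and image boundaries are left unhandled:

```python
def color_differences(x, r, s, t, k):
    """Color-difference values of (4) around location (r, s, t), for the
    previous, current, and next frames. Channels are 0-based: k = 0
    selects R, k = 2 selects B, and 1 is the G channel."""
    c = {}
    for a in (t - 1, t, t + 1):
        c[(r - 1, s, a)] = x[r - 1, s, a, 1] - (x[r, s, a, k] + x[r - 2, s, a, k]) / 2.0
        c[(r, s - 1, a)] = x[r, s - 1, a, 1] - (x[r, s, a, k] + x[r, s - 2, a, k]) / 2.0
        c[(r, s + 1, a)] = x[r, s + 1, a, 1] - (x[r, s, a, k] + x[r, s + 2, a, k]) / 2.0
        c[(r + 1, s, a)] = x[r + 1, s, a, 1] - (x[r, s, a, k] + x[r + 2, s, a, k]) / 2.0
    return c

# Bidirectional window directions of Figs. 1(a-d): each pair gives the
# (row, column) offsets paired at frames t-1 and t+1, respectively.
AXIAL_DIRS = [((-1, 0), (1, 0)), ((0, -1), (0, 1)),
              ((0, 1), (0, -1)), ((1, 0), (-1, 0))]

def demosaick_green(x, r, s, t, k):
    """Estimate of the missing G component at (r, s, t) via (1)-(4)."""
    c = color_differences(x, r, s, t, k)
    num = den = 0.0
    for (dr1, ds1), (dr2, ds2) in AXIAL_DIRS:
        a = c[(r + dr1, s + ds1, t - 1)]
        b = c[(r + dr2, s + ds2, t + 1)]
        p = c[(r + dr1, s + ds1, t)]
        q = c[(r + dr2, s + ds2, t)]
        w = 1.0 / (1.0 + abs(a - b) + abs(p - q))  # edge-sensing weight, (2)
        ci = (a + b + p + q) / 4.0                 # spectral input, (3)
        num += w * ci
        den += w
    return x[r, s, t, k] + num / den               # estimate of (1)
```

In a full implementation this loop would run over every R-like and B-like location of each frame, with k chosen per location as described above.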

B. Step 2: Demosaicking Red and Blue Components

To generate the missing R and B components, the demosaicking formula proposed in (1) should be rewritten as follows:

$$x_{(r,s,t)k} = x_{(r,s,t)2} + \left( \sum_{i=1}^{4} w_i c_i \right) \Big/ \sum_{i=1}^{4} w_i \qquad (5)$$

Since the acquired R and B components have a different arrangement on the image lattice than the acquired G components, the values of $w_i$ and $c_i$ in (5) are given by

$$
\begin{aligned}
w_1 &= \left(1 + \left|c_{(r-1,s-1,t-1)} - c_{(r+1,s+1,t+1)}\right| + \left|c_{(r-1,s-1,t)} - c_{(r+1,s+1,t)}\right|\right)^{-1} \\
w_2 &= \left(1 + \left|c_{(r-1,s+1,t-1)} - c_{(r+1,s-1,t+1)}\right| + \left|c_{(r-1,s+1,t)} - c_{(r+1,s-1,t)}\right|\right)^{-1} \\
w_3 &= \left(1 + \left|c_{(r+1,s-1,t-1)} - c_{(r-1,s+1,t+1)}\right| + \left|c_{(r+1,s-1,t)} - c_{(r-1,s+1,t)}\right|\right)^{-1} \\
w_4 &= \left(1 + \left|c_{(r+1,s+1,t-1)} - c_{(r-1,s-1,t+1)}\right| + \left|c_{(r+1,s+1,t)} - c_{(r-1,s-1,t)}\right|\right)^{-1}
\end{aligned}
\qquad (6)
$$

$$
\begin{aligned}
c_1 &= \left(c_{(r-1,s-1,t-1)} + c_{(r+1,s+1,t+1)} + c_{(r-1,s-1,t)} + c_{(r+1,s+1,t)}\right)/4 \\
c_2 &= \left(c_{(r-1,s+1,t-1)} + c_{(r+1,s-1,t+1)} + c_{(r-1,s+1,t)} + c_{(r+1,s-1,t)}\right)/4 \\
c_3 &= \left(c_{(r+1,s-1,t-1)} + c_{(r-1,s+1,t+1)} + c_{(r+1,s-1,t)} + c_{(r-1,s+1,t)}\right)/4 \\
c_4 &= \left(c_{(r+1,s+1,t-1)} + c_{(r-1,s-1,t+1)} + c_{(r+1,s+1,t)} + c_{(r-1,s-1,t)}\right)/4
\end{aligned}
\qquad (7)
$$

Note that the above definitions follow the four bidirectional processing windows shown in Figs. 1(e-h). Due to the availability of the demosaicked G planes, the color-difference values $c_{(r-1,s-1,\alpha)}$, $c_{(r-1,s+1,\alpha)}$, $c_{(r+1,s-1,\alpha)}$, and $c_{(r+1,s+1,\alpha)}$ seen in (6) and (7), for $\alpha = t-1, t, t+1$, are now defined as follows:

$$
\begin{aligned}
c_{(r-1,s-1,\alpha)} &= x_{(r-1,s-1,\alpha)k} - x_{(r-1,s-1,\alpha)2} \\
c_{(r-1,s+1,\alpha)} &= x_{(r-1,s+1,\alpha)k} - x_{(r-1,s+1,\alpha)2} \\
c_{(r+1,s-1,\alpha)} &= x_{(r+1,s-1,\alpha)k} - x_{(r+1,s-1,\alpha)2} \\
c_{(r+1,s+1,\alpha)} &= x_{(r+1,s+1,\alpha)k} - x_{(r+1,s+1,\alpha)2}
\end{aligned}
\qquad (8)
$$

Fig. 2. Test color videos: (a) Cost, (b) Bikes, (c) Nature.

Demosaicking via (5) with the bidirectional windows shown in Figs. 1(e-h) does not fully populate the R and B color planes. However, it produces arrangements similar to that of the acquired G components, necessitating the bidirectional configurations shown in Figs. 1(a-d) to complete the demosaicking process. Thus, (5) should be re-applied with $w_i$ and $c_i$ respectively obtained in (2) and (3) using the following color-difference values:

$$
\begin{aligned}
c_{(r-1,s,\alpha)} &= x_{(r-1,s,\alpha)k} - x_{(r-1,s,\alpha)2} \\
c_{(r,s-1,\alpha)} &= x_{(r,s-1,\alpha)k} - x_{(r,s-1,\alpha)2} \\
c_{(r,s+1,\alpha)} &= x_{(r,s+1,\alpha)k} - x_{(r,s+1,\alpha)2} \\
c_{(r+1,s,\alpha)} &= x_{(r+1,s,\alpha)k} - x_{(r+1,s,\alpha)2}
\end{aligned}
\qquad (9)
$$

where $\alpha = t-1, t, t+1$. Note that, in accordance with the notation convention used in this paper, the proposed formulas (5), (8), and (9) should be used with $k=1$ and $k=3$ in order to demosaick the R and B components, respectively.
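A companion sketch for the first, diagonal pass of Step 2 follows (same conventions and caveats as the Step 1 sketch above; again our illustration rather than the paper's code):

```python
def demosaick_rb_diagonal(x, r, s, t, k):
    """First pass of Step 2: estimate the missing R (k = 0) or B (k = 2)
    component at the opposite CFA location via (5)-(8), using the four
    diagonal bidirectional windows of Figs. 1(e-h)."""
    def cd(rr, ss, a):
        # Color differences of (8): chrominance against demosaicked G.
        return x[rr, ss, a, k] - x[rr, ss, a, 1]

    diag_dirs = [((-1, -1), (1, 1)), ((-1, 1), (1, -1)),
                 ((1, -1), (-1, 1)), ((1, 1), (-1, -1))]
    num = den = 0.0
    for (dr1, ds1), (dr2, ds2) in diag_dirs:
        a = cd(r + dr1, s + ds1, t - 1)
        b = cd(r + dr2, s + ds2, t + 1)
        p = cd(r + dr1, s + ds1, t)
        q = cd(r + dr2, s + ds2, t)
        w = 1.0 / (1.0 + abs(a - b) + abs(p - q))  # weights of (6)
        ci = (a + b + p + q) / 4.0                 # inputs of (7)
        num += w * ci
        den += w
    return x[r, s, t, 1] + num / den               # estimate of (5)
```

The second pass would re-apply the axial windows of the Step 1 sketch, with cd above (the per-location differences of (9)) substituted for the color differences of (4), since the partially demosaicked R and B samples then occupy the same quincunx arrangement as the acquired G samples.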

III. EXPERIMENTAL RESULTS

Fig. 2 shows the test color image sequences, which have been captured using a three-sensor camera and normalized into an 8-bit per color component representation, a $300 \times 200$ spatial resolution, and 99 frames. Following the procedure presented in [5], the CFA image sequences used as the input of the proposed solution were obtained by sampling the original video frames with a Bayer pattern [1].
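For reproducibility, this CFA sampling step can be mimicked as follows (our sketch, assuming 0-based indexing of frames with the paper's GRGR phase in the first row):

```python
import numpy as np

def bayer_sample(frame):
    """Reduce a full-color frame (K1 x K2 x 3) to scalar CFA readings,
    keeping one component per spatial location as a Bayer sensor would."""
    K1, K2, _ = frame.shape
    z = np.empty((K1, K2), dtype=frame.dtype)
    z[0::2, 1::2] = frame[0::2, 1::2, 0]  # R at paper's (odd r, even s)
    z[1::2, 0::2] = frame[1::2, 0::2, 2]  # B at paper's (even r, odd s)
    z[0::2, 0::2] = frame[0::2, 0::2, 1]  # G on the remaining
    z[1::2, 1::2] = frame[1::2, 1::2, 1]  # quincunx lattice
    return z
```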

A. Performance Evaluations

The performance was evaluated by comparing the demosaicked sequences to the original sequences. To facilitate objective comparisons, three criteria were used: the mean absolute error (MAE), the mean square error (MSE), and the normalized color difference (NCD). The definitions of these widely accepted error measures can be found in [5].
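For reference, MAE and MSE follow the usual conventions sketched below (our code; the exact per-channel averaging, and the NCD, which is evaluated in a perceptually uniform color space, are specified in [5]):

```python
import numpy as np

def mae(original, demosaicked):
    """Mean absolute error over all frames, pixels, and color channels."""
    return np.mean(np.abs(original.astype(float) - demosaicked.astype(float)))

def mse(original, demosaicked):
    """Mean square error over all frames, pixels, and color channels."""
    return np.mean((original.astype(float) - demosaicked.astype(float)) ** 2)
```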

Table I summarizes the results achieved using cost-effective video demosaicking solutions based on bilinear interpolation (BI), unidirectional processing (UDP) [5], and the proposed bidirectional multistage filtering. As can be seen, the proposed method produced the best results among the considered solutions, and both UDP and the proposed method significantly outperformed the BI scheme. Comparison of the UDP scheme and the proposed solution demonstrates the improvement obtained through the larger spatial support.

TABLE I
OBJECTIVE EVALUATION OF VIDEO DEMOSAICKING

Test color video   Method      MAE     MSE     NCD
Cost               BI         6.640   177.5   0.0994
                   UDP        2.660    24.6   0.0439
                   Proposed   2.519    21.4   0.0421
Bikes              BI         6.498   174.5   0.1103
                   UDP        1.986    14.7   0.0439
                   Proposed   1.912    13.4   0.0424
Nature             BI        10.238   355.1   0.1448
                   UDP        3.369    37.4   0.0545
                   Proposed   3.035    28.7   0.0503

Fig. 3 allows visual (subjective) comparison of the original images and the demosaicked results, cropped in areas with significant structural content. It is not difficult to see that the BI solution produces images with noticeable color artifacts, zipper effects, and blurred edges, whereas the other two solutions produce high-quality demosaicked videos. Moreover, when the proposed solution is used, additional improvements in terms of both sharpness and coloration can be achieved.

Fig. 3. Enlarged parts of the results achieved using the test videos: (a) Cost, (b) Bikes, (c) Nature. From top to bottom, the results correspond to the original images, BI, UDP, and the proposed solution.

B. Computational Complexity Analysis

Since the computational efficiency of any processing solution is as important as its performance, the proposed solution is analyzed here in terms of normalizing operations, such as additions, subtractions, multiplications, divisions, and absolute values. Execution time, measured using a conventional PC with a standard operating system and programming environment, is provided as well.


TABLE II
THE NUMBER OF NORMALIZING OPERATIONS PER SPATIAL LOCATION

                 UDP method [5]           Proposed method
Criterion        Step 1  Step 2  Total    Step 1  Step 2  Total
Additions          35      23     58        43      31     74
Subtractions       18      18     36        20      20     40
Multiplications     6       6     12         6       6     12
Divisions          25      13     38        21       9     30
Abs. values         6       6     12         8       8     16

Table II compares the computational complexity of the UDP and proposed solutions. As can be seen, enlarging the spatial support in the proposed solution increased the number of additions, subtractions, and absolute-value calculations compared to the UDP solution. On the other hand, by decreasing the number of processing windows from six unidirectional windows in the UDP solution to four bidirectional windows, the proposed method reduces the number of costly divisions.

The execution of the video demosaicking tools, on an Intel Pentium IV 2.40 GHz CPU, 512 MB RAM machine with the Windows XP operating system and the MS Visual C++ 5.0 programming environment, took on average 0.291 and 0.294 s per $300 \times 200$ video frame for the UDP and proposed solutions, respectively. Note that the software implementations have not been optimized, and that a discussion of implementation approaches suitable for real-time spatiotemporal demosaicking can be found in [5].

IV. SUMMARY

In summary, the proposed video demosaicking solution is a cost-effective adaptive spectral processor which follows local motion translations by employing refined bidirectional multistage filters. In this way, it produces visually pleasing full-color video with enhanced sharpness and coloration.

REFERENCES

[1] R. Lukac and K. N. Plataniotis, “Color filter arrays: Design and performance analysis,” IEEE Transactions on Consumer Electronics, vol. 51, no. 4, pp. 1260-1267, November 2005.

[2] M. Vrhel, E. Saber, and H. J. Trussell, “Color image generation and display technologies,” IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 23-33, January 2005.

[3] R. Lukac, B. Smolka, K. Martin, K. N. Plataniotis, and A. N. Venetsanopoulos, “Vector filtering for color imaging,” IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 74-86, January 2005.

[4] G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulas, 2nd Edition, N.Y.: John Wiley, 1982.

[5] R. Lukac and K. N. Plataniotis, “Fast video demosaicking solution for mobile phone imaging applications,” IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 675-681, May 2005.

[6] X. Wu and L. Zhang, “Color video demosaicking via motion estimation and data fusion,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 16, no. 2, pp. 231-240, February 2006.

[7] S. Farsiu, M. Elad, and P. Milanfar, “Multiframe demosaicing and super-resolution of color images,” IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 141-159, January 2006.

Rastislav Lukac (M’02) received the M.Sc. (Ing.) and Ph.D. degrees in Telecommunications from the Technical University of Kosice, Slovak Republic, in 1998 and 2001, respectively. From February 2001 to August 2002 he was an Assistant Professor with the Department of Electronics and Multimedia Communications at the Technical University of Kosice. From August 2002 to July 2003 he was a Researcher with the Slovak Image Processing Center in Dobsina, Slovak Republic. From January 2003 to March 2003 he was a Postdoctoral Fellow with the Artificial Intelligence and Information Analysis Laboratory, Aristotle University of Thessaloniki, Greece. Since May 2003 he has been a Postdoctoral Fellow with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Canada. He is a contributor to four books and has published over 200 papers in the areas of digital camera image processing, color image and video processing, multimedia security, and microarray image processing.

Dr. Lukac is a Member of the IEEE, EURASIP, and the IEEE Circuits and Systems, IEEE Consumer Electronics, and IEEE Signal Processing societies. He is a Guest Co-Editor of the Real-Time Imaging Special Issue on Multi-Dimensional Image Processing and of the Computer Vision and Image Understanding Special Issue on Color Image Processing for Computer Vision and Image Understanding. He is an Associate Editor for the Journal of Real-Time Image Processing. He serves as a technical reviewer for various scientific journals and participates as a member of numerous international conference committees. In 2003, he was the recipient of the NATO/NSERC Science Award.

Konstantinos N. Plataniotis (S’90–M’92–SM’03) received the B.Eng. degree in Computer Engineering from the Department of Computer Engineering and Informatics, University of Patras, Patras, Greece, in 1988, and the M.S. and Ph.D. degrees in Electrical Engineering from the Florida Institute of Technology (Florida Tech), Melbourne, Florida, in 1992 and 1994, respectively. From August 1997 to June 1999 he was an Assistant Professor with the School of Computer Science at Ryerson University. He is currently an Associate Professor at the Edward S. Rogers Sr. Department of Electrical & Computer Engineering, where he researches and teaches image processing, adaptive systems, and multimedia signal processing. He co-authored, with A. N. Venetsanopoulos, the book Color Image Processing & Applications (Springer Verlag, May 2000, ISBN 3-540-66953-1), he is a contributor to seven books, and he has published more than 300 papers in refereed journals and conference proceedings in the areas of multimedia signal processing, image processing, adaptive systems, communications systems, and stochastic estimation.

Dr. Plataniotis is a Senior Member of the IEEE, an Associate Editor for the IEEE Transactions on Neural Networks, and a past member of the IEEE Technical Committee on Neural Networks for Signal Processing. He was the Technical Co-Chair of the Canadian Conference on Electrical and Computer Engineering (CCECE) 2001 and CCECE 2004. He is the Technical Program Chair of the 2006 IEEE International Conference on Multimedia and Expo (ICME 2006), the Vice-Chair of the 2006 IEEE Intelligent Transportation Systems Conference (ITSC 2006), and the Image Processing Area Editor for the IEEE Signal Processing Society e-letter. He is the 2005 IEEE Canada “Outstanding Engineering Educator” Award recipient and the co-recipient of the 2006 IEEE Transactions on Neural Networks Outstanding Paper Award.