
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 2, FEBRUARY 2010 399

Selective Data Pruning-Based Compression Using High-Order Edge-Directed Interpolation

Dung T. Võ, Member, IEEE, Joel Solé, Member, IEEE, Peng Yin, Member, IEEE, Cristina Gomila, Member, IEEE, and Truong Q. Nguyen, Fellow, IEEE

Abstract—This paper proposes a selective data pruning-based compression scheme to improve the rate-distortion relation of compressed images and video sequences. The original frames are pruned to a smaller size before compression. After decoding, they are interpolated back to their original size by an edge-directed interpolation method. The data pruning phase is optimized to obtain the minimal distortion in the interpolation phase. Furthermore, a novel high-order interpolation is proposed to adapt the interpolation to several edge directions in the current frame. This high-order filtering uses more surrounding pixels in the frame than the fourth-order edge-directed method and it is more robust. The algorithm is also considered for multiframe-based interpolation by using spatio-temporally surrounding pixels coming from the previous frame. Simulation results are shown for both image interpolation and coding applications to validate the effectiveness of the proposed methods.

Index Terms—Data pruning, edge-directed interpolation, spatial-temporal interpolation, video compression.

I. INTRODUCTION

NOWADAYS, the demand for higher quality video is growing rapidly. Video tends toward higher resolution, higher frame-rate and higher bit-depth. New technologies to further reduce bit-rate are strongly demanded to combat the bit-rate increase of this high definition video, especially to meet network and communication transmission constraints. In video coding, there are two main directions to reduce compression bit-rate. One direction is to improve the compression technology and the other one is to perform a preprocessing step that improves the subsequent compression.

The first direction can be seen in the development of the MPEG video coding standards, from MPEG-1 to H.264/MPEG-4 AVC.

Manuscript received January 13, 2009; revised September 17, 2009. First published November 03, 2009; current version published January 15, 2010. This work was done while D. T. Võ was with Thomson Corporate Research and University of California at San Diego. This work was supported in part by Texas Instruments, Inc. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Hsueh-Ming Hang.

D. T. Võ is with the Digital Media Solutions Lab, Samsung Information Systems America, Irvine, CA 92612 USA (e-mail: [email protected]).

J. Solé, P. Yin, and C. Gomila are with the Thomson Corporate Research, Princeton, NJ 08540 USA (e-mail: [email protected]; [email protected]; [email protected]).

T. Q. Nguyen is with the Department of Electrical and Computer Engineering, University of California at San Diego, La Jolla, CA 92093-0407 USA (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIP.2009.2035845

For most video coding standards, increasing the quantization step size is used to reduce bit-rate [1]. However, this technique can result in blocking and other coding artifacts due to the loss of high frequency details. In the second direction, common techniques are low-pass filtering or downsampling (which can be seen as a filtering process) followed by reconstructing or upsampling at the decoder. For example, low-pass filters were adaptively used based on the Human Visual System to eliminate high frequency information in [2] or to simplify the contextual information in [3]. Also, to reduce the bit-rate, some digital television systems uniformly downsized the original sequence and upsized it after decoding. These methods contain a pruning phase to reduce the amount of data to compress and a reconstructing phase to recover the dropped data. The video reconstructed with these techniques looked blurred because they were designed to eliminate high-frequency information, either with the low-pass filter in the preprocessing step or with the anti-aliasing filter before downsizing.

This paper proposes a novel data pruning-based compression scheme to reduce the bit-rate while still keeping a high quality reconstructed frame. The original frames are first optimally pruned to a smaller size by adaptively dropping rows or columns prior to encoding. At the final stage, an interpolation phase is implemented to reconstruct the decoded frames to their original size. By avoiding filtering the remaining rows and columns, the reconstructed frames can still achieve high quality from a lower bit-rate.

Main applications of interpolation are upsampling, demosaicking and displaying video in different formats. For resolution enhancement, interpolation is implemented to overcome the limitation of low resolution imaging. A wide range of interpolation methods has been discussed, starting from conventional bilinear and bicubic interpolations to sophisticated iterative methods such as projection onto convex sets (POCS) [4] and nonconvex nonlinear partial differential equations [5]. To avoid the jaggedness artifacts occurring along edges, edge-oriented interpolation methods were performed using a Markov random field [6] or the low resolution (LR) image covariance [7]. Furthermore, [8] proposed a 2-D piecewise autoregressive model and a soft-decision estimation to interpolate the missing pixels in a group. This method requires a 12 × 12 matrix inversion and can cause artifacts in the output image when the matrix is badly conditioned. A combination of directional filtering and data fusion was also discussed in [9] to estimate missing high resolution (HR) pixels by a linear minimum mean square error estimation. Another group of interpolation algorithms used different kinds of transforms to predict the fine structure of the HR image from its LR version.
Instead of directly interpolating the HR image in the pixel domain, zeros were initially padded for the high frequencies from the wavelet transform [10] and the contourlet transform [11]. These algorithms were then iterated under the constraints of sparsity and the similarity of the low pass output of the LR and HR images.

In demosaicking, interpolation is applied to reconstruct the missing color components due to the color-filtered image sensor. The full-resolution color image can be achieved from the Bayer color filter array by interpolating the (R,G,B) planes separately as in [7] or jointly as in [12], [13]. Processing the color planes independently helps avoid misregistration between color planes but ignores the color planes' dependency. For joint color plane interpolation, the green pixels are first interpolated from the candidates of horizontal and vertical interpolation. After that, red and blue pixels are reconstructed based on the color differences, with the assumption that these differences are flat over small areas. An iterative algorithm for demosaicking using the color difference is discussed in [14]. Interpolation is also required when video sequences are displayed in frame sizes other than their original frame size. In [15], the decoded frame in an unsuitable frame size is upsized and downsized to achieve the arbitrary target frame size in a pixel domain transcoder.

When interpolation is used along with data pruning, the method needs to adapt to the way of pruning the data and to the structure of the surrounding pixels. For instance, there are pruning cases in which only rows or only columns are dropped and upsampling in only one direction is required. This paper develops a high-order edge-directed interpolation scheme to deal with these cases. The algorithm is also considered for the cases of dropping both rows and columns. Furthermore, instead of using only spatially neighboring pixels for image interpolation, the algorithm is extended for cases of video interpolation using spatio-temporally neighboring pixels.

The paper is organized as follows. Section II introduces the data pruning-based compression method. Section III derives an optimal data pruning algorithm. The high-order edge-directed interpolation methods corresponding to the data pruning-based compression scheme are described in Section IV. Results for interpolation and coding applications are presented in Section V. Finally, Section VI gives the concluding remarks and discusses future work.

II. DATA PRUNING-BASED COMPRESSION

The block diagram of the data pruning-based compression for one frame is shown in Fig. 1. At first, the original frame is pruned to a frame of smaller size by dropping a number of rows and columns. The purpose of data pruning is to reduce the number of bits representing the stored or compressed frame. Then, a frame having the original size is reconstructed by interpolating the pruned frame. The conventional data pruning-based compression methods reduce the frame size by a factor of 2 in both the horizontal and vertical directions by dropping half of the columns and rows. Because of aliasing, interpolation after downsizing causes jaggedness artifacts, especially in detailed areas with high frequencies.

Fig. 1. Block diagram of the data pruning-based compression.

Only 25% of the data is kept in the pruning phase, a fact that also prevents achieving a reconstructed frame with high quality, even without compression.

In data pruning-based compression for video, downsizing in both the spatial and temporal directions is applied to further reduce the bit-rate. In temporal data pruning-based compression, the frame-rate is usually reduced by half and is later reconstructed by motion compensated frame interpolation (MCFI) methods [16], [17]. For fast motion video sequences or for frames at scene changes, these methods typically cause blocking, flickering and ghosting artifacts. The rate-distortion (R-D) performance of these data-pruned compressed sequences is much lower than that of the directly compressed sequences due to the high percentage of data loss (up to 87.5%) and the limitations of current video interpolation methods.

Uniformly pruning images or video sequences ignores the data-dependent artifacts caused by the interpolation phase. In this paper, the proposed data pruning-based compression method adapts to the error resulting from the interpolation phase. Data that can be reconstructed with little error have higher priority to be dropped than data that cause larger errors during interpolation. The proposed data pruning phase and its corresponding interpolation phase in Fig. 1 are discussed in Sections III and IV, respectively.

III. OPTIMAL DATA PRUNING

The block diagram of the data pruning phase for one frame is shown in Fig. 2. Only the even rows and columns may be discarded, while the odd rows and columns are always kept for later interpolation. To simplify the analysis, the compression stage in Fig. 1 is ignored. In this phase, the original frame is selectively decimated to an LR frame for the cases of dropping all the even rows, all the even columns, and all the even rows and columns. Then, for each of these 3 downsampling scenarios, the LR frame is interpolated back to the HR frame based on all odd rows and columns (upscaling by a ratio of 2 × 2), all odd rows (upscaling by a ratio of 2 × 1) or all odd columns (upscaling by a ratio of 1 × 2). Finally, these 3 reconstructed frames are compared to the original frame in order to decide the best downsampling scenario and the number of even rows and columns to be dropped before compression. Because of the decimation and interpolation, the reconstructed frame is different from the original frame. The principle of the algorithm is that the even rows and columns of the original frame that can be reconstructed with the least error are chosen to be dropped. The mean squared error (MSE) between the original and reconstructed frames is defined as

(1)
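As a minimal sketch of this definition, assuming the original frame is denoted X, the reconstructed frame X̂, and the frame size M × N (this notation is illustrative), the MSE takes the usual form

\mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \bigl[ X(i,j) - \hat{X}(i,j) \bigr]^{2} .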

Fig. 2. Block diagram of the data pruning phase.

Given a target error level, the data pruning is optimized to discard the maximum number of pixels while keeping the overall MSE of the dropped rows and columns below that target, that is

(2)
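A minimal sketch of this criterion, assuming m and n denote the numbers of dropped even rows and columns and MSE_T the target error level (symbols assumed here for illustration):

\max_{\text{dropped lines}} \; (m + n) \quad \text{subject to} \quad \mathrm{MSE} \le \mathrm{MSE}_{T} .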

The locations of the dropped rows and columns are indicated by binary flags for rows and columns, respectively: each even column has a flag that marks whether it is dropped or kept. These indices are stored as side information in the coded bitstream and are used for reconstructing the decoded frame. The same algorithm is applied to rows.

The line mean square error for one dropped column is defined as

(3)

and similarly for rows. From (2), lines with a smaller line mean square error have higher priority to be dropped than lines with a larger one. Assume that the dropped rows and columns are those with the smallest line mean square errors, and that the maximum error among these lines is known. Then, the overall MSE in (1) becomes the average error over all dropped pixels [see (4)]. Therefore, the condition in (2) can be tightened to

(5)

where the threshold is determined by the minimal target quality that the reconstructed frame has to achieve. An example of the proposed optimal data pruning is shown in Fig. 3 for the 1st frame of the sequence Akiyo.

Fig. 3. Data pruning for the 1st frame of the Akiyo sequence. (a) Lines indicated for pruning. (b) Pruned frame.

In Fig. 3(a), the white lines indicate the lines dropped for the given target. The frame size is reduced from the standard definition 720 × 480 to 464 × 320. The data pruned frame in Fig. 3(b) is more compact and it requires a smaller compressed bitstream than the original frame. Most of the dropped lines are located in flat areas where aliasing does not occur.
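As an illustration of the selection rule, the following sketch picks the even columns to drop for a single frame. The function names and the use of simple two-neighbor averaging in place of the edge-directed interpolation of Section IV are illustrative assumptions; a corresponding pass over even rows would follow the same pattern.

import numpy as np

def line_mse(original, reconstructed, col):
    # Mean squared error of one column between the original frame and its reconstruction (cf. (3)).
    diff = original[:, col].astype(np.float64) - reconstructed[:, col].astype(np.float64)
    return float(np.mean(diff ** 2))

def select_columns_to_drop(frame, target_mse):
    # Columns at 1-based even positions (0-based indices 1, 3, 5, ...) are candidates;
    # odd columns are always kept. Each candidate is reconstructed here as the average
    # of its two kept neighbors, a stand-in for the edge-directed interpolation.
    frame = frame.astype(np.float64)
    recon = frame.copy()
    n_cols = frame.shape[1]
    candidates = range(1, n_cols - 1, 2)
    for col in candidates:
        recon[:, col] = 0.5 * (frame[:, col - 1] + frame[:, col + 1])
    # Condition (5): drop every candidate line whose line MSE stays below the target.
    return [col for col in candidates if line_mse(frame, recon, col) <= target_mse]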

For video sequences, the algorithm is extended by dropping the same lines over the frames of the whole group of pictures (GOP). In this case, the MSE is defined as

(6)

where the error is computed between the original and reconstructed video sequences and averaged over the number of frames in the GOP. The line mean square error for one dropped column is also extended in the temporal direction as

(7)

and similarly for rows. This case leads to the same condition as in (5).

IV. HIGH-ORDER EDGE-DIRECTED INTERPOLATION

This section proposes a high-order edge-directed interpolation method to interpolate the downsized frames in Fig. 1 and the data pruned frames in Fig. 2. In [7], the fourth-order new edge-directed interpolation (NEDI-4) is used to upsize only by the 2 × 2 ratio.

(4)


Fig. 4. Block diagram of the single frame-based interpolation phase.

This interpolation can orient to edges in 2 directions and causes some artifacts at the intersections of more than 2 edges. The proposed methods are higher order interpolations that can adapt to more edge directions. For single frame-based interpolation, the sixth-order and eighth-order edge-directed interpolations are developed for interpolating the cases with ratio 1 × 2 or 2 × 1 (dropping only rows or only columns) and ratio 2 × 2 (dropping both rows and columns), respectively. For multiframe-based interpolation, the ninth-order edge-directed interpolation is discussed for interpolating the case with ratio 1 × 2 or 2 × 1 over all the frames of a GOP (dropping only rows or only columns).

A. Single Frame-Based Interpolation

Because a similar interpolation method is used in both cases, this section only discusses the interpolation of the data pruned frame. The block diagram of the interpolation phase is shown in Fig. 4. First, the data pruned frame is expanded to the original size by inserting a line of zeros at every position whose indicator marks a dropped column or row. The expanded frame is selectively downsampled by a 1 × 2, 2 × 1 or 2 × 2 ratio to form the LR frame, depending on the chosen data pruning scheme. Then, the LR frame is directionally interpolated to the HR frame of the original size. Finally, the row and column indicators determine whether each line of the final reconstructed frame is selected from the interpolated frame or from the data pruned frame: kept lines are copied directly from the pruned data, and dropped lines are taken from the interpolated frame.
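A minimal sketch of this line-selection step for the column-pruning case, assuming a boolean array kept_col marking the columns that were not dropped and an already interpolated frame of the original size (all names are illustrative):

import numpy as np

def reassemble_columns(pruned, interpolated, kept_col):
    # kept_col:     boolean mask over the original column indices (True = column was kept).
    # pruned:       the data pruned frame, holding only the kept columns in order.
    # interpolated: the directionally interpolated frame at the original size.
    out = interpolated.copy()
    out[:, kept_col] = pruned   # copy the kept columns back unchanged
    return out                  # dropped columns keep their interpolated values

# Example usage (illustrative):
# kept_col = np.ones(width, dtype=bool); kept_col[dropped_columns] = False
# recon = reassemble_columns(pruned_frame, interpolated_frame, kept_col)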

1) Sixth-Order Edge-Directed Interpolation (NEDI-6): The same NEDI-6 is implemented for the case of single frame-based interpolation with upsampling ratios of 1 × 2 or 2 × 1. For the case of ratio 1 × 2, the pixel indexes are classified into indexes for odd columns and indexes for even columns. The columns of the LR frame are mapped to the odd columns of the HR frame of the original size. The even columns are then interpolated from the odd columns by a sixth-order interpolation

(8)

where the interpolated pixel is a weighted sum in which the weights form the vector of sixth-order model parameters applied to its 6 neighboring pixels, as shown in Fig. 5(a). In this figure, the solid circles are the mapped LR pixels while the other circles are the HR pixels to be interpolated. Assuming that the model parameter vector is nearly constant in a local window,

Fig. 5. Model parameters of sixth-order and eighth-order edge-directed interpolation. (a) NEDI-6. (b) NEDI-8.

the optimal parameters minimizing the MSE between the interpolated and original pixels in the window can be calculated by

(9)

The geometric duality assumption [18] states that the model vector can be considered constant for different scales and so it can be estimated from the LR pixels by

(10)

where the 6 neighboring LR pixels and the LR model parameter vector are as shown in Fig. 5(a). This vector contains the edge-directed information, which is applied to the HR scale for interpolation. The optimal minimum MSE linear parameter vector is then obtained by

(11)

where the right-hand side collects the vector of all mapped LR pixels in the window and a matrix whose columns contain the 6 neighboring pixels of each of those LR pixels, as shown in Fig. 5(a).
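As a sketch of the least-squares step in (9)-(11), the following estimates the edge-directed weights from a window of LR pixels and applies them to one missing HR pixel. The function names, the generic K-neighbor layout, and the omitted window handling are illustrative simplifications rather than the exact NEDI-6 configuration.

import numpy as np

def estimate_nedi_weights(lr_neighbors, lr_targets):
    # Least-squares fit of the edge-directed model: every known LR pixel in the local
    # window is predicted from its own coarse-scale neighbors (geometric duality),
    # so the weights alpha solve  lr_targets ~ lr_neighbors @ alpha.
    # lr_neighbors: (P, K) array, one row of K neighboring pixels per training pixel.
    # lr_targets:   (P,) array of the training pixels themselves.
    alpha, _, _, _ = np.linalg.lstsq(lr_neighbors, lr_targets, rcond=None)
    return alpha

def interpolate_pixel(hr_neighbors, alpha):
    # Apply the estimated weights to the K known neighbors of a missing HR pixel (cf. (8)).
    return float(np.dot(hr_neighbors, alpha))

In this sketch the same alpha would be reused for the few HR pixels at the center of the window before the window is shifted, mirroring the procedure described in the simulations of Section V.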

2) Eighth-Order Edge-Directed Interpolation (NEDI-8): This section develops an algorithm to deal with single frame-based interpolation for the case of upsampling with a ratio of 2 × 2. Similar to NEDI-6, the pixels corresponding to 2 × 2 downsampling of the expanded frame are extracted to form the LR frame. The interpolation is performed using NEDI-4 as in [7] for the first round and NEDI-8 for the second round. The interpolation schemes of NEDI-4 and NEDI-8 are shown in Fig. 5(b), where the solid circles are the mapped LR pixels and the other pixels are the HR pixels to be interpolated. Using the quincunx sublattice, two passes are performed in the first round. In the first pass, NEDI-4 is used to interpolate type 1 pixels (squares with lines) from the LR pixels (solid circles). In the second pass, type 2 pixels (squares) and type 3 pixels (circles) are interpolated from type 1 and LR pixels.


Fig. 6. Block diagram of the proposed multiframe-based interpolation for the case of upsampling with ratio 1 × 2.

Having an initial estimation of all the 8 neighboring pixels, NEDI-8 is implemented to get extra information from 4 directions in the second round. In this round, the model parameters can be directly estimated from the HR pixels. Therefore, the overfitting problem of NEDI-4 is reduced while more edge orientations are considered. For the sake of interpolation consistency, NEDI-8 is applied to the pixels of type 3, 2, and 1, in this order. The fourth-order and eighth-order model parameters for the HR scale are shown in Fig. 5(b). The optimal parameter vector is similarly calculated by (11), where the vector now collects all HR pixels in the window and the matrix columns are composed of the 8 neighboring pixels of each of them.

B. Multiframe-Based Interpolation

For multiframe interpolation, using a single frame-based interpolation algorithm such as NEDI-6 or NEDI-8 can result in temporal inconsistency. This comes from the single frame-based interpolation ignoring temporal correlation. A spatio-temporal interpolation method is proposed in this subsection to reduce the flickering effect. To interpolate one HR pixel in the current frame, extra surrounding pixels from the previous frame are used together with its surrounding pixels in the current frame. A multiframe-based ninth-order edge-directed interpolation (NEDI-9) method is discussed for the case of dropping all the even columns over the frames of the whole GOP. A similar algorithm can be applied to the cases of dropping all even rows or both even columns and rows.

1) Spatio-Temporal Interpolation Scheme: The block diagram of the multiframe-based interpolation is shown in Fig. 6. First, the current compressed data pruned frame is expanded to the original size by inserting zeros as in Subsection IV-A. Then, it is single frame-based interpolated using NEDI-6 as in (8). Given the previously interpolated frame, block-based motion estimation and motion compensation are used to align the block of pixels of interest in the current frame to its matching block in the previous frame. Interpolating the current frame and estimating motion on larger blocks help to achieve more accurate motion vectors, especially for the compressed sequence. The reason is that the interpolated pixels have fewer artifacts than the LR pixels after the "filter-like" interpolation phase. Based on the current interpolated frame and its motion compensated frame from the previous one, the output is spatio-temporally interpolated using NEDI-9.

If the matching block is very different from the current block, the spatio-temporal pixels should not be used, thus preventing unrelated pixels in the previous frame from contributing to the output. The spatio-temporal and spatial interpolation results are combined based on the sum of absolute differences (SAD) between the current block and its matching block

(12)

where the SAD in (12) is computed over the block of pixels of interest, which includes the HR pixels to be interpolated, and its motion compensated matching block in the previous frame. The combined output is taken from the spatio-temporal (NEDI-9) interpolation when the SAD is below a threshold, and from the spatial-only (NEDI-6) interpolation otherwise. The final reconstructed frame is then selected from the interpolated frame or the data pruned frame by the row and column indicators, as in the single frame case: kept lines are copied from the pruned data and dropped lines come from the interpolated frame.
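A minimal sketch of this switching rule; all argument names are illustrative, cur_block and prev_block_mc stand for the co-located block in the current interpolated frame and its motion compensated match in the previous one, and the threshold would be tuned experimentally as noted in Section V.

import numpy as np

def blend_block(spatial_block, temporal_block, cur_block, prev_block_mc, threshold):
    # Sum of absolute differences between the current block and its motion
    # compensated match in the previous interpolated frame (cf. (12)).
    sad = np.sum(np.abs(cur_block.astype(np.int64) - prev_block_mc.astype(np.int64)))
    # Use the spatio-temporal (NEDI-9) result only when the match is reliable;
    # otherwise fall back to the purely spatial (NEDI-6) result.
    return temporal_block if sad < threshold else spatial_block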

2) Ninth-Order Edge-Directed Interpolation (NEDI-9): In NEDI-9, besides the 6 surrounding pixels in the current frame, 3 more pixels in the matching block of the previous frame are used. The interpolation phase is implemented as shown in Fig. 4. The interpolated pixel is the weighted average

(13)

where the weights form the vector of ninth-order model parameters applied to the 6 spatially neighboring pixels and the 3 spatio-temporally neighboring pixels of the interpolated pixel, located through the motion vector of the current block. The interpolation scheme for NEDI-9 is shown in Fig. 7(a), where solid circles represent the available pixels and blank circles represent the pixels to be interpolated. Equation (13) includes one term for the spatial pixels as in NEDI-6 and another term for the spatio-temporal pixels in the previous interpolated frame. The output is edge-directed by the first term and temporal-consistency-directed by the


Fig. 7. Model parameters of ninth-order edge-directed interpolation. (a) Interpolation scheme. (b) Parameter estimation.

second term. The second term helps reduce the flickering effect of using only single frame-based interpolation.

The model vector is estimated from its LR model vector, which is shown in Fig. 7(b). In this case, for the spatial parameters, the geometric duality is assumed as in NEDI-6. This assumption is not needed for the spatio-temporal parameters, because all pixels in the previous frame are available. These parameters are finally estimated as in (11), where the vector collects all mapped LR pixels in the window and the matrix columns are composed of the 9 spatial and spatio-temporal neighboring pixels of each of them. The 9 spatial and spatio-temporal neighboring pixels are arranged as shown in Fig. 7(a).

V. SIMULATION RESULTS

A. High-Order Edge-Directed Interpolation

Simulations are performed to compare the proposed high-order edge-directed interpolations with other interpolation methods for a wide range of data in different formats. Both cases of upsampling with ratios of 1 × 2 and 2 × 2 are considered.

1) Sixth-Order and Ninth-Order Edge-Directed Interpolation for Upsampling With Ratio 1 × 2: The original frames are downsampled by 2 in the horizontal direction (dropping all even columns). The downsized frames are then interpolated using bicubic, sinc, the autoregression method [8] and the proposed NEDI-6 and NEDI-9 interpolation. Note that other interpolation methods, such as [7] and [8], can only be applied for downsampling by 2 in both directions. In this simulation, for upsampling with ratio 1 × 2 with these methods, only the LR pixels located in an even row and column (solid circles as plotted in Fig. 5(b)) are used to interpolate the pixels in even columns (square with lines and circle). The remaining available LR pixels (square) are ignored. For NEDI-6 and NEDI-9, a window size of 17 × 17 pixels is chosen for the model parameter estimation. Only 6 HR pixels at the center of the window are interpolated using these model parameters. The window is shifted by (4,4) pixels over the frame to interpolate all HR pixels. For NEDI-9, the block size for motion estimation is set to 16 × 16 and the threshold is chosen experimentally. This helps achieve the highest PSNR for the interpolated frames of different sequences. A particular result is shown in Fig. 8 for a zoomed part of a frame of the Foreman sequence. The PSNR values of the interpolated frames using bicubic, sinc, autoregression, NEDI-6 and NEDI-9 interpolation are 38.86 dB, 38.76 dB, 37.39 dB, 39.31 dB and 39.42 dB, respectively. These results validate the effectiveness of NEDI-6 and NEDI-9 for edge-directed interpolation, since less jaggedness and higher PSNR are attained compared to the other methods. Compared to NEDI-6, which uses only spatial pixels, NEDI-9, which uses both spatial and spatio-temporal pixels, achieves better visual quality and higher PSNR. When played as a video sequence, the sequence interpolated with NEDI-9 also has fewer flickering artifacts and more consistent quality in the temporal direction than the single frame-based interpolated sequence using NEDI-6. Because of the ME part, NEDI-9 has higher complexity and requires a longer running time than NEDI-6. For the 2nd frame of the Foreman sequence, the running times are 0.72 s, 0.34 s, 6.59 s, 28.76 s, 433.36 s, and 4690.42 s for the bicubic, sinc, autoregression, NEDI-4, NEDI-6, and NEDI-9 methods. Note that the sinc and autoregression methods are in C code while the other methods are written in Matlab. The simulation is run on a laptop with an Intel 1.83-GHz CPU and 1-GB RAM.

2) Eighth-Order Edge-Directed Interpolation for Upsampling With Ratio 2 × 2: For the proposed NEDI-8, the comparison is performed with Shan's method [19], bicubic, sinc, and NEDI-4. For NEDI-4 and NEDI-8, the window size is chosen to be 17 × 17 and only 4 HR pixels at the center of the window are interpolated using these model parameters. The window is also shifted by (4,4) pixels over the frame to interpolate all HR pixels, as in the NEDI-6 and NEDI-9 cases. The frame is expanded by reflecting the pixels over the borders in order to enhance the pixels near the frame borders in the proposed NEDI-8.

PSNR values are shown in Table I for sequences with different resolutions. To perform a fair comparison to other methods that use bilinear interpolation for pixels near the borders, pixels 5 lines or fewer away from the border are not counted in the PSNR computation. Table I shows that NEDI-8 has the highest average PSNR value. The average PSNR of NEDI-8 is 3.930 dB, 1.054 dB, 1.198 dB, and 0.732 dB higher than the average PSNR values of Shan's method, bicubic, sinc, and NEDI-4, respectively.

The visual results for a selected part of the Foreman sequence are shown in Fig. 9. The result using the sinc-based interpolation has a lot of jaggedness [Fig. 9(b)].


Fig. 8. Comparison of NEDI-6 and NEDI-9 to other methods. (a) Original. (b) Bicubic. (c) Sinc. (d) Autoregression. (e) NEDI-6. (f) NEDI-9.

Fig. 9. Comparison of NEDI-8 to other methods. (a) Original. (b) Sinc. (c) NEDI-4. (d) NEDI-8.

TABLE I
PSNR COMPARISON (IN dB)

While the NEDI-4 interpolation has significantly less jaggedness, the interpolated frame in Fig. 9(c) still shows jaggedness along the strong edges. Because NEDI-4 only uses pixels from 2 directions, artifacts can be observed at the intersections of more than 2 edges. On the other hand, the NEDI-8 interpolated frame in Fig. 9(d) achieves the best quality with the least jaggedness. Using pixels from 4 directions, the NEDI-8 interpolation also has fewer artifacts at the intersections of more than 2 edges. With respect to objective quality, the proposed NEDI-8 has the highest PSNR values for all the sequences across different resolutions. Because of the extra round in the proposed NEDI-8, its running time is longer than that of NEDI-4. For the Foreman image, the running times are 0.45 s, 0.13 s, 11.56 s, and 65.90 s for the bicubic, sinc, NEDI-4 and NEDI-8 methods. Note that the sinc method is in C code while the other methods are written in Matlab.

B. Data Pruning-Based Compression

1) Single-Frame Data Pruning-Based Compression: The simulation in this section verifies the validity of the data pruning-based compression method for single frames. This data pruning-based compression is applied to the compression of images or intra frames. The target PSNR is set to 50 dB. Subsequently, the algorithm prunes the frames of Foreman from a size of 352 × 288 to 304 × 288. An H.264/AVC codec is used to intra code the frames at different quantization levels. NEDI-6 is used for the edge-directed interpolation. Each even row and column requires one bit to indicate whether it is kept or dropped. For example, for a frame of size 352 × 288, a total of 176 + 144 = 320 bits is used to indicate the dropped even lines. These bits are sent as side information in the coded bitstream. For comparison, other data pruning-based methods using sinc, bicubic, autoregression, and NEDI-4 interpolation are also given.

The R-D curves are plotted in Fig. 10(a) and their zoomed-in parts are plotted in Fig. 10(b). The percentage of bit saving between the H.264/AVC compressed sequence and the NEDI-6 data pruning-based compression at the same quantization level is plotted in Fig. 10(c). The result in Fig. 10(a) shows that the data pruning-based compression using NEDI-6 is better than data pruning-based compression using the sinc, bicubic, autoregression and NEDI-4 methods. The data pruning-based compression achieves a better R-D than H.264/AVC in the range 31–41 dB. In this range, at the same bit-rate, the PSNR value of data pruning-based compression is about 0.3–0.5 dB higher than the PSNR value of H.264/AVC compression. At the same PSNR, the data pruning-based compression saves about 5% of bit-rate compared to the bit-rate of the H.264/AVC compression. As shown in Fig. 10(c), at the same quantization level, the percentage of bit saving is about 4.2%–6.6%. The reconstructed frames using data pruning-based compression with the sinc, bicubic, autoregression, NEDI-4 and NEDI-6 methods are shown in Fig. 11(b)–(f) and their zoomed-in parts are shown in Fig. 12(b)–(f). The data pruned frames are compressed at a quantization level for which the corresponding bit-rate is 1.36 Mbps.


Fig. 10. Comparison results for R-D curves of single frame data pruning-based compression. (a) Whole R-D curves. (b) One zoomed-in part of (a). (c) Percentage of bit saving.

Fig. 11. Comparison of NEDI-6 to other interpolation methods in the case of single frame data pruning-based compression. (a) Original. (b) Sinc. (c) Bicubic. (d) Autoregression (37.78 dB). (e) NEDI-4. (f) NEDI-6.

The PSNR values of the reconstructed frames using data pruning-based compression with the sinc, bicubic, autoregression, NEDI-4 and NEDI-6 methods are 37.79 dB, 37.80 dB, 37.78 dB, 37.42 dB, and 37.91 dB, respectively. The results show that the reconstructed frame using NEDI-6 in Fig. 11 has fewer artifacts than the other methods. Because the reconstructed frames using autoregression and NEDI-4 are not based on the LR pixels located at even rows, the HR pixels are not consistent with each other and cause some artifacts in the teeth areas in Fig. 11(d) and (e).

An additional simulation is performed to analyze the effect of the target PSNR on the pruned frame size and on the R-D curve of the data pruning-based compression. The results in Table II show that when the target PSNR decreases, more data is considered for dropping while the PSNR range with a better R-D curve shrinks. The best case, with the highest average PSNR improvement, is obtained when the target PSNR is set to 50 dB. The table also shows that the compressed bit-rate saving increases when the target PSNR decreases.

2) Multiframe Data Pruning-Based Compression: The data pruning approach is applied to video compression. An experiment is performed in which a GOP of 15 frames of Akiyo is pruned with a target of 45 dB. Three downsampling scenarios of dropping all even rows, all even columns, or all even rows and columns, followed by the interpolation scenarios of factors 1 × 2, 2 × 1 and 2 × 2, are considered to determine the best number of lines to be dropped. Simulation shows that dropping 160 columns and keeping all rows is the best solution, which achieves the most dropped pixels while still keeping the PSNR of the reconstructed frame higher than 45 dB. As a consequence, the frame size is reduced from 720 × 480 to 320 × 480. An H.264 codec is applied to the pruned sequence. The line mean square error is averaged over the whole GOP, so that the same lines are dropped for all the frames. In this way, the side information to determine the dropped lines is greatly reduced. The extra bit-rate is 1.2 Kbps for the whole


Fig. 12. One zoomed in part of Fig. 11. (a) Original. (b) Sinc. (c) Bicubic. (d) Autoregression. (e) NEDI-4. (f) NEDI-6.

TABLE II
PSNR COMPARISON (IN dB)

GOP, which again is very small compared to the total bit-rate of the compressed bitstream. For interpolation, single frame-based NEDI-6 is used for the first I frame while multiframe-based NEDI-9 is employed for the following frames. For comparison, the data pruning scheme is applied to the sequence down- and up-sized by 2 × 2 with the uniform sinc interpolation.

The R-D curves are shown in Fig. 13(a), while Fig. 13(b) shows zoomed-in parts. These results show that the R-D curve of the sinc data-pruned method is consistently below the curve of the optimal data pruning method. The proposed method is better than H.264/AVC in the range 32–37.5 dB. The PSNR improvement at the same bit-rate is around 0.3–0.7 dB in that range. As shown in Fig. 13(c), the percentage of bit-rate saving of the optimal data pruning-based compressed sequence is 23%–36% compared to H.264/AVC using the same quantization step size. Even at the same bit-rate and PSNR values, the reconstructed frames have fewer artifacts because they are compressed with a smaller quantization step. Fig. 14 shows the comparison between the H.264/AVC compressed frame and the optimal data pruning-based compressed frame at quantization levels of 35 and 32, respectively. These sequences have nearly the same bit-rates of 92 Kbps and 94 Kbps and nearly the same PSNR

values of 37.83 dB and 37.91 dB, respectively, for the H.264/AVC and the proposed data pruning-based compressed sequences. Results show that the proposed data pruning-based compressed frame in Fig. 14(b) has higher visual quality and fewer artifacts than the H.264/AVC compressed frame in Fig. 14(a). This merit can be explained by the interpolation phase, which helps reduce the blocking and ringing artifacts, and by the smaller quantization step level. Because of the 'filter-like' interpolation, the reconstructed sequence at low bit-rate has fewer blocking artifacts than the directly compressed sequence at a high compression level.

Both the PSNR curves and the visual results validate the effectiveness of the proposed data pruning-based compression. The proposed algorithm requires an interpolation step in the data pruning and reconstruction phases, so the complexity of data pruning-based compression is higher than that of normal compression. However, the coding and decoding time of the proposed method decreases proportionally to the size reduction of the data pruned frame. For example, for data pruning from the original frame size of 720 × 480 to 320 × 480, both the encoding and decoding times for the data pruned sequence are only 50% of those for the original sequence. Additional simulations show that, to further reduce the running time in the encoding phase, a simple interpolator such as a bilinear interpolator can be applied in the data pruning phase of Fig. 2 while still nearly keeping the same performance as when using high-order edge-directed interpolators. For a GOP structure containing B frames, the same data pruning phase can be applied without any modification. The B frames require a smaller number of bits for compression, and the extra bits for indicating the dropped lines become significant compared to the bits for coding the frame.


Fig. 13. Comparison results for multiframe data pruning-based compression. (a) R-D curves. (b) Zoomed-in part of (a). (c) Percentage of bit saving.

Fig. 14. Comparison of H.264/AVC compression and optimal data pruning-based compression with the same bit-rate and PSNR values. (a) H.264/AVC. (b) Optimal data pruning-based.

Fig. 15. Comparison of H.264/AVC compression and optimal data pruning-based compression with the same bit-rate and PSNR values. (a) H.264/AVC. (b) Optimal data pruning-based.

As a result, the R-D improvement for the GOP structure without B frames is better than that for the structure with B frames. All simulation results can be found at http://videoprocessing.ucsd.edu/~dungvo/dataprune.html.

VI. CONCLUSION

The paper proposed a novel data pruning-based compression method to reduce the bit-rate. High-order edge-directed interpolations using more surrounding pixels are also discussed to adapt to different data pruning schemes. The results show that these high-order edge-directed interpolation methods help reduce the jaggedness along strong edges as well as the artifacts at intersection areas. The proposed optimal data pruning-based compression achieves a better R-D relation than the conventional data pruning-based compression at low and medium compression levels. The NEDI-6 and NEDI-9 for upsampling only rows can also be applied to de-interlacing. For the same sequence, the R-D performance of single frame data pruning-based compression is much better than the R-D performance of multiframe data pruning-based compression. This is because, with the same target PSNR, a higher percentage of data can be dropped for a single image than for a video sequence. Another reason is that the same rows/columns are dropped over all frames in the GOP, and more bits are required to compress the objects moving over the dropped lines.

In future work, the location of the dropped lines should be adaptive to the motion of the moving objects. Instead of using only the pixels at odd indices, high-order edge-directed interpolation methods may use more of the available pixels to estimate the model parameters more accurately. Additionally, the objective function of the data pruning algorithm may be extended to consider the coding efficiency of dropping these pixels to further improve the R-D curve. A more efficient data pruning-based compression that drops whole frames can also be considered, using MCFI methods for video sequences with fast motion.

ACKNOWLEDGMENT

The authors would like to thank Y. Zheng for the interesting discussions at Thomson Corporate Research.

REFERENCES

[1] Advanced Video Coding for Generic Audiovisual Services, 2005.

[2] N. Vasconcelos and F. Dufaux, "Pre and post-filtering for low bit-rate video coding," in Proc. IEEE Conf. Image Process., Oct. 1997, vol. 1, pp. 291–294.


[3] A. Cavallaro, O. Steiger, and T. Ebrahimi, "Perceptual prefiltering for video coding," in Proc. IEEE Int. Symp. Int. Multimedia, Video and Speech Processing, Oct. 2004, pp. 510–513.

[4] K. Ratakonda and N. Ahuja, "POCS based adaptive image magnification," in Proc. IEEE Conf. Image Process., Oct. 1998, vol. 3, pp. 203–207.

[5] Y. Cha and S. Kim, "Edge-forming methods for color image zooming," IEEE Trans. Image Process., vol. 15, no. 8, pp. 2315–2323, Aug. 2006.

[6] M. Li and T. Q. Nguyen, "Markov random field model-based edge-directed image interpolation," IEEE Trans. Image Process., vol. 17, no. 7, pp. 1121–1128, Jul. 2008.

[7] X. Li and M. T. Orchard, "New edge-directed interpolation," IEEE Trans. Image Process., vol. 10, no. 10, pp. 1521–1527, Oct. 2001.

[8] X. Zhang and X. Wu, "Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation," IEEE Trans. Image Process., vol. 17, no. 6, pp. 887–896, Jun. 2008.

[9] L. Zhang and X. Wu, "An edge-guided image interpolation algorithm via directional filtering and data fusion," IEEE Trans. Image Process., vol. 15, no. 8, pp. 2226–2238, Aug. 2006.

[10] N. Mueller, Y. Lu, and M. N. Do, "Image interpolation using multiscale geometric representations," in SPIE Conf. Electronic Imaging, Feb. 2007, vol. 6498.

[11] N. Mueller and T. Q. Nguyen, "Image interpolation using classification and stitching," presented at the IEEE Conf. Image Process., Oct. 2008.

[12] S.-C. Pei and I.-K. Tam, "Effective color interpolation in CCD color filter arrays using signal correlation," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 3, pp. 503–513, Jun. 2003.

[13] D. Menon, S. Andriani, and G. Calvagno, "Demosaicing with directional filtering and a posteriori decision," IEEE Trans. Image Process., vol. 16, no. 1, pp. 132–141, Jan. 2007.

[14] X. Li, "Demosaicing by successive approximation," IEEE Trans. Image Process., vol. 14, no. 3, pp. 370–379, Mar. 2005.

[15] G. Shen, B. Zeng, Y.-Q. Zhang, and M. L. Liou, "Transcoder with arbitrarily resizing capability," in Proc. IEEE Int. Symp. Circuits Syst., May 2001, vol. 5, pp. 22–28.

[16] B. Choi, J. Han, C. Kim, and S. Ko, "Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation," IEEE Trans. Image Process., vol. 17, no. 4, pp. 407–416, Apr. 2007.

[17] A. Huang and T. Nguyen, "A multistage motion vector processing method for motion-compensated frame interpolation," IEEE Trans. Image Process., vol. 17, no. 5, pp. 694–708, May 2008.

[18] S. G. Mallat, A Wavelet Tour of Signal Processing. New York: Academic, 1998.

[19] Q. Shan, Z. Li, J. Jia, and C. Tang, "Fast image/video upsampling," ACM Transactions on Graphics (SIGGRAPH Asia 2008), vol. 27, 2008.

Dung T. Võ (S'06–M'09) received the B.S. and M.S. degrees from Ho Chi Minh City University of Technology, Vietnam, in 2002 and 2004, respectively, and the Ph.D. degree from the University of California at San Diego, La Jolla, in 2009.

He has been a Fellow of the Vietnam Education Foundation (VEF) since 2005 and has been on the teaching staff of Ho Chi Minh City University of Technology since 2002. He interned at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, and Thomson Corporate Research, Princeton, NJ, in the summers of 2007 and 2008, respectively. He has been a senior research engineer at the Digital Media Solutions Lab, Samsung Information Systems America (Samsung US R&D Center), Irvine, CA, since 2009. His research interests are algorithms and applications for image and video coding and postprocessing.

Joel Solé (M'02) received the M.S. degrees in telecommunications engineering from the Technical University of Catalonia (UPC), Barcelona, Spain, and the Ecole Nationale Supérieure des Télécommunications (ENST), Paris, France, in 2001, and the Ph.D. degree from the UPC in 2006.

He is currently a member of the technical staff at Corporate Research, Thomson, Inc., Princeton, NJ. Dr. Solé's research interests focus on advanced video coding and signal processing.

Peng Yin (M'02) received the B.E. degree in electrical engineering from the University of Science and Technology of China in 1996 and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 2002.

She is currently a senior member of the technical staff at Corporate Research, Thomson, Inc., Princeton, NJ. Her current research interest is mainly in image and video compression. Her previous research is on video transcoding, error concealment, and data hiding. She is actively involved in the JVT/MPEG standardization process.

Dr. Yin received the IEEE Circuits and Systems Society Best Paper Award for her article in the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY in 2003.

Cristina Gomila (M'01) received the M.S. degree in telecommunication engineering from the Technical University of Catalonia, Spain, in 1997, and the Ph.D. degree from the Ecole des Mines de Paris, France, in 2001.

She then joined Thomson, Inc., Corporate Research Princeton, Princeton, NJ. She was a core member in the development of Thomson's Film Grain Technology and actively contributed to several MPEG standardization efforts, including AVC and MVC. Since 2005, she has managed the Compression Research Group at Thomson CR Princeton. Her current research interests focus on advanced video coding for professional applications.

Truong Q. Nguyen (F'06) is currently a Professor at the ECE Department, University of California at San Diego, La Jolla. He is the coauthor (with Prof. G. Strang) of the popular textbook Wavelets and Filter Banks (Wellesley-Cambridge Press, 1997) and the author of several Matlab-based toolboxes on image compression, electrocardiogram compression, and filter bank design. He has over 200 publications. His research interests are video processing algorithms and their efficient implementation.

Prof. Nguyen received the IEEE TRANSACTIONS ON SIGNAL PROCESSING Paper Award (Image and Multidimensional Processing area) for the paper he co-wrote with Prof. P. P. Vaidyanathan on linear-phase perfect-reconstruction filter banks (1992). He received the NSF Career Award in 1995 and is currently the Series Editor (Digital Signal Processing) for Academic Press. He served as Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING (1994–1996), the IEEE SIGNAL PROCESSING LETTERS (2001–2003), the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS (1996–1997, 2001–2004), and the IEEE TRANSACTIONS ON IMAGE PROCESSING (2004–2005).
