A SURVEY ON THE TECHNIQUES FOR THE TRANSPORT OF MPEG-4 VIDEO OVER WIRELESS NETWORKS

Bo Yan and Kam W. Ng
Abstract-In this paper, we present a survey on the techniques for the transport of MPEG-4 video over wireless networks. As a new object-based video compression standard, MPEG-4 has been proved to be suitable for wireless applications. However, because of the characteristics of wireless channels, such as varying channel capacity and burst bit errors, some new techniques such as scalable and error-resilient video coding techniques are required to be incorporated into MPEG-4 to improve the coding efficiency. These techniques are discussed in this paper. The experimental results show that although these new techniques have improved the performance greatly, there is still a lot of research work to be pursued.

Index Terms-Error-resilient video coding, MPEG-4, Scalable video coding, Wireless channel.
I. INTRODUCTION
In the last decade, mobile communication has grown rapidly. With this explosive growth, the need for the robust transmission of multimedia information (audio, video, text, graphics, and speech data) over wireless links is becoming an increasingly important application requirement. Since the video data may occupy more bandwidth than the other media data during transmission, it should be given more consideration in wireless multimedia communication. As we know, in order to be transmitted over the wireless network, the source video data must be compressed before transmission. During the last decade, many video compression standards have been proposed for different applications, such as MPEG-1, MPEG-2, H.26X and so on. But these traditional video compression techniques cannot efficiently meet the requirements for video transmission in wireless networks. In the last few years, MPEG-4 has been proposed, which is suitable for wireless networks because of its characteristics [1][2][3]. In this paper, the techniques of MPEG-4 which make it suitable for applications in wireless multimedia communication will be introduced. This paper is divided into four sections.
Firstly, an overview of the MPEG-4 standard is given. In this section, the basic characteristics of MPEG-4 will be described, and the basic coding and decoding procedures will also be explained. Then we will give the reasons why MPEG-4 is useful for wireless multimedia communication. Secondly, it is well
Bo Yan and Kam W. Ng are both with the Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong (e-mail: {byan, kwng}@cse.cuhk.edu.hk).
Contributed Paper
known that the available bandwidth of wireless networks is not constant and may vary over a wide range at any given time. To make full use of the varying channel capacity, scalable video coding methods are introduced into MPEG-4. In this section, some general methods are discussed. Thirdly, wireless channels are error-prone, and their burst errors can degrade the video quality greatly. In order to alleviate this effect, some error-resilient video coding techniques are incorporated in MPEG-4, including error detection, recovery and concealment, which will be described. Finally, we draw the conclusion that these improvements are not enough, and the future work will be discussed.
II. OVERVIEW OF MPEG-4
A. The MPEG-4 standard
MPEG-4 is an ISO/IEC standard developed by MPEG (Moving Picture Experts Group), the committee that also developed the Emmy Award winning standards known as MPEG-1, MPEG-2 and MPEG-7. Among these standards, except for MPEG-7, the others are video compression standards.
1) MPEG-1: it is the general video and audio compression standard, on which such products as Video CD and MP3 are based.
2) MPEG-2: it can provide better video quality, on which such products as Digital Television, Set-top boxes and DVD are based.
3) MPEG-7: it is different from the others and not concerned with video compression. The standard is for the description and search of audio and visual content.
MPEG-4 is a new object-based method concerned with video and audio compression, which is different from MPEG-1 and MPEG-2. The objects can be:
1) Still images (e.g. as a fixed background);
2) Video objects (e.g. a talking person without the background);
3) Audio objects (e.g. the voice associated with that person).
Although MPEG-4 is concerned with both video and audio compression, in this paper only the issues about video are discussed. The following are the main characteristics of MPEG-4 video:
1) Bitrates: typically between 5 kbit/s and 10 Mbit/s;
2) Formats: progressive as well as interlaced video;
Manuscript received May 23, 2002.
0098-3063/00 $10.00 (c) 2002 IEEE
Authorized licensed use limited to: UNIVERSITY OF ESSEX. Downloaded on November 21, 2008 at 21:49 from IEEE Xplore. Restrictions apply.
IEEE Transactions on Consumer Electronics, Vol. 48, No. 4, NOVEMBER 2002
3) Resolutions: typically from sub-QCIF to beyond HDTV;
4) Compression Efficiency: from acceptable to near lossless.
Fig. 1. The Structure of an MPEG-4 terminal

Fig. 1 shows how streams coming from the network (or a storage device), as TransMux Streams, are demultiplexed into FlexMux Streams and passed to appropriate FlexMux demultiplexers that retrieve the Elementary Streams. The Elementary Streams (ESs) are parsed and passed to the appropriate decoders. Decoding recovers the data in an AV object from its encoded form and performs the necessary operations to reconstruct the original AV object ready for rendering on the appropriate device. The reconstructed AV object is made available to the composition layer for potential use during scene rendering. Decoded AV objects, along with scene description information, are used to compose the scene as described by the author. The user can, to the extent allowed by the author, interact with the scene, which is eventually rendered and presented [1].

There are four hierarchically organized classes in the MPEG-4 visual standard as follows [2]:
1) Video Session: Each video session (VS) is made up of one or more Video Objects (VO), corresponding to the various objects in the scene.
2) Video Object: Each of the VOs can have several scalability layers (spatial, temporal, or SNR), corresponding to different Video Object Layers (VOL).
3) Video Object Layer: Each VOL consists of an ordered sequence of snapshots in time, called Video Object Planes (VOP).
4) Video Object Plane: Each VOP is a snapshot in time of a VO for a certain VOL.
This way, one VS can have several VOs, each of these VOs having several VOLs, which are the sequences in time of several VOPs. And finally, each VOP is characterized by its shape, motion and texture.

Fig. 2 outlines the basic approach of the MPEG-4 video algorithms to encode not only rectangular but also arbitrarily shaped input image sequences. The basic coding structure involves shape coding (for arbitrarily shaped VOs) and motion compensation as well as DCT-based texture coding (using standard 8x8 DCT or shape-adaptive DCT).

Fig. 2. The Structure of the MPEG-4 Encoder

Firstly, the shape coding is applied. Each macroblock (MB) is analyzed and classified according to three possible classes: transparent (MB outside the object but inside the bounding box), opaque (MB completely inside the object) or border (MB over the border). The shape coding method called Content-based Arithmetic Encoding (CAE) is applied on the border MBs [4].
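The four-level VS/VO/VOL/VOP hierarchy described above can be pictured as a set of nested containers. The following is a minimal Python sketch of that hierarchy; the class and field names are illustrative choices, not identifiers taken from the standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VOP:                 # Video Object Plane: one snapshot in time
    shape: bytes = b""     # shape information (e.g. CAE-coded)
    motion: bytes = b""    # motion vectors
    texture: bytes = b""   # DCT-coded texture

@dataclass
class VOL:                 # Video Object Layer: one scalability layer
    kind: str = "base"     # "base", "spatial", "temporal" or "SNR"
    vops: List[VOP] = field(default_factory=list)

@dataclass
class VO:                  # Video Object: one object in the scene
    layers: List[VOL] = field(default_factory=list)

@dataclass
class VS:                  # Video Session: the whole scene
    objects: List[VO] = field(default_factory=list)

# A session with one object coded in a single base layer of three VOPs:
session = VS(objects=[VO(layers=[VOL(kind="base",
                                     vops=[VOP(), VOP(), VOP()])])])
```

Each VOP in the innermost list then carries the three components (shape, motion, texture) that the coding steps below produce.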
Secondly, similar to other traditional methods, the motion information is encoded by means of motion vectors. Each MB can have either one or four motion vectors. When four motion vectors are used, each of them is associated with an 8x8 block within the MB. With the motion vectors, the decoder can know which block of pixels in the previous VOP is closest to the current one, and it can be used for the prediction of the texture. In the receiver, the decoder will use the motion vectors for motion compensation.
Then the texture data can be encoded in two modes: intra and inter. For the intra mode, a given MB is encoded by itself (with no temporal prediction), only exploiting the spatial redundancies. On the other hand, for the inter mode, motion compensation is used to exploit the temporal redundancy, and the difference between the current and prediction MB is encoded. Both the absolute texture value (intra coding) and the differential texture value (inter coding) are then encoded using the DCT transform. Then the DCT coefficients are encoded by run-length coding and variable length coding (VLC). Instead of the regular VLC, reversible VLC can be used to code the texture if error resilience is a requirement.
At this stage we get all the encoded information, including shape, motion and texture [2][5].
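The run-length stage described above can be illustrated with a toy run-level coder for a zigzag-scanned list of quantized DCT coefficients. This is a simplified sketch: real MPEG-4 events also carry a "last" flag and are mapped through VLC tables, both omitted here.

```python
def run_level_encode(coeffs):
    """Encode quantized DCT coefficients as (run, level) pairs:
    run = number of zeros before a nonzero value, level = that value."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs

def run_level_decode(pairs, length):
    """Rebuild the coefficient list, padding trailing zeros to length."""
    coeffs = []
    for run, level in pairs:
        coeffs.extend([0] * run)
        coeffs.append(level)
    coeffs.extend([0] * (length - len(coeffs)))
    return coeffs

coeffs = [34, 0, 0, -3, 5, 0, 0, 0, 1, 0, 0, 0]
pairs = run_level_encode(coeffs)   # [(0, 34), (2, -3), (0, 5), (3, 1)]
assert run_level_decode(pairs, len(coeffs)) == coeffs
```

The (run, level) pairs are what the VLC (or RVLC, when error resilience is required) subsequently maps to variable-length codewords.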
B. Application Description
Wireless multimedia applications face technical challenges that are significantly different from the problems typically encountered with desktop multimedia applications. This is because current wireless channels are subject to inherent limitations, which are as follows.
1) The bandwidth is very narrow and the channel capacity is not fixed but may vary over a wide range at any given time.
2) The channel is error-prone with bit errors and burst errors due to fading and multi-path reflections.
3) Diversity of wireless networks (e.g. GSM, or Satellite) in regard to network topology, protocols, bandwidth, reliability etc.
4) The need to be able to make a trade-off between quality, performance and cost.
MPEG-4, designed as an adaptive representation scheme that also accommodates very low bitrate applications, is very appropriate for wireless multimedia applications. Concretely, MPEG-4 is useful because:
1) It can provide high compression performance. The lowest bitrate can be 5 kbps.
2) Scalable video coding techniques are introduced into MPEG-4 to provide a varying coding bit rate for the varying channel capacity.
3) Many error-resilient tools are incorporated into MPEG-4, which guarantee the correctness of the data in the error-prone wireless channel.
4) Object-based coding functionalities allow for interaction with audio-visual objects and enable new interactive applications in a wireless environment.
5) Face animation parameters can be used to reduce bandwidth consumption for real-time communication applications in a wireless environment, e.g. mobile conferencing [6].
In the following, the techniques on scalable video coding and error-resilient video coding will be discussed in detail.
III. SCALABLE VIDEO CODING
The objective of traditional video coding has been to optimize video quality at a given bit rate. But this has to be changed for wireless applications. It is well known that the bandwidth availability of wireless links is limited and may vary over a wide range depending on network traffic at any given time. In a traditional video communication system, the video data can be compressed into a bit rate that is less than, and close to, the channel capacity, and the decoder reconstructs the video signal using all the bits received from the channel. But in this model, one requirement that must be satisfied is that the encoder knows the channel capacity. In the wireless application, because of the varying channel capacity, the encoder no longer knows the channel capacity or at which bit rate the video quality should be optimized. Therefore, the objective of video coding for wireless is changed to optimizing the video quality over a given bit rate range instead of at a given bit rate. The bitstream should be partially decodable at any bit rate within the bit rate range so that the decoder can reconstruct the optimized quality [7]. Scalable video coding can solve this kind of problem, which will be explained as follows.
A. What is scalable video coding?
Basically, video compression schemes can be classified into two approaches: scalable and non-scalable video coding. The structures of both can be seen in Fig. 3 and Fig. 4. For simplicity, the encoder and decoder are only shown in intra-mode and only the DCT is adopted. From Fig. 3, we can see that the non-scalable video encoder compresses the raw video data sequence into only one compressed bit-stream. In contrast, the scalable video encoder generates multiple sub-streams as shown in Fig. 4. One of the compressed sub-streams is the base sub-stream, which can be independently decoded and provides coarse visual quality. The other compressed sub-streams, which are called enhancement sub-streams, can only be decoded together with the base sub-stream and can provide better visual quality. The complete bit-stream (i.e., the combination of all the sub-streams) provides the highest quality [8].
DCT: Discrete Cosine Transform; Q: Quantization; IQ: Inverse Quantization; VLC: Variable Length Coding; IDCT: Inverse DCT; VLD: Variable Length Decoding

Fig. 3. Non-scalable video encoder (a) and decoder (b)

Fig. 4. Scalable video encoder (a) and decoder (b)
Fig. 5 illustrates the difference in video quality between the scalable and non-scalable video coding techniques. In the figure, the horizontal axis indicates the channel bit rate, while the vertical axis indicates the video quality received by a user. The distortion-rate curve indicates the upper bound in quality for any coding technique at any given bit rate. The two staircase curves indicate the performance of an optimal non-scalable coding technique. In this technique, when a given bit rate is chosen (either low, medium, or high), the coder tries to achieve the optimal quality indicated by having the upper corner of the staircase curve very close to the distortion-rate curve. The receiver can get the best video quality if the channel bit rate happens to be at the video-coding bit rate. However, as depicted by Fig. 5, if the channel bit rate is lower than the video-coding bit rate, the received video quality becomes very poor because it cannot get enough bits to reconstruct the video. In contrast, if the channel bit rate is higher than the video-coding bit rate, the received video quality does not become any better [7].
As mentioned above, the scalable video coding techniques can compress the video sequence into a base layer and an enhancement layer. Actually the enhancement layer bitstream is similar to the base layer bitstream in the sense that it has to be
either completely received and decoded or it does not enhance the video quality at all. So such techniques can change the non-scalable single staircase curve to a curve with two stairs as shown in Fig. 5 [7][9]. The base-layer bit rate determines where the first stair is and the enhancement layer bit rate determines the second stair. More detailed discussions about the techniques will be presented later. As shown in Fig. 5, the most optimal technique is to achieve the distortion-rate curve at any given bitrate, which is the objective of the fine granularity scalability (FGS) video-coding technique in MPEG-4 [7].

Fig. 5. The relationship between the bitrate and video quality

Fig. 7. Temporal Scalable Video Coding
B. The categories of scalable video coding
Specifically, compared with decoding the complete bit-stream, decoding the base sub-stream or multiple sub-streams produces pictures with degraded quality, a smaller image size or a lower frame rate. The scalability of quality, image size, or frame rate is called SNR, spatial, or temporal scalability, respectively. These three scalabilities are the basic scalable mechanisms.
1) SNR Scalable Video Coding
Signal-to-noise ratio (SNR) scalability is a scheme to compress the raw video data into two layers at the same frame rate and the same spatial resolution, but different quantization accuracy. Fig. 6 shows the SNR scalability decoder [7]. Firstly, the base-layer bitstream is decoded by the base layer variable-length decoder (VLD). Then it is inversely quantized to produce the reconstructed DCT coefficients. The enhancement bitstream is decoded by the VLD in the enhancement layer, and the enhancement residues of the DCT coefficients are produced by the inverse quantizer in the enhancement layer. Thus a higher-accuracy DCT coefficient is obtained by adding the base-layer reconstructed DCT coefficient and the enhancement-layer DCT residue. The DCT coefficients with a higher accuracy are given to the inverse DCT (IDCT) unit to produce the reconstructed image.
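The two-layer refinement above can be sketched for a single DCT coefficient: the base layer quantizes coarsely, and the enhancement layer re-quantizes the base layer's reconstruction error with a finer step. The quantizer steps (16 and 2) are arbitrary illustrative choices, not values from the standard.

```python
def quantize(x, step):
    return int(round(x / step))

def dequantize(q, step):
    return q * step

def snr_encode(coeff):
    """Two-layer SNR coding of one DCT coefficient (toy version):
    coarse base quantizer, plus a finely quantized residue."""
    base_q = quantize(coeff, 16)                  # coarse: step 16
    residue = coeff - dequantize(base_q, 16)
    enh_q = quantize(residue, 2)                  # fine: step 2
    return base_q, enh_q

def snr_decode(base_q, enh_q=None):
    """Base-only decoding gives coarse quality; adding the
    enhancement residue refines the coefficient."""
    coeff = dequantize(base_q, 16)
    if enh_q is not None:                         # enhancement received
        coeff += dequantize(enh_q, 2)
    return coeff

base_q, enh_q = snr_encode(75)
assert snr_decode(base_q) == 80          # base only: coarse reconstruction
assert snr_decode(base_q, enh_q) == 76   # base + enhancement: closer to 75
```

The same addition of base-layer coefficients and enhancement residues happens, per coefficient, just before the IDCT in the decoder of Fig. 6.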
2) Temporal Scalable Video Coding
Temporal scalability is a scheme to compress the raw video data into two layers at the same spatial resolution, but different frame rates. The base layer is coded at a lower frame rate. In contrast, the enhancement layer compresses a video with a higher frame rate, providing the missing frames. Thus the coding efficiency of temporal scalability is high and very close to non-scalable coding. Fig. 7 shows a structure of temporal scalability [7]. In the base layer only P-type prediction is used, while in the enhancement layer prediction can be either P-type or B-type from the base layer, or P-type from the enhancement layer [7].
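The frame partitioning behind temporal scalability can be sketched as follows: every n-th frame forms the low-rate base layer, and the in-between frames form the enhancement layer that restores the full frame rate. This toy sketch only shows the layer assignment, not the prediction structure of Fig. 7.

```python
def split_temporal_layers(frames, base_rate=2):
    """Every base_rate-th frame goes to the base layer; the remaining
    frames form the enhancement layer supplying the missing frames."""
    base = [f for i, f in enumerate(frames) if i % base_rate == 0]
    enh = [f for i, f in enumerate(frames) if i % base_rate != 0]
    return base, enh

def merge_temporal_layers(base, enh, base_rate=2):
    """Interleave the two layers back into the full frame rate."""
    frames, bi, ei = [], 0, 0
    for i in range(len(base) + len(enh)):
        if i % base_rate == 0:
            frames.append(base[bi]); bi += 1
        else:
            frames.append(enh[ei]); ei += 1
    return frames

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
base, enh = split_temporal_layers(frames)
assert base == ["f0", "f2", "f4"]                  # decodable on its own
assert merge_temporal_layers(base, enh) == frames  # full rate restored
```

Dropping the enhancement layer halves the frame rate but leaves the base layer fully decodable, which is exactly the graceful-degradation property the text describes.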
Fig. 8. Spatial Scalable Video Decoding
3) Spatial Scalable Video Coding
Spatial scalability is a scheme to compress the raw video data into two layers at the same frame rate, but different spatial resolutions. The base layer is coded at a lower spatial resolution. The reconstructed base-layer picture is up-sampled to form the prediction for the high-resolution picture in the enhancement layer. Fig. 8 shows the simplified structure of spatial scalability [7]. If the spatial resolution of the base layer is the same as that of the enhancement layer, i.e., the up-sampling factor being 1, this spatial scalability decoder can be considered as an SNR scalability decoder too [7].
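The up-sampled prediction step can be sketched with pixel replication as the (deliberately crude) up-sampling filter; real codecs use proper interpolation filters, so this is only a conceptual sketch.

```python
def downsample(img, factor=2):
    """Keep every factor-th pixel in each dimension (toy decimation)."""
    return [row[::factor] for row in img[::factor]]

def upsample(img, factor=2):
    """Pixel replication: each base-layer pixel predicts a
    factor x factor block of the high-resolution picture."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

high = [[10, 10, 20, 20],
        [10, 10, 20, 20],
        [30, 30, 40, 40],
        [30, 30, 40, 40]]
base = downsample(high)        # low-resolution base layer
pred = upsample(base)          # prediction for the enhancement layer
residual = [[h - p for h, p in zip(hr, pr)] for hr, pr in zip(high, pred)]
assert base == [[10, 20], [30, 40]]
assert pred == high            # flat blocks: prediction is exact
assert all(v == 0 for row in residual for v in row)
```

The enhancement layer then only has to code the residual between the true high-resolution picture and this up-sampled prediction, which is small wherever the prediction is good.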
C. Fine Granularity Scalability (FGS)
According to the introduction to the SNR, Temporal and Spatial scalable video coding, we can see that they can provide the best video quality at only two bit rates. But for wireless applications, we need a more scalable scheme for video coding because of the varying channel capacity. Therefore a new scalable coding mechanism, called fine granularity scalability (FGS), was proposed for MPEG-4 to provide more flexibility in meeting the different demands of wireless channels.

Fig. 9. The Structures of the FGS Encoder (a) and Decoder (b)

Fig. 9 shows the structures of the FGS encoder and decoder [8][10]. From it, we can know that an FGS encoder compresses a raw video sequence into two sub-streams, i.e., a base layer bit-stream and an enhancement bit-stream. Different from other scalable techniques, an FGS encoder uses bit-plane coding to represent the enhancement stream. As we know, in the conventional DCT coding, the quantized DCT coefficients are coded using run-level coding. The number of consecutive zeros before a nonzero DCT coefficient is called a "run" and the absolute value of the nonzero DCT coefficient is called a "level". The major difference between the bit-plane coding method and the run-level coding method is that the bit-plane coding method considers each quantized DCT coefficient as a binary integer of several bits instead of a decimal integer of a certain value. So any bits coded by the bit-plane method can be used to reconstruct the DCT coefficients [7][11][12]. With the help of the bit-plane method, FGS is capable of achieving continuous rate control for the enhancement stream as shown in Fig. 5. This is because the enhancement bit stream can be truncated anywhere to achieve the target bit-rate. Any bits received from the enhancement layer can be adopted to improve the video quality, which is impossible for other scalable video coding methods. And this is the reason why FGS has clear advantages over the others.
To further improve the performance of FGS enhancement video, two advanced features are introduced into FGS, i.e. frequency weighting and selective enhancement. The former means that different weights are used for different frequency components, so that more bits of the visually important frequency components can be put in the bitstream ahead of those of other frequency components. Similar to the former, the latter uses different weighting for different spatial locations of a frame, so that more bits of some more important parts of a frame are put in the bitstream ahead of those of other parts of the frame [7].

1) Frequency Weighting
As we know, different DCT coefficients contribute to the visual quality differently. Usually, the accuracy of the low frequency DCT coefficients is more important than that of the high frequency DCT coefficients. Better visual quality can be achieved with more bits of the low frequency coefficients. Therefore, the bits of the low frequency DCT coefficients can be put into the enhancement bitstream earlier so that they are more likely to be included in a truncated bitstream. To achieve this objective, the frequency weighting mechanism is included in the FGS [7].

2) Selective Enhancement
For one frame of the video, a part of it may be visually more significant than other parts. So the bits of the MBs of interest may be put ahead so that they are more likely to be included in a truncated bitstream [7].
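The bit-plane idea behind FGS can be sketched as follows: coefficient magnitudes are sent one bit plane at a time, most significant plane first, so the stream can be cut after any plane and still refine every coefficient. Signs, the actual (run, EOP) entropy coding of each plane, and the frequency-weighting shifts are all omitted in this toy sketch.

```python
def bitplane_encode(coeffs, nbits=4):
    """Emit the coefficients' magnitudes one bit plane at a time,
    most significant plane first."""
    planes = []
    for b in range(nbits - 1, -1, -1):
        planes.append([(c >> b) & 1 for c in coeffs])
    return planes

def bitplane_decode(planes, nbits=4):
    """Reconstruct from however many planes were received; missing
    low-order planes simply leave the coefficients less accurate."""
    n = len(planes[0])
    coeffs = [0] * n
    for i, plane in enumerate(planes):
        b = nbits - 1 - i
        for j, bit in enumerate(plane):
            coeffs[j] |= bit << b
    return coeffs

coeffs = [9, 4, 1, 0]            # magnitudes: 1001, 0100, 0001, 0000
planes = bitplane_encode(coeffs)
assert bitplane_decode(planes) == coeffs             # all planes: exact
assert bitplane_decode(planes[:2]) == [8, 4, 0, 0]   # truncated: coarser
```

Truncation after any plane still yields a usable (just coarser) value for every coefficient, which is exactly why the FGS enhancement stream can be cut at an arbitrary bit rate.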
D. Some Improvements on FGS
In order to cover a wide range of bit rates with a scalable bit-stream, there is a need to combine FGS with other scalable video coding methods. Some improvements have been proposed.
1) FGST Method

Fig. 10. The Structure of the FGST Method
FGST, which means FGS temporal scalability, combines FGS with temporal scalability so that not only the quantization accuracy but also the temporal resolution (frame rate) can be scalable. In this mechanism, the quality of each temporal enhancement frame does not affect the quality of other frames, since the temporal prediction for the temporal enhancement frame is restricted to the base layer. Therefore, there is no problem in using bit-plane coding for the entire DCT coefficients in the temporal enhancement frame. And for FGST, not only can the temporal enhancement frames be dropped as in regular temporal scalability, but the quantization accuracy is also scalable within each temporal enhancement frame. Therefore, its coding efficiency, using all bit-plane coding in the temporal enhancement frame, will be higher than regular temporal scalability for coding the DCT coefficients. This compensates its potential loss of coding efficiency from not allowing prediction in the enhancement layer. Fig. 10 shows its structure [7][13].
2) FGSS Method
Similar to FGST, FGSS is to combine FGS with spatial scalability, which is illustrated in Fig. 11 [14]. In the FGSS scheme, the base layer is still encoded as in the traditional spatially scalable coding. However, the enhancement layer now adopts the bit-plane coding technique. Fig. 11 illustrates conceptually the structure of the FGSS coding scheme. From the figure, the input video sequence is first down-sampled and compressed at low resolution to a given bit rate with any existing non-scalable coding technique. In the traditional spatially scalable coding, the video will be up-sampled to provide the high resolution for the enhancement layer coding. However, in FGSS, several FGS lower enhancement layers are first used to improve the video quality at the low-resolution level if the bit rate of the base layer is very low. Then if the quality of the low-resolution video is good enough in the base layer, the video can be up-sampled so that the spatial resolution can be immediately switched to high resolution in the enhancement layer. Therefore, the enhancement layers at low resolution are optional. It depends on some factors such as the bit rate of the base layer, sequence contents, application requirements and so on [14].

Fig. 11. The Structure of the FGSS Method

3) PFGS Method
The PFGS method, progressive fine granularity scalable video coding, has all the features of FGS, such as fine granularity bit-rate scalability, channel adaptation, and error recovery. However, the PFGS framework uses several high-quality references for the predictions in the enhancement-layer encoding rather than always using the base layer. Using high-quality references makes motion prediction more accurate, and thus can improve the coding efficiency. But the bits of the enhancement layer are more likely to be lost than those of the base layer. Therefore it may make the coder fragile. PFGS proposes a method to solve this problem as shown in Fig. 12, which illustrates the framework of the PFGS [15][16]. We can see that a prediction path from the lowest layer to the highest layer is maintained across several frames, which makes it robust and qualified for error recovery. For example, if the enhancement layers of frame 1 are corrupted or not received, the enhancement layers of frames 2, 3 and 4 will be affected because of the loss of the prediction references. But it will be fine for frame 5, because there is a prediction path from the lowest layer of frame 1 to the highest layer of frame 5, which makes the scheme robust.

Fig. 12. The Structure of the PFGS Method

IV. ERROR-RESILIENT VIDEO CODING IN MPEG-4
Wireless channels are typically noisy and suffer from a number of channel degradations such as bit errors and burst errors due to fading and multi-path reflections. The channel errors can affect the compressed video bit-stream very severely. On the one hand, if the decoder loses information about a frame or a group of pixels, it won't be able to decode the frame or the pixels. And it also cannot decode the frames or the pixels coded using them. On the other hand, the decoder can lose synchronization with the encoder because of the lost information, which leads to the remaining bit-streams being incomprehensible. Therefore, for a robust video compression method in wireless applications, resynchronization and robust techniques are necessary in the encoder.
Generally the MPEG-4 decoder can apply syntactic and semantic error detection techniques to enable the video decoder to detect when a bitstream is corrupted by channel errors. In motion compensation and DCT decoding, the decoder can detect bitstream errors by applying the checks as follows [17]:
1) The motion vectors are out of range;
2) An invalid VLC table entry is found;
3) The DCT coefficient is out of range;
4) The number of DCT coefficients in a block exceeds 64.
After errors are detected, some techniques incorporated into MPEG-4 can be applied which provide important properties such as resynchronization, data recovery, and error concealment. They are as follows:
1) Video packet resynchronization;
2) Data partitioning (DP);
3) Reversible Variable Length Codes (RVLCs);
4) Header extension code (HEC).
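The four detection checks listed above can be sketched as a single validation routine. The macroblock is represented here as a plain dictionary with hypothetical field names (`motion_vectors`, `vlc_codes`, `dct_coeffs`), and the range limits are illustrative placeholders, not values from the standard.

```python
def detect_bitstream_errors(mb, vlc_table, mv_range=2048, coeff_range=2048):
    """Apply the four syntactic/semantic checks to one decoded
    macroblock; return the list of violated checks."""
    errors = []
    if any(abs(v) > mv_range for v in mb["motion_vectors"]):
        errors.append("motion vector out of range")
    if any(code not in vlc_table for code in mb["vlc_codes"]):
        errors.append("invalid VLC table entry")
    if any(abs(c) > coeff_range for c in mb["dct_coeffs"]):
        errors.append("DCT coefficient out of range")
    if len(mb["dct_coeffs"]) > 64:
        errors.append("more than 64 DCT coefficients in a block")
    return errors

vlc_table = {"0", "10", "110"}
good_mb = {"motion_vectors": [3, -7], "vlc_codes": ["10"],
           "dct_coeffs": [34, -3]}
bad_mb = {"motion_vectors": [9000], "vlc_codes": ["111"],
          "dct_coeffs": [34]}
assert detect_bitstream_errors(good_mb, vlc_table) == []
assert len(detect_bitstream_errors(bad_mb, vlc_table)) == 2
```

In a real decoder, any non-empty result would trigger the resynchronization, data recovery, and concealment tools described next.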
A. Video Packet Resynchronization
When errors occur in the bit-stream, the video decoder decoding the corrupted stream may lose synchronization with the encoder, i.e. the precise location of the stream in the video is uncertain. This will degrade the decoded video quality very greatly and make it unusable.

[Figure: a data-partitioned video packet: Resync. marker | MB no. | QP | HEC | Motion data | MBM | DCT data]
A traditional scheme to solve this problem is to introduce resynchronization markers into the bit-streams at various locations. When the decoder detects errors, it can hunt for the next resynchronization marker to regain synchronization. In previous video coding standards such as MPEG-1, MPEG-2 and H.263, the resynchronization markers are fixed at the beginning of each row of macroblocks. But the quantity of information between two markers is not fixed because of the variable length coding, as shown in Fig. 13 [18].
be decoded and utilized by the decoder. Therefore MPEG-4 introduces two additional fields in addition to the resynchronization marker at the beginning of each video packet, as shown in Fig. 15. These are:
1) MB no.: The absolute macroblock number of the first macroblock in the video packet, which indicates the spatial location of the macroblock in the current image.
2) QP: The quantization parameter, which denotes the default quantization parameter used to quantize the DCT coefficients in the video packet.
The predictive encoding method has also been modified for coding the motion vectors so that there are no predictions across the video packet boundaries [17][19][20][21].

Fig. 15. Organization of the data within a Video Packet (Resync. marker | MB no. | QP | HEC | Combined motion and DCT data)
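The marker-hunting behavior described above can be sketched as follows. The marker pattern used here is an illustrative stand-in, not the real MPEG-4 resynchronization marker, and `decode_packet` is a hypothetical helper that raises when it detects a corrupted packet.

```python
RESYNC = "0000000000000001"   # illustrative marker pattern only

def decode_with_resync(bitstream, decode_packet):
    """Decode packet by packet; when decode_packet raises (an error
    was detected), skip ahead to the next resynchronization marker."""
    out, pos = [], 0
    while True:
        start = bitstream.find(RESYNC, pos)
        if start < 0:
            return out
        end = bitstream.find(RESYNC, start + len(RESYNC))
        payload = bitstream[start + len(RESYNC): end if end >= 0 else None]
        try:
            out.append(decode_packet(payload))
        except ValueError:
            out.append(None)      # packet lost, but sync is regained
        if end < 0:
            return out
        pos = end

def decode_packet(payload):
    if "X" in payload:            # pretend "X" marks a corrupted bit
        raise ValueError("corrupted packet")
    return payload

stream = RESYNC + "1010" + RESYNC + "1X1" + RESYNC + "0110"
assert decode_with_resync(stream, decode_packet) == ["1010", None, "0110"]
```

Only the corrupted packet is lost; the markers let the decoder resume cleanly at the next packet, and the MB no. and QP fields tell it where in the picture that packet belongs.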
motion
data
packet will be discarded by the decoder, because the texture
data is coded based on the motion data and it will he
useless
without the motion data.
If no error is detected in the motion and texture sections of the bitstream, but the resynchronization marker is not found at the end of decoding all the macroblocks of the current packet, an error is flagged. In this case, only the texture data in the VP needs to be discarded. The motion data can still be used for the NMB macroblocks, since we have higher confidence in the motion vectors because of the detection of the MBM.
The data partitioning scheme is only used for I-VOPs and P-VOPs, and the content of the motion and texture parts differs between the two cases. For an I-VOP, the motion part contains the coding mode and DC coefficients and the texture part contains the AC coefficients. For a P-VOP, the motion part contains the motion vectors and the texture part contains the DCT coefficients [17][20][22].
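The discard policy above can be summarized in a few lines; the packet fields below are hypothetical parser-state names used only for illustration, not real MPEG-4 bitstream syntax.

```python
# Sketch of the data-partitioning recovery policy: a motion error (or a
# missing motion-boundary marker) discards the whole packet; a texture
# error, or a missing trailing resync marker, discards only the texture
# while the MBM-confirmed motion vectors are kept.
def salvage(pkt):
    """Decide what survives from a partitioned video packet."""
    if pkt["motion_error"] or not pkt["mbm_found"]:
        return {"motion": None, "texture": None}   # discard everything
    if pkt["texture_error"] or not pkt["resync_at_end"]:
        # motion vectors are trusted because the MBM was detected
        return {"motion": pkt["motion"], "texture": None}
    return {"motion": pkt["motion"], "texture": pkt["texture"]}
```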
C. Reversible Variable Length Codes (RVLC)

Fig. 17. The data that need to be discarded while using the RVLC (forward decoding meets backward decoding within the texture partition; shown for an I-VOP with DC/AC DCT data and for a P-VOP).
As mentioned above, the texture part is coded with a variable length code. Therefore during decoding, the decoder has to discard all the texture data up to the next resynchronization marker when it detects an error in the texture part, because of losing synchronization. RVLC can alleviate the problem and recover more DCT data from a corrupted texture partition. The RVLC is a special VLC which can be decoded in both the forward and the reverse direction. When the decoder detects an error while decoding the texture part in the forward direction, it can find the next resynchronization marker and decode the texture part in the backward direction until an error is detected. Therefore only the data between the two error locations is discarded, and the other data in the part can be recovered. In Fig. 17, only the data in the shaded area is discarded. It should be noted that the RVLC scheme can only be used with the help of the resynchronization and data partitioning techniques [17][20][23].
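A minimal sketch of bidirectional decoding with a toy reversible code: palindromic codewords are both prefix-free and suffix-free, so the same table parses the stream from either end. The three-symbol codebook is invented for illustration; MPEG-4's actual RVLC tables differ.

```python
# Toy RVLC: every codeword is a palindrome, so the set stays prefix-free
# when read in reverse and one table decodes both directions.
CODEBOOK = {"0": "A", "101": "B", "111": "C"}  # hypothetical symbols

def encode(symbols):
    inv = {v: k for k, v in CODEBOOK.items()}
    return "".join(inv[s] for s in symbols)

def decode_prefix(bits):
    """Greedy prefix parse; stops where no codeword fits (error found)."""
    out, i = [], 0
    while i < len(bits):
        for cw, sym in CODEBOOK.items():
            if bits.startswith(cw, i):
                out.append(sym)
                i += len(cw)
                break
        else:
            break  # lost synchronization
    return out, i

def decode_rvlc(bits):
    """Forward pass, then backward pass over the reversed stream."""
    fwd, _ = decode_prefix(bits)
    bwd, _ = decode_prefix(bits[::-1])
    return fwd, list(reversed(bwd))
```

Flipping one bit of the encoding of A,B,C stops the forward pass after "A", while the backward pass recovers "C" plus two spurious symbols decoded from the corrupted bits; a real decoder compares the two stop positions and discards the suspect span in between.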
Fig. 16. A video packet: VP Header | Motion data | Texture data.
D. Header Extension Code (HEC)

At the beginning of each video packet there is the most important information for decoding: the header data, which contains information about the spatial dimensions of the video data, the time stamps associated with the decoding and presentation of the video data, and the mode in which the current video object is encoded (INTRA or INTER). If the header information is corrupted by channel errors, the whole frame has to be discarded. So the MPEG-4 standard introduces the technique named Header Extension Code to reduce the sensitivity of the header data, as shown in Fig. 16. In this technique, a 1-bit field called HEC is inserted in every video packet. If the bit is set, the header information of the video frame is repeated following the HEC. The duplicated information enables the decoder to recover the correct header data. The HEC is usually used in the first video packets of a VOP, but not in all of them [17][20].
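The decoder-side use of the HEC can be sketched as follows; the dictionary fields are illustrative stand-ins, not real MPEG-4 syntax elements.

```python
# Sketch of HEC handling: if the frame header was corrupted but some
# packet carries HEC=1, the duplicated header following the HEC bit
# substitutes for it; otherwise the frame is lost.
def effective_header(frame_header, frame_header_ok, packets):
    if frame_header_ok:
        return frame_header
    for pkt in packets:              # typically only the first packets
        if pkt.get("hec") == 1:      # of a VOP set the HEC bit
            return pkt["dup_header"]
    return None                      # header lost: frame discarded
```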
Fig. 18 illustrates the protection scheme described above, in which R1, R2 and R3 represent the bit rates of the three partitions respectively, and they satisfy the condition R1 ...
are obtained by puncturing the same mother code, and only one coder and one decoder are required for performing the coding and decoding of the whole bitstream [29].
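The puncturing idea from [28] can be sketched as follows: one rate-1/2 mother code, with higher rates obtained by deleting output bits according to nested patterns. The patterns here are invented for illustration and only satisfy the rate-compatibility rule (each higher rate keeps a subset of the bits kept by every lower rate); Hagenauer's actual patterns differ.

```python
# 12 mother-code output bits carry 6 info bits at rate 1/2, so keeping
# 9 of 12 gives rate 6/9 = 2/3 and keeping 8 gives 6/8 = 3/4.
PATTERNS = {
    "1/2": [1] * 12,                              # keep all mother bits
    "2/3": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0],  # keep 9 of 12
    "3/4": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0],  # keep 8 of 12 (subset)
}

def puncture(mother_bits, rate):
    """Transmit only the mother-code bits flagged 1 in the pattern."""
    pat = PATTERNS[rate]
    return [b for i, b in enumerate(mother_bits) if pat[i % len(pat)]]

def depuncture(rx_bits, rate, n):
    """Re-insert erasures (None) at punctured positions so the single
    mother-code decoder can be reused at every rate."""
    pat, it, out = PATTERNS[rate], iter(rx_bits), []
    for i in range(n):
        out.append(next(it) if pat[i % len(pat)] else None)
    return out
```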
V. CONCLUSIONS AND FUTURE WORK

This paper has presented two kinds of techniques for the transport of MPEG-4 over wireless networks: scalable video coding and error-resilient video coding. Our conclusions and future work are organized accordingly.
A. Scalable video coding

As a new scalable video coding technique introduced into MPEG-4, FGS is the focus of Section 2. Many experimental results have proved that the FGS coding method has better coding efficiency than the SNR, temporal and spatial scalabilities. Generally, two video formats are often used, i.e. the Common Intermediate Format (CIF) and Quarter CIF (QCIF); the sizes of their images are 352x288 and 176x144 pixels respectively.
In the signal processing field, the video quality is often evaluated in terms of the PSNR (Peak Signal-to-Noise Ratio), in dB, defined as:

    PSNR = 20 log10 (255 / RMSE)    (1)

where RMSE is the square root of the Mean Square Error (MSE):

    RMSE = sqrt( (1 / (M*N)) * sum_{i=1..M} sum_{j=1..N} [f(i,j) - F(i,j)]^2 )

where f(i,j) and F(i,j) are the source and the reconstructed images, containing MxN pixels each [20]. To evaluate the video quality, only the PSNR of the luminance component (Y) of the frame needs to be considered.
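Eq. (1) translates directly into code; a minimal luminance-only version, with images represented as flat lists of 8-bit pixel values:

```python
import math

def psnr(source, recon, peak=255.0):
    """PSNR in dB between two equally sized images given as flat pixel
    lists (luminance plane only, per the survey's convention)."""
    mse = sum((a - b) ** 2 for a, b in zip(source, recon)) / len(source)
    if mse == 0:
        return float("inf")          # identical images
    return 20 * math.log10(peak / math.sqrt(mse))
```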
An example of the experiments made by Li [7], which compare the coding efficiency of FGS and SNR scalability, is as follows. Table I shows the results. As the bit rate increases, FGS always has a better PSNR than the SNR scalability; at high bit rates, FGS is about 2 dB better. The experiments are made using the video sequences Coastguard and Foreman in QCIF format.

TABLE I. PSNR comparison between FGS and SNR coding.
As mentioned in Section 2, several improvements have been made to improve the coding efficiency of FGS, including FGSS, FGST and PFGS. Experimental results have proved that the coding efficiency of PFGS is the best among these improvements. An example of experimental results made by Wu [15] is as follows, which gives a comparison of the coding efficiency among FGS, PFGS and single-layer coding. It should be noted that "single" here means non-scalable video coding, which achieves the best video quality at a given bitrate. Table II shows the result. From the table, a conclusion can be drawn that PFGS arrives at a better video quality than FGS. Therefore the techniques described in Section 2 are effective and efficient for wireless multimedia communication with varying channel capacity. However, the experimental results shown in Table II also indicate that the coding efficiency gap between the FGS scheme and non-scalable video coding exceeds 3.0 dB. Although the PFGS scheme has significantly improved the coding efficiency of FGS, the gap between the PFGS scheme and non-scalable video coding is still large.
TABLE II. PSNR comparison among FGS, PFGS and single-layer (non-scalable) coding.
The future work for scalable video coding is as follows. More studies need to be done on how to further improve the coding efficiency and robustness of FGS for transmitting video streams over wireless channels. As mentioned above, the coding efficiency gap between non-scalable video coding and PFGS video coding is still large, although it is closing.
Firstly, as we know, high-quality references for motion prediction can improve the coding efficiency. So if more enhancement layers are used for prediction, the coding efficiency will be higher. But the bits of the enhancement layers are more likely to be discarded because of the limited channel capacity. In this situation, if more enhancement layers are used for prediction, the system will be fragile. To solve this, error detection and concealment techniques may be introduced into the enhancement layers. In this way, even if some bits of the enhancement layers are lost, these techniques can be used to conceal the errors. With the help of the error-resilient techniques, the coding efficiency of FGS can be improved further while keeping the system robust.
Another proposal to improve FGS is to combine FGS with the shape coding of arbitrarily shaped objects. An important benefit of this proposal is that FGS can benefit significantly from knowing the shape and position of the various objects in the scene, so that the selective enhancement mentioned in Section 3.3 can generate a better video quality. It can also improve the coding efficiency of FGS.
B. Error-resilient video coding

In Section IV, to improve the video quality against bit errors, some techniques including resynchronization, DP, RVLC and HEC have been discussed. Actually, all these techniques have been evaluated thoroughly and independently
Authorized licensed use limited to: UNIVERSITY OF ESSEX. Downloaded on November 21, 2008 at 21:49 from IEEE Xplore. Restrictions apply.
verified by two parties before being accepted into the MPEG-4 standard. The techniques are rigorously tested over a wide variety of test sequences, bit rates and error conditions. The compressed bitstreams are corrupted using random bit errors, packet loss errors, and burst errors. To evaluate the performance of the proposed techniques, the PSNR values generated by the error-resilient video codec with and without each error-resilient tool are compared. The techniques are also compared on the basis of the number of frames discarded due to errors and the number of bits discarded. The comparison results have proved that these techniques improve the video quality greatly in error-prone channels [17][31][32].
TABLE III. Average PSNR for EEP and UEP tools (columns: bit rate, QCIF format, CIF format).
However, these techniques are not enough for wireless channels with high bit error rates (BERs), so a channel coding method, UEP, is introduced. To test the performance of UEP against others such as EEP (Equal Error Protection), in which all partitions are protected equally, many experiments have been made over wireless channels. Their results have shown that the UEP technique improves the quality of the reconstructed video at high channel error rates more than EEP does. Table III shows the results of the experiments made by Wendi [25] on a GSM channel: UEP produces a higher average PSNR for the reconstructed video for both CIF and QCIF images at high channel error rates, which verifies the good performance of UEP. In many cases UEP may leave more errors in the channel-decoded bitstream than EEP, but these errors are in less important portions of the video packet and degrade the video quality less.
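The unequal-protection idea can be made concrete with repetition codes of different strengths per partition; real schemes use RCPC channel codes rather than repetition, and the factors below are assumptions chosen only for illustration.

```python
# Toy UEP: each partition gets its own repetition factor (header
# strongest, DCT weakest), so the same raw error rate hits the
# important data less often after majority-vote decoding.
UEP_REPS = {"header": 5, "motion": 3, "dct": 1}   # assumed factors

def protect(packet):
    return {p: [b for bit in bits for b in [bit] * UEP_REPS[p]]
            for p, bits in packet.items()}

def recover(coded):
    out = {}
    for p, bits in coded.items():
        r = UEP_REPS[p]
        groups = [bits[i:i + r] for i in range(0, len(bits), r)]
        out[p] = [1 if sum(g) * 2 > len(g) else 0 for g in groups]
    return out
```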
TABLE IV. Comparison between EEP and UEP at different total bandwidths.
Cai has made other experiments [24] to compare the coding performance of EEP and UEP, in which the source coding rate is fixed at 100 kbps and the channel coding rate is varied. Table IV shows the simulation results. For example, a total bandwidth of 110 kbps represents 100 kbps of fixed source coding rate and 10 kbps of channel coding rate; in this case, the channel coding redundancy is 10%. Similarly, the redundancy of the other cases can be derived. The experimental results demonstrate that in bandwidth-stringent cases, UEP is much better than EEP, with a gain of nearly 5 dB at 10% redundancy. When the channel coding redundancy increases to 20% or more, the performance of EEP is nearly the same as that of UEP. This is because high-redundancy channel coding provides a virtually error-free environment for all classes of the video bitstream, so that the advantage of unequal error protection diminishes.
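The redundancy bookkeeping in Cai's example works out as follows; the 20% threshold merely encodes the reported trend that EEP catches up with UEP from about that redundancy onward, a reading of Table IV rather than a rule from the standard.

```python
# With the source rate fixed at 100 kbps, total bandwidth 110 kbps
# leaves 10 kbps of channel coding, i.e. 10% redundancy.
def redundancy_pct(total_kbps, source_kbps=100.0):
    return 100.0 * (total_kbps - source_kbps) / source_kbps

def uep_clearly_better(total_kbps, source_kbps=100.0, threshold=20.0):
    """Heuristic: UEP's gain over EEP is large only below ~20% redundancy."""
    return redundancy_pct(total_kbps, source_kbps) < threshold
```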
In the future, the work for error-resilient video coding will be focused on how to further improve its efficiency. The proposals are as follows.

Firstly, future work on UEP should consider some issues concerning the characteristics of the wireless channels and video signals. As we know, the power problem is an important factor which greatly affects the performance of wireless communication. Since UEP will generally consume more power than EEP, the power problem should be taken into account for wireless communication when applying UEP. This is particularly necessary when the bandwidth is not very stringent and the gain of UEP over EEP is not so significant, as shown in Table IV at rates of 120 kbps or over. There is a tradeoff between power and video quality. Therefore a decision scheme needs to be designed to determine whether UEP should be adopted under a given channel condition.
Secondly, to improve the performance of UEP, it is necessary to find the optimal allocation of channel coding rates among the different classes of video data. In order to carry out the optimal allocation, the relationship of the error sensitivity among the different partitions of the video packet is required. Without such an analytical relationship, it is difficult to design the optimal allocation. Better results may be achieved if the rates are chosen according to accurate sensitivity studies on the bitstream. This will also be one part of our future research work.
Finally, as mentioned in Section IV, by applying the RVLC in the DCT data partition, the decoder can get more data to decode when errors occur. Experimental results have proved that it improves the performance of the MPEG-4 coder greatly over wireless channels [31]. Currently, the RVLCs are used only for the DCT data partition. In future work, the RVLCs may be introduced into the motion data partition to code the motion vectors. Since the motion part is more important than the DCT part, we expect that this will further improve the efficiency of the error-resilient video coding of MPEG-4.
In conclusion, although many techniques have been introduced into the MPEG-4 standard that aim at providing strong support for the robust transmission of video data in wireless multimedia communication, they are not yet sufficient for all applications. So there is still a lot of research work to be pursued to improve the performance of video transmission over wireless channels.
REFERENCES
[1] Rob Koenen, "Overview of the MPEG-4 Standard", ISO/IEC JTC1/SC29/WG11 N4030, 2001.
[2] Soares L.D., Pereira F., "MPEG-4: a flexible coding standard for the emerging mobile multimedia applications", The Ninth IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Volume: 3, 1998, pp.1335-1339 vol.3.
[3] Yaqin Zhang, "MPEG-4 Based Multimedia Information System", Digital Signal Processing Handbook, CRC Press, 1999, Chapter 7.
[4] MPEG Video & SNHC, "Coding of Audio-Visual Objects: Visual ISO/IEC 14496-2", Doc. ISO/IEC JTC1/SC29/WG11 N2202, MPEG Tokyo Meeting, March 1998.
[5] Shigeru Fukunaga, "MPEG-4 Video Verification Model version 16.0", Coding of Moving Pictures and Audio, ISO/IEC JTC1/SC29/WG11 M3312, March 2000.
[6] Requirements Group, "MPEG-4 Applications", ISO/IEC JTC1/SC29/WG11 MPEG 99/N2724, March 1999.
[7] Weiping Li, "Overview of fine granularity scalability in MPEG-4 video standard", IEEE Transactions on Circuits and Systems for Video Technology, Volume: 11 Issue: 3, March 2001, pp.301-317.
[8] Dapeng Wu, Hou Y.T., Wenwu Zhu, Ya-Qin Zhang, Peha J.M., "Streaming video over the Internet: approaches and directions", IEEE Transactions on Circuits and Systems for Video Technology, Volume: 11 Issue: 3, March 2001, pp.282-300.
[9] R. Aravind, M.R. Civanlar, and A.R. Reibman, "Packet loss resilience of MPEG-2 scalable video coding algorithms", IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 426-435, Oct. 1996.
[10] Radha H.M., van der Schaar M., Yingwei Chen, "The MPEG-4 fine-grained scalable video coding method for multimedia streaming over IP", IEEE Transactions on Multimedia, Volume: 3 Issue: 1, March 2001, pp.53-68.
[11] F. Ling, W. Li, and H. Sun, "Bitplane coding of DCT coefficients for image and video compression", in Proc. SPIE Visual Communications and Image Processing (VCIP), San Jose, CA, Jan. 25-27, 1999.
[12] W. Li, F. Ling, and H. Sun, "Bitplane coding of DCT coefficients", ISO/IEC JTC1/SC29/WG11, MPEG97/M2691, Oct. 22, 1997.
[13] M. van der Schaar, H. Radha, and Y. Chen, "An all FGS solution for hybrid temporal-SNR scalability", ISO/IEC JTC1/SC29/WG11, MPEG99/M5552, Dec. 1999.
[14] Qi Wang, Feng Wu, Shipeng Li, Yuzhuo Zhong, Ya-Qin Zhang, "Fine-granularity spatially scalable video coding", 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume: 3, 2001, pp.1801-1804.
[15] Feng Wu, Shipeng Li, Ya-Qin Zhang, "A framework for efficient progressive fine granularity scalable video coding", IEEE Transactions on Circuits and Systems for Video Technology, Volume: 11 Issue: 3, March 2001, pp.332-344.
[16] Xiaoyan Sun, Feng Wu, Shipeng Li, Wen Gao, Ya-Qin Zhang, "Macroblock-based progressive fine granularity scalable (PFGS) video coding with flexible temporal-SNR scalabilities", 2001 International Conference on Image Processing, Volume: 2, Oct. 2001, pp.1025-1028.
[17] Talluri R., "Error-resilient video coding in the ISO MPEG-4 standard", IEEE Communications Magazine, Volume: 36 Issue: 6, June 1998, pp.112-119.
[18] ITU-T Study Group 16, "Video Coding For Low Bit Rate Communication", ITU-T Recommendation H.263, 1996.
[19] Brailean J., "Wireless multimedia utilizing MPEG-4 error resilient tools", Wireless Communications and Networking Conference, 1999. WCNC. 1999 IEEE, 1999.
[20] Delicado F., Cuenca P., Orozco-Barbosa L., Garrido A., Quiles F., "Performance evaluation of error-resilient mechanisms for MPEG-4 video communications over wireless ATM links", Communications, Computers and Signal Processing, 2001. PACRIM. 2001 IEEE Pacific Rim Conference on, Volume: 2, 2001, pp.461-464.
[21] Miki T., Hotani S., Kawahara T., "Error resilience features of MPEG-4 audio-visual coding and their application to 3G multimedia terminals", Signal Processing Proceedings, 2000. WCCC-ICSP 2000. 5th International Conference on, Volume: 1, 2000, pp.40-43 vol.1.
[22] Valente S., Dufour C., Groliere F., Snook D., "An efficient error concealment implementation for MPEG-4 video streams", IEEE Transactions on Consumer Electronics, Volume: 47 Issue: 3, August 2001, pp.568-578.
[23] Takishima Y., Wada M., Murakami H., "Reversible variable length codes", IEEE Transactions on Communications, Volume: 43 Issue: 2 Part: 3, Feb.-March-April 1995, pp.158-162.
[24] Cai J., Qian Zhang, Wenwu Zhu, Chen C.W., "An FEC-based error control scheme for wireless MPEG-4 video transmission", Wireless Communications and Networking Conference, 2000. WCNC. 2000 IEEE, 2000, pp.1243-1247 vol.3.
[25] Heinzelman W.R., Budagavi M., Talluri R., "Unequal error protection of MPEG-4 compressed video", Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on, Volume: 2, 1999, pp.530-534 vol.2.
[26] Martini M.G., Chiani M., "Proportional unequal error protection for MPEG-4 video transmission", Communications, 2001. ICC 2001. IEEE International Conference on, Volume: 4, 2001, pp.1033-1037 vol.4.
[27] S.T. Worrall, S. Fabri, A.H. Sadka, A.M. Kondoz, "Prioritisation of Data Partitioned MPEG-4 over Mobile Networks", ETT - European Transactions on Telecommunications, Vol. 12, No. 3, May/June 2001.
[28] Hagenauer J., "Rate-compatible punctured convolutional codes (RCPC codes) and their applications", IEEE Transactions on Communications, Volume: 36 Issue: 4, April 1988, pp.389-400.
[29] Martini M.G., Chiani M., "Wireless transmission of MPEG-4 video: performance evaluation of unequal error protection over a block fading channel", Vehicular Technology Conference, 2001. VTC 2001 Spring. IEEE VTS 53rd, Volume: 3, 2001, pp.2056-2060.
[30] Yuenan Wu, "Data Compression", Publishing House of Electronics Industry, 2000, pp.19-22.
[31] ISO/IEC JTC1/SC29/WG11 Adhoc Group on Core Experiments on Error Resilience aspects of MPEG-4 video, "Description of Error Resilience Core Experiments", N1473, Maceio, Brazil, Nov. 1996.
[32] T. Miki et al., "Revised Error Pattern Generation Programs for Core Experiments on Error Resilience", ISO/IEC JTC1/SC29/WG11 MPEG96/1492, Maceio, Brazil, Nov. 1996.
Bo Yan received his B.E. and M.E. degrees in electrical engineering from Xi'an Jiaotong University, Xi'an, China, in 1998 and 2001 respectively. Since September 2001, he has been a Ph.D. candidate in the Department of Computer Science and Engineering at the Chinese University of Hong Kong, Hong Kong. His research interests include video compression, video transmission and wireless communication.

Kam W. Ng received the Ph.D. degree in electrical and electronic engineering from the University of Bradford, U.K., in 1977. He is a Professor in the Department of Computer Science and Engineering at the Chinese University of Hong Kong, Hong Kong. His research interests include reconfigurable computing and mobile ad hoc networking.