
LDPC Codes for Information Embedding and Lossy Distributed Source Coding

Mina Sartipi
Department of Computer Science and Engineering

University of Tennessee at Chattanooga
Chattanooga, TN 37403-2598
E-mail: [email protected]

Inspired by our recently proposed constructive framework for lossy distributed source coding with side information available at the decoder, we propose a framework for information embedding with side information available at the encoder. Our proposed method is based on sending parity bits using LDPC codes. The process of embedding information M in the host signal Y of length k is shown in Fig. 1.

[Fig. 1 block diagram: the host Y and the message M enter a channel decoder that produces the composite signal Yd; Yd is attacked by the BSC(p), and a second channel decoder recovers an estimate of M from the received Ŷd.]

Fig. 1. Embedding information M in the host signal Y.

As shown in Fig. 1, the signal Y is mapped to the composite signal Yd using the side information M available at the encoder. This mapping is done such that no serious degradation is caused to Y, i.e., ρ(Yd, Y) ≤ d, and such that the composite signal is robust against deliberate attacks, which are modeled by the BSC(p) in Fig. 1. The receiver recovers M from Yd.

To generate Yd, we propose to use a systematic LDPC code with the generator matrix G = [I | P1 | P2], where I is the identity matrix of dimension k(1 − h(d)) × k(1 − h(d)). We assume that Yd is a codeword of the matrix G generated from an information message yd of length k(1 − h(d)), where yd P1 = M and yd P2 = 0. Using these assumptions on yd and the fact that ρ(Yd, Y) ≤ d, Yd is found by using the LDPC decoder corresponding to the code G. It can be easily shown that the procedure explained above results in an embedding rate of h(d) − h(p), which is known as the Gelfand-Pinsker limit.
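The embedding map can be made concrete with a small brute-force sketch (illustrative only: the matrices P1 and P2, their dimensions, and the use of exhaustive search in place of the LDPC decoder are assumptions of the sketch, not the designed codes described here). It enumerates every information word yd satisfying yd P1 = M and yd P2 = 0 and returns the codeword Yd = yd G closest to the host Y in Hamming distance:

```python
import itertools
import numpy as np

# Toy dimensions (illustrative only): k-bit host, m-bit information word yd,
# systematic generator G = [I | P1 | P2] over GF(2).
rng = np.random.default_rng(0)
m, n1, n2 = 6, 3, 3                    # |yd|, columns of P1, columns of P2
k = m + n1 + n2                        # codeword / host length

P1 = rng.integers(0, 2, size=(m, n1))  # random stand-ins for the designed blocks
P2 = rng.integers(0, 2, size=(m, n2))
G = np.hstack([np.eye(m, dtype=int), P1, P2])   # G = [I | P1 | P2]

def embed(Y, M):
    """Return the composite Yd: among all codewords of G whose information
    word yd satisfies yd P1 = M and yd P2 = 0, pick the one closest to the
    host Y in Hamming distance (brute force stands in for the LDPC decoder)."""
    best, best_dist = None, None
    for bits in itertools.product([0, 1], repeat=m):
        yd = np.array(bits)
        if np.any((yd @ P1) % 2 != M) or np.any((yd @ P2) % 2 != 0):
            continue                              # enforce yd P1 = M, yd P2 = 0
        Yd = (yd @ G) % 2
        dist = int(np.sum(Yd != Y))               # Hamming distortion rho(Yd, Y)
        if best_dist is None or dist < best_dist:
            best, best_dist = Yd, dist
    return best, best_dist

Y = rng.integers(0, 2, size=k)                    # host signal
# Pick a message that is reachable for this toy code: any yd0 with yd0 P2 = 0.
while True:
    yd0 = rng.integers(0, 2, size=m)
    if not np.any((yd0 @ P2) % 2):
        break
M = (yd0 @ P1) % 2
Yd, dist = embed(Y, M)
print("message   M :", M)
print("host      Y :", Y)
print("composite Yd:", Yd, "  Hamming distortion:", dist)
```

At realistic block lengths this exhaustive search is infeasible, which is why Yd is instead found with the LDPC decoder corresponding to G, as described above.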

We further provide a detailed design procedure for the LDPC code that guarantees performance close to the Gelfand-Pinsker limit. The parity-check matrix associated with the generator matrix G described above is of the form H = [P1ᵀ; P2ᵀ | I], where P1ᵀ is stacked on top of P2ᵀ and I is the identity block. First, we design the equivalent LDPC code with parity-check matrix H = [C1; C2 | C3], with the same block structure; then, using Gaussian elimination, an equivalent parity-check matrix in the systematic form can be derived.
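The Gaussian-elimination step can be sketched as a short GF(2) routine (a generic sketch under the assumption that H has full row rank; the small Hamming-code matrix used in the example is only a stand-in for a designed [C1; C2 | C3]). It row-reduces H modulo 2, permuting columns when necessary, until the last block is the identity, and then reads off the matching systematic generator:

```python
import numpy as np

def systematize(H):
    """Row-reduce a binary parity-check matrix H (r x n, assumed full row rank)
    over GF(2) into the systematic form [A | I_r].  Returns the reduced matrix,
    the column permutation that was applied, and the matching systematic
    generator G = [I_{n-r} | A^T]."""
    H = H.copy() % 2
    r, n = H.shape
    perm = np.arange(n)
    for i in range(r):
        col = n - r + i                        # target column of the i-th pivot
        if not H[i:, col].any():               # no pivot here: swap in a column that has one
            for c in range(n - r):
                if H[i:, c].any():
                    H[:, [col, c]] = H[:, [c, col]]
                    perm[[col, c]] = perm[[c, col]]
                    break
        p = i + int(np.nonzero(H[i:, col])[0][0])
        H[[i, p]] = H[[p, i]]                  # move the pivot row into place
        for j in range(r):                     # clear the pivot column in all other rows
            if j != i and H[j, col]:
                H[j] = (H[j] + H[i]) % 2
    A = H[:, : n - r]
    G = np.hstack([np.eye(n - r, dtype=int), A.T])
    return H, perm, G

# Small example: a non-systematic parity-check matrix of the (7,4) Hamming code.
H0 = np.array([[1, 0, 1, 0, 1, 0, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [0, 0, 0, 1, 1, 1, 1]])
Hs, perm, G = systematize(H0)
print(Hs)                  # last three columns now form the identity block
print((G @ Hs.T) % 2)      # all zeros: every row of G satisfies the checks
```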

The conditions that the matrix H needs to satisfy are as follows:
1) The submatrix C2 must be designed such that Yd can be recovered from Ŷd.
2) The matrix H must be designed such that Y can be mapped to Yd.

Considering the above two conditions, we designed LDPC codes and measured their performance for different lengths. The simulation results show that the gap from the Gelfand-Pinsker theoretical limit for a code of length 952 is 0.2, and that the gap decreases as the code length increases.
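For reference, the Gelfand-Pinsker limit h(d) − h(p) against which this gap is measured can be evaluated directly; the values of d and p in the snippet below are arbitrary examples, since the operating point behind the reported gap is not stated here.

```python
from math import log2

def h(x):
    """Binary entropy h(x) = -x*log2(x) - (1-x)*log2(1-x)."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# Example operating point (illustrative values only).
d, p = 0.25, 0.05       # allowed embedding distortion and attack crossover probability
limit = h(d) - h(p)     # Gelfand-Pinsker embedding rate, in bits per host bit
print(f"h(d) = {h(d):.4f}, h(p) = {h(p):.4f}, h(d) - h(p) = {limit:.4f}")
```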
