
Page 1: EEE436


EEE436 DIGITAL COMMUNICATION

Coding

En. Mohd Nazri Mahmud, MPhil (Cambridge, UK), BEng (Essex, UK), [email protected], Room 2.14

Page 2: EEE436


Error-Correcting capability of the Convolutional Code

The error-correcting capability of a convolutional code is determined by its constraint length K = L + 1, where L is the number of message-sequence bits held in the shift register, and by its free distance, dfree.

The constraint length of a convolutional code, expressed in terms of message bits, is equal to the number of shifts over which a single message bit can influence the encoder output.

In an encoder with an L-stage shift register, the memory of the encoder equals L message bits, and K = L + 1 shifts are required for a message bit to enter the shift register and finally come out. Thus the constraint length is K.

The constraint length determines the maximum free distance of a code

Free distance is equal to the minimum Hamming distance between any two codewords in the code.

A convolutional code can correct t errors if dfree > 2t (equivalently, dfree ≥ 2t + 1).
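As a quick aside, the guaranteed error-correcting capability t follows directly from dfree. A minimal sketch in Python (the function name is chosen here for illustration; dfree itself is obtained from the distance transfer function derived on the next slides):

```python
def correctable_errors(dfree: int) -> int:
    """Largest t satisfying dfree > 2t, i.e. t = floor((dfree - 1) / 2)."""
    return (dfree - 1) // 2

print(correctable_errors(5))  # -> 2, matching the (2,1,2) encoder analysed below
```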

Page 3: EEE436


Error-correction

The free distance can be obtained from the state diagram by splitting node a into a0 and a1.

Page 4: EEE436


Rules

1. A branch multiplies the signal at its input node by the transmittance characterising that branch.
2. A node with incoming branches sums the signals produced by all of those branches.
3. The signal at a node is applied equally to all the branches outgoing from that node.
4. The transfer function of the graph is the ratio of the output signal to the input signal.

Page 5: EEE436


The exponent of D on a branch equals the Hamming weight of the encoder output corresponding to that branch.

The exponent of L is always equal to one since the length of each branch is one.

Let T(D,L) denote the transfer function of the signal-flow graph. Using rules 1, 2 and 3, we obtain the following input-output relations:

b = D^2L a0 + L c
c = DL b + DL d
d = DL b + DL d
a1 = D^2L c
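These simultaneous equations can be cross-checked symbolically; a minimal sketch, assuming sympy is available (the variable names mirror the node labels above):

```python
import sympy as sp

D, L = sp.symbols('D L')
a0, a1, b, c, d = sp.symbols('a0 a1 b c d')

# State equations read off the split state diagram (rules 1-3)
eqs = [
    sp.Eq(b, D**2 * L * a0 + L * c),
    sp.Eq(c, D * L * b + D * L * d),
    sp.Eq(d, D * L * b + D * L * d),
    sp.Eq(a1, D**2 * L * c),
]

sol = sp.solve(eqs, [a1, b, c, d], dict=True)[0]
T = sp.simplify(sol[a1] / a0)                              # transfer function T(D,L) = a1/a0
print(sp.simplify(T - D**5 * L**3 / (1 - D*L*(1 + L))))    # -> 0, i.e. T matches the slide
```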

Page 6: EEE436


Solving the set of equations for the ratio a1/a0, the transfer function of the graph is given by

T(D,L) = D^5L^3 / (1 - DL(1 + L))

Using the binomial expansion (a power series in D), the distance transfer function (with L = 1) is

T(D,1) = D^5 + 2D^6 + 4D^7 + ...

Since the free distance is the minimum Hamming distance between any two codewords in the code, and the distance transfer function T(D,1) enumerates the number of codewords that are a given distance apart, the exponent of the first term in the expansion of T(D,1) gives the free distance, dfree = 5.

Therefore the (2,1,2) convolutional encoder, with constraint length K = 3, can only correct up to 2 errors.
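A quick check of this expansion and the resulting dfree; a minimal sketch, again assuming sympy (the series length of 9 terms is an arbitrary choice):

```python
import sympy as sp

D = sp.symbols('D')

# Distance transfer function with L = 1: T(D,1) = D^5 / (1 - 2D)
T = D**5 / (1 - 2*D)

series = sp.series(T, D, 0, 9).removeO()
print(series)                                  # D^5 + 2D^6 + 4D^7 + 8D^8 (sympy may print in descending order)

dfree = min(sp.Poly(series, D).monoms())[0]    # smallest exponent present = free distance
t = (dfree - 1) // 2
print(dfree, t)                                # 5 2 -> corrects up to 2 errors
```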

Page 7: EEE436


Error-correction

Constraint length, K    Maximum free distance, dfree
2                       3
3                       5
4                       6
5                       7
6                       8
7                       10
8                       10

Page 8: EEE436


Turbo Codes

A relatively new class of convolutional codes, first introduced in 1993.

A basic turbo encoder employs two recursive systematic convolutional (RSC) encoders in parallel, where the second encoder is preceded by a pseudorandom interleaver that permutes the symbol sequence.

Turbo codes are also known as Parallel Concatenated Codes (PCC).

[Block diagram: the message bits feed RSC encoder 1 directly and RSC encoder 2 via an interleaver; the systematic bits xk and the parity bits y1k and y2k are punctured and multiplexed for the transmitter]

Page 9: EEE436


Turbo Codes

The input data stream is applied directly to encoder 1, and a pseudorandomly reordered version of the same data stream is applied to encoder 2.

Both encoders produce parity bits.

The parity bits and the original bit stream are multiplexed and then transmitted.

The block size is determined by the size of the interleaver (for example, 65,536 is common).

Puncturing is applied to remove some parity bits so as to maintain the code rate at 1/2, for example by eliminating the odd parity bits from the first RSC and the even parity bits from the second RSC, as in the sketch below.
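A minimal sketch of this puncturing-and-multiplexing step in Python. The parity sequences y1 and y2 are assumed to have already been produced by the two RSC encoders, the even/odd convention is the one described above, and the bit values are illustrative only:

```python
def puncture_and_mux(x, y1, y2):
    """Rate-1/2 turbo output: keep every systematic bit and alternate which
    encoder's parity bit is kept (even positions from RSC 1, odd from RSC 2)."""
    out = []
    for k, xk in enumerate(x):
        out.append(xk)                                 # systematic bit always sent
        out.append(y1[k] if k % 2 == 0 else y2[k])     # punctured parity bit
    return out

# Example: 4 message bits -> 8 transmitted bits, i.e. rate 1/2
x  = [1, 0, 1, 1]      # systematic bits
y1 = [1, 1, 0, 1]      # parity from RSC encoder 1 (illustrative values)
y2 = [0, 1, 1, 0]      # parity from RSC encoder 2 (illustrative values)
print(puncture_and_mux(x, y1, y2))   # [1, 1, 0, 1, 1, 0, 1, 0]
```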


Page 10: EEE436


RSC encoder for Turbo encoding

Page 11: EEE436


RSC encoder for Turbo encoding

[Figure: non-recursive and recursive encoder structures with their output sequences for the same input]

More 1's in the recursive encoder's output gives better error performance.
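A minimal sketch of a rate-1/2 RSC encoder. The memory-2 generator pair (feedback 1 + D + D^2, feedforward 1 + D^2, i.e. octal 7/5) is an assumed, commonly used choice and is not taken from the slides:

```python
def rsc_encode(msg):
    """Recursive systematic convolutional encoder, rate 1/2, memory 2.
    Feedback polynomial 1 + D + D^2, feedforward polynomial 1 + D^2."""
    s1 = s2 = 0                       # shift-register state (a[k-1], a[k-2])
    systematic, parity = [], []
    for u in msg:
        a = u ^ s1 ^ s2               # recursive (feedback) bit
        p = a ^ s2                    # parity output from 1 + D^2
        systematic.append(u)          # systematic output is the input itself
        parity.append(p)
        s2, s1 = s1, a                # shift the register
    return systematic, parity

# A single 1 followed by zeros: the recursive encoder keeps producing parity 1's
print(rsc_encode([1, 0, 0, 0, 0, 0]))   # ([1, 0, 0, 0, 0, 0], [1, 1, 1, 0, 1, 1])
```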

Page 12: EEE436


Turbo decoding

A turbo decoder consists of two maximum a posteriori (MAP) decoders and a feedback path.

Decoding operates on the noisy versions of the systematic bits and the two sets of parity bits, in two decoding stages, to produce an estimate of the original message bits.

The first decoder takes the information from the received signal and calculates the a posteriori probability (APP) value.

This value is then used as the a priori probability value for the second decoder.

The output is then fed back to the first decoder, where the process repeats in an iterative fashion, each iteration producing more refined estimates (see the sketch below).
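A schematic sketch of that iterative exchange. The map_decode function below is only a stand-in placeholder (a real decoder would run the BCJR forward-backward recursions), and the variable names, interleaver and iteration count are illustrative assumptions:

```python
import random

def map_decode(systematic_llrs, parity_llrs, a_priori_llrs):
    """Placeholder MAP decoder: returns per-bit extrinsic LLRs.
    A real implementation would run the BCJR algorithm on the trellis."""
    return [0.1 * (s + a) + 0.05 * p
            for s, a, p in zip(systematic_llrs, a_priori_llrs, parity_llrs)]

def turbo_decode(sys_llr, par1_llr, par2_llr, perm, iterations=6):
    n = len(sys_llr)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i                                        # inverse interleaver
    apriori1 = [0.0] * n
    for _ in range(iterations):
        ext1 = map_decode(sys_llr, par1_llr, apriori1)    # decoder 1 extrinsic
        apriori2 = [ext1[perm[i]] for i in range(n)]      # interleave
        ext2 = map_decode([sys_llr[perm[i]] for i in range(n)],
                          par2_llr, apriori2)             # decoder 2 extrinsic
        apriori1 = [ext2[inv[i]] for i in range(n)]       # de-interleave, feed back
    total = [s + e1 + a for s, e1, a in zip(sys_llr, ext1, apriori1)]
    return [1 if llr > 0 else 0 for llr in total]         # hard decisions

perm = list(range(8)); random.shuffle(perm)               # pseudorandom interleaver
print(turbo_decode([1.2, -0.3, 0.8, -1.1, 0.5, -0.7, 1.0, -0.2],
                   [0.0] * 8, [0.0] * 8, perm))
```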

Page 13: EEE436


Turbo decoding uses the BCJR algorithm

The BCJR (Bahl, Cocke, Jelinek and Raviv, 1974) algorithm is a maximum a posteriori probability (MAP) decoder that minimizes the bit errors by estimating the a posteriori probabilities of the individual bits in a codeword. It takes into account the recursive character of the RSC codes and computes a log-likelihood ratio (LLR) to estimate the APP for each bit.
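As an illustration of the decision rule, the LLR of bit uk is ln[P(uk = 1 | y) / P(uk = 0 | y)], and its sign gives the hard decision. A minimal sketch; the APP values below are made-up numbers, not outputs of an actual BCJR pass:

```python
import math

def llr(p1):
    """Log-likelihood ratio of a bit whose a posteriori probability P(u = 1 | y) is p1."""
    return math.log(p1 / (1.0 - p1))

app = [0.9, 0.2, 0.6, 0.05]                        # illustrative APP values
decisions = [1 if llr(p) > 0 else 0 for p in app]
print(decisions)                                   # [1, 0, 1, 0]
```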

Page 14: EEE436


Low Density Parity Check (LDPC) codes

An LDPC code is specified in terms of its parity-check matrix H, which has the following structural properties:

i) Each row contains ρ 1's
ii) Each column contains γ 1's
iii) The number of 1's in common between any two columns is no greater than 1, i.e. λ = 0 or 1
iv) Both ρ and γ are small compared with the length of the code

LDPC codes are written in the form (n, γ, ρ).

H is said to be a low density parity check matrix

H has constant row and column weights (ρ and γ )

Density of H = total number of 1’s divided by total number of entries in H
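A minimal sketch of checking these structural properties for a candidate parity-check matrix, assuming numpy is available; the small H used at the bottom is a toy example, not the matrix from the slides:

```python
import numpy as np
from itertools import combinations

def ldpc_properties(H):
    """Return (row weight rho, column weight gamma, max column overlap, density),
    asserting that the row and column weights are constant."""
    row_w, col_w = H.sum(axis=1), H.sum(axis=0)
    assert len(set(row_w)) == 1 and len(set(col_w)) == 1, "weights are not constant"
    overlap = max(int(H[:, i] @ H[:, j])
                  for i, j in combinations(range(H.shape[1]), 2))
    density = H.sum() / H.size
    return int(row_w[0]), int(col_w[0]), overlap, density

# Toy H: every row weight 2, every column weight 1, column overlaps at most 1
H = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])
print(ldpc_properties(H))   # (2, 1, 1, 0.5)
```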

Page 15: EEE436


Low Density Parity Check (LDPC) codes – Example: (15, 4, 4) LDPC code

Each row contains ρ = 4 1's
Each column contains γ = 4 1's
The number of 1's in common between any two columns is no greater than 1, i.e. λ = 0 or 1
Both ρ and γ are small compared with the length of the code

Density = 4/15 = 0.267

Page 16: EEE436


Low Density Parity Check (LDPC) codes – Constructing H

For a given choice of ρ and γ, form a kγ-by-kρ matrix H (where k is a positive integer greater than 1) that consists of γ submatrices, each of size k-by-kρ: H1, H2, ..., Hγ

Each row of a submatrix has ρ 1’s and each column of a submatrix contains a single 1

Therefore each submatrix has a total of kρ 1’s.

Based on this, construct H1 by appropriately placing the 1’s.

For 1 ≤ i ≤ k, the ith row of H1 contains all its ρ 1's in columns (i-1)ρ + 1 to iρ.

The other submatrices are merely column permutations of H1.

Page 17: EEE436


Low Density Parity Check (LDPC) codes – Example Constructing H

Choice of ρ=4 and γ=3 and k=5

Form a kγ-by-kρ (15-by-20) matrix H that consists of γ = 3 submatrices, each of size k-by-kρ (5-by-20): H1, H2 and H3

Each row of a submatrix has ρ = 4 1's and each column of a submatrix contains a single 1

Therefore each submatrix has a total of kρ=20 1’s.

Based on this, construct H1 by appropriately placing the 1’s.

For 1 ≤ i ≤ 5, the ith row of H1 contains all its ρ = 4 1's in columns (i-1)ρ + 1 to iρ.

The other submatrices are merely column permutations of H1.

Page 18: EEE436


Low Density Parity Check (LDPC) codes – Example Constructing H for (20, 3, 4) LDPC code
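A minimal sketch of this construction for the (20, 3, 4) example, assuming numpy. The column permutations of H1 are drawn at random here, whereas Gallager's construction chooses them so that property (iii) (column overlap ≤ 1) also holds, so that property would still need to be checked:

```python
import numpy as np

def gallager_h(rho=4, gamma=3, k=5, seed=0):
    """Stack gamma submatrices, each a column permutation of H1,
    to form a (k*gamma)-by-(k*rho) low-density parity-check matrix."""
    rng = np.random.default_rng(seed)
    n = k * rho
    # H1: row i holds its rho 1's in columns (i-1)*rho + 1 to i*rho (1-based)
    H1 = np.zeros((k, n), dtype=int)
    for i in range(k):
        H1[i, i * rho:(i + 1) * rho] = 1
    blocks = [H1] + [H1[:, rng.permutation(n)] for _ in range(gamma - 1)]
    return np.vstack(blocks)

H = gallager_h()
print(H.shape)                    # (15, 20)
print(H.sum(axis=1).tolist())     # every row weight is rho = 4
print(H.sum(axis=0).tolist())     # every column weight is gamma = 3
print(H.sum() / H.size)           # density = 4/20 = 0.2
```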

Page 19: EEE436


Construction of Low Density Parity Check (LDPC) codes

There are many techniques for constructing LDPC codes.

Constructing LDPC codes with shorter blocks is easier than constructing longer ones.

For large block sizes, LDPC codes are commonly constructed by first studying the behaviour of decoders.

Among the techniques are pseudo-random techniques, combinatorial approaches and finite geometry. These are beyond the scope of this lecture.

For this lecture, we see how short LDPC codes are constructed from a given parity-check matrix.

For example, consider a (6,3) linear LDPC code given by the parity-check matrix H shown on the slide.

Page 20: EEE436


Construction of Low Density Parity Check (LDPC) codes

For example, consider a (6,3) linear LDPC code given by the parity-check matrix H shown on the slide.

The 8 codewords can be obtained by putting the parity-check matrix H into the systematic form H = [P^T | I_(n-k)], where P is the coefficient matrix and I_(n-k) is the (n-k)-by-(n-k) identity matrix.

The generator matrix is then G = [I_k | P] (for this (6,3) code, k = n - k = 3).

At the receiver, H = [P^T | I_(n-k)] is used to check the error syndrome s = rH^T, which is all-zero for a valid codeword.


Exercise: Generate the codeword for m = 001 and show how the receiver performs the error checking.
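A minimal sketch of how the exercise can be worked through in code. The particular H below is an assumed example of a (6,3) parity-check matrix in the form [P^T | I3]; it is not the matrix from the slide, so the resulting bits are illustrative only:

```python
import numpy as np

# Assumed example parity-check matrix H = [P^T | I3] for a (6,3) code
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
H = np.hstack([P.T, np.eye(3, dtype=int)])   # H = [P^T | I3]
G = np.hstack([np.eye(3, dtype=int), P])     # G = [I3 | P]

m = np.array([0, 0, 1])                      # message m = 001
c = m @ G % 2                                # codeword c = mG
print(c)                                     # [0 0 1 1 0 1] for this example P

# Receiver side: syndrome s = r H^T (mod 2); all-zero means no error detected
r = c.copy()
print(r @ H.T % 2)                           # [0 0 0] -> passes the check

r[1] ^= 1                                    # introduce a single bit error
print(r @ H.T % 2)                           # nonzero syndrome -> error detected
```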