4. Channel Coding

Channel Coding NC 101

Upload: alok-gaur

Post on 06-Apr-2018


Page 1: 4. Channel Coding1

8/3/2019 4. Channel Coding1

http://slidepdf.com/reader/full/4-channel-coding1 1/27

Channel Coding

NC 101


Introduction


Definitions

• Encoder and Decoder - The encoder adds redundant bits to the sender's bit stream to create a codeword. The decoder uses the redundant bits to detect and/or correct as many bit errors as the particular error-control code will allow.

• Modulator and Demodulator - The modulator transforms the output of the encoder, which is digital, into a format suitable for the channel, which is usually analog (e.g., a telephone channel). The demodulator attempts to recover the correct channel symbol in the presence of noise. When the wrong symbol is selected, the decoder tries to correct any errors that result.


Definitions

• Bit-Error-Rate (BER) - This is often the figure of merit for an error-control code. We want to keep this number small, typically less than 10^-4. Bit-error rate is a useful indicator of system performance on an independent-error channel.

• Message-Error-Rate (MER) - This may be a more appropriate figure of merit, because an operator wants all of his messages error-free and cares little about the BER itself.

• Undetected Message Error Rate (UMER) - The probability that the error-detection decoder fails and an errored message (codeword) slips through undetected. This event happens when the error pattern introduced by the channel converts the transmitted codeword into another valid codeword. Practical error-detection codes ensure that the UMER is very small, often less than 10^-16.


Definitions

• Coding Gain - The difference (in dB) in the required signal-to-noise ratio to maintain reliable communications after coding is employed. For example, if a communications system requires an SNR of 12 dB to maintain a BER of 10^-5, but after coding requires only 9 dB to maintain the same BER, then the coding gain is 12 dB - 9 dB = 3 dB.

• Code Rate - Consider an encoder that takes k information bits and adds r redundant bits (also called parity bits) for a total of n = k + r bits per codeword. The code rate is the fraction k/n, and the code is called an (n, k) error-control code. The added parity bits are an overhead to the communications system, so the system designer often chooses a code for its ability to achieve high coding gain with few parity bits.
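As a quick illustrative sketch (not part of the original slides), the code rate and overhead follow directly from k and r:

```python
# Sketch: code rate of an (n, k) code built from k data bits
# plus r parity bits, so n = k + r.
def code_rate(k, r):
    n = k + r          # total codeword length
    return k / n

# e.g., 4 data bits plus 3 parity bits gives a (7,4) code of rate 4/7
```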


What Coding Can Do

• Reduce the occurrence of undetected errors: This was one of the first uses of error-control coding. Today's error-detection codes are so effective that the occurrence of undetected errors is, for all practical purposes, eliminated.

• Reduce the cost of communications systems: Transmitter power is expensive, especially on satellite transponders. Coding can reduce the satellite's power needs because messages received at close to the thermal noise level can still be recovered correctly.

• Overcome jamming: Error-control coding is one of the most effective techniques for reducing the effects of the enemy's jamming. In the presence of pulse jamming, for example, coding can achieve coding gains of over 35 dB.

• Eliminate interference: As the electromagnetic spectrum becomes more crowded with man-made signals, error-control coding will mitigate the effects of unintentional interference.


What Coding Can't Do

• Shannon's capacity formula sets a lower limit on the signal-to-noise ratio that we must achieve to maintain reliable communications.

• If the SNR is below this limit, no error-correction code can achieve nearly error-free transmission.

• As discussed in the earlier example, if the original probability of error was 0.5, no amount of redundancy will improve our odds of getting better results.
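The capacity limit above can be sketched numerically. The bandwidth and SNR values below are illustrative assumptions, not values from the slides:

```python
import math

# Sketch of the Shannon limit: C = B * log2(1 + SNR) for an AWGN
# channel. If we try to signal faster than C, no code can make the
# error rate arbitrarily small; below C, some code can.
def snr_db_to_linear(snr_db):
    return 10 ** (snr_db / 10)

def capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g., an assumed 3 kHz telephone-like channel at 12 dB SNR
c = capacity_bps(3000, snr_db_to_linear(12))   # roughly 12.2 kbit/s
```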


What is Channel Coding?

• In telecommunication and information theory, forward error correction (FEC), also called channel coding, is a system of error control for data transmission whereby the sender adds systematically generated redundant data to its messages, known as an error-correcting code. American mathematician Richard Hamming pioneered this field in the 1940s and invented the first FEC code, the Hamming (7,4) code, in 1950.

• The carefully designed redundancy allows the receiver to detect and correct a limited number of errors occurring anywhere in the message without the need to ask the sender for additional data. FEC gives the receiver the ability to correct errors without needing a reverse channel to request retransmission of data, but this advantage comes at the cost of a fixed, higher forward-channel bandwidth.

• FEC is therefore applied in situations where retransmissions are relatively costly or impossible, such as when broadcasting to multiple receivers. In particular, FEC information is usually added to mass storage devices to enable recovery of corrupted data.


Forward Error Correction

• Forward error correction techniques are widely used in satellite communication and broadcasting systems, allowing satellite operators to improve the link budget without the use of expensive power amplifiers and large dishes.

• Fundamental transmission times of around 500 ms point towards the use of Forward Error Correction (FEC) techniques (in preference to retransmission requests) to correct errors at the receiving terminal while avoiding significantly extending latency.

• The maximum fraction of errors or missing bits that can be corrected is determined by the design of the FEC code, so different forward error correcting codes are suitable for different conditions.


Classification

• The two main categories of FEC codes are block codes and convolutional codes.

 – Block codes work on fixed-size blocks (packets) of bits or symbols of predetermined size. Practical block codes can generally be decoded in time polynomial in their block length.

 – Convolutional codes work on bit or symbol streams of arbitrary length. They are most often decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity.


Block Codes

• There are many types of block codes, but among the classical ones the most notable is Reed-Solomon coding, because of its widespread use on the Compact Disc, the DVD, and in hard disk drives.

• Golay, BCH and Hamming codes are other examples ofclassical block codes.

• Block codes are referred to as (n, k) codes. A block of k information bits is encoded to become a block of n bits.


Convolutional Codes

• Convolutional codes are typically specified by three parameters, (n, k, K), where
 – n is the number of output (code) bits
 – k is the number of input (data) bits
 – K is the number of memory registers

• The code rate is defined as k/n, the same as for a block code.

• The additional parameter K refers to the memory of the code. Convolutional codes are not block oriented.

• The code can remember the previous (K-1) sets of k data bits. This data dependency gives convolutional codes great error-correction ability, at the cost of computational complexity.
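A minimal encoder sketch makes the (n, k, K) parameters concrete. The specific generator polynomials below are a common textbook choice and an assumption of this example, not taken from the slides:

```python
# Illustrative sketch: a rate-1/2, K = 3 convolutional encoder using
# the common generator polynomials G0 = 111 and G1 = 101 (7 and 5 in
# octal). Each input bit produces n = 2 output bits that depend on the
# current bit and the previous K - 1 = 2 bits held in a shift register.
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    reg = 0                                   # the K most recent bits
    out = []
    for b in bits:
        reg = ((reg << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(reg & g).count("1") % 2)   # mod-2 tap sum
    return out
```

Encoding the stream 1 0 1 1 yields the output pairs 11 10 00 01, showing how each output depends on earlier inputs.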


Block vs. Convolutional Codes

• Each type has its advantages and disadvantages:

• Block codes are well understood and relatively easy to implement. Error correction is limited to the block being processed. They are less susceptible to bursty errors.

• Convolutional codes are relatively complex but extremely powerful. Their performance in the presence of bursty errors can be unpredictable.

• We will first take a look at binary block codes and a brief introduction to Reed-Solomon (R-S) codes.


Definitions

• Hamming Weight: The Hamming weight w(U) of a codeword U is defined as the number of non-zero elements in U.

• Hamming Distance: In the binary world, distances between two binary words are measured by the Hamming distance. The Hamming distance is the number of disagreements between two binary sequences of the same size.

• It is a measure of how far apart binary objects are.
 – The Hamming distance between sequences 001 and 101 is 1
 – The Hamming distance between sequences 0011001 and 1010100 is 4

• Hamming distance and weight are very important and useful concepts in coding. Knowledge of the Hamming distance is used to determine the capability of a code to detect and correct errors.
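The two definitions above can be sketched in a few lines of Python (for equal-length binary strings):

```python
# Minimal sketch of Hamming weight and Hamming distance.
def hamming_weight(u):
    return sum(ch == "1" for ch in u)        # count non-zero elements

def hamming_distance(u, v):
    assert len(u) == len(v)                  # defined for equal sizes
    return sum(a != b for a, b in zip(u, v)) # count disagreements
```

Running it on the slide's examples gives hamming_distance("001", "101") == 1 and hamming_distance("0011001", "1010100") == 4.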


Code-space and Code Words

• Consider the simple (3,1) repetition code that we discussed earlier. Using 3 bits, there are 8 possible combinations, of which only 2 are valid code-words.

• Possible combinations (V = valid, NV = not valid):

000 001 010 011 100 101 110 111
 V   NV  NV  NV  NV  NV  NV  V

• The two valid codewords are 000 and 111. The Hamming distance between the two valid code-words is 3.
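A sketch of the (3,1) repetition code: encode by repeating the bit, decode by majority vote, which corrects any single bit error:

```python
# (3,1) repetition code: 0 -> 000, 1 -> 111; decode by majority vote.
def rep_encode(bit):
    return [bit] * 3

def rep_decode(word):
    # two or more ones wins the vote, so one flipped bit is corrected
    return 1 if sum(word) >= 2 else 0
```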


Error Correction

• The minimum Hamming distance between two code-words in the repetition code is dmin = 3.

• The error-correcting ability of a block code is directly related to its minimum Hamming distance. The number of errors that can be corrected is
 • t = integer((dmin - 1)/2)
 • The number of errors that can be detected is dmin - 1

• We want to have a code set with as large a Hamming distance as possible, since this directly affects our ability to detect errors. It also says that in order to have a large Hamming distance, we need to make our codewords longer.
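The two rules above, written as code:

```python
# A block code with minimum distance dmin can correct
# t = floor((dmin - 1) / 2) errors and detect dmin - 1 errors.
def correctable(dmin):
    return (dmin - 1) // 2

def detectable(dmin):
    return dmin - 1

# e.g., the (3,1) repetition code has dmin = 3: corrects 1, detects 2
```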


Single Bit Parity

• Consider a single even-parity-bit code with k = 3 and n = 4: (b2 b1 b0 p0)

• Parity bit p0 = b2 + b1 + b0 (modulo-2 addition)

 – How many code words are possible in this coding technique?
 – What is the Hamming weight of each code word?
 – What is the minimum Hamming distance between the codewords?
 – How many errors can it correct/detect?


Single Bit Parity

• For a (4,3) code, the number of possible codewords is 8. The codewords are listed below.

• The minimum Hamming distance between the codewords is 2.

• Number of bit errors it can detect = 1; number of bit errors it can correct = 0.

Codeword  0000 0011 0101 0110 1001 1010 1100 1111
Weight       0    2    2    2    2    2    2    4
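The table above can be reproduced by enumerating the (4,3) even-parity code and measuring its minimum distance:

```python
from itertools import product

# Enumerate the (4,3) even-parity code: append p0 = b2 ^ b1 ^ b0.
def par_encode(b2, b1, b0):
    return (b2, b1, b0, b2 ^ b1 ^ b0)

codewords = [par_encode(*bits) for bits in product((0, 1), repeat=3)]
weights = [sum(cw) for cw in codewords]

# minimum Hamming distance over all distinct codeword pairs
dmin = min(sum(a != b for a, b in zip(u, v))
           for u in codewords for v in codewords if u != v)
# dmin == 2: one error detectable, none correctable
```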


(6,3) Block Code

• Consider a (6,3) block code as shown below. The first 3 bits (b2, b1, b0) represent data and the last three bits (p2, p1, p0) are the parity bits. The relationship between the data and parity bits is as follows:

• p2=b2+b0, p1=b2+b1 and p0=b1+b0 

• How many code words are possible?

• What is the Hamming weight of the code-words?

• What is the minimum Hamming distance?


(6,3) Block Code

• The code words and the weights are as shown below:

• p2=b2+b0, p1=b2+b1 and p0=b1+b0

• Minimum distance dmin = 3, so it can correct 1 error.

b2      0 0 0 0 1 1 1 1
b1      0 0 1 1 0 0 1 1
b0      0 1 0 1 0 1 0 1
p2      0 1 0 1 1 0 1 0
p1      0 0 1 1 1 1 0 0
p0      0 1 1 0 0 1 1 0
weight  0 3 3 4 3 4 4 3
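The same check can be run in code, using the parity equations of this (6,3) code:

```python
from itertools import product

# The (6,3) code above: p2 = b2^b0, p1 = b2^b1, p0 = b1^b0.
def encode63(b2, b1, b0):
    return (b2, b1, b0, b2 ^ b0, b2 ^ b1, b1 ^ b0)

codewords = [encode63(*bits) for bits in product((0, 1), repeat=3)]
dmin = min(sum(a != b for a, b in zip(u, v))
           for u in codewords for v in codewords if u != v)
# dmin = 3, so t = (3 - 1) // 2 = 1 error can be corrected
```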


Hamming Codes

• For a Hamming code, the number of parity bits is equal to m and the number of data bits equal to 2^m − m − 1.

• Using our terminology, the code pairs are (n, k), where n = 2^m − 1 and k = 2^m − m − 1 = n − m.

• Each one of these codes can correct a single bit error and detect up to 2 bit errors. As m increases, the code efficiency increases as well and tends towards one.

m           3       4        5        6
(n, k)     (7,4)  (15,11)  (31,26)  (63,57)
Code Rate  0.5714  0.7333   0.8387   0.9048
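The (n, k) pairs in the table follow directly from m:

```python
# Hamming-code parameters: n = 2^m - 1, k = n - m.
# The rate k/n tends towards 1 as m grows.
def hamming_params(m):
    n = 2 ** m - 1
    return n, n - m

# e.g., m = 3 gives the (7,4) code; m = 6 gives (63,57)
```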


(7,4) Code

• Consider a message having four data bits (D) which is to be transmitted as a 7-bit codeword by adding three error-control bits. This would be called a (7,4) code. The three bits to be added are three EVEN parity bits (P), where the parity of each is computed on a different subset of the message bits, as shown below.

Bit position:    7   6   5   4   3   2   1
7-bit codeword:  D   D   D   D   P   P   P
Parity check 1:  D       D   D           P    (even parity)
Parity check 2:  D   D       D       P        (even parity)
Parity check 3:  D   D   D       P            (even parity)


(7,4) Code

• Why Those Bits? - The three parity bits (1, 2, 3) are related to the data bits (4, 5, 6, 7). Each overlapping circle corresponds to one parity bit and defines the four bits contributing to that parity computation. For example, data bit 4 contributes to parity bits 1 and 2. Each circle (parity bit) encompasses a total of four bits, and each circle must have EVEN parity. Given four data bits, the three parity bits can easily be chosen to ensure this condition.
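A sketch of this scheme in Python. The exact parity subsets below are an assumption: they are one assignment consistent with the description (data bit 4 lies in checks 1 and 2, bit 7 in all three), and the slides' own assignment may differ:

```python
# Hedged sketch of a (7,4) code: positions 1-3 are parity, 4-7 are
# data; each check set (one "circle") must have EVEN parity.
CHECKS = {1: {1, 4, 5, 7}, 2: {2, 4, 6, 7}, 3: {3, 5, 6, 7}}

def h74_encode(d4, d5, d6, d7):
    bits = {4: d4, 5: d5, 6: d6, 7: d7}
    for p, members in CHECKS.items():
        # choose the parity bit so its whole circle has even parity
        bits[p] = sum(bits[m] for m in members if m != p) % 2
    return [bits[i] for i in range(1, 8)]

def h74_correct(word):
    bits = {i + 1: b for i, b in enumerate(word)}
    failing = {p for p, members in CHECKS.items()
               if sum(bits[m] for m in members) % 2 != 0}
    if failing:
        # the unique bit lying in exactly the failing circles is flipped
        for pos in range(1, 8):
            if {p for p in CHECKS if pos in CHECKS[p]} == failing:
                bits[pos] ^= 1
                break
    return [bits[i] for i in range(1, 8)]
```

Flipping any single bit of a codeword produces a distinct set of failing checks, which is why one error is always correctable.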


Designing Block Codes

• So far we have looked at: – What is Forward Error Correction? What are different types of

FEC?

 – What are block codes?

 – Relationship between minimum Hamming distance and errorcorrection and detection ability of a code.

 – The metrics used to compare different block code.

• Code Rate

• Probability of frame error

 – How to compute the frame error rate for different codes?

 – How do you compare the performance for different errorcorrection codes?

 – How do we design a code for the given requirements?


Probability of Bit Error

• So far we have discussed the frame error rate. The frame error rate depends upon the number of bit errors that can be corrected. The bit error rate can be estimated using the equations shown below, where t is the number of errors that can be corrected by the code.

Message (block) error rate:

P_M ≈ Σ (j = t+1 to n)  [ n! / ( j! (n−j)! ) ] p^j (1−p)^(n−j)

Bit error rate:

P_b ≈ (1/n) Σ (j = t+1 to n)  j [ n! / ( j! (n−j)! ) ] p^j (1−p)^(n−j)
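These expressions can be evaluated directly; a sketch for a t-error-correcting (n, k) code on a binary symmetric channel with crossover probability p:

```python
from math import comb

# Evaluate the message- and bit-error-rate expressions above.
def message_error_rate(n, t, p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

def bit_error_rate(n, t, p):
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

# e.g., the (7,4) Hamming code corrects t = 1 error; at p = 0.01
# this gives a bit error rate of about 5.85e-4
pb = bit_error_rate(7, 1, 0.01)
```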


Comparing Probability of Bit Error

• Comparison over various Hamming codes, for a binary symmetric channel (BSC) with p = 0.01.

Code     Code Rate  Probability of Bit Error
(7,4)    0.5714     5.8520e-04
(15,11)  0.7333     0.0013
(31,26)  0.8387     0.0026
(63,57)  0.9048     0.0046