
Chapter 11: Error-Control Coding (Lecture edition by K.Heikkinen)

Post on 15-Jan-2016





TRANSCRIPT

Page 1

Chapter 11: Error-Control Coding

Lecture edition by K.Heikkinen

Page 2

Chapter 11 goals

• To understand the error-correcting codes in use, their theorems and their principles
– block codes, convolutional codes, etc.

Page 3

Chapter 11 contents

• Introduction

• Discrete Memoryless Channels

• Linear Block Codes

• Cyclic Codes

• Convolutional Codes

• Maximum Likelihood Decoding of Convolutional Codes

• Trellis-Coded Modulation

• Coding for Compound-Error Channels

Page 4

Introduction

• A cost-effective facility for transmitting information at a given rate and level of reliability and quality
– measured by the signal energy per bit to noise power density ratio (Eb/N0)
– achieved practically via error-control coding

• Error-control methods

• Error-correcting codes

Page 5

Discrete Memoryless Channels

Page 6

Discrete Memoryless Channels

• Discrete memoryless channels (see Fig. 11.1) are described by a set of transition probabilities
– in the simplest form, binary coding {0,1} is used, of which the BSC is an appropriate example
– channel noise is modelled as an additive white Gaussian noise (AWGN) channel
• the two above correspond to so-called hard-decision decoding
– other solutions: so-called soft-decision decoding
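The BSC mentioned above can be simulated directly; a minimal sketch in Python, with a hypothetical crossover probability p = 0.1:

```python
# Sketch of a binary symmetric channel (BSC), the hard-decision
# discrete memoryless channel described above. The crossover
# probability p = 0.1 is a hypothetical example value.
import random

def bsc(bits, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(0)
tx = [0, 1, 1, 0, 1, 0, 0, 1] * 1000
rx = bsc(tx, p=0.1, rng=rng)
errors = sum(a != b for a, b in zip(tx, rx))
print(f"observed error rate: {errors / len(tx):.3f}")  # close to p = 0.1
```

Hard-decision decoding works on the 0/1 outputs of such a channel; soft-decision decoding would instead keep the unquantised channel values.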

Page 7

Linear Block Codes

• A code is said to be linear if any two words in the code can be added in modulo-2 arithmetic to produce a third code word in the code

• A linear block code has n bits, of which k bits are always identical to the message sequence

• The remaining n-k bits are computed from the message bits in accordance with a prescribed encoding rule that determines the mathematical structure of the code
– these bits are also called parity bits

Page 8

Linear Block Codes

• Normally the code equations are written in matrix form (with a 1-by-k message vector)
– P is the k-by-(n-k) coefficient matrix
– I_k is the k-by-k identity matrix
– G is the k-by-n generator matrix

• Another way to show the relationship between the message bits and the parity bits
– H is the parity-check matrix

Page 9

Linear Block Codes

• In syndrome decoding the generator matrix (G) is used in the encoding at the transmitter and the parity-check matrix (H) in the decoding at the receiver
– if a bit is corrupted, r = c + e, which leads to two important properties
• the syndrome depends only on the error pattern, not on the transmitted code word
• all error patterns that differ by a code word have the same syndrome

Page 10

Linear Block Codes

• The Hamming distance (and the minimum distance) can be used to measure the difference between code words

• We have a certain number (2^k) of code vectors, which together with their cosets constitute a standard array for an (n,k) linear block code

• We pick the error pattern of a given coset
– coset leaders are the most likely error patterns
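The 2^k code vectors and the minimum distance can be enumerated directly; a sketch, again with a hypothetical (6,3) code:

```python
# Sketch: for a linear code, the minimum distance equals the minimum
# Hamming weight over the nonzero code words. All 2^k code vectors
# are enumerated from the message space. The (6,3) coefficient
# matrix P is a hypothetical example.
import numpy as np
from itertools import product

P = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
k = P.shape[0]
G = np.hstack([P, np.eye(k, dtype=int)])

codewords = [tuple(np.array(m) @ G % 2) for m in product([0, 1], repeat=k)]
d_min = min(sum(c) for c in codewords if any(c))
print(len(codewords), d_min)          # → 8 3
```

A minimum distance of 3 means this example code can correct any single-bit error, consistent with the coset-leader picture above.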

Page 11

Linear Block Codes

• Example: let H be a parity-check matrix whose vectors are
– (1110), (0101), (0011), (0001), (1000), (1111)
– the code generator G gives us the following code words (c):
• 000000, 100101, 111010, 011111

– let us find n, k and n-k
– what will we find if we multiply Hc?
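The product Hc can be checked numerically. This sketch assumes the six listed 4-bit vectors are the columns of H, so that n = 6, n-k = 4 and k = 2:

```python
# Checking the example: read the six listed 4-bit vectors as the
# columns of the parity-check matrix H (an assumption consistent
# with the 6-bit code words), so n = 6, n-k = 4, k = 2.
import numpy as np

cols = ['1110', '0101', '0011', '0001', '1000', '1111']
H = np.array([[int(b) for b in col] for col in cols]).T   # 4-by-6 matrix

codes = ['000000', '100101', '111010', '011111']
for c in codes:
    v = np.array([int(b) for b in c])
    print(c, (H @ v % 2).tolist())    # every product Hc is the zero vector
```

Every listed code word gives the all-zero syndrome, which is exactly what Hc = 0 should yield for valid code words.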

Page 12

Linear Block Codes

Examples of (7,4) Hamming code words and error patterns
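A sketch of a (7,4) Hamming code in systematic form; the coefficient matrix P below is one standard choice, assumed here rather than copied from the book's figure:

```python
# Sketch of a (7,4) Hamming code, G = [P | I_4], with one standard
# choice of the 4-by-3 coefficient matrix P (an assumption). Each of
# the 7 single-bit error patterns has a distinct nonzero syndrome,
# namely a column of H, so all single-bit errors are correctable.
import numpy as np
from itertools import product

P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])

# all 2^4 = 16 code words
words = sorted(tuple(np.array(m) @ G % 2) for m in product([0, 1], repeat=4))
print(len(words))                      # → 16

# the 7 single-bit error patterns map to 7 distinct syndromes
syndromes = {tuple(H[:, j]) for j in range(7)}
print(len(syndromes))                  # → 7
```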

Page 13

Cyclic Codes

• Cyclic codes form a subclass of linear block codes
• A binary code is said to be cyclic if it exhibits the two following properties
– the sum of any two code words in the code is also a code word (linearity)
• this means we are speaking of linear block codes
– any cyclic shift of a code word in the code is also a code word (cyclic property)

• expressed mathematically in polynomial notation
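The cyclic property can be verified by brute force. This sketch assumes g(x) = 1 + x + x^3, a standard generator polynomial for the (7,4) cyclic code:

```python
# Sketch of the cyclic property: with generator polynomial
# g(x) = 1 + x + x^3 (a standard (7,4) cyclic-code generator, assumed
# here), every cyclic shift of a code word is again a code word.
from itertools import product

g = [1, 1, 0, 1]                       # coefficients of 1 + x + x^3

def polymul_mod2(a, b, n):
    """Multiply two GF(2) polynomials modulo x^n - 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[(i + j) % n] ^= ai & bj
    return tuple(out)

# code words are the multiples m(x) g(x) for all 2^4 messages m(x)
code = {polymul_mod2(m, g, 7) for m in product([0, 1], repeat=4)}
print(len(code))                       # → 16 code words

# a one-position cyclic shift of every code word stays in the code
shifted_ok = all(c[-1:] + c[:-1] in code for c in code)
print(shifted_ok)                      # → True
```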

Page 14

Cyclic Codes

• The generator polynomial plays a major role in the generation of cyclic codes

• If we have the generator polynomial g(x) of an (n,k) cyclic code, we can create the generator matrix (G) from a certain set of k polynomials

• The syndrome polynomial of the received code word corresponds to the error polynomial

Page 15

Cyclic Codes

Page 16

Cyclic Codes

• Example: a (7,4) cyclic code has a block length of 7; let us find the polynomials that generate the code (see Example 3 in the book)
– find the code polynomials
– find the generator matrix (G) and parity-check matrix (H)
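A sketch of the construction, assuming g(x) = 1 + x + x^3 (a common generator for the (7,4) cyclic code; the book's example may use a different one): the k = 4 polynomials g(x), x g(x), x^2 g(x), x^3 g(x) give the rows of a generator matrix G.

```python
# Sketch: build a (non-systematic) generator matrix for the (7,4)
# cyclic code from the k shifted copies of the generator polynomial
# g(x) = 1 + x + x^3 (an assumed standard choice).
import numpy as np

g = np.array([1, 1, 0, 1])             # 1 + x + x^3, degree n - k = 3
n, k = 7, 4
G = np.zeros((k, n), dtype=int)
for i in range(k):
    G[i, i:i + len(g)] = g             # row i holds the coefficients of x^i g(x)
print(G)

# code word for a message m(x): the coefficients of m(x) g(x)
m = np.array([1, 0, 1, 1])
print(m @ G % 2)                       # → [1 1 1 1 1 1 1]
```

Row reduction of this G toward [P | I_k] form would recover the systematic generator and parity-check matrices asked for above.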

Page 17

Cyclic Codes

• Other remarkable cyclic codes
– cyclic redundancy check (CRC) codes
– Bose-Chaudhuri-Hocquenghem (BCH) codes
– Reed-Solomon codes
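A CRC computation is a direct application of cyclic codes: shift the message by n-k positions, divide by the generator polynomial in modulo-2 arithmetic, and append the remainder. A sketch with a hypothetical small generator x^3 + x + 1:

```python
# Sketch of a CRC computation (a cyclic-code application). The 3-bit
# generator x^3 + x + 1 is a hypothetical small example, not a
# standardised CRC polynomial.
def crc_remainder(bits, gen):
    """GF(2) long division of bits (list of 0/1, MSB first) by gen."""
    r = len(gen) - 1
    work = bits + [0] * r                      # message shifted by x^(n-k)
    for i in range(len(bits)):
        if work[i]:                            # subtract (XOR) the divisor
            for j, gbit in enumerate(gen):
                work[i + j] ^= gbit
    return work[-r:]

msg = [1, 0, 1, 1, 0, 1]
gen = [1, 0, 1, 1]                             # x^3 + x + 1
rem = crc_remainder(msg, gen)
codeword = msg + rem
print(rem, crc_remainder(codeword, gen))       # → [0, 1, 1] [0, 0, 0]
```

The receiver repeats the division; a nonzero remainder (syndrome polynomial) signals a transmission error.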

Page 18

Convolutional Codes

• Convolutional codes operate in a serial manner, which suits applications where the message bits arrive serially rather than in large blocks

• The encoder of a convolutional code can be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders
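The shift-register view can be sketched as follows; the rate-1/2 encoder with generator taps (111, 101) is an assumed example, not necessarily the one in the book:

```python
# Sketch of a convolutional encoder as a finite-state machine: a
# 2-stage shift register feeding two modulo-2 adders whose outputs
# are multiplexed. The tap pair (111, 101) is an assumed example.
def conv_encode(bits, taps=((1, 1, 1), (1, 0, 1))):
    state = [0, 0]                          # M-stage shift register, M = 2
    out = []
    for b in bits:
        window = [b] + state                # current bit plus register contents
        for t in taps:                      # one modulo-2 adder per output line
            out.append(sum(x & y for x, y in zip(t, window)) % 2)
        state = [b] + state[:-1]            # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))            # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces two output bits, so this example encoder has code rate 1/2.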

Page 19

Convolutional Codes

• Convolutional codes are portrayed in graphical form by using three different diagrams
– code tree
– trellis
– state diagram

Page 20

Maximum Likelihood Decoding of Convolutional Codes

• We can create a log-likelihood function for a convolutional code that has a certain Hamming distance

• The book presents an example algorithm (Viterbi)
– the Viterbi algorithm is a maximum-likelihood decoder, which is optimum for an AWGN channel (see Fig. 11.17)
• initialisation
• computation step
• final step
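The three steps can be sketched for a rate-1/2 convolutional code with generator taps (111, 101) (an assumed example), using hard-decision Hamming metrics: initialise the path metrics, iterate add-compare-select over the trellis, then trace back from the best final state.

```python
# Sketch of hard-decision Viterbi decoding for a rate-1/2
# convolutional code with assumed taps (111, 101) and a 2-stage
# register, following the three steps: initialisation, computation
# (add-compare-select), final traceback. Assumes an all-zero start state.
def encode_step(state, b, taps=((1, 1, 1), (1, 0, 1))):
    window = [b] + list(state)
    out = tuple(sum(x & y for x, y in zip(t, window)) % 2 for t in taps)
    return out, (b,) + state[:-1]

def viterbi(received):                      # received: flat list of channel bits
    pairs = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # initialisation: only the all-zero state is reachable at time 0
    metric = {s: (0 if s == (0, 0) else float('inf')) for s in states}
    history = []
    for r in pairs:                         # computation step (add-compare-select)
        nxt = {s: (float('inf'), None, None) for s in states}
        for s in states:
            for b in (0, 1):
                out, ns = encode_step(s, b)
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < nxt[ns][0]:
                    nxt[ns] = (m, s, b)     # keep the survivor path
        history.append(nxt)
        metric = {s: nxt[s][0] for s in states}
    # final step: trace back from the minimum-metric state
    state = min(metric, key=metric.get)
    bits = []
    for nxt in reversed(history):
        _, prev, b = nxt[state]
        bits.append(b)
        state = prev
    return bits[::-1]

tx = [1, 1, 1, 0, 0, 0, 0, 1]               # encoding of [1, 0, 1, 1]
rx = tx[:]
rx[2] ^= 1                                  # one channel bit flipped
print(viterbi(rx))                          # → [1, 0, 1, 1]
```

Even with one corrupted channel bit, the survivor path with the smallest Hamming metric recovers the original message.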

Page 21

Trellis-Coded Modulation

• Here coding is described as a process of imposing certain patterns on the transmitted signal

• Trellis-coded modulation has three features
– the number of signal points is larger than what is required, therefore allowing redundancy without sacrificing bandwidth
– convolutional coding is used to introduce a certain dependency between successive signal points
– soft-decision decoding is done in the receiver

Page 22

Coding for Compound-Error Channels

• Compound-error channels exhibit both independent and burst error statistics (e.g. PSTN channels, radio channels)

• Error-protection methods: automatic repeat request (ARQ) and forward error correction (FEC)