Posted on 15-Jan-2016
Chapter 11
Error-Control Coding
Lecture edition by K. Heikkinen
Chapter 11 goals
• To understand error-correcting codes in use, their underlying theorems and principles
– block codes, convolutional codes, etc.
Chapter 11 contents
• Introduction
• Discrete Memoryless Channels
• Linear Block Codes
• Cyclic Codes
• Convolutional Codes
• Maximum Likelihood Decoding of Convolutional Codes
• Trellis-Coded Modulation
• Coding for Compound-Error Channels
Introduction
• The goal is a cost-effective facility for transmitting information at a given rate and a given level of reliability and quality
– determined by the signal energy per bit to noise power density ratio
– achieved practically via error-control coding
• Error-control methods
• Error-correcting codes
Discrete Memoryless Channels
• Discrete memoryless channels (see fig. 11.1) are described by a set of transition probabilities
– in the simplest form, binary coding {0,1} is used, of which the BSC is an appropriate example
– channel noise is modelled as an additive white Gaussian noise (AWGN) channel
• the two above use so-called hard-decision decoding
– other solutions use so-called soft-decision decoding
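As a sketch of the hard-decision view, a binary symmetric channel can be simulated by flipping each transmitted bit independently with a crossover probability p (the function and parameter names below are illustrative, not from the book):

```python
import random

def bsc(bits, p, rng=random.Random(0)):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

sent = [0, 1, 1, 0, 1, 0, 0, 1]
received = bsc(sent, p=0.1)   # a hard-decision receiver sees only these 0/1 values
```

A soft-decision receiver would instead pass unquantized (or multi-level) channel outputs to the decoder.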
Linear Block Codes
• A code is said to be linear if any two code words in the code can be added in modulo-2 arithmetic to produce a third code word in the code
• In a systematic linear block code, each code word has n bits, of which k bits are identical to the message sequence
• The remaining n-k bits are computed from the message bits in accordance with a prescribed encoding rule that determines the mathematical structure of the code
– these bits are also called parity bits
Linear Block Codes
• Normally code equations are written in matrix form (with a 1-by-k message vector)
– P is the k-by-(n-k) coefficient matrix
– I_k is the k-by-k identity matrix
– G is the k-by-n generator matrix
• Another way to show the relationship between the message bits and parity bits:
– H is the (n-k)-by-n parity-check matrix
Linear Block Codes
• In syndrome decoding, the generator matrix (G) is used in the encoding at the transmitter and the parity-check matrix (H) in the decoding at the receiver
– if a bit is corrupted, r = c + e, which leads to two important properties
• the syndrome depends only on the error pattern, not on the transmitted code word
• all error patterns that differ by a code word have the same syndrome
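These relationships can be sketched with a small systematic (7,4) code, taking G = [I_k | P] and H = [P^T | I_(n-k)] (one common convention; the particular P below is an illustrative choice, not necessarily the book's):

```python
import numpy as np

# illustrative k-by-(n-k) coefficient matrix for a (7,4) code
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])        # k-by-n generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])      # (n-k)-by-n parity-check matrix
assert (G @ H.T % 2 == 0).all()                 # G and H are consistent

m = np.array([1, 0, 1, 1])                      # 1-by-k message vector
c = m @ G % 2                                   # code word = message bits + parity bits

e = np.zeros(7, dtype=int); e[2] = 1            # single-bit error pattern
r = (c + e) % 2                                 # received word r = c + e
s = H @ r % 2                                   # syndrome
assert (s == H @ e % 2).all()                   # depends only on e, not on c
```

The final assertion demonstrates the first property above: the syndrome of r equals the syndrome of the error pattern alone.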
Linear Block Codes
• The Hamming distance (and the minimum distance) can be used to measure how much code words differ
• We have 2^k code vectors, whose subsets constitute a standard array for an (n,k) linear block code
• We pick the most likely error pattern for each coset
– coset leaders are the most probable error patterns
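A minimal sketch of computing the minimum distance, here on the small code of the example on the next slide (for a linear code, this equals the minimum weight of a nonzero code word):

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

code = ["000000", "100101", "111010", "011111"]
d_min = min(hamming_distance(a, b) for a, b in combinations(code, 2))
# with d_min = 3, the code can correct floor((d_min - 1) / 2) = 1 error
```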
Linear Block Codes
• Example: let H be a parity-check matrix whose vectors are
– (1110), (0101), (0011), (0001), (1000), (1111)
– the code generator G gives us the following code words (c):
• 000000, 100101, 111010, 011111
– let us find n, k and n-k
– what will we find if we multiply Hc?
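Reading the six vectors as the columns of H (one consistent interpretation), the questions can be answered numerically: there are 4 = 2^k code words, so k = 2, n = 6, n-k = 4, and Hc = 0 (mod 2) for every code word:

```python
import numpy as np

# columns of H: (1110), (0101), (0011), (0001), (1000), (1111)
H = np.array([(1, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1),
              (0, 0, 0, 1), (1, 0, 0, 0), (1, 1, 1, 1)]).T
code = [(0, 0, 0, 0, 0, 0), (1, 0, 0, 1, 0, 1),
        (1, 1, 1, 0, 1, 0), (0, 1, 1, 1, 1, 1)]
for c in code:
    assert (H @ np.array(c) % 2 == 0).all()   # every code word satisfies Hc = 0
n, k = 6, 2                                    # 2**k = 4 code words, n - k = 4 parity checks
```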
Linear Block Codes
Examples of (7,4) Hamming code words and error patterns
Cyclic Codes
• Cyclic codes form a subclass of linear block codes
• A binary code is said to be cyclic if it exhibits the two following properties
– the sum of any two code words in the code is also a code word (linearity)
• this means we are speaking of linear block codes
– any cyclic shift of a code word in the code is also a code word (cyclic property)
• mathematically expressed in polynomial notation
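Both properties can be checked numerically on a small cyclic code; the generator polynomial g(x) = 1 + x + x^3 used here is an assumption (it generates a (7,4) cyclic Hamming code), with polynomial coefficients stored lowest-degree-first:

```python
from itertools import product

def poly_mul_mod(a, b, n=7):
    """Multiply binary polynomials a(x)*b(x) and reduce modulo x^n + 1."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[(i + j) % n] ^= bj
    return tuple(out)

g = [1, 1, 0, 1]   # g(x) = 1 + x + x^3, assumed generator of a (7,4) cyclic code
code = {poly_mul_mod(m, g) for m in product([0, 1], repeat=4)}

for c in code:
    assert c[-1:] + c[:-1] in code                         # cyclic shift property
    for d in code:
        assert tuple(x ^ y for x, y in zip(c, d)) in code  # linearity property
```

The cyclic property holds because g(x) divides x^7 + 1, so multiplying a code polynomial by x modulo x^7 + 1 yields another multiple of g(x).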
Cyclic Codes
• The generator polynomial plays a major role in the generation of cyclic codes
• If we have a generator polynomial g(x) of an (n,k) cyclic code, its k shifted polynomials let us create the generator matrix (G)
• The syndrome polynomial of the received code word corresponds to the error polynomial
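The syndrome polynomial is the remainder of the received polynomial r(x) divided by g(x); a sketch (again assuming g(x) = 1 + x + x^3, coefficients lowest-degree-first):

```python
def poly_mod(r, g):
    """Remainder of binary polynomial r(x) divided by g(x)."""
    r = list(r)
    for i in range(len(r) - 1, len(g) - 2, -1):   # long division, top coefficient down
        if r[i]:
            for j, gj in enumerate(g):
                r[i - len(g) + 1 + j] ^= gj
    return r[:len(g) - 1]

g = [1, 1, 0, 1]                         # g(x) = 1 + x + x^3 (assumed)
c = [1, 1, 0, 1, 0, 0, 0]                # a code word: g(x) itself
e = [0, 0, 0, 0, 1, 0, 0]                # error polynomial e(x) = x^4
r = [ci ^ ei for ci, ei in zip(c, e)]
assert poly_mod(c, g) == [0, 0, 0]       # code words leave zero remainder
assert poly_mod(r, g) == poly_mod(e, g)  # syndrome corresponds to the error polynomial
```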
Cyclic Codes
• Example: a (7,4) cyclic code has a block length of 7; let us find the polynomials that generate the code (see example 3 in the book)
– find the code polynomials
– find the generator matrix (G) and the parity-check matrix (H)
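A sketch of one possible answer, assuming the example uses g(x) = 1 + x + x^3 (so h(x) = (x^7 + 1)/g(x) = 1 + x + x^2 + x^4): the rows of G are shifts of g(x), and the rows of H are shifts of the reciprocal of h(x):

```python
import numpy as np

g = [1, 1, 0, 1]            # g(x) = 1 + x + x^3 (assumed generator polynomial)
h = [1, 1, 1, 0, 1]         # h(x) = (x^7 + 1) / g(x) = 1 + x + x^2 + x^4

n, k = 7, 4
G = np.zeros((k, n), dtype=int)
for i in range(k):
    G[i, i:i + len(g)] = g              # row i holds the coefficients of x^i * g(x)

h_rev = h[::-1]                         # reciprocal polynomial x^4 * h(1/x)
H = np.zeros((n - k, n), dtype=int)
for i in range(n - k):
    H[i, i:i + len(h_rev)] = h_rev

assert (G @ H.T % 2 == 0).all()         # every code word satisfies the parity checks
```

This G is not in systematic form; row operations would bring it to the [I_k | P] form used for linear block codes earlier.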
Cyclic Codes
• Other remarkable cyclic codes
– cyclic redundancy check (CRC) codes
– Bose-Chaudhuri-Hocquenghem (BCH) codes
– Reed-Solomon codes
Convolutional Codes
• Convolutional codes operate in a serial, bit-by-bit manner, which suits applications where the message bits arrive serially rather than in large blocks
• The encoder of a convolutional code can be viewed as a finite-state machine that consists of an M-stage shift register with prescribed connections to n modulo-2 adders, and a multiplexer that serializes the outputs of the adders
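The finite-state-machine view can be sketched for a rate-1/2 encoder with a 2-stage shift register; the tap connections (generators 7 and 5 in octal) are an illustrative assumption, not necessarily the book's figure:

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder: each input bit produces len(gens) output bits."""
    state = [0, 0]                       # 2-stage shift register
    out = []
    for b in bits:
        window = [b] + state             # current input plus register contents
        for g in gens:                   # one modulo-2 adder per generator
            out.append(sum(x & t for x, t in zip(window, g)) & 1)
        state = [b, state[0]]            # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])      # 4 message bits -> 8 channel bits
```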
Convolutional Codes
• Convolutional codes are portrayed in graphical form by using three different diagrams
– code tree
– trellis
– state diagram
Maximum Likelihood Decoding of Convolutional Codes
• We can define a log-likelihood function for decoding a convolutional code; over a binary symmetric channel, maximizing it amounts to choosing the code sequence at the smallest Hamming distance from the received sequence
• The book presents an example algorithm (Viterbi)
– the Viterbi algorithm is a maximum-likelihood decoder, which is optimum for an AWGN channel (see fig. 11.17)
• initialisation step
• computation step
• final step
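The three steps can be sketched as a hard-decision Viterbi decoder for a rate-1/2, 4-state convolutional code (generators 7 and 5 in octal; an illustrative choice, not necessarily fig. 11.17):

```python
def viterbi_decode(received, gens=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding; the path metric is Hamming distance."""
    INF = float("inf")
    # initialisation step: encoder assumed to start in the all-zero state
    metric = [0, INF, INF, INF]          # best metric of any path ending in each state
    paths = [[], None, None, None]       # surviving input sequence for each state
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        # computation step: extend every survivor by both possible input bits
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1       # register contents (most recent bit first)
            for b in (0, 1):
                window = (b, s1, s0)
                out = [sum(x & g for x, g in zip(window, gen)) & 1 for gen in gens]
                dist = sum(o != ri for o, ri in zip(out, r))
                ns = (b << 1) | s1       # next state after shifting in b
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    # final step: pick the survivor with the smallest metric
    return paths[metric.index(min(metric))]

# [1,1,1,0,0,0,0,1] encodes [1,0,1,1] with this code; one channel error is corrected
decoded = viterbi_decode([1, 1, 0, 0, 0, 0, 0, 1])
```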
Trellis-Coded Modulation
• Here coding is described as a process of imposing certain patterns on the transmitted signal
• Trellis-coded modulation has three features
– the number of signal points is larger than what is required, therefore allowing redundancy without sacrificing bandwidth
– convolutional coding is used to introduce a certain dependency between successive signal points
– Soft-decision decoding is done in the receiver
Coding for Compound-Error Channels
• Compound-error channels exhibit both independent and burst error statistics (e.g. PSTN channels, radio channels)
• Error-protection methods: automatic repeat-request (ARQ) and forward error correction (FEC)