Notes for Turbo Codes



    History of channel codes: Shannon's channel coding theorem

    Kinds of channel codes: block codes, convolutional codes, concatenated codes

    Turbo codes: a kind of concatenated code that uses an interleaver


    Types of codes

    Block codes

    Figure 2: A channel encoder that generates an (n,k) block code.

    Code rate: $r = k/n$. Channel data rate: $R_c = (n/k)\,R_s$, where $R_s$ denotes the bit rate of the information source.

    Convolutional codes


    Figure 3: A convolutional encoder with memory M that encodes the incoming bits serially.

    Modulo-2 arithmetic (taking the remainder when dividing by 2): $1 \oplus 1 = 0$, $1 \oplus 0 = 1$, $0 \oplus 0 = 0$.

    Block codes. A binary block code C of block length n is a subset of the set of all binary n-tuples $\mathbf{x} = (x_1, x_2, \dots, x_n)$, where $x_i \in \{0, 1\}$ for $i = 1, \dots, n$. An n-tuple belonging to the code is called a codeword or code vector of the code.

    If the subset is a vector space over {0,1}, the binary block code is then a linear block code.

    Minimum Hamming distance

    The Hamming weight of an n-tuple $\mathbf{x}$, denoted $w(\mathbf{x})$, is the number of nonzero components of the n-tuple.

    The Hamming distance between any two n-tuples $\mathbf{x}$ and $\mathbf{y}$, denoted $d(\mathbf{x}, \mathbf{y})$, is the number of positions in which their components differ. It is clear that $d(\mathbf{x}, \mathbf{y}) = w(\mathbf{x} - \mathbf{y})$, where $\mathbf{x} - \mathbf{y}$ is defined as $(x_1 - y_1, \dots, x_n - y_n)$ (a component-by-component modulo-2 subtraction).

    The minimum Hamming distance, $d_{\min}$, of the block code C is the smallest Hamming distance between pairs of distinct codewords.
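    As a concrete illustration (not from the notes), a brute-force Python sketch of Hamming weight, Hamming distance, and the minimum distance of a small code; the five-bit code used here is my own illustrative choice:

    ```python
    # Hamming weight, Hamming distance, and minimum distance, by brute force.
    from itertools import combinations

    def weight(x):
        """Hamming weight: number of nonzero components."""
        return sum(x)

    def distance(x, y):
        """Hamming distance: number of positions in which x and y differ."""
        return sum(xi != yi for xi, yi in zip(x, y))

    # d(x, y) = w(x - y), with component-by-component modulo-2 subtraction:
    x, y = (1, 0, 1, 1, 0), (0, 0, 1, 0, 1)
    diff = tuple((xi - yi) % 2 for xi, yi in zip(x, y))
    assert distance(x, y) == weight(diff)

    # minimum distance: smallest distance over pairs of distinct codewords
    code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
    d_min = min(distance(a, b) for a, b in combinations(code, 2))
    print(d_min)  # 3 for this code
    ```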



    Correction and detection ability of a block code

    When $\mathbf{c}$ is the actual codeword and $\mathbf{r}$ its possibly corrupted received version, the error pattern is the n-tuple $\mathbf{e}$ defined by

    $$\mathbf{e} = \mathbf{r} \oplus \mathbf{c}$$

    (component-wise modulo-2 addition, so that $\mathbf{r} = \mathbf{c} \oplus \mathbf{e}$). The number of errors is just $w(\mathbf{e})$.

    Error detection: We say that a code can detect all patterns of t or fewer errors if the decoder chosen never incorrectly decodes whenever the number of errors is less than or equal to t. This is possible if and only if $d_{\min} \ge t + 1$.

    Error correction: We say that a code can correct all patterns of t or fewer errors if the decoder chosen correctly decodes whenever the number of errors is less than or equal to t. This is possible if and only if $d_{\min} \ge 2t + 1$.

    The implicit decoding rule: decode to the nearest codeword in terms of Hamming distance.
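    A minimal sketch of this rule in Python, assuming the same small illustrative code as above (brute force, so only practical for small codes):

    ```python
    # Nearest-codeword (minimum Hamming distance) decoding, by brute force.
    def distance(x, y):
        return sum(xi != yi for xi, yi in zip(x, y))

    code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]

    def nearest_codeword(r):
        return min(code, key=lambda c: distance(c, r))

    # This code has d_min = 3, so any single error (2t + 1 <= d_min, t = 1)
    # is corrected:
    sent = (1, 1, 1, 0, 0)
    received = (1, 0, 1, 0, 0)                  # one bit flipped in transit
    print(nearest_codeword(received) == sent)   # True
    ```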

    Linear codes

    A binary block code C is linear if, for $\mathbf{x}$ and $\mathbf{y}$ in C, $\mathbf{x} \oplus \mathbf{y}$ is also in C.

    The minimum Hamming distance of a linear block code is equal to the smallest weight of the

    nonzero codewords in the code.

    Example: a linear code that contains the codewords 011 and 101 must also contain $011 \oplus 101 = 110$, and so on.

    If we take k linearly independent n-tuples $\mathbf{g}_1, \mathbf{g}_2, \dots, \mathbf{g}_k$ and form the matrix

    $$G = \begin{bmatrix} \mathbf{g}_1 \\ \mathbf{g}_2 \\ \vdots \\ \mathbf{g}_k \end{bmatrix},$$

    an (n,k) block code may then be conveniently represented as

    $$\mathbf{c} = \mathbf{m} G,$$

    where $\mathbf{c}$ is the codeword corresponding to the message block $\mathbf{m} = (m_1, m_2, \dots, m_k)$.

    G is called the generator matrix of the code.

    A code is systematic if the generator matrix G is of the form

    $$G = [\, I_k \;|\; P \,],$$

    where $I_k$ is a $k \times k$ identity matrix and P is a $k \times (n-k)$ matrix. The generator matrix is also said to be systematic.

    In a systematic code, the first k bits of the codeword are the same as the corresponding message bits.

    For the generator matrix G of a linear code, there exists an $(n-k) \times n$ matrix H whose n-k rows are linearly independent, such that

    $$G H^T = \mathbf{0}.$$

    The matrix H is called the parity-check matrix of the code.

    For any codeword $\mathbf{c}$ in the code, $\mathbf{c} H^T = \mathbf{0}$.

    If the code is systematic,

    $$H = [\, P^T \;|\; I_{n-k} \,],$$

    where $I_{n-k}$ is an $(n-k) \times (n-k)$ identity matrix. Clearly, the n-k rows are linearly independent and $G H^T = P + P = \mathbf{0}$.
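    A small Python sketch verifying these relations for a systematic code; the parity submatrix P below is an arbitrary illustrative choice, not one from the notes:

    ```python
    # Check G = [I_k | P], H = [P^T | I_{n-k}], G H^T = 0 and c H^T = 0.
    import numpy as np

    k, n = 3, 6
    P = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])                      # illustrative choice

    G = np.hstack([np.eye(k, dtype=int), P])       # generator matrix
    H = np.hstack([P.T, np.eye(n - k, dtype=int)]) # parity-check matrix

    print((G @ H.T) % 2)          # all-zero matrix: G H^T = 0

    m = np.array([1, 0, 1])       # a message block
    c = (m @ G) % 2               # codeword c = m G
    print(c[:k])                  # first k bits equal the message bits
    print((c @ H.T) % 2)          # syndrome of a codeword is the zero vector
    ```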


    Syndrome decoding

    Given a received vector $\mathbf{r}$, the receiver has the task of decoding $\mathbf{c}$ from $\mathbf{r}$.

    Syndrome: $\mathbf{s} = \mathbf{r} H^T$.

    • The syndrome depends only on the error pattern, and not on the transmitted codeword.
    • All error patterns that differ by a codeword have the same syndrome.

    Standard array for an (n,k) linear code:

    1. The $2^k$ codewords are placed in a row, with the all-zero codeword $\mathbf{c}_1$ as the left-most one.
    2. An error pattern $\mathbf{e}_2$ is picked and placed under $\mathbf{c}_1$, and a second row is formed by adding $\mathbf{e}_2$ to each of the remaining codewords in the first row. ($\mathbf{e}_2$ must not have appeared previously and should have the least Hamming weight among the remaining n-tuples.)
    3. Step 2 is repeated until all $2^n$ possible n-tuples have been accounted for.

    Each row in the standard array is called a coset. The left-most element of a row is called the coset leader of the coset. Note that each coset has a unique syndrome.

    Syndrome decoding for an (n,k) linear block code:

    1. For the received vector $\mathbf{r}$, compute the syndrome $\mathbf{s} = \mathbf{r} H^T$.
    2. Within the coset characterized by the syndrome $\mathbf{s}$, identify the coset leader; call it $\mathbf{e}_0$.
    3. Compute the codeword $\mathbf{c} = \mathbf{r} \oplus \mathbf{e}_0$ as the decoded version of the received vector $\mathbf{r}$.

    Example: Consider a (6,3) linear code generated by a systematic matrix $G = [\, I_3 \;|\; P \,]$. We then have the parity-check matrix $H = [\, P^T \;|\; I_3 \,]$ and can build the standard array for the (6,3) linear code, as sketched in code below.
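    The generator matrix and standard array of this example did not survive transcription, so the following Python sketch uses a representative systematic (6,3) code (the choice of P is mine, not necessarily the original's). It builds the syndrome-to-coset-leader table and decodes by steps 1-3 above:

    ```python
    # Syndrome decoding of a systematic (6,3) linear block code.
    from itertools import product

    P = [[1, 1, 0],
         [0, 1, 1],
         [1, 0, 1]]                      # assumed parity submatrix
    n, k = 6, 3

    def encode(m):                       # c = m G with G = [I_k | P]
        parity = [sum(m[i] * P[i][j] for i in range(k)) % 2
                  for j in range(n - k)]
        return list(m) + parity

    def syndrome(r):                     # s = r H^T with H = [P^T | I_{n-k}]
        return tuple((sum(r[i] * P[i][j] for i in range(k)) + r[k + j]) % 2
                     for j in range(n - k))

    # coset leaders: for each syndrome, keep a minimum-weight error pattern
    leaders = {}
    for e in sorted(product([0, 1], repeat=n), key=sum):
        leaders.setdefault(syndrome(e), list(e))

    def decode(r):
        e0 = leaders[syndrome(r)]                        # step 2: coset leader
        return [(ri + ei) % 2 for ri, ei in zip(r, e0)]  # step 3: c = r + e0

    c = encode([1, 0, 1])
    r = c[:]
    r[4] ^= 1                 # introduce a single-bit error
    print(decode(r) == c)     # True: all single errors are corrected
    ```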


    Example (cont.): If the code is used for error correction on a BSC with transition probability p, the decoder decodes correctly exactly when the error pattern is a coset leader, so the probability of correct decoding is

    $$P_c = \sum_{\ell} p^{w_\ell} (1 - p)^{\,n - w_\ell},$$

    where $w_\ell$ denotes the Hamming weight of the $\ell$-th coset leader. The probability that the decoder commits an erroneous decoding is then $P_e = 1 - P_c$.

    To minimize $P_e$, the error patterns that are most likely to occur for a given channel should be chosen as the coset leaders.

    Example: The generator matrix for the repetition code of length 3 ({000, 111}) is $G = [\, 1 \;\; 1 \;\; 1 \,]$. Therefore, the parity-check matrix of the code is

    $$H = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}.$$

    Hamming codes

    For any positive integer $m \ge 3$, there exists a Hamming code (a linear code) with the following parameters: block length $n = 2^m - 1$, number of message bits $k = 2^m - m - 1$, number of parity bits $n - k = m$, and minimum distance $d_{\min} = 3$.

    The parity-check matrix H of this code consists of all the nonzero m-tuples as its columns. In systematic form, the columns of H are arranged in the following form:

    $$H = [\, Q \;|\; I_m \,],$$

    where $I_m$ is an $m \times m$ identity matrix and the submatrix Q consists of $2^m - m - 1$ columns, which are the m-tuples of weight 2 or more.
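    A short Python sketch of this construction for m = 3, i.e. the (7,4) Hamming code (the column ordering within Q is my own choice):

    ```python
    # Build the systematic parity-check matrix H = [Q | I_m] of a Hamming code.
    from itertools import product

    m = 3
    nonzero = [c for c in product([0, 1], repeat=m) if any(c)]  # nonzero m-tuples
    Q_cols = [c for c in nonzero if sum(c) >= 2]                # weight 2 or more
    I_cols = [tuple(1 if i == j else 0 for i in range(m)) for j in range(m)]
    columns = Q_cols + I_cols                                   # H = [Q | I_m]

    H = [[col[row] for col in columns] for row in range(m)]     # m x (2^m - 1)
    for row in H:
        print(row)
    # n = 2^m - 1 = 7, k = 2^m - m - 1 = 4, d_min = 3
    ```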

    Concatenated codes

    Figure 1: Original concatenated coding system.

    Concatenated codes are error-correcting codes that are constructed from two or more simpler codes in order to achieve good performance with reasonable complexity. Originally introduced by Forney in 1965 to address a theoretical issue, they became widely used in space communications in the 1970s. Turbo codes and other modern capacity-approaching codes may be regarded as elaborations of this approach.

    Capacity-approaching codes

    The field of channel coding was revolutionized by the invention of turbo codes by Berrou et al. in 1993 (Berrou et al. 1993). Turbo codes use multiple carefully chosen codes, a pseudo-random interleaver, and iterative decoding to approach the Shannon limit to within 1 dB.

    Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit [SHA]. These codes were invented at ENST Bretagne (now TELECOM Bretagne), France, at the beginning of the 1990s [BER]. The encoder is formed by the parallel concatenation of two convolutional codes separated by an interleaver (or permuter). An iterative process through the two corresponding decoders is used to decode the data received from the channel. Each elementary decoder passes to the other soft (probabilistic) information about each bit of the sequence to decode. This soft information, called extrinsic information, is updated at each iteration.

    Figure 1: Concatenated encoder and decoder


    Precursor

    In the 1960s, Forney [FOR] introduced the concept of concatenation to obtain coding and decoding schemes with high error-correction capacity. Typically, the inner encoder is a convolutional code, and the inner decoder, using the Viterbi algorithm, is able to process soft information, that is, probabilities or, in practice, logarithms of probabilities. The outer encoder is a block encoder, typically a Reed-Solomon encoder, and its associated decoder works with the binary decisions supplied by the inner decoder, as shown in Figure 1. As the former may deliver errors occurring in packets, the role of the deinterleaver is to spread these errors so as to make the outer decoding more efficient.

    Though the minimum Hamming distance is very large, the performance of such concatenated schemes is not optimal, for two reasons. First, some amount of information is lost due to the inability of the inner decoder to provide the outer decoder with soft information. Second, while the outer decoder benefits from the work of the inner one, the converse is not true. The decoder operation is clearly dissymmetric.

    To allow the inner decoder to produce soft decisions instead of binary decisions,

    modified versions of the Viterbi algorithm (SOVA: Soft-Output Viterbi algorithm) were

    proposed by Battail [BAT] and Hagenauer & Hoeher [HAG]. But soft inputs are not

    easy to handle in a Reed-Solomon decoder.


    The genesis of turbo codes

    The invention of turbo codes finds its origin in the will to compensate for the dissymmetry of the concatenated decoder of Figure 1. To do this, the concept of feedback, a well-known technique in electronics, is implemented between the two component decoders (Figure 2).

    Figure 2: Decoding the concatenated code with feedback

  • 8/6/2019 Notes for Turbo Codes

    11/15

    The use of feedback requires the existence of Soft-In/Soft-Out (SISO) decoding

    algorithms for both component codes. As the SOVA algorithm was already available

    at the time of the invention, the adoption of convolutional codes appeared natural for

    both codes. For reasons of bandwidth efficiency, serial concatenation is replaced with parallel concatenation. Parallel concatenation combining two codes with rates $R_1$ and $R_2$ gives a global rate equal to:

    $$R_p = \frac{R_1 R_2}{R_1 + R_2 - R_1 R_2}.$$

    This rate is higher than that of a serially concatenated code, which is

    $$R_s = R_1 R_2$$

    for the same values of $R_1$ and $R_2$, and the lower these rates, the larger the difference (for instance, $R_1 = R_2 = 1/2$ gives $R_p = 1/3$ but $R_s = 1/4$; a quick numeric check appears after the Figure 3 caption below). Thus, with the same performance of component codes, parallel concatenation offers a better global rate, but this advantage is lost when the rates come close to unity. Furthermore, in order to ensure a sufficiently large $d_{\min}$ for the concatenated code, classical non-systematic non-recursive convolutional codes (Figure 3.a) have to be replaced with recursive systematic convolutional (RSC) codes (Figure 3.b).

    Figure 3: (A) Non-systematic non-recursive convolutional code with polynomials 13, 15. (B) Recursive systematic convolutional (RSC) code with polynomials 13 (recursivity), 15 (parity).
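    A quick numeric check of the two rate formulas (illustrative only):

    ```python
    # Global rate of parallel vs. serial concatenation of two codes.
    def parallel_rate(r1, r2):
        return (r1 * r2) / (r1 + r2 - r1 * r2)

    def serial_rate(r1, r2):
        return r1 * r2

    for r in (0.5, 0.75, 0.9):
        print(r, parallel_rate(r, r), serial_rate(r, r))
    # At R1 = R2 = 1/2: parallel gives 1/3, serial gives 1/4;
    # the advantage shrinks as the component rates approach unity.
    ```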

  • 8/6/2019 Notes for Turbo Codes

    12/15

    Figure 4: A turbo code with component codes 13, 15

    What distinguishes the two codes is the minimum input weight $w_{\min}$. The input weight $w$ is the number of 1s in an input sequence. Suppose that the encoder of Figure 3.a is initialized in state 0 and then fed with an all-zero sequence, except in one place (that is, $w = 1$). The encoder will return to state 0 as soon as the fourth 0 following the 1 appears at the input. We then have $w_{\min} = 1$. Under the same conditions, the encoder of Figure 3.b needs a second 1 to return to state 0. Without this second 1, the encoder acts as a pseudo-random generator with respect to its parity output. So $w_{\min} = 2$, and this property is very favourable regarding $d_{\min}$ when parallel concatenation is implemented (a code sketch of both encoders follows this paragraph). A typical turbo code is depicted in Figure 4. The data are encoded both in the natural order and in a permuted order by two RSC codes, $C_1$ and $C_2$, that issue parity bits $y_1$ and $y_2$. In order to encode finite-length blocks of data, RSC encoding is terminated by tail bits or has tail-biting termination. The permutation has to be devised carefully, because it has a strong impact on $d_{\min}$. The natural coding rate of a turbo code is $R = 1/3$ (three output bits for one input bit). To deal with higher coding rates, the parity bits are punctured. For instance, transmitting $y_1$ and $y_2$ alternately leads to $R = 1/2$.
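    A minimal behavioural sketch of the two memory-3 encoders of Figure 3, written for these notes (the tap convention for the octal generators 13 and 15 is one common choice, an assumption rather than a detail taken from the figure). It shows that a weight-1 input drives the RSC encoder into a never-ending, period-7 response, while a suitably placed second 1 terminates it:

    ```python
    # Octal 13 = binary 1011, octal 15 = binary 1101 (coefficients g0..g3).
    G_FB = (0, 1, 1)    # feedback taps of 13 on registers r1, r2, r3 (g0 = 1)
    G_FWD = (1, 0, 1)   # forward taps of 15 on r1, r2, r3 (g0 = 1 applies to a)

    def nonrec_encode(bits):
        """Non-recursive encoder (Fig. 3.a): outputs x (gen. 13), y (gen. 15)."""
        r, out = [0, 0, 0], []
        for d in bits:
            x = d ^ r[1] ^ r[2]          # 1011: taps on d, r2, r3
            y = d ^ r[0] ^ r[2]          # 1101: taps on d, r1, r3
            out.append((x, y))
            r = [d] + r[:2]              # plain shift: state clears after zeros
        return out, r

    def rsc_encode(bits):
        """RSC encoder (Fig. 3.b): systematic bit d plus one parity bit y."""
        r, parity = [0, 0, 0], []
        for d in bits:
            a = d ^ (G_FB[0] & r[0]) ^ (G_FB[1] & r[1]) ^ (G_FB[2] & r[2])
            y = a ^ (G_FWD[0] & r[0]) ^ (G_FWD[1] & r[1]) ^ (G_FWD[2] & r[2])
            parity.append(y)
            r = [a] + r[:2]              # shift in the feedback bit
        return parity, r

    w1 = [1] + [0] * 20                  # input weight 1
    p1, s1 = rsc_encode(w1)
    print(sum(p1), s1)                   # heavy parity, state never returns to 0

    w2 = [1] + [0] * 6 + [1] + [0] * 13  # weight 2, second 1 one period later
    p2, s2 = rsc_encode(w2)
    print(sum(p2), s2)                   # finite-weight parity, state back to 0
    ```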


    The original turbo code [BER] uses a parallel concatenation of convolutional

    codes. But other schemes like serial concatenation of convolutional codes

    [BEN] or algebraic turbo codes [PYN] have since been studied. More

    recently, non-binary turbo codes have also been proposed [DOU].


    Turbo-decoding

    Decoding the code of Figure 4 by a global approach is not possible, because of the astronomical number of states to consider. A joint probabilistic process by the decoders of $C_1$ and $C_2$ has to be elaborated. Because of latency constraints, this joint process is worked out in an iterative manner in a digital circuit. Turbo decoding relies on the following fundamental criterion:

    when having several probabilistic machines work together on the estimation of a common set of symbols, all the machines have to give the same decision, with the same probability, about each symbol, as a single (global) decoder would.

    To make the composite decoder satisfy this criterion, the structure of Figure 5 is adopted. The double loop enables both component decoders to benefit from the whole redundancy. The term turbo was given to this feedback construction with reference to the principle of the turbo-charged engine.

  • 8/6/2019 Notes for Turbo Codes

    14/15

    Figure 5: A turbo decoder

    The components are SISO decoders and permutation ($\Pi$) and inverse permutation ($\Pi^{-1}$) memories. The node variables of the decoder are Logarithms of Likelihood Ratios (LLRs). The LLR related to a particular binary datum $d$ is defined as

    $$\mathrm{LLR}(d) = \ln \frac{\Pr(d = 1)}{\Pr(d = 0)}.$$

    The role of a SISO decoder is to process an input LLR and, thanks to local redundancy (i.e. $y_1$ for DEC1, $y_2$ for DEC2), to try to improve it. The output LLR of a SISO decoder may simply be written as

    $$\mathrm{LLR}_{\mathrm{out}}(d) = \mathrm{LLR}_{\mathrm{in}}(d) + z(d),$$

    where $z(d)$ is the extrinsic information about $d$ provided by the decoder. If the decoder works properly, $z(d)$ is most of the time negative if $d = 0$, and positive if $d = 1$. The composite decoder is constructed in such a way that only the extrinsic terms are passed from one component decoder to the other. The input LLR to a particular decoder is formed by the sum of two terms: the information symbols stemming from the channel and the extrinsic term provided by the other decoder, which serves as a priori information. The information symbols are common inputs to both decoders, which is why the extrinsic information must not contain them. In addition, the outgoing extrinsic information does not include the incoming extrinsic information, in order to cut down correlation effects in the loop (this bookkeeping is sketched in code below). There are two families of SISO algorithms: those based on the SOVA [BAT][HAG], and those based on the MAP (also called BCJR or APP) algorithm [BAH] or its simplified versions. Turbo decoding is not optimal, because the iterative process has to begin, during the first half-iteration, with only a part of the redundant information available (either $y_1$ or $y_2$). Fortunately, the loss due to this sub-optimality is small.
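    A structural sketch of this bookkeeping (not a working decoder: the SISO stage is stubbed out, whereas a real decoder would run the MAP/BCJR or SOVA algorithm over each RSC trellis; all names and numbers here are illustrative):

    ```python
    # Extrinsic-information exchange between two SISO decoders.
    from typing import List

    def interleave(llrs: List[float], perm: List[int]) -> List[float]:
        return [llrs[p] for p in perm]

    def deinterleave(llrs: List[float], perm: List[int]) -> List[float]:
        out = [0.0] * len(llrs)
        for i, p in enumerate(perm):
            out[p] = llrs[i]
        return out

    def siso_stub(l_in: List[float], l_parity: List[float]) -> List[float]:
        """Placeholder for BCJR/SOVA: returns an 'improved' LLR per bit.
        Here it just adds a damped parity contribution so the loop runs."""
        return [li + 0.5 * lp for li, lp in zip(l_in, l_parity)]

    def turbo_decode(l_sys, l_par1, l_par2, perm, n_iter=4):
        n = len(l_sys)
        z1, z2 = [0.0] * n, [0.0] * n        # extrinsic terms, start at zero
        for _ in range(n_iter):
            # DEC1 input: channel LLRs plus a priori information from DEC2
            l_in1 = [s + a for s, a in zip(l_sys, z2)]
            l_out1 = siso_stub(l_in1, l_par1)
            # pass on only the extrinsic part: remove what DEC1 was given
            z1 = [o - i for o, i in zip(l_out1, l_in1)]
            # DEC2 works in the interleaved order (l_par2 is already permuted)
            l_in2 = [s + a for s, a in zip(interleave(l_sys, perm),
                                           interleave(z1, perm))]
            l_out2 = siso_stub(l_in2, l_par2)
            z2 = deinterleave([o - i for o, i in zip(l_out2, l_in2)], perm)
        l_final = [s + a + b for s, a, b in zip(l_sys, z1, z2)]
        return [1 if l > 0 else 0 for l in l_final]

    perm = [2, 0, 3, 1]                       # a toy permutation
    l_sys = [+1.2, -0.8, +0.3, -1.5]          # illustrative channel LLRs
    l_par1 = [+0.5, -0.2, +0.1, -0.7]
    l_par2 = [+0.4, +0.3, -0.6, -0.1]
    print(turbo_decode(l_sys, l_par1, l_par2, perm))
    ```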

    Applications of turbo codes

    Figure 6: Applications of turbo codes.

    Table 1 summarizes the standardized and proprietary applications of turbo codes known to date. Most of these applications are detailed and commented on in [GRA].