
8/12/2019 — Simulation of Convolutional Codes

Visual System Simulator

CONVOLUTIONAL CODES IN VSS
By: Dr. Kurt R. Matis, Director of Systems Research

SIMULATION OF CONVOLUTIONAL CODES IN VSS

This document describes the modeling and simulation of short constraint-length convolutional codes used in conjunction with Viterbi decoding in the Visual System Simulator (VSS). After a brief review of the history and application of convolutional codes, a detailed description of VSS models for encoding/decoding of these codes is presented. Step-by-step examples illustrate how to construct simulations and analyze results.

Convolutional Code Basics

This note begins with some background information on the use of convolutional codes. The development of convolutional codes is discussed along with a history of important applications. This information is meant to provide a perspective on the selection of the convolutional code models that are provided in VSS.

Transmission efficiency and reliability can be improved by encoding information digits in a way that creates an interdependence between symbols which are transmitted over a channel. At the receiving end, the interdependence can be exploited to detect or even correct transmission errors, provided erroneous symbols are not received too frequently. Such coding is called error-control coding and is shown in the configuration of Figure 1.

Figure 1. System Employing Error-Control Coding

[Figure 1: source symbols {a_i} enter the channel encoder, which produces encoded symbols {b_k}; the transmitter sends s(t) over the channel; the receiver produces r(t) and received symbols {r_k}, which the channel decoder maps into decoded symbols.]


Encoders for error control are usually called channel encoders to differentiate them from various encoders used for other purposes within digital communication systems. The process of error-control coding is also referred to variously as Error Detection And Correction (EDAC) and Cyclic Redundancy Check (CRC), although CRC actually refers to a specialized type of code. Coding used for error detection is often called, as one might expect, error detection coding. It is often used within Automatic Repeat Request (ARQ) systems to detect a block or packet error and request the transmitter to retransmit a message. Again, intuitively, when error-control coding is used for the purpose of actually correcting certain received symbols at the decoder, it is usually called error correcting coding or error correction coding. In some cases it is called Forward Error Correction (FEC) coding to distinguish it from the process used in conjunction with the aforementioned ARQ error-correction scheme. Finally, error control codes are sometimes used for both correction of a limited number of errors and detection of a certain number of errors when there are too many errors to correct.

As shown in Figure 1, the encoder inserts redundant symbols into the original symbol sequence, {a_i}, to create the output sequence, {b_k}, at a rate 1/R times the rate of the original transmitted sequence {a_i}. The quantity R is referred to as the code rate. Since the symbols, {b_k}, are no longer independent, the decoder deals with vectors of received symbols, {r_k}, which may potentially be very long, perhaps encompassing the entire transmission.

From a practical point of view, error control codes impart a dependence to the symbols within the transmitted data stream that is localized from within a few symbols to a few hundred symbols. This means that the decoder need not examine the entire transmission before making decisions about the transmitted symbols, but can decode information symbols by examining a subsequence of symbols, b, which is "near" the original information symbol sequence a. We will quantify these notions when discussing the specific classes of codes.

Whatever the length of the sequence, a, the decoder must produce an estimate, â, based on the received sequence, r. Equivalently, since there is a one-to-one correspondence between the information sequence a and the code word, b, the decoder can produce an estimate of the code word, b̂, and then map this sequence back into the original information sequence, â. Clearly, â = a if and only if b̂ = b.


    Convolutional Codes Compared with Block Codes

The two classes of error control codes are block codes and convolutional codes. An (N, K) block encoder maps blocks of K source bits into blocks of N encoded bits. There is complete independence between blocks. N is called the block length of the code and R = K/N is the code rate.

In contrast to block codes, which impose a block structure on the input information sequence, convolutional codes impose a sliding dependency over a span of input symbols. The encoder can be implemented as a shift register into which information bits are shifted b at a time. After each shift, n modulo-2 sums are computed from the contents of selected shift register stages. These n modulo-2 sums then constitute the encoder output subsequence associated with a particular subsequence of b input information bits. The normalized code rate in this case is R = b/n. This indicates the number of bits, on average, that are carried by each symbol output from the encoder.¹

The quantity k is called the constraint length of the code and is equal to the length of the encoding shift register. A shift register of length k has k-1 delay or storage elements, each storing one bit. A typical k = 3, R = 1/2 encoder is shown in Figure 2.

1. Unfortunately, the classical literature describing convolutional codes has used some of the same nomenclature historically used to represent quite distinct attributes of block codes. For example, the capital letter N represents the length of a block code word, whereas the small letter n represents the number of adders in a convolutional encoder.


Figure 2. Convolutional Encoder for k = 3, R = 1/2 Convolutional Code

Note that the terminology concerning convolutional codes is not uniform in the field. We generally try to follow the usage in Viterbi and Omura [1], which is consistent with Figure 2. In this note, we restrict attention to linear, feedforward convolutional codes which encode a binary input stream into a binary output stream. This is the encoding model depicted in Figure 2. Other VSS models address the functionality of feedback codes, nonlinear codes and codes designed for use with more general alphabets.

The number of storage elements in this case is two, always one less than the constraint length, k. Depending on the connections to the mod-2 adders (described by the so-called code connection vectors), the code will have different properties as discussed below. The convolutional encoder can be thought of as a finite-state machine with a number of states corresponding to all possibilities of bit values contained within the storage elements of the shift register. With any given state and any specified input (a single 0 or 1 in this case), both the next state and the output of the encoder can be predicted. This model of the encoder as a finite-state machine is embodied in the FSM (finite-state machine) diagram at the right of the figure. The FSM provides a succinct description of all possible evolutions of the sequence of states as successive encodings are performed.
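The finite-state-machine view can be made concrete with a short sketch. The Python below (illustrative only, not the VSS model) implements a k = 3, R = 1/2 feedforward encoder; the connection vectors, octal (7, 5) = (111, 101), are an assumed, commonly tabulated choice, since this note does not list the taps of Figure 2.

```python
import itertools

G = (0b111, 0b101)   # assumed connection vectors, one per modulo-2 adder (n = 2)
K = 3                # constraint length -> k - 1 = 2 storage elements, 4 states

def step(state, b):
    """One FSM transition: returns (next_state, n output bits)."""
    reg = ((state << 1) | b) & ((1 << K) - 1)        # shift the new bit in
    outs = [bin(reg & g).count("1") & 1 for g in G]  # modulo-2 sums at the taps
    return reg & ((1 << (K - 1)) - 1), outs

def encode(bits):
    """Shift bits in one at a time (b = 1), emitting n = 2 bits per input bit."""
    state, out = 0, []
    for b in bits:
        state, outs = step(state, b)
        out.extend(outs)
    return out

# The full FSM: for each of the 4 states and 2 inputs, both the next state
# and the output pair are predictable, exactly as described above.
for s, b in itertools.product(range(4), (0, 1)):
    ns, outs = step(s, b)
    print(f"state {s:02b}, input {b} -> state {ns:02b}, output {outs}")
```

Printing the eight (state, input) transitions reproduces the FSM diagram of Figure 2 in tabular form.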

Tabulations of good convolutional code constructions are available for various values of k and R [2], [3], [4]. For any given value of k, n and b, the "code construction" procedure consists of finding connections of the modulo-2 adders shown in Figure 2 to the appropriate shift-register stages to generate a sequence with the maximum error-correcting capability. VSS contains extensive tabulations of optimum convolutional codes.

The evolution in time of the possible sequence of encoder states can be described in terms of a trellis diagram. Figure 3 illustrates a trellis diagram for the code of Figure 2. Generated bits are determined from a combination of the current input bit sequence of b bits plus the "state" of the encoder, which consists of the most recent k - b bits in the shift register. For the k = 3, R = 1/2 code, there are four possible states with 2^b = 2 branches into and out of each state. Each branch has n output symbols associated with it, in this case, 2.

Figure 3. Trellis-Code Representation for Encoder of Figure 2

[Figure 3: four-state trellis; states a, b, c, d are drawn vertically at each node level, and each branch is labeled with its two output bits (00, 01, 10, or 11).]

Convolutional codes were first introduced by Elias in 1955 [5]. In 1961, Wozencraft proposed a probabilistic method for decoding convolutional codes called sequential decoding [6]. In 1963, Massey proposed a less efficient but simple decoding algorithm called "threshold decoding" [7]. Finally, in 1967, Viterbi [8] proposed a maximum-likelihood decoding algorithm that still bears his name.

The Viterbi decoding algorithm is relatively easy to implement for codes with short constraint lengths and allows effective use of soft-decision information in the form of real-valued or quantized receiver outputs. Although other forms of decoding are still used widely in certain applications, Viterbi decoding has become by far the most widely-used decoding method for convolutional codes. The Viterbi algorithm is an iterative algorithm for correlating received sequences with all possible transmitted sequences in the trellis to arrive at the one with the best fit.

The Viterbi algorithm, commonly used with short constraint-length convolutional codes, performs an efficient search of the trellis associated with the code. Specifically, the Viterbi algorithm is an iterative algorithm which compares all possible paths through the trellis with the received real-valued sequence. An important feature of the Viterbi algorithm is that it can efficiently perform real-valued correlations against the incoming sequence in the process of measuring the fit between the received sequence and the possible code sequences. For binary convolutional codes, a linear correlation is optimum for measuring the fit between the received sequence and all possible transmitted sequences. This is sometimes called linear combining. Since the correlation is employed as a measure of distance, it is sometimes called a metric. There are many other types of metrics that are appropriate for different signalling schemes and various channels. The only metric considered here is the correlation metric.

Referring to Figure 3, the Viterbi algorithm operates by visiting each state in the trellis (states are arranged vertically) and calculating branch metrics corresponding to possible n-tuples emitted as bits enter the encoder. For the rate 1/2 code presented here, two branches emanate forward from each state. These branch metrics are used in the iterative calculation of distance as the decoder moves forward through each decoding step, or each node level in the trellis.

Code Performance

In this section, code performance is analyzed in terms of decoded bit error rate (BER) as a function of received signal-to-noise ratio. To provide a basis for comparison with uncoded system performance, BER is normally tabulated as a function of E_b/N_0. Here, E_b represents the average energy transmitted per information bit and N_0 represents the single-sided power spectral density of an assumed Additive White Gaussian Noise (AWGN) component. Energy per bit is, of course, related to energy per encoded symbol by the rate of the code: E_s = E_b · (b/n).


The decoder measures the distance between all of the possible transmitted sequences in the trellis and the noisy received sequence. An error is generated if the received sequence is closer to some other sequence in the trellis than the actual transmitted sequence. Let d represent the number of bit positions in which any two code sequences in the trellis differ. This is known as the Hamming distance between the two sequences. We denote the probability of confusing two code sequences differing in d positions as P_e(d). Error probability may be calculated by considering the distance between the actual transmitted sequence and all possible contenders in the ensemble of code sequences described by the trellis. Intuitively, good codes have large distances between all possible pairs of code sequences.

When calculating error probability, it is adequate to assume that the all-0s sequence has been transmitted. For the special class of linear codes considered here, the set of distances between the all-0s code sequence and other sequences in the ensemble is the same as the set of distances from any arbitrary code sequence and other code sequences. This allows the bit error probability for a given convolutional code to be bounded by:

    P_b ≤ (1/b) · Σ_{d=d_free}^{∞} w_d · P_e(d)        (1)

Here, P_e(d) is as described above and {w_d} represents the so-called weight spectrum of the code. Each term w_d represents the average number of bit errors associated with sequences of weight d. w_d is zero for d less than d_free, known as the free distance of the code. The weight spectrum is tabulated for many convolutional codes. It can be calculated using a procedure described by McEliece in [9], given the structure of the encoder in Figure 2. In practice, only a few terms in (1) are needed, since P_e(d) tends to decrease quite rapidly with d for practical channels. The weight spectrum for the industry-standard rate 1/2, constraint length 7 convolutional code is given in Table 1. Note that the free distance of this code is 10 and that 4 terms in the weight spectrum have been tabulated. This code is the default code for the CNVENC and VITDEC models. Specific tap connections for this code are found in the on-line documentation for these models.



    Table 1. Weight Spectrum of Rate r=1/2, Constraint Length k=7 Convolutional Code

For linear combining of antipodal signals in AWGN, the sequence error probability takes on a particularly simple form:

    P_e(d) = Q(√(2 · d · E_s/N_0)),        (2)

where Q(x) represents the normalized Gaussian tail function.

    Bit Error Results

Figure 4 shows predicted BER performance for the rate 1/2, constraint length 7 convolutional code with the weight spectrum shown in Table 1.

Weight, d    No. of bit errors in adversaries of weight d, w_d
10           36
12           211
14           1404
16           11633

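Equations (1) and (2) together with the Table 1 weight spectrum are enough to evaluate the union bound numerically. The Python sketch below (for illustration; function names are our own) assumes b = 1 and n = 2, so that E_s = E_b/2, and implements Q(x) via the complementary error function.

```python
import math

# Weight spectrum w_d of the rate-1/2, k = 7 code, from Table 1
WEIGHT_SPECTRUM = {10: 36, 12: 211, 14: 1404, 16: 11633}

def q_func(x):
    """Normalized Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound_ber(ebno_db, b=1, n=2, spectrum=WEIGHT_SPECTRUM):
    """Bound (1): P_b <= (1/b) * sum_d w_d * P_e(d), with P_e(d) from (2)."""
    esno = (10.0 ** (ebno_db / 10.0)) * b / n    # E_s = E_b * (b/n)
    return sum(w * q_func(math.sqrt(2.0 * d * esno))
               for d, w in spectrum.items()) / b

for snr_db in (3.0, 4.0, 5.0, 6.0):
    print(f"Eb/N0 = {snr_db:.1f} dB: P_b bound = {union_bound_ber(snr_db):.3e}")
```

Only the four tabulated terms are used, which is adequate at moderate-to-high SNR since P_e(d) falls off rapidly with d; at low SNR the truncated bound, like the full one, becomes loose.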


Figure 4. Upper Union Bound on Bit Error Probability for Rate r=1/2, Constraint Length k=7 Convolutional Code, Plotted in Conjunction with Theoretical Uncoded Bit Error Probability

This curve is plotted with the theoretical curve of uncoded BER for antipodal signalling. This would represent the error probability of 2-level PAM baseband modulation or BPSK carrier modulation systems with coherent detection. The same error probability would be attained for coherent QPSK on a per-channel basis. Note that the uncoded error probability is asymptotic to 1/2 at low signal-to-noise ratios (SNRs). The bound on decoded BER performance, on the other hand, becomes greater than 1.0 for low SNRs. This is because the various paths in the trellis characterized by the weight spectrum are not distinct. Thus, the union of the various error events in the trellis is actually less than the sum described in the union bound of (1). Union bounds tend to be loose at low SNRs, but become tight as SNR increases. For the classes of codes considered here, the error probability tends to be dominated by the probability of the lowest order term. This is the term for which d = d_free.

The reduction in required SNR to achieve given BER objectives is called coding gain. At interesting error rates (say at 10^-6) the rate 1/2, constraint length 7 code provides a coding gain of 5 dB over uncoded transmission. For many years, this was considered the state-of-the-art improvement that could be practically offered by error-control coding on Gaussian channels. This improvement was achieved only at the cost of a two-fold increase in transmission bandwidth. This was not objectionable for many non-bandlimited channels, such as the deep space channel. In the 1980s and 1990s additional improvements were offered by coding technologies such as trellis-coded modulation and turbo codes. Other VSS application notes address these techniques.

Punctured Codes

Puncturing is a method of transforming a convolutional code with a given rate into a code with a higher rate. For example, a rate 1/2 code may expand the transmitted bandwidth too much, while providing more error protection than is necessary for a given application. In these cases, not all of the encoded bits need to be transmitted. If the omitted bits are chosen carefully, the decoder can still provide good performance without needing the full redundancy of the rate 1/2 code. The output bits to delete are specified by a puncturing matrix or array, which indicates a periodic pattern of bits to be discarded from the mother encoder output. This capability is provided in the CNVENC model and is diagrammed in Figure 5.



    Figure 5. Structure of Overall Encoder Showing Mother Encoder and Puncturer

Note that puncturing is generally a sub-optimum means of creating high-rate codes from lower rate codes. It can be shown that for most rates and constraint lengths, constructive codes exist that have greater d_free than the best punctured codes. This is true of most of the tabulated high-rate constructive codes within this model. Puncturing is often employed to create higher rate codes from lower rate codes to allow use of existing coding hardware. Puncturing is sometimes implemented as a simple external applique to existing off-the-shelf rate 1/2 coders/decoders, thus avoiding the need for a custom chip or program. This is exemplified by the DVB punctured coding scheme described in the CNV_ENC on-line Help documentation [10]. Punctured codes also allow scalable rates in Rate-Compatible coding applications. The amount of redundancy in the transmitted data stream is controllable by adjusting the number of bits punctured. With punctured codes, comparable bits between data segments with different rates will agree. This is generally not the case with constructive codes.

At the decoder, punctured bit positions are stuffed with floating point values of 0.0. This allows punctured bit positions to be treated essentially as erasures by the decoder, giving no weight to these positions in the process of calculating branch metrics. This operation is illustrated in Figure 6. The decoder can accept both real-valued inputs and binary digital inputs.

Normally, a standard decoder expects to receive an antipodal real-valued decision statistic from the received output. This assumes that the transmitter (for example, BPSK or QPSK) maps the 0 and 1 bits emitted from the convolutional encoder into values of plus A (+A) and minus A (-A). Various scalings and normalization may be employed in any particular receiver. Within real hardware, in fact, outputs are usually quantized to 8 bits with a maximum range of ±1.0. The output levels are symmetric about zero, to provide the decoder with proper input data for the correlation operation implicit in the Viterbi algorithm.
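The puncture/depuncture round trip can be sketched directly from the description above. The Python below is illustrative: it assumes a rate-1/2 mother code and a period-3 pattern that yields rate 3/4 (3 information bits carried by 4 transmitted symbols); the actual puncturing matrix is application-specific, and this one is chosen only for illustration.

```python
# Periodic keep/discard pattern over three mother-code output pairs
PATTERN = [1, 1,   # pair 1: keep both bits
           1, 0,   # pair 2: keep first, discard second
           0, 1]   # pair 3: discard first, keep second

def puncture(mother_bits):
    """Encoder side: discard mother-encoder output bits where the pattern is 0."""
    return [b for i, b in enumerate(mother_bits)
            if PATTERN[i % len(PATTERN)]]

def depuncture(soft, n_mother):
    """Decoder side: stuff 0.0 at punctured positions so the correlation
    branch metric gives those positions no weight (they act as erasures)."""
    it = iter(soft)
    return [next(it) if PATTERN[i % len(PATTERN)] else 0.0
            for i in range(n_mother)]
```

After depuncturing, the soft sequence has the mother code's full length and can be fed to an ordinary rate-1/2 Viterbi decoder, which is exactly the "external applique" arrangement mentioned above.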



    Rate 1/2 Example

Figure 7 illustrates a simulation of a coded QPSK link. In addition to the specific model names, by choosing Schematic > Add Text the block diagram is annotated with generic names that correspond to the function of each individual block. Additionally, by choosing Options > Project Options, clicking the Schematic/Diagrams tab, and selecting the Hide parameters check box, on-screen display of the model parameters is disabled for clarity on the diagram. Finally, by choosing Diagram > Show Grid Snap the grid was toggled off.

    Figure 7. Illustration of Coded QPSK Link - Block Diagram

In this example, a CNVENC model is fed by a RND_D model. The RND_D model is a general-purpose source of pseudo-random digits. The binary encoded sequence is fed into a QPSK_TX transmitter model. The input bits are interleaved onto the I/Q rails. Each bit thus maps into an antipodal value of ±A on either rail in an alternating fashion. The transmitted signal is corrupted by the AWGNCH model, which adds white Gaussian noise of the proper intensity. The receiver employs an integrate-and-dump matched filter which produces a real decision statistic for each of the I/Q rails. This decision statistic is fed out of the complex output port, which is the lower port on the receiver output. The QPSK receiver also has a hard decision output at the top port. The QPSK signal is detected inside the receiver and deinterleaved through this output port. The output data is binary digital data. In this example, the decoder is fed with the real-valued outputs from the I/Q channels by converting the complex data to I/Q data (or Real/Imaginary components of the complex envelope) and then multiplexing them into the decoder input port with a C2RI and a MUX2 model preceding the decoder. A bit error counter model tallies errors and calculates average bit error probability for the various stepped SNR values.

Figure 8 shows the results of a BER simulation for the rate 1/2, constraint length 7 convolutional code. As a basis for comparison, the BER results for hard-decision operation are overlaid on the plot. This curve was generated in a simulation in which the decoder input was connected to the DIGITAL output of the QPSK receiver. This is a binary output, with the I/Q bits deinterleaved from the corresponding demodulators/detectors. This is the inverse of the operation performed in the transmitter. Note that the hard decision results exhibit a degradation of about 2 dB at interesting SNRs.
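The structure of such a BER run (source, encoder, antipodal mapping, AWGN, soft-decision decoder, error counter) can be mimicked in a few dozen lines. The Python below is a self-contained sketch in the spirit of the Figure 7 chain, not the VSS schematic itself; a k = 3 code with assumed (7, 5) octal taps stands in for the k = 7 default so the example stays short.

```python
import math
import random

G, K = (0b111, 0b101), 3       # assumed short code for illustration
N_STATES = 1 << (K - 1)

def branch(state, b):
    reg = ((state << 1) | b) & ((1 << K) - 1)
    return reg & (N_STATES - 1), [bin(reg & g).count("1") & 1 for g in G]

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, outs = branch(state, b)
        out.extend(outs)
    return out

def viterbi(soft):
    metrics = [0.0] + [float("-inf")] * (N_STATES - 1)
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(soft), 2):
        r = soft[i:i + 2]
        nm, npaths = [float("-inf")] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            for b in (0, 1):
                ns, outs = branch(s, b)
                m = metrics[s] + sum(rj * (1.0 - 2.0 * o)
                                     for rj, o in zip(r, outs))
                if m > nm[ns]:
                    nm[ns], npaths[ns] = m, paths[s] + [b]
        metrics, paths = nm, npaths
    return paths[max(range(N_STATES), key=metrics.__getitem__)]

def simulate(ebno_db, n_bits, seed):
    """One seeded run: random bits -> encode -> AWGN -> decode -> error count."""
    rng = random.Random(seed)
    bits = [rng.randrange(2) for _ in range(n_bits)]
    tx = [1.0 - 2.0 * b for b in encode(bits)]        # antipodal: 0 -> +1, 1 -> -1
    if ebno_db is None:
        rx = tx                                       # noiseless sanity check
    else:
        esno = (10.0 ** (ebno_db / 10.0)) / 2.0       # R = 1/2: Es = Eb / 2
        sigma = math.sqrt(1.0 / (2.0 * esno))         # noise std for unit Es
        rx = [x + rng.gauss(0.0, sigma) for x in tx]
    return sum(d != b for d, b in zip(viterbi(rx), bits))

print("decoded bit errors at 8 dB:", simulate(8.0, 200, seed=1))
```

Stepping `ebno_db` over a range and dividing the error count by the number of bits reproduces, in miniature, the BER-versus-Eb/N0 sweep that the bit error counter model performs in the VSS schematic.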


Figure 8. Error Rates for QPSK System with Rate r=1/2, Constraint Length k=7 Convolutional Code Used in Conjunction with Soft-Decision Viterbi Decoding

The uncoded bit error probability is also plotted on the graph to demonstrate correspondence with theory. Note that whereas the uncoded error results follow the theoretical result closely at all SNR values simulated, the coded error rate results diverge from the upper bound at low SNRs.


Conclusion

We have illustrated the use of standard and punctured convolutional codes in conjunction with Viterbi decoding. Results demonstrate correspondence with theoretical bounds. Efficiency of punctured codes and constructive codes was compared. We hope that this information is helpful to those employing VSS simulations for end-to-end link performance analysis.

References

[1] A.J. Viterbi and J.K. Omura, Principles of Digital Communication and Coding, McGraw-Hill, 1979.
[2] J.P. Odenwalder, "Optimum Decoding of Convolutional Codes", Ph.D. dissertation, Dept. Syst. Sci., Univ. California, Los Angeles, CA, 1970.
[3] K.J. Larsen, "Short Convolutional Codes with Maximal Free Distance for Rates 1/2, 1/3 and 1/4", IEEE Trans. Info. Theory, vol. IT-19, pp. 371-372, May 1973.
[4] D.G. Daut, J.W. Modestino and L.D. Wismer, "New Short Constraint-Length Convolutional Code Construction for Selected Rational Rates", IEEE Trans. Info. Theory, vol. IT-28, pp. 774-800, September 1982.
[5] P. Elias, "Coding for Noisy Channels", IRE Conv. Rec., Part 4, pp. 37-47, 1955.
[6] J.M. Wozencraft and B. Reiffen, Sequential Decoding, MIT Press, Cambridge, MA, 1961.
[7] J.L. Massey, Threshold Decoding, MIT Press, Cambridge, MA, 1963.
[8] A.J. Viterbi, "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm", IEEE Trans. Info. Theory, vol. IT-13, pp. 260-269, April 1967.
[9] R. McEliece, S. Dolinar, F. Pollara, H. van Tilborg, "Some Easily Analyzable Codes", presented at the Third Workshop on ECC, IBM Almaden Research Center, September 1989.
[10] EN 300 744 V1.1.2, Digital Video Broadcasting (DVB), ETSI, p. 13.