
UNIVERSITY
DEPARTMENT OF COMPUTER ENGINEERING

GRADUATION PROJECT
"ERROR CONTROL CODING"

PROF. DR. FAHRETTİN
ZELİHA ÇAV 93851

INDEX

Chapter I  Error Control Coding
  Error Detection & Correction
    Parity and Parity-Check Codes
    Code Vectors & Hamming Distance
  FEC Systems
  ARQ Systems
  Linear Block Codes
    Matrix Representation of Block Codes
    Syndrome Decoding
  Convolutional Codes
    Convolutional Encoding
    Free Distance & Coding Gain
    Decoding Methods

Chapter II  Error Detection, Correction & Control
  12.1 Parity
  12.2 Parity Generating & Checking
  12.3 The Disadvantages with Parity
  12.4 Vertical & Longitudinal Redundancy Check (VRC & LRC)
  12.5 Cyclic Redundancy Checking
  12.6 Computing the Block Check Character
      1. Single-Precision Checksum
      2. Double-Precision Checksum
      3. Honeywell Checksum
      4. Residue Checksum
  12.7 Error Correction
      12.7.1 Hamming Code
      12.7.2 Developing a Hamming Code
      12.7.2.1 An Alternative Method


ERROR CONTROL CODING

Transmission errors in digital communication depend on the signal-to-noise ratio. If a particular system has a fixed value of S/N and the error rate is unacceptably high, then some other means of improving reliability must be sought. Error-control coding often provides the best solution.

Error-control coding involves the systematic addition of extra digits to the transmitted message. These extra check digits convey no information by themselves, but they make it possible to detect or correct errors in the regenerated message digits. In principle, information theory holds out the promise of nearly errorless transmission, as will be discussed in Chap. 15. In practice, we seek some compromise between the conflicting considerations of reliability, efficiency, and equipment complexity. A multitude of error-control codes have therefore been devised to suit various applications.

This chapter starts with an overview of error-control coding, emphasizing the distinction between error detection and error correction and the systems that employ these strategies. Subsequent sections describe the two major types of code implementations, block codes and convolutional codes. We will stick entirely to binary coding, and we will omit formal mathematical analysis. Detailed treatments of error-control coding are provided by the references cited in the supplementary reading list.

ERROR DETECTION AND CORRECTION

Coding for error detection, without correction, is simpler than error-correction coding. When a two-way channel exists between source and destination, the receiver can request retransmission of information containing detected errors. This error-control strategy, called automatic repeat request (ARQ), particularly suits data communication systems such as computer networks. However, when retransmission is impossible or impractical, error control must take the form of forward error correction (FEC) using an error-correcting code. Both strategies will be examined here, after an introduction to simple but illustrative coding techniques.

Repetition and Parity-Check Codes

When you try to talk to someone across a noisy room, you may need to repeat yourself to be understood. A brute-force approach to binary communication over a noisy channel likewise employs repetition, so each message bit is represented by a codeword consisting of n identical bits. Any transmission error in a received codeword alters the repetition pattern by changing a 1 to a 0 or vice versa.

If transmission errors occur randomly and independently with probability α, then the binomial frequency function from Eq. (1), Sect. 4.4, gives the probability of i errors in an n-bit codeword as


P(i,n) = \binom{n}{i} \alpha^i (1-\alpha)^{n-i}    (1a)

where

\binom{n}{i} = \frac{n!}{i!\,(n-i)!} = \frac{n(n-1)\cdots(n-i+1)}{i!}    (1b)

We will proceed on the assumption that α << 1, which does not necessarily imply reliable transmission, since α = 0.1 satisfies our condition but would be an unacceptable error probability for digital communication. Repetition codes improve reliability when α is sufficiently small that P(i+1, n) << P(i, n) and, consequently, several errors per word are much less likely than a few errors per word.

Consider, for instance, a triple-repetition code with codewords 000 and 111. All other received words, such as 001 or 101, clearly indicate the presence of errors. Depending on the decoding scheme, this code can detect or correct erroneous words. For error detection without correction, we say that any word other than 000 or 111 is a detected error. Single and double errors in a word are thereby detected, but triple errors result in an undetected word error with probability

P_{we} = P(3,3) = \alpha^3

For error correction, we use majority-rule decoding based on the assumption that at least two of the three bits are correct. Thus, 001 and 101 are decoded as 000 and 111, respectively. This rule corrects words with single errors, but double or triple errors result in a decoding error with probability

P_{we} = P(2,3) + P(3,3) = 3\alpha^2 - 2\alpha^3

Since P_e = α would be the error probability without coding, we see that either decoding scheme for the triple-repetition code greatly improves reliability if, say, α ≤ 0.01. However, the improvement is gained at the cost of reducing the message bit rate by a factor of 1/3.
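These probabilities are easy to verify numerically. The short Python sketch below (function names are ours, not from the text) evaluates Eq. (1a) and the two triple-repetition error probabilities:

```python
from math import comb

def P(i, n, a):
    """Probability of exactly i errors in an n-bit word, Eq. (1a)."""
    return comb(n, i) * a**i * (1 - a)**(n - i)

a = 0.01                                 # per-bit transmission error probability
Pwe_detect = P(3, 3, a)                  # undetected-word-error probability = a^3
Pwe_correct = P(2, 3, a) + P(3, 3, a)    # majority-rule decoding error = 3a^2 - 2a^3
print(Pwe_detect, Pwe_correct)           # 1e-06  2.98e-04
```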

More efficient codes are based on the notion of parity. The parity of a binary word is said to be even when the word contains an even number of 1s, while odd parity means an odd number of 1s. The codewords for an error-detecting parity-check code are constructed with n−1 message bits and one check bit chosen such that all codewords have the same parity. With n = 3 and even parity, the valid codewords are 000, 011, 101, and 110, the last bit in each word being the parity check. When a received word has odd parity, 001 for instance, we immediately know that it contains a transmission error, or three errors or, in general, an odd number of errors. Error correction is not possible because we don't know where the errors fall within the word. Furthermore, an even number of errors preserves valid parity and goes unnoticed.
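A one-check-bit even-parity code is equally simple to sketch in Python (the helper names are ours):

```python
def add_even_parity(msg_bits):
    """Append one check bit so that the codeword has even parity."""
    return msg_bits + [sum(msg_bits) % 2]

def error_detected(word):
    """Odd parity exposes any odd number of bit errors."""
    return sum(word) % 2 == 1

codeword = add_even_parity([0, 1])   # [0, 1, 1]: one of 000, 011, 101, 110
received = [0, 0, 1]                 # single error in the second bit
print(error_detected(received))      # True
```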


Under the condition α << 1, double errors occur far more often than four or more errors per word. Hence, the probability of an undetected error in an n-bit parity-check codeword is

P_{we} \approx P(2,n) \approx \frac{n(n-1)}{2}\,\alpha^2    (2)

For comparison purposes, uncoded transmission of words containing n−1 message bits would have

P_{uwe} = 1 - P(0,\,n-1) \approx (n-1)\,\alpha

Thus, if n = 10 and α = 10⁻³, then P_uwe ≈ 10⁻² whereas coding yields P_we ≈ 4.5 × 10⁻⁵ with a rate reduction of just 9/10. These numbers help explain the popularity of parity checking for error detection in computer systems.

As an example of parity checking for error correction, Fig. 13.1-1 illustrates an error-correcting scheme in which the codeword is formed by arranging k² message bits in a square array whose rows and columns are checked by 2k parity bits. A transmission error in one message bit causes a row and a column parity failure with the error at the intersection, so single errors can be corrected. This code also detects double errors.

[Figure 13.1-1: k² message bits arranged in a square array whose rows and columns are checked by 2k parity bits.]
[Figure 13.1-2: Interleaved check bits for error control with burst errors.]
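A sketch of the square-array scheme, assuming a 3 × 3 message array (our own illustration, not the text's implementation):

```python
import numpy as np

def square_parity(m):
    """Row and column parity bits for a k-by-k 0/1 message array (2k check bits)."""
    return m.sum(axis=1) % 2, m.sum(axis=0) % 2

def correct_single(m, row_par, col_par):
    """A single error sits at the intersection of the failing row and column."""
    bad_rows = (m.sum(axis=1) % 2) != row_par
    bad_cols = (m.sum(axis=0) % 2) != col_par
    if bad_rows.any() and bad_cols.any():
        m[np.argmax(bad_rows), np.argmax(bad_cols)] ^= 1   # flip the intersection bit
    return m

m = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
row_par, col_par = square_parity(m)          # computed before transmission
m[1, 2] ^= 1                                 # one transmission error
print(correct_single(m, row_par, col_par))   # original array restored
```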

Throughout the foregoing discussion we have assumed that transmission errors appear randomly and independently in a codeword. This assumption holds for errors caused by white noise or filtered white noise. But impulse noise produced by lightning and switching transients causes errors to occur in bursts that span several successive bits. Burst errors also appear when radio-transmission systems suffer from rapid fading. Such multiple errors wreak havoc on the performance of conventional codes and must be combated by special techniques. Parity checking controls burst errors if the check bits are interleaved so that the checked bits are widely spaced, as represented in Fig. 13.1-2, where a curved line connects the message bits and check bit in one parity word.

Code Vectors and Hamming Distance

Rather than continuing a piecemeal survey of particular codes, we now introduce a more general approach in terms of code vectors. An arbitrary n-bit codeword can be visualized in an n-dimensional space as a vector whose elements or coordinates equal the bits in the codeword. We thus write the codeword 101 in row vector notation as X = (1 0 1). Figure 13.1-3 portrays all possible 3-bit codewords as


dots corresponding to the vector tips in a three-dimensional space. The solid dots in part (a) represent the triple-repetition code, while those in part (b) represent a parity-check code.

Notice that the triple-repetition code vectors have greater separation than the parity-check code vectors. This separation, measured in terms of the Hamming distance, has a direct bearing on the error-control power of a code. The Hamming distance d(X, Y) between two vectors X and Y is defined to equal the number of different elements. For instance, if X = (1 0 1) and Y = (1 1 0), then d(X, Y) = 2 because the second and third elements are different.
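In code, both definitions are one-liners. A minimal Python sketch applied to the two codes of Fig. 13.1-3 (helper names are ours):

```python
from itertools import combinations

def hamming_distance(x, z):
    """Number of positions in which the two vectors differ."""
    return sum(xi != zi for xi, zi in zip(x, z))

def d_min(code):
    """Smallest Hamming distance between distinct code vectors."""
    return min(hamming_distance(x, z) for x, z in combinations(code, 2))

print(hamming_distance((1, 0, 1), (1, 1, 0)))                # 2, as computed above
print(d_min([(0, 0, 0), (1, 1, 1)]))                         # 3: triple-repetition code
print(d_min([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]))   # 2: parity-check code
```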

The minimum distance d_min of a particular code is the smallest Hamming distance between valid code vectors. Consequently, error detection is always possible when the number of transmission errors in a codeword is less than d_min, so the erroneous word is not a valid vector. Conversely, when the number of errors equals or exceeds d_min, the erroneous word may correspond to another valid vector and the errors cannot be detected.

Further reasoning along this line leads to the following distance requirements for various degrees of error-control capability:

Detect up to ℓ errors per word:               d_{min} \ge \ell + 1    (3a)
Correct up to t errors per word:              d_{min} \ge 2t + 1    (3b)
Correct up to t errors and detect ℓ > t:      d_{min} \ge t + \ell + 1    (3c)

By way of example, we see from Fig. 13.1-3 that the triple-repetition code has d_min = 3. Hence, this code could be used to detect ℓ ≤ 3 − 1 = 2 errors per word or to correct t ≤ (3 − 1)/2 = 1 error per word, in agreement with our previous observations. A more powerful code with d_min = 7 could correct triple errors, or it could correct double errors and detect quadruple errors.

The power of a code obviously depends on the number of bits added to each codeword for error-control purposes. In particular, suppose that the codewords consist of k < n message bits and n − k parity bits checking the message bits. This structure is known as an (n,k) block code. The minimum distance of an (n,k) block code is upper-bounded by

d_{min} \le n - k + 1    (4)

and the code's efficiency is measured by the code rate

R_c = k/n    (5)

Regrettably, the upper bound in Eq. (4) is realized only by repetition codes, which have k = 1 and the very inefficient code rate R_c = 1/n. Considerable effort has therefore been devoted to the search for powerful and reasonably efficient codes, a topic we will return to in the next section.


FEC Systems

Now we are prepared to examine the forward error correction system diagrammed in Fig. 13.1-4. Message bits come from an information source at the rate r_b. The encoder takes blocks of k message bits and constructs an (n,k) block code with code rate R_c = k/n < 1. The bit rate on the channel therefore must be greater than r_b, namely

r = (n/k)\, r_b = r_b / R_c    (6)

[Figure 13.1-4: FEC system. The encoder has R_c = k/n and d_min = 2t + 1; the channel carries bit rate r = r_b/R_c with noise density G(f) = η/2.]

The code has d_min = 2t + 1 ≤ n − k + 1, and the decoder operates strictly in an error-correction mode. We will investigate the performance of this FEC system when additive white noise causes random errors with probability α << 1. The value of α depends, of course, on the signal energy and noise density at the receiver. If E_b represents the average energy per message bit, then the average energy per code bit is R_c E_b and the ratio of bit energy to noise density is

\frac{(E_b)_c}{\eta} = \frac{R_c E_b}{\eta} = R_c \gamma_b    (7)

where γ_b = E_b/η. Our performance criterion will be the probability of output message-bit errors, denoted by P_be to distinguish it from the word error probability P_we.

The code always corrects up to t errors per word, and some patterns of more than t errors may also be correctable, depending upon the specific code vectors. Thus, the probability of a decoding word error is upper-bounded by

P_{we} \le \sum_{i=t+1}^{n} P(i,n)

For a rough but reasonable performance estimate, we will take the approximation

P_{we} \approx P(t+1,\,n) \approx \binom{n}{t+1} \alpha^{t+1}    (8)

which means that an uncorrected word typically has t + 1 bit errors. On the average, there will be (k/n)(t+1) message-bit errors per uncorrected word, the remaining errors being in check bits.


When Nk bits are transmitted in N >> 1 words, the expected total number of erroneous message bits at the output is (k/n)(t+1)N P_we. Hence,

P_{be} \approx \frac{t+1}{n}\, P_{we} \approx \binom{n-1}{t} \alpha^{t+1}    (9)

in which we have used Eq. (1b) to combine (t+1)/n with the binomial coefficient. If the noise has a gaussian distribution and the transmission system has been optimized (i.e., polar signaling and matched filtering), then the transmission error probability is given by Eq. (16), Sect. 11.2, as

\alpha = Q\!\left(\sqrt{2 R_c \gamma_b}\right) \approx \frac{e^{-R_c \gamma_b}}{\sqrt{4\pi R_c \gamma_b}}    (10)

The gaussian tail approximation invoked here follows from Eq. (10), Sect. 4.4, and is consistent with the assumption that α << 1. Thus, our final result for the output error probability of the FEC system becomes

P_{be} \approx \binom{n-1}{t} \left[Q\!\left(\sqrt{2 R_c \gamma_b}\right)\right]^{t+1}    (11)

Uncoded transmission on the same channel would have

P_{ube} = Q\!\left(\sqrt{2\gamma_b}\right)    (12)

since the signaling rate can be decreased from r_b/R_c to r_b.

Comparison of Eqs. (11) and (12) brings out the importance of the code parameters t = (d_min − 1)/2 and R_c = k/n. The added complexity of an FEC system is justified provided that t and R_c yield a value of P_be significantly less than P_ube. The exponential approximations show that this essentially requires (t + 1)R_c > 1. Hence, a code that corrects only single or double errors should have a relatively high code rate, while more powerful codes may succeed despite lower code rates. The parameter γ_b also enters into the comparison, as demonstrated by the following example.

Example 13.1-1 Suppose we have a (15,11) block code with d_min = 3, so t = 1 and R_c = 11/15. An FEC system using this code would have

\alpha = Q\!\left(\sqrt{(22/15)\gamma_b}\right) \qquad P_{be} \approx 14\alpha^2

whereas uncoded transmission on the channel would yield P_ube = Q(√(2γ_b)). These three probabilities are plotted versus γ_b in dB in Fig. 13.1-5. If γ_b > 8 dB, we see that coding decreases the error probability by at least an order of magnitude compared to uncoded transmission. At γ_b = 10 dB, for instance, uncoded transmission yields P_ube ≈ 4 × 10⁻⁶ whereas the FEC system has P_be ≈ 10⁻⁷, even though the increased channel bit rate raises the transmission error probability α.
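The numbers in this example follow from Eqs. (10) to (12) with Q(x) = erfc(x/√2)/2. A minimal Python check (variable names are ours):

```python
from math import sqrt, erfc, comb

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * erfc(x / sqrt(2))

n, k, t = 15, 11, 1
Rc = k / n
gamma_b = 10.0                           # 10 dB
alpha = Q(sqrt(2 * Rc * gamma_b))        # Eq. (10): ~6e-5
Pbe = comb(n - 1, t) * alpha**(t + 1)    # Eq. (11): 14*alpha^2 ~ 6e-8
Pube = Q(sqrt(2 * gamma_b))              # Eq. (12): ~4e-6
print(alpha, Pbe, Pube)
```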


[Figure 13.1-5: Curves of the error probabilities in Example 13.1-1, plotted versus γ_b in dB.]

If γ_b < 8 dB, however, coding does not significantly improve reliability, and it actually makes matters worse when γ_b < 4 dB. Furthermore, an uncoded system could achieve better reliability than the FEC system simply by increasing the signal-to-noise ratio about 1.5 dB. Hence, this particular code doesn't save much signal power, but it would be effective if γ_b has a fixed value in the vicinity of 8 to 10 dB.

ARQ Systems

The automatic-repeat-request strategy for error control is based on error detection and retransmission rather than forward error correction. Consequently, ARQ systems differ from FEC systems in three important respects. First, an (n,k) block code designed for error detection generally requires fewer check bits and has a higher k/n ratio than a code designed for error correction. Second, an ARQ system needs a return transmission path and additional hardware in order to implement repeat transmission of codewords with detected errors. Third, the forward transmission bit rate must make allowance for repeated word transmissions. The net impact of these differences becomes clearer after we describe the operation of the ARQ system represented by Fig. 13.1-6.

Each codeword constructed by the encoder is stored temporarily and transmitted to the destination, where the decoder looks for errors. The decoder issues a positive acknowledgment (ACK) if no errors are detected, or a negative acknowledgment (NAK) if errors are detected. A negative acknowledgment causes the input controller to retransmit the appropriate word from those stored by the input buffer. A particular word may be transmitted just once, or it may be transmitted two or more times, depending on the occurrence of transmission errors. The function of the output controller and buffer is to assemble the output bit stream from the codewords that have been accepted by the decoder.

[Figure 13.1-6: ARQ system with encoder, input buffer and controller, forward channel, decoder, output buffer and controller, and a return transmission path for the ACK/NAK signals.]


Compared to forward transmission, return transmission of the ACK/NAK signal involves a low bit rate, and we can reasonably assume a negligible error probability on the return path. Under this condition, all codewords with detected errors are retransmitted as many times as necessary, so the only output errors appear in words with undetected errors. For an (n,k) block code with d_min = ℓ + 1, the corresponding output error probabilities are

P_{we} = \sum_{i=\ell+1}^{n} P(i,n) \approx P(\ell+1,\,n) \approx \binom{n}{\ell+1} \alpha^{\ell+1}    (13)

P_{be} \approx \frac{\ell+1}{n}\, P_{we} \approx \binom{n-1}{\ell} \alpha^{\ell+1}    (14)

which are identical to the FEC expressions, Eqs. (8) and (9), with ℓ in place of t, since the decoder accepts words that have either no errors or undetectable errors. The word retransmission probability is given by

p = 1 - [P(0,n) + P_{we}]    (15)

But a good error-detecting code should yield P_we << P(0,n). Hence,

p \approx 1 - P(0,n) = 1 - (1-\alpha)^n \approx n\alpha

where we have used the approximation (1−α)^n ≈ 1 − nα based on nα << 1. As for the retransmission process itself, there are three basic ARQ schemes illustrated by the timing diagrams in Fig. 13.1-7. The asterisk marks words received with detected errors, which must be retransmitted. The stop-and-wait scheme in part (a) requires the transmitter to stop after every word and wait for acknowledgment from the receiver. Just one word needs to be stored by the input buffer, but the transmission time delay t_d in each direction results in an idle time of duration D = 2t_d between words. Idle time is eliminated by the go-back-N scheme in part (b), where codewords are transmitted continuously. When the receiver sends a NAK signal, the transmitter goes back N words in the buffer and retransmits starting from that point.

[Figure 13.1-7: ARQ schemes. (a) Stop-and-wait; (b) go-back-N (shown with N = 3, intervening words discarded); (c) selective-repeat.]

The receiver discards the N−1 intervening words, correct or not, in order to preserve proper sequence. The selective-repeat scheme in part (c) puts the burden of sequencing on the output controller and buffer, so that only words with detected errors need to be retransmitted. Clearly, a selective-repeat ARQ system has the highest throughput efficiency. To set this on a quantitative footing, we observe that the total number of transmissions of a given word is a discrete random variable m governed by the event probabilities P(m=1) = 1−p, P(m=2) = p(1−p), etc. The average number of transmitted words per accepted word is then

\bar{m} = 1(1-p) + 2p(1-p) + 3p^2(1-p) + \cdots = (1-p)(1 + 2p + 3p^2 + \cdots) = \frac{1}{1-p}    (16)

since 1 + 2p + 3p² + ⋯ = (1−p)⁻². On the average, the system must transmit \bar{m}n bits for every k message bits, so the throughput efficiency is

R_c' = \frac{k}{n\bar{m}} = \frac{k}{n}(1-p)    (17)


in which p ≈ nα from Eq. (15). We use the symbol R_c' here to reflect the fact that the forward-transmission bit rate r and the message bit rate r_b are related by

r = r_b / R_c'

comparable to the relationship r = r_b/R_c in an FEC system. Thus, when the noise has a gaussian distribution, the transmission error probability α is calculated from Eq. (10) using R_c' instead of R_c = k/n. Furthermore, if nα << 1, then R_c' ≈ k/n. But an error-detecting code has a larger k/n ratio than an error-correcting code of equivalent error-control power. Under these conditions, the more elaborate hardware needed for selective-repeat ARQ may pay off in terms of better performance than an FEC system would yield on the same channel.

The expression for \bar{m} in Eq. (16) also applies to a stop-and-wait ARQ system. However, the idle time reduces efficiency by the factor T_w/(T_w + D), where D is the round-trip delay and T_w is the word duration given by T_w = n/r ≈ k/r_b. Hence,

R_c' = \frac{k}{n} \cdot \frac{1-p}{1 + (D/T_w)} \le \frac{k}{n} \cdot \frac{1-p}{1 + (2 t_d r_b / k)}    (18)

in which the upper bound comes from writing D/T_w ≥ 2 t_d r_b/k.

A go-back-N ARQ system has no idle time, but N words must be retransmitted for each word with detected errors. Consequently, we find that

\bar{m} = 1 + \frac{Np}{1-p}    (19)

and

R_c' = \frac{k}{n} \cdot \frac{1-p}{1-p+Np} \le \frac{k}{n} \cdot \frac{1-p}{1-p+(2 t_d r_b/k)\,p}    (20)

where the upper bound reflects the fact that N ≥ 2t_d/T_w ≈ 2t_d r_b/k.

Unlike selective-repeat ARQ, the throughput efficiency of the stop-and-wait and go-back-N schemes depends on the round-trip delay. Equations (18) and (20) reveal that both of these schemes have reasonable efficiency if the delay and bit rate are such that 2 t_d r_b << k. However, stop-and-wait ARQ has very low efficiency when 2 t_d r_b ≥ k, whereas the go-back-N scheme may still be satisfactory provided that the retransmission probability p is small enough.

Finally, we should at least describe the concept of hybrid ARQ systems. These systems consist of an FEC subsystem within the ARQ framework, thereby combining desirable properties of both error-control strategies. For instance, a hybrid ARQ system might employ a block code with d_min = t + ℓ + 1, so the decoder can correct up to t errors per word and detect but not correct words with ℓ > t errors. Error correction reduces the number of words that must be retransmitted, thereby increasing the throughput without sacrificing the higher reliability of ARQ.
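The three throughput expressions are compared numerically below. The delay and bit-rate values are arbitrary assumptions chosen only to show how stop-and-wait collapses when 2 t_d r_b >> k (a sketch, not a design calculation):

```python
def selective_repeat(k, n, p):
    return (k / n) * (1 - p)                           # Eq. (17)

def stop_and_wait(k, n, p, td, rb):
    return (k / n) * (1 - p) / (1 + 2 * td * rb / k)   # Eq. (18) bound

def go_back_N(k, n, p, td, rb):
    N = 2 * td * rb / k                                # N ~ 2*td/Tw
    return (k / n) * (1 - p) / (1 - p + N * p)         # Eq. (20) bound

k, n, alpha = 11, 15, 1e-4
p = n * alpha                  # Eq. (15): word retransmission probability
td, rb = 5e-3, 1e6             # assumed 5 ms one-way delay, 1 Mb/s message rate
print(selective_repeat(k, n, p))       # ~0.73
print(go_back_N(k, n, p, td, rb))      # ~0.31: roughly 900 words per go-back
print(stop_and_wait(k, n, p, td, rb))  # ~0.0008: idle time dominates
```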


13.2 Linear Block Codes

This section describes the structure, properties, and implementation of block codes. We start with a matrix representation of the encoding process that generates the check bits for a given block of message bits. Then we use the matrix representation to investigate decoding methods for error detection and correction. The section closes with a brief introduction to the important class of cyclic block codes.

Matrix Representation of Block Codes

An (n,k) block code consists of n-bit vectors, each vector corresponding to a unique block of k < n message bits. Since there are 2^k different k-bit message blocks and 2^n possible n-bit vectors, the fundamental strategy of block coding is to choose the 2^k code vectors such that the minimum distance is as large as possible. But the code should also have some structure that facilitates the encoding and decoding processes. We will therefore focus on the class of systematic linear block codes.

Let an arbitrary code vector be represented by

X = (x_1\ x_2\ \cdots\ x_n)

where the elements x₁, x₂, ..., x_n are, of course, binary digits. A code is linear if it includes the all-zero vector and if the sum of any two code vectors produces another vector in the code. The sum of two vectors, say X and Z, is defined as

X + Z = (x_1 \oplus z_1\ \ x_2 \oplus z_2\ \cdots\ x_n \oplus z_n)    (1)

in which the elements are combined according to the rules of mod-2 addition given in Eq. (2), Sect. 11.4. As a consequence of linearity, we can determine a code's minimum distance by the following argument.

Let the number of nonzero elements of a vector X be symbolized by w(X), called the vector weight. The Hamming distance between any two code vectors X and Z is then

d(X, Z) = w(X + Z)

since x_i ⊕ z_i = 1 if x_i ≠ z_i, etc. The distance between X and Z therefore equals the weight of another code vector X + Z. But if Z = (0 0 ⋯ 0), then X + Z = X; hence,

d_{min} = [w(X)]_{min}, \qquad X \ne (0\ 0\ \cdots\ 0)    (2)

In other words, the minimum distance of a linear block code equals the smallest nonzero vector weight. A systematic block code consists of vectors whose first k elements (or last k elements) are identical to the message bits, the remaining n−k elements being check bits. A code vector then takes the form

X = (m_1\ m_2\ \cdots\ m_k\ c_1\ c_2\ \cdots\ c_q)    (3a)

where

q = n - k    (3b)

For convenience, we will also write code vectors in the partitioned notation

X = (M | C)

in which M is a k-bit message vector and C is a q-bit check vector. Partitioned notation lends itself to the matrix representation of block codes. Given a message vector M, the corresponding code vector X for a systematic linear (n,k) block code can be obtained by the matrix multiplication

X = M G    (4)

where G is a k × n generator matrix having the general structure

G = (I_k\ |\ P)    (5a)

where I_k is the k × k identity matrix and P is a k × q submatrix of binary digits represented by

P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1q} \\ p_{21} & p_{22} & \cdots & p_{2q} \\ \vdots & & & \vdots \\ p_{k1} & p_{k2} & \cdots & p_{kq} \end{bmatrix}    (5b)

The identity matrix in G simply reproduces the message vector for the first k elements of X, while the submatrix P generates the check vector via

C = M P    (6a)

Here, binary matrix multiplication follows the usual rules, with mod-2 addition instead of conventional addition. Hence, the jth element of C is computed using the jth column of P, and

c_j = m_1 p_{1j} \oplus m_2 p_{2j} \oplus \cdots \oplus m_k p_{kj}    (6b)

for j = 1, 2, ..., q. All of these matrix operations are less formidable than they appear because every element equals either 0 or 1.

The matrix representation of a block code provides a compact analytical vehicle and, moreover, leads to hardware implementations of the encoder and decoder. But it does not tell us how to pick the elements of the P submatrix to achieve specified code parameters such as d_min and R_c. Consequently, good codes are discovered with the help of considerable inspiration and perspiration, guided by mathematical analysis. In fact, Hamming (1950) devised the first popular block codes several years before the underlying theory was formalized by Slepian (1956).

Example 13.2-1 Hamming codes: A Hamming code is an (n,k) linear block code with q ≥ 3 check bits and

n = 2^q - 1 \qquad k = n - q    (7a)

The code rate is

R_c = \frac{k}{n} = 1 - \frac{q}{2^q - 1}    (7b)

so R_c ≈ 1 if q >> 1. Independent of q, the minimum distance is fixed at

d_{min} = 3    (7c)

A Hamming code can therefore be used for single-error correction or double-error detection. To construct a systematic Hamming code, you simply let the k rows of the P submatrix consist of all q-bit words with two or more 1s, arranged in any order.

For example, consider a systematic Hamming code with q = 3, so n = 2³ − 1 = 7 and k = 7 − 3 = 4. According to the previously stated rule, an appropriate generator matrix is

G = \begin{bmatrix} 1&0&0&0&\ 1&0&1 \\ 0&1&0&0&\ 1&1&1 \\ 0&0&1&0&\ 1&1&0 \\ 0&0&0&1&\ 0&1&1 \end{bmatrix}

The last three columns constitute the P submatrix, whose rows include all 3-bit words that have two or more 1s. Given a block of message bits M = (m₁ m₂ m₃ m₄), the check-bit equations are obtained by substituting the elements of P into Eq. (6).

[Figure 13.2-1: Encoder for the (7,4) Hamming code: input buffer, message register, exclusive-OR gates forming the check bits, and a check-bit register feeding the transmitter.]

Figure 13.2-1 depicts an encoder that carries out the check-bit calculations for this (7,4) Hamming code. Each block of message bits going to the transmitter is also loaded into a message register. The cells of the message register are connected to exclusive-OR gates whose outputs equal the check bits.


The check bits are stored in another register and shifted out to the transmitter after the message bits. An input buffer holds the next block of message bits while the check bits are shifted out. The cycle then repeats with the next block of message bits.

Table 13.2-1 lists the resulting 2⁴ = 16 codewords and their weights. The smallest nonzero weight equals 3, confirming that d_min = 3.

Table 13.2-1 Codewords for the (7,4) Hamming code

M     C    w(X)  |  M     C    w(X)
0000  000   0    |  1000  101   3
0001  011   3    |  1001  110   4
0010  110   3    |  1010  011   4
0011  101   4    |  1011  000   3
0100  111   4    |  1100  010   3
0101  100   3    |  1101  001   4
0110  001   3    |  1110  100   4
0111  010   4    |  1111  111   7

Writing out the check-bit equations and tabulating the codewords and their weights in this manner shows that d_min = 3.
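Table 13.2-1 can be regenerated directly from Eq. (6a). A short Python sketch (NumPy handles the mod-2 matrix product; the P submatrix is the one given above):

```python
import numpy as np
from itertools import product

P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])               # rows of P from the generator matrix G

codewords = []
for M in product([0, 1], repeat=4):
    C = np.array(M) @ P % 2             # Eq. (6a): C = MP in mod-2 arithmetic
    codewords.append(np.concatenate([M, C]))

weights = [int(x.sum()) for x in codewords]
print(min(w for w in weights if w > 0))   # 3 = d_min, confirming Eq. (2)
```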

Syndrome Decoding

Now let Y stand for the received vector when a particular code vector X has been transmitted. Any transmission errors will result in Y ≠ X. The decoder detects or corrects errors in Y using stored information about the code.

A direct way of performing error detection would be to compare Y with every vector in the code. This method requires storing all 2^k code vectors at the receiver and performing up to 2^k comparisons. But efficient codes generally have large values of k, which implies rather extensive and expensive decoding hardware. As an example, you need q ≥ 5 to get R_c ≥ 0.8 with a Hamming code; then n ≥ 31, k ≥ 26, and the receiver must store a total of n × 2^k > 10⁹ bits!

More practical decoding methods for codes with large k involve parity-check information derived from the code's P submatrix. Associated with any systematic linear (n,k) block code is a q × n matrix H called the parity-check matrix. This matrix is defined by

H^T = \begin{bmatrix} P \\ I_q \end{bmatrix}    (8)

where H^T denotes the transpose of H and I_q is the q × q identity matrix. Relative to error detection, the parity-check matrix has the crucial property


X H^T = (0\ 0\ \cdots\ 0)    (9)

provided that X belongs to the set of code vectors. However, when Y is not a code vector, the product Y H^T contains at least one nonzero element.

Therefore, given H^T and a received vector Y, error detection can be based on the q-bit vector

S = Y H^T    (10)

called the syndrome. If all elements of S equal zero, then either Y equals the transmitted vector X and there are no transmission errors, or Y equals some other code vector and the transmission errors are undetectable. Otherwise, errors are indicated by the presence of nonzero elements in S. Thus, a decoder for error detection simply takes the form of a syndrome calculator. A comparison of Eqs. (10) and (6) shows that the hardware needed is essentially the same as the encoding circuit.

Error correction necessarily entails more circuitry but it, too, can be based on the syndrome. We develop the decoding method by introducing an n-bit error vector E whose nonzero elements mark the positions of transmission errors in Y. For instance, if X = (1 0 1 1 0) and Y = (1 0 0 1 1), then E = (0 0 1 0 1). In general,

Y = X + E    (11a)

and conversely

X = Y + E    (11b)

since a second error in the same bit location would cancel the original error. Substituting Y = X + E into S = Y H^T and invoking Eq. (9), we obtain

S = E H^T    (12)

which reveals that the syndrome depends entirely on the error pattern, not the specific transmitted vector.

However, there are only 2^q different syndromes generated by the 2^n possible n-bit error vectors, including the no-error case. Consequently, a given syndrome does not uniquely determine E. Or, putting this another way, we can correct just 2^q − 1 patterns with one or more errors, and the remaining patterns are uncorrectable. We should therefore design the decoder to correct the most likely error patterns, namely those patterns with the fewest errors, since single errors are more probable than double errors, and so forth. This strategy, known as maximum-likelihood decoding, is optimum in the sense that it minimizes the word error probability. Maximum-likelihood decoding corresponds to choosing the code vector that has the smallest Hamming distance from the received vector.


To carry out maximum-likelihood decoding, you must first compute the syndromes generated by the 2^q − 1 most probable error vectors. The table-lookup decoder diagrammed in Fig. 13.2-2 then operates as follows. The decoder calculates S from the received vector Y and looks up the assumed error vector Ê stored in the table. The sum Y + Ê generated by exclusive-OR gates finally constitutes the decoded word. If there are no errors, or if the errors are uncorrectable, then S = (0 0 ⋯ 0) and Y + Ê = Y. The check bits in the last q elements of Y + Ê may be omitted if they are of no further interest.

The relationship between syndromes and error patterns also sheds some light on the design of error-correcting codes, since each of the 2^q − 1 nonzero syndromes must represent a specific error pattern. Now there are n single-error patterns for an n-bit word, n(n−1)/2 double-error patterns, and so forth. Hence, if a code is to correct up to t errors per word, q and n must satisfy

2^q - 1 \ge \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{t}    (13)

In the particular case of a single-error-correcting code, Eq. (13) reduces to 2^q − 1 ≥ n. Furthermore, when E corresponds to a single error in the jth bit of a codeword, we see from Eq. (12) that S is identical to the jth row of H^T. Therefore, to provide a distinct syndrome for each single-error pattern and for the no-error pattern, the rows of H^T (or the columns of H) must all be different and each must contain at least one nonzero element. The generator matrix of a Hamming code is designed to satisfy these requirements on H, while q and n satisfy 2^q − 1 = n.

Example 13.2-2 Let's apply table-lookup decoding to a (7,4) Hamming code used for single-error correction. From Eq. (8) and the P submatrix given in Example 13.2-1, we obtain the 3 × 7 parity-check matrix

H = (P^T\ |\ I_q) = \begin{bmatrix} 1&1&1&0&\ 1&0&0 \\ 0&1&1&1&\ 0&1&0 \\ 1&1&0&1&\ 0&0&1 \end{bmatrix}

There are 2³ − 1 = 7 correctable single-error patterns, and the corresponding syndromes listed in Table 13.2-2 follow directly from the columns of H. To accommodate the table, the decoder needs to store only (q + n) × 2^q = 80 bits.

Table 13.2-2 Syndromes for the (7,4) Hamming code

S    E
000  0000000
101  1000000
111  0100000
110  0010000
011  0001000
100  0000100
010  0000010
001  0000001

But suppose a received word happens to have two errors, such that E = (1 0 0 0 0 1 0). The decoder calculates S = Y H^T = E H^T = (1 1 1), and the table gives the assumed single-error pattern Ê = (0 1 0 0 0 0 0). The decoded output word Y + Ê therefore contains three errors: the two transmission errors plus the erroneous correction added by the decoder. If multiple transmission errors per word are sufficiently infrequent, we need not be concerned about the occasional extra errors committed by the decoder. If multiple errors are frequent, a more powerful code would be required. For example, an extended Hamming code has an additional check bit that provides double-error detection along with single-error correction; see Prob. 13.2-12.
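The table-lookup decoder of Fig. 13.2-2, specialized to this (7,4) code, fits in a short Python sketch (our own illustration; the syndrome table is built from the columns of H, exactly as described above):

```python
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# The syndrome of a single error in bit j is the jth column of H.
table = {tuple(H[:, j]): j for j in range(7)}

def decode(Y):
    """Correct at most one error by syndrome table lookup, then strip check bits."""
    S = tuple(Y @ H.T % 2)               # Eq. (10): S = Y H^T
    Yhat = Y.copy()
    if any(S):
        Yhat[table[S]] ^= 1              # flip the assumed error position
    return Yhat[:4]

X = np.array([1, 0, 1, 1, 0, 0, 0])      # codeword for M = 1011 (see Table 13.2-1)
Y = X.copy(); Y[5] ^= 1                  # one transmission error
print(decode(Y))                         # [1 0 1 1] recovered
```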


Exercise 13.2-2 Use Eqs. (8) and (10) to show that the jth bit of S is given by

s_j = y_1 p_{1j} \oplus y_2 p_{2j} \oplus \cdots \oplus y_k p_{kj} \oplus y_{k+j}

Then diagram the syndrome-calculation circuit for a (7,4) Hamming code, and compare it with Fig. 13.2-1.

Cyclic Codes

The code for a forward-error-correction system must be capable of correcting t ≥ 1 errors per word. It should also have a reasonably efficient code rate R_c. These two parameters are related by the inequality

2^{n(1-R_c)} - 1 \ge \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{t}    (14)

which follows from Eq. (13) with q = n − k = n(1 − R_c). This inequality underscores the fact that if we want R_c ≈ 1, we must use codewords with n >> 1 and k >> 1. However, the hardware requirements for encoding and decoding long codewords may be prohibitive unless we impose further structural conditions on the code. Cyclic codes are a subclass of linear block codes with a cyclic structure that leads to more practical implementation. Thus, block codes used in FEC systems are almost always cyclic codes.

To describe a cyclic code, we will find it helpful to change our indexing scheme and express an arbitrary n-bit code vector in the form

X = (x_{n-1}\ x_{n-2}\ \cdots\ x_1\ x_0)    (15)

Now suppose that X has been loaded into a shift register with a feedback connection from the first to the last stage. Shifting all bits one position to the left yields the cyclic shift of X, written as

X' = (x_{n-2}\ x_{n-3}\ \cdots\ x_1\ x_0\ x_{n-1})    (16)

A second shift produces X'' = (x_{n-3} ⋯ x₁ x₀ x_{n-1} x_{n-2}), and so forth. A linear code is cyclic if every cyclic shift of a code vector X is another vector in the code. This cyclic property can be treated mathematically by associating a code vector X with the polynomial

X(p) = x_{n-1} p^{n-1} + x_{n-2} p^{n-2} + \cdots + x_1 p + x_0    (17)

where p is an arbitrary real variable. The powers of p denote the positions of the codeword bits represented by the corresponding coefficients of p. Formally, binary code polynomials are defined in conjunction with Galois fields, a branch of modern algebra that provides the theory needed for a complete treatment of cyclic codes. For our informal overview of cyclic codes, we will manipulate code polynomials using ordinary algebra modified in two respects.

First, to be in agreement with our definition for the sum of two code vectors, the sum of two polynomials is obtained by mod-2 addition of their respective coefficients. Second, since all coefficients are either 0 or 1, and since 1 ⊕ 1 = 0, the subtraction operation is the same as mod-2 addition. Consequently, if X(p) + Z(p) = 0, then X(p) = Z(p). We develop the polynomial interpretation of cyclic shifting by comparing

p X(p) = x_{n-1} p^n + x_{n-2} p^{n-1} + \cdots + x_1 p^2 + x_0 p

with the shifted polynomial

X'(p) = x_{n-2} p^{n-1} + \cdots + x_1 p^2 + x_0 p + x_{n-1}

Summing these polynomials, noting that (x₁ + x₁)p² = 0, etc., we get

p X(p) + X'(p) = x_{n-1} p^n + x_{n-1} = x_{n-1}(p^n + 1)    (18)

Iteration yields similar expressions for multiple shifts. The polynomial p^n + 1 and its factors play major roles in cyclic codes.

Specifically, an (n,k) cyclic code is defined by a generator polynomial of the form

G(p) = p^q + g_{q-1} p^{q-1} + \cdots + g_1 p + 1    (19)

where q = n − k and the coefficients are such that G(p) is a factor of p^n + 1. Each codeword then corresponds to the polynomial product

X(p) = Q_M(p)\, G(p)    (20)

in which Q_M(p) represents a block of k message bits. All such codewords satisfy the cyclic-shift condition in Eq. (18), since G(p) is a factor of both X(p) and p^n + 1. Any factor of p^n + 1 that has degree q may serve as the generator polynomial for a cyclic code, but it does not necessarily generate a good code. Table 13.2-3 lists the generator polynomials of selected cyclic codes that have been demonstrated to possess desirable parameters for FEC systems. The table includes some cyclic Hamming codes, the famous Golay code, and a few members of the important family of BCH codes discovered by Bose, Chaudhuri, and Hocquenghem. The entries under G(p) denote the polynomial's coefficients; thus, for instance, 1 0 1 1 means that G(p) = p³ + 0 + p + 1.


Table 13.2-3 Selected cyclic codes

Type      n   k   R_c   d_min  G(p)
Hamming   7   4   0.57    3    1 011
Hamming  15  11   0.73    3    10 011
Hamming  31  26   0.84    3    100 101
BCH      15   7   0.46    5    111 010 001
BCH      31  21   0.68    5    11 101 101 001
BCH      63  45   0.71    7    1 111 000 001 011 001 111
Golay    23  12   0.52    7    101 011 100 011

A cyclic code may be systematic or nonsystematic, depending on the term Q_M(p) in Eq. (20). For a systematic code, we define the message-bit and check-bit polynomials

M(p) = m_{k-1} p^{k-1} + \cdots + m_1 p + m_0
C(p) = c_{q-1} p^{q-1} + \cdots + c_1 p + c_0

and we want the codeword polynomials to be

X(p) = p^q M(p) + C(p)    (21)

Equations (20) and (21) therefore require p^q M(p) + C(p) = Q_M(p) G(p), or

\frac{p^q M(p)}{G(p)} = Q_M(p) + \frac{C(p)}{G(p)}    (22a)

This expression says that C(p) equals the remainder left over after dividing p^q M(p) by G(p), just as 14 divided by 3 leaves a remainder of 2, since 14/3 = 4 + 2/3. Symbolically, we write

C(p) = \mathrm{rem}\!\left[\frac{p^q M(p)}{G(p)}\right]    (22b)

where rem[·] stands for the remainder of the division within the brackets.

The division operation needed to generate a systematic cyclic code is easily and efficiently performed by the shift-register encoder diagrammed in Fig. 13.2-3.

[Figure 13.2-3: Shift-register encoder for a systematic cyclic code.]


Encoding starts with the feedback switch closed, the output switch in the message-bit position, and the register initialized to the all-zero state. The k message bits are shifted into the register and simultaneously delivered to the transmitter. After k shift cycles, the register contains the q check bits. The feedback switch is now opened, and the output switch is moved to deliver the check bits to the transmitter.

Syndrome calculation at the receiver is equally simple. Given a received vector Y, the syndrome polynomial is determined from

S(p) = \mathrm{rem}\!\left[\frac{Y(p)}{G(p)}\right]    (23)

If Y(p) is a valid code polynomial, then G(p) will be a factor of Y(p) and Y(p)/G(p) has zero remainder. Otherwise we get a nonzero syndrome polynomial, indicating detected errors.

Besides simplified encoding and syndrome calculation, cyclic codes have other advantages over noncyclic block codes. The foremost advantage comes from the ingenious error-correcting decoding methods that have been devised for specific cyclic codes. These methods eliminate the storage needed for table-lookup decoding and thus make it practical to use powerful and efficient codes with n >> 1. Another advantage is the ability of cyclic codes to detect error bursts that span many successive bits. Detailed expositions of these properties are presented in texts such as Lin and Costello (1983).

Example 13.2-3 Consider the cyclic (7,4) Hamming code generated by G(p) = p³ + 0 + p + 1. We will use long division to calculate the check-bit polynomial C(p) when M = (1 1 0 0). We first write the message-bit polynomial M(p) = p³ + p² + 0 + 0, so p^q M(p) = p⁶ + p⁵ + 0 + 0 + 0 + 0 + 0. Next, we divide G(p) into p^q M(p), keeping in mind that subtraction is the same as addition in mod-2 arithmetic. Thus,

                      Q_M(p) = p³ + p² + p + 0
p³ + 0 + p + 1 ) p⁶ + p⁵ + 0  + 0  + 0 + 0 + 0
                 p⁶ + 0  + p⁴ + p³
                 ─────────────────
                      p⁵ + p⁴ + p³ + 0
                      p⁵ + 0  + p³ + p²
                      ─────────────────
                           p⁴ + 0 + p² + 0
                           p⁴ + 0 + p² + p
                           ───────────────
                                0 + 0 + p + 0

so

C(p) = 0 + p + 0

and the complete code polynomial is


X(p) = p^3 M(p) + C(p) = p^6 + p^5 + 0 + 0 + 0 + p + 0

[Figure 13.2-4(a): Shift-register encoder for the (7,4) Hamming code.]

(b) Register bits when M = (1 1 0 0):

Input bit m | r₂ r₁ r₀ before shift | r₂ r₁ r₀ after shift
     1      |        0 0 0          |        0 1 1
     1      |        0 1 1          |        1 0 1
     0      |        1 0 1          |        0 0 1
     0      |        0 0 1          |        0 1 0

where each shift sets r₂ ← r₁, r₁ ← r₀ ⊕ (r₂ ⊕ m), and r₀ ← r₂ ⊕ m.

which corresponds to the codeword

X = (1 1 0 0 0 1 0)

You will find this codeword back in Table 13.2-1, where you will also find the cyclic shift X' = (1 0 0 0 1 0 1) and all multiple shifts. Finally, Fig. 13.2-4 shows the shift-register encoder and the register bits for each cycle of the encoding process when the input is M = (1 1 0 0). After four shift cycles, the register holds C = (0 1 0), in agreement with our manual division.

Exercise 13.2-3 Let Y(p) = X(p) + E(p), where E(p) is the error polynomial. Use Eqs. (20) and (23) to show that the syndrome polynomial S(p) depends on E(p) but not on X(p).

CONVOLUTIONAL CODES

Convolutional codes have a structure that effectively extends over the entire transmitted bit stream, rather than being limited to codeword blocks. The convolutional structure is especially well suited to space and satellite communication systems that require simple encoders and achieve high performance by sophisticated decoding methods. Our treatment of this important family of codes consists of selected examples that introduce the salient features of convolutional encoding and decoding.

Convolutional Encoding

The fundamental hardware unit for convolutional encoding is a tapped shift register with L + 1 stages, as diagrammed in Fig. 13.3-1. Each tap gain g_i is a binary digit representing a short-circuit connection or an open circuit. The message bits in the register are combined by mod-2 addition to form the encoded bit

x_j = m_j g_0 \oplus m_{j-1} g_1 \oplus \cdots \oplus m_{j-L} g_L = \sum_{i=0}^{L} m_{j-i}\, g_i \quad (\text{mod-}2)    (1)

The name convolutional encoding comes from the fact that Eq. (1) has the form of a binary convolution, analogous to the convolution integral

x(t) = \int m(\lambda)\, g(t - \lambda)\, d\lambda

Notice that x_j depends on the current input m_j and on the state of the register defined by the previous L message bits. Also notice that a particular message bit influences a span of L + 1 successive encoded bits as it shifts through the register.

To provide the extra bits needed for error control, a complete convolutional encoder must generate output bits at a rate greater than the message bit rate r_b. This is achieved by connecting two or more mod-2 summers to the register and interleaving the encoded bits via a commutator switch. For example, the encoder in Fig. 13.3-2 generates n = 2 encoded bits

x'_j = m_j \oplus m_{j-1} \oplus m_{j-2} \qquad x''_j = m_j \oplus m_{j-2}    (2)

which are interleaved by the switch to produce the output stream

X = x'_1\, x''_1\ x'_2\, x''_2\ x'_3\, x''_3 \cdots

The output bit rate is therefore 2r_b and the code rate is R_c = 1/2, like an (n,k) block code with R_c = k/n = 1/2.

However, unlike a block code, the input bits have not been grouped into words. Instead, each message bit influences a span of n(L+1) = 6 successive output bits. The quantity n(L+1) is called the constraint length, measured in terms of encoded output bits, whereas L is the encoder's memory, measured in terms of input message bits. We say that this encoder produces an (n,k,L) convolutional code with n = 2, k = 1, and L = 2.

[Figure 13.3-1: Tapped shift register for convolutional encoding.]
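In software, the tapped register reduces to a short loop. A Python sketch of the (2,1,2) encoder of Eq. (2) (function name ours):

```python
def conv_encode(msg):
    """(2,1,2) encoder: x' = m ^ m1 ^ m2, x'' = m ^ m2, register cleared to 00."""
    m1 = m2 = 0                          # previous two message bits (the state)
    out = []
    for m in msg:
        out += [m ^ m1 ^ m2, m ^ m2]     # interleaved x'_j, x''_j
        m2, m1 = m1, m                   # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0]))
# -> 11 01 01 00 01 10 01 11 11 10 11 00, the sequence traced in Fig. 13.3-4c below
```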

Three different but related graphical representations have been devised for the study of convolutional encoding: the code tree, the code trellis, and the state diagram. We will present each of these for our (2,1,2) encoder in Fig. 13.3-2, starting with the code tree. In accordance with normal operating procedure, we presume that the register has been cleared to contain all 0s when the first message bit m₁ arrives. Hence, the initial state is m₋₁m₀ = 00, and Eq. (2) gives the output x'₁x''₁ = 00 if m₁ = 0, or x'₁x''₁ = 11 if m₁ = 1. The code tree drawn in Fig. 13.3-3 begins at a branch point


or node labeled a representing the initial state. If m₁ = 0, you take the upper branch from node a to find the output 00 and the next state, which is also labeled a, since m₀m₁ = 00 in this case. If m₁ = 1, you take the lower branch from a to find the output 11 and the next state m₀m₁ = 01, signified by the label b. The code tree progressively evolves in this fashion for each new input bit. Nodes are labeled with letters denoting the current state m_{j−2}m_{j−1}; you go up or down from a node, depending on the value of m_j; each branch shows the resulting encoded output x'_j x''_j calculated from Eq. (2), and it terminates at another node labeled with the next state. There are 2^j possible branches for the jth message bit, but the branching pattern begins to repeat at j = 3, since the register length is L + 1 = 3.

Having observed repetition in the code tree, we can construct a more compact picture called the code trellis, shown in Fig. 13.3-4a. Here, the nodes on the left denote the four possible current states, while those on the right are the resulting next states. A solid line represents the state transition or branch for m_j = 0, and a broken line represents the branch for m_j = 1. Each branch is labeled with the resulting output bits x'_j x''_j. Going one step further, we coalesce the left and right sides of the trellis to obtain the state diagram in Fig. 13.3-4b. The self-loops at nodes a and d represent the state transitions a-a and d-d.

Given a sequence of message bits and the initial state, you can use either the code trellis or the state diagram to find the resulting state sequence and output bits. The procedure is illustrated in Fig. 13.3-4c, starting at initial state a.

[Figure 13.3-3: Code tree for the (2,1,2) encoder, with states a = 00, b = 01, c = 10, d = 11.]


Input:  1  1  0  1  1  1  0  0  1  0  0  0
State:  a  b  d  c  b  d  d  c  a  b  c  a  a
Output: 11 01 01 00 01 10 01 11 11 10 11 00

[Figure 13.3-4: (a) Code trellis; (b) state diagram for the (2,1,2) encoder; (c) the encoding example tabulated above.]

Numerous other convolutional codes are obtained by modifying the encoder in Fig. 13.3-2. If we just change the connections to the mod-2 summers, then the code tree, trellis, and state diagram retain the same structure, since the state and branching pattern reflect only the register contents. The output bits would be different, of course, since they depend specifically on the summer connections.

If we extend the shift register to an arbitrary length L + 1 and connect it to n ≥ 2 mod-2 summers, we get an (n,k,L) convolutional code with k = 1 and code rate R_c = 1/n ≤ 1/2. The state of the encoder is defined by the L previous input bits, so the code trellis and state diagram have 2^L different states, and the code-tree pattern repeats after L + 1 branches. Connecting one commutator terminal directly to the first stage of the register yields the encoded bit stream

X = m_1\, x'_1\, x''_1 \cdots\ m_2\, x'_2\, x''_2 \cdots\ m_3\, x'_3\, x''_3 \cdots

which defines a systematic convolutional code with R_c = 1/n.

Code rates higher than 1/n require k ≥ 2 shift registers and an input distributor switch. This scheme is illustrated by the (3,2,1) encoder in Fig. 13.3-5. The message bits are distributed alternately between k = 2 registers, each of length L + 1 = 2. We regard the pair of bits m_{j−1} m_j as the current input, while the pair m_{j−3} m_{j−2} constitutes the state of the encoder. For each input pair, the mod-2 summers generate n = 3 encoded output bits given by

x'_j = m_{j-3} \oplus m_{j-2} \oplus m_j \qquad x''_j = m_{j-3} \oplus m_{j-1} \oplus m_j \qquad x'''_j = m_{j-2} \oplus m_j    (4)

Thus, the output bit rate is 3r_b/2, corresponding to the code rate R_c = k/n = 2/3. The constraint length is n(L+1) = 6, since a particular input bit influences a span of n = 3 output bits from each of its L + 1 = 2 register positions.


[Figure 13.3-5: (3,2,1) encoder with input distributor and output bit rate 3r_b/2.]

Graphical representation becomes more cumbersome for convolutional codes with k > 1 because we must deal with input bits in groups of 2^k. Consequently, 2^k branches emanate from and terminate at each node, and there are 2^{kL} different states. As an example, Fig. 13.3-6 shows the state diagram for the (3,2,1) encoder in Fig. 13.3-5. The branches are labeled with the k = 2 input bits followed by the resulting n = 3 output bits.

The convolutional codes employed for FEC systems usually have small values of n and k, while the constraint length typically falls in the range of 10 to 30. All convolutional encoders require a commutator switch at the output, as shown in Figs. 13.3-2 and 13.3-5. For codes with k > 1, the input distributor switch can be eliminated by using a single register of length kL and shifting the bits in groups of k. In any case, convolutional encoding hardware is simpler than the hardware for block encoding, since message bits enter the register unit at a steady rate r_b and an input buffer is not needed.

Exercise 13.3-1 Consider a systematic (3,1,3) convolutional code. List the possible states and determine the state transitions produced by m_j = 0 and m_j = 1. Then construct and label the state diagram, taking the encoded output bits to be m_j, m_{j−2} ⊕ m_j, and m_{j−3} ⊕ m_{j−1}. (See Fig. P13.3-4 for an eight-state pattern.)

Free Distance and Coding Gain

We previously found that the error-control power of a block code depends upon its minimum distance, determined from the weights of the codewords. A convolutional code does not subdivide into codewords, so we consider instead the weight w(X) of an entire transmitted sequence X generated by some message sequence. The free distance of a convolutional code is then defined to be

d_f = [w(X)]_{min}, \qquad X \ne (0\ 0\ 0\ \cdots)    (5)


The value of d_f serves as a measure of error-control power. It would be an exceedingly dull and tiresome task to try to evaluate d_f by listing all possible transmitted sequences. Fortunately, there's a better way, based on the normal operating procedure of appending a "tail" of 0s at the end of a message to clear the register unit and return the encoder to its initial state. This procedure eliminates certain branches from the code trellis for the last L transitions.

Take the code trellis in Fig. 13.3-4a, for example. To end up at state a, the next-to-last state must be either a or c, so the last few branches of any transmitted sequence X must follow one of the paths shown in Fig. 13.3-7. Here the final state is denoted by e, and each branch has been labeled with the number of 1s in the encoded bits, which equals the weight associated with that branch. The total weight of a transmitted sequence X equals the sum of the branch weights along the path of X. In accordance with Eq. (5), we seek the path that has the smallest branch-weight sum, other than the trivial all-zero path.

Looking backwards L + 1 = 3 branches from e, we locate the last path that originates from state a before terminating at e. Now suppose all earlier transitions followed the all-zero path along the top line, giving the state sequence aa...abce. Since an a-a branch has weight 0, this state sequence corresponds to a minimum-weight nontrivial path. We therefore conclude that d_f = 0 + 0 + ⋯ + 0 + 2 + 1 + 2 = 5. There are other minimum-weight paths, such as aa...abcae, but no nontrivial path has less weight than d_f = 5.
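The search for the minimum-weight nontrivial path can be automated. The sketch below (our own illustration) runs Dijkstra's algorithm over the branch weights of the (2,1,2) state diagram, forcing an initial departure from state a and stopping at the first return:

```python
import heapq

# state -> [(next_state, branch_weight)] for inputs 0 and 1, weights as in Fig. 13.3-7
edges = {'a': [('a', 0), ('b', 2)],
         'b': [('c', 1), ('d', 1)],
         'c': [('a', 2), ('b', 0)],
         'd': [('c', 1), ('d', 1)]}

def free_distance():
    """Weight of the lightest path a -> ... -> a that leaves a (skips the a-a loop)."""
    heap = [(w, s) for s, w in edges['a'] if s != 'a']
    best = {}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 'a':
            return w                     # first return to a has minimum weight
        if s in best and best[s] <= w:
            continue
        best[s] = w
        for nxt, bw in edges[s]:
            heapq.heappush(heap, (w + bw, nxt))

print(free_distance())                   # 5, in agreement with the trellis argument
```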

Another approach to the calculation of free distance involves the generating function of a convolutional code. The generating function may be viewed as the transfer function of the encoder with respect to state transitions. Thus, instead of relating the input and output bit streams by convolution, it relates the initial and final states by multiplication. Generating functions provide important information about code performance, including the free distance and the decoding error probability.

We will develop the generating function for our (2,1,2) encoder using the modified state diagram in Fig. 13.3-8a. This diagram has been derived from Fig. 13.3-4b with four modifications.

[Figure 13.3-8: (a) Modified state diagram for the (2,1,2) encoder; (b) equivalent block diagram with output T(D,I) W_a.]

First, we have eliminated the a-a loop, which contributes nothing to the weight of a sequence X. Second, we have drawn the c-a branch as the final c-e transition. Third, we have assigned a state variable W_a at node a, and likewise at all other nodes. Fourth, we have labeled each branch with two "gain" variables D and I, such that the exponent of D equals the branch weight (as in Fig. 13.3-7), while the exponent of I equals the corresponding number of nonzero message bits (as signified by the solid or dashed branch line). For instance, since the c-e branch represents x'_j x''_j = 11 and m_j = 0, it is labeled with D²I⁰ = D². This exponential trick allows us to perform sums by multiplying the D and I terms, which will become the independent variables of the generating function.

Our modified state diagram now looks like a signal-flow graph of the type sometimes used to analyze feedback systems. Specifically, if we treat the nodes as summing junctions and the DI terms as branch gains, then Fig. 13.3-8a represents the set of algebraic state equations

W_b = D^2 I\, W_a + I\, W_c \qquad W_c = D\, W_b + D\, W_d
W_d = DI\, W_b + DI\, W_d \qquad W_e = D^2\, W_c    (6a)

The encoder's generating function T(D,I) can now be defined by the input-output equation

T(D,I) = W_e / W_a    (6b)

These equations are also equivalent to the block diagram in Fig. 13.3-8b, which further emphasizes the relationship between the state variables, the branch gains, and the generating function. Note that minus signs have been introduced here so that the two feedback paths c-b and d-d correspond to negative feedback.

Next, the expression for T(D,I) is obtained by algebraic solution of Eq. (6), or by block-diagram reduction of Fig. 13.3-8b using the transfer-function relations for parallel, cascade, and feedback connections. (If you know Mason's rule, you could also apply it to Fig. 13.3-8a.) Any of these methods produces the final result

T(D,I) = \frac{D^5 I}{1 - 2DI}    (7a)
       = D^5 I + 2 D^6 I^2 + \cdots + 2^{d-5} D^d I^{d-4} + \cdots    (7b)


where we have expanded 1/(1 − 2DI) to get the series in Eq. (7b). Keeping in mind that T(D,I) represents all possible transmitted sequences that terminate with a c-e transition, Eq. (7b) has the following interpretation: for any d ≥ 5, there are exactly 2^{d−5} valid paths with weight w(X) = d, and they are generated by messages containing d − 4 nonzero bits. The smallest value of w(X) is the free distance, so we again conclude that d_f = 5.

As a generalization of Eq. (7), the generating function for an arbitrary convolutional code takes the form

T(D,I) = \sum_{d=d_f}^{\infty} \sum_{i} A(d,i)\, D^d I^i    (8)

Here, A(d,i) denotes the number of different input-output paths through the modified state diagram that have weight d and are generated by messages containing i nonzero bits.

Now consider a received sequence Y = X + E, where E represents transmission errors. The path of Y then diverges from the path of X and may or may not be a valid path for the code in question. When Y does not correspond to a valid path, a maximum-likelihood decoder should seek out the valid path that has the smallest Hamming distance from Y. Before describing how such a decoder might be implemented, we will state the relationship between generating functions, free distance, and error probability in maximum-likelihood decoding of convolutional codes.

If transmission errors occur with equal and independent probability α per bit, then the probability of a decoded message-bit error is upper-bounded by

P_{be} \le \frac{1}{k} \left. \frac{\partial T(D,I)}{\partial I} \right|_{D = 2\sqrt{\alpha(1-\alpha)},\ I=1}    (9)

The derivation of this bound is given in Lin and Costello (1983, chap. 11) or Viterbi and Omura (1979, chap. 4). When α is sufficiently small, series expansion of T(D,I) yields the approximation

Pbe ≈ (M(df)/k) 2^df α^(df/2),    α << 1        (10)

where

M(df) = Σ (i≥1) i A(df, i)

The quantity M(df) simply equals the total number of nonzero message bits summed over all minimum-weight input-output paths in the modified state diagram.

Equation (10) supports our earlier assertion that the error-control power of a convolutional code depends upon its free distance. For a performance comparison with uncoded transmission we will make the usual assumption of


Gaussian white noise and (S/N)R = 2 Rc γb ≥ 10, so Eq. (10), Sect. 13.1, gives the transmission error probability

α ≈ e^(-Rc γb) / (2√(π Rc γb))

The decoded error probability then becomes

Pbe ≈ M(df) 2^df e^(-(Rc df/2) γb) / [k (4π Rc γb)^(df/4)]        (11)

whereas uncoded transmission would yield

Pube ≈ e^(-γb) / (4π γb)^(1/2)        (12)

Since the exponential terms dominate in these expressions, we see that convolutional coding improves reliability when Rc df/2 > 1. Accordingly, the quantity Rc df/2 is known as the coding gain, usually expressed in dB.

Explicit design formulas for df do not exist, unfortunately, so good convolutional codes must be discovered by computer search and simulation. Table 13.3-1 lists the maximum free distance and coding gain of convolutional codes for selected values of n, k, and L. Observe that the free distance and coding gain increase with increasing memory L when the code rate Rc is held fixed. All listed codes are nonsystematic; a systematic convolutional code has a smaller df than an optimum nonsystematic code with the same rate and memory.

Table 13.3-1  Maximum free distance and coding gain of selected convolutional codes

n   k   Rc    L   df   Rc df/2
4   1   1/4   3   13   1.63
3   1   1/3   3   10   1.67
2   1   1/2   3    6   1.50
2   1   1/2   6   10   2.50
2   1   1/2   9   12   3.00
3   2   2/3   3    7   2.33
4   3   3/4   3    8   3.00

Example 13.3-1  The (2,1,2) encoder back in Fig. 13.3-2 has T(D,I) = D^5 I/(1 - 2DI), so ∂T(D,I)/∂I = D^5/(1 - 2DI)^2. Equation (9) therefore gives

Pbe ≤ 2^5 [α(1-α)]^(5/2) / [1 - 4√(α(1-α))]^2 ≈ 2^5 α^(5/2)

and the small-α approximation agrees with Eq. (10). Specifically, in Fig. 13.3-8a we find just one minimum-weight nontrivial path abce, which has w(X) = 5 = df and is generated by a message containing one nonzero bit, so M(df) = 1. If γb = 10, then Rc


= 1/2, α ≈ 8.5×10^-4, and maximum-likelihood decoding yields Pbe ≈ 6.7×10^-7, as compared with Pube ≈ 4.1×10^-6. This rather small reliability improvement agrees with the small coding gain Rc df/2 = 5/4.
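The numbers quoted in Example 13.3-1 can be reproduced with a few lines of Python (a sketch of ours, not part of the original text; it simply evaluates the approximations of Eqs. (10)-(12)):

import math

gamma_b = 10.0                      # energy ratio per message bit
Rc, df, M, k = 0.5, 5, 1, 1         # (2,1,2) code parameters from the example

# Transmission error probability per bit, Gaussian-tail approximation
alpha = math.exp(-Rc*gamma_b) / (2*math.sqrt(math.pi*Rc*gamma_b))
Pbe  = (M/k) * 2**df * alpha**(df/2)                        # Eq. (10)
Pube = math.exp(-gamma_b) / math.sqrt(4*math.pi*gamma_b)    # uncoded, Eq. (12)
print(f"alpha = {alpha:.2e}, Pbe = {Pbe:.2e}, Pube = {Pube:.2e}")
# -> alpha = 8.50e-04, Pbe = 6.74e-07, Pube = 4.05e-06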

Exercise 13.3-2  (a) Let the connections to the mod-2 summers in Fig. 13.3-2 be changed so that x'j = mj and x''j = m(j-2) ⊕ m(j-1) ⊕ mj. Construct the code trellis and modified state diagram for this systematic code. Show that there are two minimum-weight paths in the state diagram, and that df = 4 and M(df) = 3. It is not necessary to find T(D,I).

(b) Now assume γb = 10. Calculate α, Pbe, and Pube. What do you conclude about the performance of a convolutional code when Rc df/2 = 1?

Decoding Methods

There are three generic methods for decoding convolutional codes. At one extreme, the Viterbi algorithm executes maximum-likelihood decoding and achieves optimum performance but requires extensive hardware for computation and storage. At the other extreme, feedback decoding sacrifices performance in exchange for simplified hardware. Between these extremes, sequential decoding approaches optimum performance to a degree that depends upon the decoder's complexity. We will describe how these methods work with a (2,1,L) code. The extension to other codes is conceptually straightforward, but becomes messy to display for k > 1.

Recall that a maximum-likelihood decoder must examine an entire received sequence Y and find a valid path that has the smallest Hamming distance from Y. However, there are 2^N possible paths for an arbitrary message sequence of N bits (or Nn/k bits in Y), so an exhaustive comparison is out of the question when N is large. The Viterbi algorithm reduces the search to 2^kL surviving paths, independent of N, thereby bringing maximum-likelihood decoding into the realm of feasibility.

A Viterbi decoder assigns to each branch of each surviving path a metric that equals its Hamming distance from the corresponding branch of Y. (We assume here that 0s and 1s have the same transmission-error probability; if not, the branch metric must be redefined to account for the differing probabilities.) Summing the branch metrics yields the path metric, and Y is finally decoded as the surviving path with the smallest metric. To illustrate the metric calculations and explain how surviving paths are selected, we will walk through an example of Viterbi decoding.

Suppose that our (2,1,2) encoder is used at the transmitter, and the received sequence starts with Y = 11 01 11. Figure 13.3-9 shows the first three branches of the valid paths emanating from the initial node a0 in the code trellis. The number in parentheses beneath each branch is the branch metric, obtained by counting the differences between the encoded bits and the corresponding bits in Y. The circled number at the right-hand end of each branch is the running path metric, obtained by summing branch metrics from a0. For instance, the metric of the path a0 b1 c2 b3 is 0+2+2 = 4.

Now observe that another path a0 a1 a2 b3 also arrives at node b3 and has a smaller metric, 2+1+0 = 3. Regardless of what happens subsequently, this path will


have a smaller Hamming distance from Y than the other path arriving at b3 and is therefore more likely to represent the actual transmitted sequence. Hence, we discard the larger-metric path, marked by an X, and we declare the path with the smaller metric to be the survivor at this node. Likewise, we discard the larger-metric paths arriving at nodes a3, c3, and d3, leaving a total of 2^kL = 4 surviving paths. The fact that none of the surviving path metrics equals zero indicates the presence of detectable errors in Y. Figure 13.3-10 depicts the continuation of Fig. 13.3-9 for a complete message of N = 12 bits, including tail 0s. All discarded branches and all but the running path metrics have been omitted for the sake of clarity. The letter T under a node indicates that the two arriving paths had equal running metrics, in which case we just flip a coin to choose the survivor (why?). The maximum-likelihood path follows the heavy line from a0 to a12, and the final value of the path metric signifies at least two transmission errors in Y; the corresponding transmitted sequence X and message sequence M are written below the trellis.


A Viterbi decoder must calculate two metrics for each node and store 2^kL surviving paths, each consisting of N branches. Hence, decoding complexity increases exponentially with L and linearly with N. The exponential factor limits practical applications of the Viterbi algorithm to codes with small values of L.
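The survivor bookkeeping described above fits in a short program. The sketch below is our own illustration, not the text's: it assumes the common rate-1/2, memory-2 generator taps (1,0,1) and (1,1,1), which give df = 5 but may not match the exact connections of Fig. 13.3-2. It keeps 2^kL = 4 surviving paths and their running metrics:

G = [(1, 0, 1), (1, 1, 1)]          # assumed tap vectors for the two mod-2 summers

def encode_bit(bit, state):
    """One trellis branch: returns (output pair, next state); state = (m[j-1], m[j-2])."""
    window = (bit,) + state
    out = tuple(sum(w*g for w, g in zip(window, taps)) % 2 for taps in G)
    return out, (bit, state[0])

def viterbi(received):
    """Maximum-likelihood decoding of a list of received bit pairs."""
    paths = {(0, 0): (0, [])}        # state -> (running metric, decoded bits)
    for r in received:
        survivors = {}
        for state, (metric, bits) in paths.items():
            for b in (0, 1):
                out, nxt = encode_bit(b, state)
                m = metric + sum(o != y for o, y in zip(out, r))
                if nxt not in survivors or m < survivors[nxt][0]:
                    survivors[nxt] = (m, bits + [b])   # keep the smaller-metric path
        paths = survivors
    return min(paths.values())       # (final metric, message including tail bits)

Y = [(1,1), (0,1), (1,1), (0,0), (0,0), (0,0)]   # a short received sequence
print(viterbi(Y))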

When N >> 1, storage requirements can be reduced by a truncation process based on the following metric-divergence effect: if two surviving paths emanated from the same node at some point, then the running metric of the less likely path tends to increase more rapidly than the metric of the other survivor within about 5L branches from the common node. This effect appears several times in Fig. 13.3-10; consider, for instance, the two paths emanating from node b1. Hence, decoding need not be delayed until the end of the transmitted sequence. Instead, the first k message bits can be decoded and the first set of branches can be deleted from memory after the first 5Ln received bits have been processed. Successive groups of k message bits are then decoded for each additional n bits received thereafter.

Sequential decoding, which was invented before the Viterbi algorithm, also relies on the metric-divergence effect. A simplified version of the sequential algorithm is illustrated in Fig. 13.3-11a, using the same trellis, received sequence, and


Figure 13.3-11 Illustration of sequential decoding: (a) explored paths in the code trellis; (b) running metric versus node number.

metrics as in Fig. 13.3-10. Starting at a0, the sequential decoder pursues a single path by taking the branch with the smallest branch metric at each successive node. If two or more branches from one node have the same metric, such as at node b1, the decoder selects one at random and continues on. Whenever the current path happens to be unlikely, the running metric rapidly increases, and the decoder eventually decides to go back to a lower-metric node and try another path. There are three of these abandoned paths in our example. Even so, a comparison with Fig. 13.3-10 shows that sequential decoding involves less computation than Viterbi decoding.

The decision to backtrack and try again is based on the expected value of the running metric at a given node. Specifically, if α is the transmission error probability per bit, then the expected running metric at the jth node of the correct path equals jnα, the expected number of bit errors in Y at that point. The sequential decoder abandons a path when its metric exceeds some specified threshold Δ above jnα. If no path survives the threshold test, the value of Δ is increased and the decoder backtracks again. Figure 13.3-11b plots the running metrics versus j, along with jnα and the threshold line jnα + Δ for α = 1/16 and Δ = 2.

Sequential decoding approaches the performance of maximum-likelihood decoding when the threshold is loose enough to permit exploration of all probable paths. However, the frequent backtracking requires more computations


and results in a decoding delay significantly greater than that of Viterbi decoding. A tighter threshold reduces computations and decoding delay but may actually eliminate the most probable path, thereby increasing the output error probability compared to that of maximum-likelihood decoding with the same coding gain. As compensation, sequential decoding permits practical application of convolutional codes with large L and large coding gain, since the decoder's complexity is essentially independent of L.

We have described sequential decoding and Viterbi decoding in terms of algorithms rather than block diagrams of hardware. Indeed, these methods are usually implemented as software for a computer or microprocessor that performs the metric calculations and stores the path data. When circumstances preclude algorithmic decoding, and a higher error probability is tolerable, feedback decoding may be the appropriate method. A feedback decoder acts in general like a "sliding block decoder" that decodes message bits one by one based on a block of L or more successive tree branches. We will focus on the special class of feedback decoders that employ majority logic to achieve the simplest hardware realization of a convolutional decoder.

Consider a message sequence M = m1 m2 m3 ··· and the systematic (2,1,L) encoded sequence

X = x'1 x''1 x'2 x''2 x'3 x''3 ···    with x'j = mj        (13a)

where

x''j = g0 mj ⊕ g1 m(j-1) ⊕ ··· ⊕ gL m(j-L)    (mod-2)        (13b)

We will view the entire sequence X as a codeword of indefinite length. Then, borrowing from the matrix representation used for block codes, we will define a generator matrix G and a parity-check matrix H such that

X = MG

To represent Eq. (13), G must be a semi-infinite matrix with a diagonal structure given by

    | 1 g0   0 g1   0 g2   ···   0 gL                     |
G = |        1 g0   0 g1   ···          0 gL              |        (14a)
    |               1 g0   ···                 0 gL       |
    |                      ···                            |

This matrix extends indefinitely to the right and down, and the triangular blank spaces denote elements that equal zero. The parity-check matrix is


    | g0 1                                      |
    | g1 0   g0 1                               |
H = | g2 0   g1 0   g0 1                        |        (14b)
    | ···                                       |
    | gL 0   ···             g1 0   g0 1        |

also extended indefinitely to the right and down. Next, let E be the transmission error pattern in a received sequence Y = X + E. We will write these sequences as

Y = y'1 y''1 y'2 y''2 ···        E = e'1 e''1 e'2 e''2 ···

so that y'j = mj ⊕ e'j. Hence, given the error bit e'j, the jth message bit is

mj = y'j ⊕ e'j    (mod-2)        (15)

A feedback decoder estimates errors from the syndrome sequence

S = Y H^T = (X + E) H^T = E H^T

since X H^T = 0 0 0 ··· for any valid codeword.

Using Eq. (14b) for H, the jth bit of S is

sj = Σ (i=0 to L) y'(j-i) gi ⊕ y''j = Σ (i=0 to L) e'(j-i) gi ⊕ e''j        (16)

where the sums are mod-2 and it is understood that y'(j-i) = e'(j-i) = 0 for j ≤ i. As a specific example, take a (2,1,6) encoder with g0 = g2 = g5 = g6 = 1 and g1 = g3 = g4 = 0, so

sj = y'j ⊕ y'(j-2) ⊕ y'(j-5) ⊕ y'(j-6) ⊕ y''j        (17a)
   = e'j ⊕ e'(j-2) ⊕ e'(j-5) ⊕ e'(j-6) ⊕ e''j        (17b)

Equation (17a) leads directly to the shift-register circuit for syndrome calculation diagrammed in Fig. 13.3-12. Equation (17b) is called a parity-check sum and will lead us eventually to the remaining portion of the feedback decoder.


To that end, consider the parity-check table in Fig. 13.3-13, where checks indicate which error bits appear in the sums s(j-6), s(j-4), s(j-1), and sj. This table brings out the fact that e'(j-6) is checked by all four of the listed sums, while no other error bit is checked by more than one sum. Accordingly, this set of check sums is said to be orthogonal on e'(j-6). The tap gains of the encoder were carefully chosen to obtain orthogonal check sums.

Figure 13.3-13 Parity-check table for a systematic (2,1,6) code.

When the transmission error probability is reasonably small, we expect to find at most one or two errors in the 17 transmitted bits represented by the parity-check table. If one of the errors corresponds to e'(j-6) = 1, then the four check sums contain at least three 1s. Otherwise, the check sums contain less than three 1s. Hence, we can apply these four check sums to a majority-logic gate to generate the most likely estimate of e'(j-6).

Figure 13.3-14 Majority-logic feedback decoder for a systematic (2,1,6) code.

Figure 13.3-14 diagrams a complete majority-logic feedback decoder for our systematic (2,1,6) code. The syndrome calculator from Fig. 13.3-12 has two outputs, y'(j-6) and sj. The syndrome bit goes into another shift register with taps that connect the check sums to the majority-logic gate, whose output equals the estimated error ê'(j-6). The mod-2 addition y'(j-6) ⊕ ê'(j-6) carries out error correction based on Eq. (15). The error estimate is also fed back to the syndrome register to improve the reliability of subsequent check sums. This feedback path accounts for the name feedback decoding.
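The check-sum and majority-vote logic for this (2,1,6) code can be sketched in a few lines. This is our own illustration, not the text's circuit; it assumes the whole received streams y' and y'' are available as Python lists of bits:

TAPS = (0, 2, 5, 6)                  # nonzero tap gains: g0 = g2 = g5 = g6 = 1

def syndrome_bit(y1, y2, j):
    """s_j from Eq. (17a); bits before the start of transmission count as 0."""
    s = y2[j]
    for i in TAPS:
        if j - i >= 0:
            s ^= y1[j - i]
    return s

def estimate_error(y1, y2, j):
    """Majority vote over the four check sums orthogonal on e'(j-6), valid for j >= 6."""
    checks = [syndrome_bit(y1, y2, j - d) for d in (6, 4, 1, 0)]
    return 1 if sum(checks) >= 3 else 0   # at least three 1s -> e'(j-6) = 1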


Our example decoder can correct any single-error or double-error pattern in consecutive message bits. However, more than two transmission errors produces erroneous corrections and error propagation via the feedback path. These effects result in a higher output error probability than that of maximum-likelihood decoding. See Lin and Costello (1983, chap. 13) for the error analysis and further treatment of majority-logic decoding.


ERROR DETECTION, CORRECTION AND CONTROL

A major design criterion for all telecommunication systems is to achieve error-free transmission. Errors, unfortunately, do occur. There are many types and causes originating from various sources, ranging from lightning strikes to dirty switch contacts at the central office. A method of detecting, and in some cases correcting, their occurrence is a necessity. To achieve this, two basic techniques are employed. One is to detect the error and request a retransmission of the corrupted message. The second technique is to correct the error at the receiving end without having to retransmit the message. The trade-off for either technique is the redundancy that must be built into the transmitted bit stream. This redundancy decreases system throughput.

Many of today's communication systems employ elaborate error-control protocols. Some of these protocols are software packages designed to facilitate file transfers between personal computers and mainframes. More recent error controllers are completely self-contained within a hardware module, thus relieving the CPU of the burden of error control. The entire process is transparent to the user.

In this chapter we consider some of the most common methods used for error detection and correction, including error-control protocols specifically designed for data-communications equipment.

12.1- Parity

Parity is the simplest and oldest method of error detection. Although it is not very effective in data transmission, it is still widely used due to its simplicity. A single bit called the parity bit is added to a group of bits representing a letter, number, or symbol. ASCII characters on a keyboard, for example, are typically encoded into seven bits with an eighth bit acting as parity. The parity bit is computed by the transmitting device based on the number of 1-bits set in the character. Parity can be either odd or even. If odd parity is selected, the parity bit is set to a 1 or 0 to make the total number of 1-bits in the character, including the parity bit itself, equal to an odd value. If even parity is selected, the opposite is true: the parity bit is set to a 1 or 0 to make the total number of 1-bits, including the parity bit itself, equal to an even number. The receiving device performs the same computation on the number of 1-bits in each received character and checks the computed parity against what was received. If they do not match, an error has been detected. Table 1 lists examples of even and odd parity.

The selection of even or odd parity is generally arbitrary. In most cases it is a matter of custom or preference. The transmitting and receiving stations, however, must be set to the same mode. Some system designers prefer odd parity over even. The advantage is that when a string of several data characters is anticipated to be all zeros, the parity bit would be set to 1 for each character, thus allowing for ease of character identification and synchronization.


Table 1  Even and Odd Parity for a Seven-Bit Data Character

Data character   Odd parity bit   Even parity bit
1011101          0                1
1110111          1                0
0011010          0                1
1010111          0                1

12.2- Parity Generating and Checking

Parity generating circuits can easily be implemented with a combination of exclusive-OR gates spanning the bits of the data word. Odd parity can be obtained by simply adding an inverter at the output of the given circuit. Additional gates can be included in the circuit for extended word lengths. The same circuits can be used for parity checking by adding another exclusive-OR gate to accommodate the received parity bit. The received data word and parity bit are applied at the circuit's input. For even parity checking, the output should always be low unless an error occurs. Conversely, for odd parity checking, the output should always be high unless an error occurs.
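In software the same exclusive-OR logic reduces to counting 1-bits. A minimal sketch of ours (not from the text):

def parity_bit(character: int, odd: bool = False) -> int:
    """Parity bit that makes the total number of 1s even (or odd)."""
    bit = bin(character).count('1') % 2     # 1 if the character has an odd count
    return bit ^ 1 if odd else bit

# ASCII 'A' = 1000001 has two 1-bits: even parity gives 0, odd parity gives 1
print(parity_bit(0b1000001), parity_bit(0b1000001, odd=True))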

Figure 12-1 Even parity generating circuit. Odd parity generation is obtained by adding an inverter at the output.

Figure 12-2 depicts another design that can be used for parity generation and checking.

12.3 THE DISADVANTAGE WITH PARITY

A major shortcoming of parity is that it is only applicable for detecting when one bit, or an odd number of bits, has changed in a character. Parity checking does not detect when an even number of bits have changed. For example, suppose that bit D2 in Example 1 were to change during the course of a transmission in an odd parity system. Example 1 shows how the bit error is detected.

Example 1  Parity (odd)

              P   D7 D6 D5 D4 D3 D2 D1 D0
Transmitted   0   0  1  1  0  1  1  0  1
Received      0   0  1  1  0  1  0  0  1
                                  ↑ single bit error


The received parity bit, a zero, is in conflict with the number of 1-bits computed on the received character; in this case four, an even number of 1-bits. The parity bit

Figure 12-2 Even or odd parity generation is achieved in this circuit by setting the appropriate level at the parity set input.

should have been equal to a value making the total number of 1-bits odd. An error has been properly detected. If, on the other hand, bit D2 and bit D1 were both altered during the transmission, the computed parity bit would still be in agreement with the received parity bit. This error would go undetected, as shown in Example 2. A little thought will reveal that an even number of errors in a character, for odd or even parity, will go undetected.

Example 2  Parity (odd)

              P   D7 D6 D5 D4 D3 D2 D1 D0
Transmitted   0   0  1  1  0  1  1  0  1
Received      0   0  1  1  0  1  0  1  1
                                  ↑  ↑
Errors: two bits, or any even number of bits, go undetected

Parity, being a single-bit error-detection scheme, presents another problem in accommodating today's high-speed transmission rates. Many errors are a result of impulse noise, which tends to be bursty in nature. Noise impulses may last several milliseconds, consequently destroying several bits. The higher the transmission rate, the greater the effect. Figure 12-3 depicts a 2-ms noise burst superimposed on a 4800-bps signal, whose bit time is 208 μs (1/4800 s). As many as 10 bits are affected. At least two characters are destroyed here, with the possibility of both character errors going undetected.


12.4 VERTICAL AND LONGITUDINAL REDUNDANCY CHECK (VRC AND LRC)

Thus far, the discussion of parity has been on a per character basis. This is often referred to as a vertical redundancy check (vrc). Parity can also be computed based on an accumulation of the value of each character's LSB rough MSB, including the vrc bit, as shown in flgure 12-4. This method of parity

checking is referred to as a longitudinal redundancy check(LRC). The resulting ord is called the block check character (BCC).

Additional parity bits in LRC used to produce the BCC provide extra error detection capabilities . Single-bit errors can now be detected and corrected. For example, suppose that the LSB of the letter y in the message

Figure 12-4 was received as a O instead of a 1 . The computed parity bit for the LRC would indicate that a bit was received in error. By itself, the detected LRC rror does not specify which bit in the row of LSB bit received is in error . The me is true for the vrc. The computed parity bit in the column of the character errory.

Figure 12-3 Effect of a 2-ms noise burst on a 4800-bps signal: bits 1 through 10 are destroyed.

would be a 1 instead of a 0. By itself, the detected VRC error does not specify which bit in the y column has been received in error. A cross-check, however, will reveal that the intersection of the detected parity errors for the VRC and LRC checks identifies the exact bit received in error. By inverting this bit, the error can be corrected.

Unfortunately, an even number of bit errors is not detected by either the LRC or VRC check. Cross-checks then cannot be performed; consequently, such bit errors cannot be corrected.
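The VRC/LRC cross-check is easy to demonstrate in code. The sketch below is our own (it assumes even parity and that the VRC bits and BCC arrive intact); it locates and corrects a single bit error:

def vrc(ch: int) -> int:
    return bin(ch).count('1') % 2            # even-parity bit for one character

def lrc(block) -> int:
    bcc = 0
    for ch in block:
        bcc ^= ch                             # columnwise mod-2 sum (the BCC)
    return bcc

block  = [0x48, 0x61, 0x76, 0x65]             # transmitted characters
tx_vrc = [vrc(c) for c in block]              # per-character parity bits (VRC)
tx_bcc = lrc(block)                           # block check character (LRC)

rx = block.copy()
rx[2] ^= 0b0000100                            # single bit error in character 2

row = next(i for i, c in enumerate(rx) if vrc(c) != tx_vrc[i])   # failing VRC row
col = lrc(rx) ^ tx_bcc                        # failing LRC column
rx[row] ^= col                                # intersection: invert the bad bit
assert rx == block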

12.5 CYCLIC REDUNDANCY CHECKING (CRC)

Parity checking has major shortcomings. It is much more efficient to eliminate the parity bit of each character in the block entirely and utilize redundant check bits appended at the end of the block.

A more powerful method than the combination of LRC and VRC for error detection in blocks is cyclic redundancy checking (CRC). CRC is the most commonly used method of


Figure 12-4 Computing the block check character (BCC) for a message block with VRC and LRC odd parity checking. [Bit table: one column per character of the message, a VRC row at the bottom, and the BCC (LRC) as the rightmost column.]


error detection in block transmission. A minimal amount of hardware is required (slightly more than LRC/VRC systems), and its effectiveness in detecting errors is greater than 99.9%.

CRC involves a division of the transmitted message block by a constant called the generator polynomial. The quotient is discarded and the remainder is transmitted as the block check character (BCC). This is shown in Figure 12-5. Some protocols refer to the BCC as the frame check sequence (FCS). The receiving station performs the same computation on the received message block and compares the result against the BCC received from the transmitter. If the two match, then no errors have been detected in the message block. If the two do not match, either a request for retransmission is made by the receiver or the errors are corrected through the use of special coding techniques.

Cyclic codes contain a specific number of bits, governed by the size of the character within the message block. Three of the most commonly used cyclic codes are CRC-12, CRC-16, and CRC-CCITT. Blocks containing characters that are six bits in length typically use CRC-12, a 12-bit CRC. Blocks formatted with eight-bit characters typically use CRC-16 or CRC-CCITT, both of which are 16-bit codes. The BCCs for these three cyclic codes are derived from the following generator polynomials, G(x):

CRC-12 generator polynomial:     G(x) = x^12 + x^11 + x^3 + x^2 + x + 1
CRC-16 generator polynomial:     G(x) = x^16 + x^15 + x^2 + 1
CRC-CCITT generator polynomial:  G(x) = x^16 + x^12 + x^5 + 1

A combination of multistage shift registers employing feedback through exclusive-OR gates is used to implement the mathematical function performed on the message block to obtain the BCC. Figure 12-6 depicts the CRC generating circuits for CRC-12, CRC-16, and CRC-CCITT. The BCC is accumulated by shifting the data stream into the data input of the register. When the final bit of the message block is shifted in, the register contains the BCC. The BCC is transmitted at the end of the message block, LSB first.

Figure 12-5 Computing the block check character (BCC) of a message block using CRC: the message block is divided by the generator polynomial, the quotient is discarded, and the remainder becomes the BCC.


Figure 12-6 (a) CRC generating circuit for CRC-12; (b) CRC generating circuit for CRC-16; (c) CRC generating circuit for CRC-CCITT.


12.5.1 Computing the Block Check Character

The generator polynomial, G(x), and message polynomial, M(x), used in computing the BCC include degree terms that represent the positions in a group of bits that are binary 1. For example, given the polynomial

x^5 + x^2 + x + 1

its binary representation is

100111

Missing terms are represented by a 0. The highest degree in the polynomial is one less than the number of bits in the binary code. The following discussion illustrates how the BCC can be computed using long division.

Referring to Figure 12-7, if we let n equal the total number of bits in the transmitted block and k equal the number of data bits, then n-k equals the number of

Figure 12-7 Format of a message block for computing the BCC in CRC: n = total number of bits in the transmitted block; k = number of data bits in the message block M(x); the number of bits in the BCC is equal to n-k.

bits in the BCC. The message polynomial, M(x), is multiplied by x^(n-k) to achieve the correct number of bits for the BCC. The resulting product is then divided by the generator polynomial, G(x). The quotient is discarded, and the remainder is appended to the end of the message block. The long-division process uses no borrows or carries; rather, an exclusive-OR operation is performed at each step. As we will see in the following examples, this yields a BCC whose number of bits equals the highest degree of the generator polynomial. The entire transmitted message block is

T(x) = x^(n-k) M(x) + B(x)

where  T(x) = total transmitted message block
       x^(n-k) = multiplication factor
       B(x) = BCC

Consider the following examples.


Example 3  In this example, the transmitted message block will include a total number of bits, n, equal to 14. Nine of the 14 bits are data bits, k. Therefore, the BCC consists of five bits (n - k = 5).

Given:

Generator polynomial,  G(x) = x^5 + x^2 + x + 1 = 100111
Message polynomial,    M(x) = x^8 + x^6 + x^3 + x^2 + 1 = 101001101

The number of bits in the BCC, n-k = 5 (the highest degree of the generator polynomial). Compute the value of the BCC.

Solution:

1. Multiply the message polynomial, M(x), by x^(n-k):

x^(n-k) M(x) = x^5 (x^8 + x^6 + x^3 + x^2 + 1)
             = x^13 + x^11 + x^8 + x^7 + x^5
             = 10100110100000

2. Divide x^(n-k) M(x) by the generator polynomial and discard the quotient. The remainder is the BCC, B(x).

          101111110        ← quotient (discarded)
100111 ) 10100110100000
         100111
          111010
          100111
           111011
           100111
            111000
            100111
             111110
             100111
              110010
              100111
               101010
               100111
                11010      ← BCC, B(x)


3. To determine the total transmitted message block, T(x), add the BCC, B(x), to x^(n-k) M(x):

T(x) = x^(n-k) M(x) + B(x)
     = 10100110100000
     +          11010
     = 10100110111010      ← transmitted block T(x)

At the receiving end, the transmitted message block, T(x), is divided by the same generator polynomial, G(x). If the remainder is zero, the block was received without errors.

          101111110        ← quotient (discarded)
100111 ) 10100110111010
         100111
          111010
          100111
           111011
           100111
            111001
            100111
             111101
             100111
              110100
              100111
               100111
               100111
                00000      ← remainder equals zero (no errors)
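The mod-2 long division of Example 3 can be mechanized directly. A minimal sketch of ours (bit strings are MSB first; names are our own):

def crc_bcc(message: str, generator: str) -> str:
    """Append n-k zeros (multiply by x^(n-k)), divide mod-2, return the remainder."""
    nk = len(generator) - 1                  # BCC length = degree of G(x)
    bits = [int(b) for b in message] + [0] * nk
    g = [int(b) for b in generator]
    for i in range(len(message)):
        if bits[i]:                           # lead bit 1: subtract (XOR) the divisor
            for j, gb in enumerate(g):
                bits[i + j] ^= gb
    return ''.join(map(str, bits[-nk:]))

bcc = crc_bcc('101001101', '100111')          # Example 3: M(x) and G(x)
print(bcc)                                    # -> 11010
# Receiver check: T(x) = message + BCC divides evenly by G(x)
# (appending extra zeros does not change whether the remainder is zero)
assert crc_bcc('101001101' + bcc, '100111') == '00000'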

Example 4  For simplicity, a 16-bit message (k = 16) using CRC-16 will be used. The total number of bits in the transmitted message block is therefore 32.

Given:
Generator polynomial for CRC-16,  G(x) = x^16 + x^15 + x^2 + 1
Message polynomial,  M(x) = x^15 + x^13 + x^11 + x^10 + x^7 + x^5 + x^4 + 1

The number of bits in the BCC, n-k = 16 (the highest degree of the generator polynomial).

Solution:

1. Multiply the message polynomial, M(x), by x^(n-k):

x^(n-k) M(x) = x^16 (x^15 + x^13 + x^11 + x^10 + x^7 + x^5 + x^4 + 1)
             = x^31 + x^29 + x^27 + x^26 + x^23 + x^21 + x^20 + x^16
             = 10101100101100010000000000000000

Figure 12-8 A single-precision checksum is generated and transmitted as a BCC at the end of a four-byte block. The receiver verifies the block by regenerating the checksum and comparing it against the original.

Four forms of checksum are in common use:
1. Single-precision
2. Double-precision
3. Honeywell
4. Residue

1- Single-Precision Checksum

The most fundamental checksum computation is the single-precision checksum. Here, the checksum is derived simply by performing a binary addition of each n-bit data word in the message block. Any carry or overflow during the addition process is ignored; thus the resultant checksum is also n bits in length. Figure 12-8 illustrates how the single-precision checksum is derived, transmitted as the BCC, and used to verify the integrity of the received data. For simplicity, a four-byte data block is used. Note that the sum of the data exceeds 2^n - 1, and therefore a carry occurs out of the MSB. This carry is ignored, and only the eight-bit (n-bit) checksum is sent as the BCC. An inherent problem with the single-precision checksum is that if the MSB of an n-bit data word becomes logically stuck at 1 (SA1), the checksum may become SA1 as well. A little thought will reveal that the regenerated checksum on the received data will equal the original checksum and the SA1 fault will go undetected. A more elaborate scheme may be necessary.
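A minimal sketch of the single-precision checksum (ours; the byte values are our best reading of Figure 12-8):

def single_precision_checksum(words, n=8):
    """Add n-bit words; carries out of the MSB are simply discarded."""
    total = 0
    for w in words:
        total = (total + w) & ((1 << n) - 1)   # keep only the low n bits
    return total

block = [0x2C, 0x53, 0x97, 0xD4]               # 2C + 53 + 97 + D4 = 1EA
print(hex(single_precision_checksum(block)))   # -> 0xea (carry ignored)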

2- Double-Precision Checksum

As its name implies, the double-precision checksum extends the computed checksum to 2n bits in length, where n is the size of the data word in the message block. For example, the eight-bit data words used in the single-precision checksum example above would have a 16-bit checksum. Message blocks with 16-bit data words would have a 32-bit checksum, and so forth.

Summation of data words in the message block can now extend up to modulo 2^2n, thereby decreasing the probability of an erroneous checksum. In addition, the SA1 (stuck-at-1) error discussed earlier would be detected as a checksum error at the receiver. Figure 12-9 depicts how the double-precision checksum is derived, transmitted as the BCC, and used to verify the integrity of the received data. For simplicity, a four-byte data block is used again. Hexadecimal notation is also used. Note that the carry out of the MSB position


of the low-order checksum byte is not ignored. Instead, it becomes part of the 16-bit checksum result. Any carry out of the MSB of the 16-bit checksum is ignored.
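A corresponding sketch for the double-precision checksum (ours; byte values taken from Figure 12-9):

def double_precision_checksum(words, n=8):
    """Sum n-bit words into a 2n-bit result; only a carry out of bit 2n is ignored."""
    total = 0
    for w in words:
        total = (total + w) & ((1 << 2*n) - 1)
    return total

print(hex(double_precision_checksum([0x5A, 0xEF, 0x24, 0xC5])))   # -> 0x232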

3- Honeywell Checksum

The Honeywell checksum is an alternative form of the double-precision checksum. Its length is also 2n bits, where n is again the size of the data word in the message block. The difference is that the Honeywell checksum is based on interleaving consecutive data words to form double-length words. The double-length words are then summed together to form a double-precision checksum. This is shown in Figure 12-10. The advantage of the Honeywell checksum is that stuck-at-1 (SA1) and stuck-at-0 (SA0) bit errors occurring in the same bit position of all words can be detected, since the error appears in both the upper and lower halves of the checksum. At least two bit positions in the checksum are affected.
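A sketch of the Honeywell variant (ours; we assume the first word of each pair forms the upper half of the double-length word, and the byte values are our reading of Figure 12-10):

def honeywell_checksum(words, n=8):
    """Interleave consecutive n-bit words into 2n-bit words, then sum mod 2^(2n)."""
    total = 0
    for hi, lo in zip(words[0::2], words[1::2]):
        total = (total + ((hi << n) | lo)) & ((1 << 2*n) - 1)
    return total

print(hex(honeywell_checksum([0xC3, 0x4E, 0xDB, 0xB4])))   # sums 0xC34E and 0xDBB4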

4- Residue Checksum

The last form of checksum in our discussion is the residue checksum. The residue checksum is identical to the single-precision checksum, except that any carry out of the MSB position of the checksum word is "wrapped around" and added to the LSB position. This added complexity permits the detection of SA1 errors that would otherwise go undetected. This is illustrated in Figure 12-11.
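And the residue checksum with its end-around carry (our sketch; byte values are our reading of Figure 12-11):

def residue_checksum(words, n=8):
    """Single-precision sum, but each carry out of the MSB wraps to the LSB."""
    mask = (1 << n) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> n)   # wraparound (end-around) carry
    return total

# 0x13 + 0xA4 + 0x6C + 0x41 = 0x164 -> 0x64 + 1 (wrapped carry) = 0x65
print(hex(residue_checksum([0x13, 0xA4, 0x6C, 0x41])))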

Figure 12-9 A double-precision checksum is generated and transmitted as a BCC at the end of a four-byte block. The receiver verifies the block by regenerating the checksum and comparing it against the original.


Figure 12-10 Structure of the Honeywell checksum. The checksum is generated and transmitted as the BCC at the end of a four-byte block. The receiver verifies the block by regenerating the checksum and comparing it against the original.

12.7 Error Correction

Two basic techniques are used by communication systems to ensure the reliable transmission of data. They are shown in Figure 12-12. One technique is to request the retransmission of the data block received in error. This technique, the more popular of the two, is known as automatic repeat request (ARQ). When a data block is received without error, a positive acknowledgment is sent back to the transmitter via the reverse channel. ACK alternating in BISYNC is an example of a protocol that uses ARQ for error correction. A second technique is called


Figure 12-11 Structure of the residue checksum. The checksum is generated and transmitted as the BCC at the end of a four-byte block; any carry out of the MSB is wrapped around and added at the LSB. The receiver verifies the block by regenerating the checksum and comparing it against the original.

Figure 12-12 (a) Error correction using the automatic repeat request (ARQ) technique: message blocks are acknowledged or negatively acknowledged over the reverse channel; (b) error correction using forward error correction (FEC): a forward channel only, with no reverse channel.

forward error correction (FEC). FEC is used in simplex communications or applications where it is impractical or impossible to request a retransmission of the corrupted message block. An example might be the telemetry signals transmitted to an Earth station from a satellite on a deep-space mission. A garbled message could take several minutes or even hours to travel the distance between the two stations. Redundant error-correction coding is included in the transmitted data stream. If an error is detected by the receiver, the redundant code is extracted from the message block and used to predict and possibly correct the discrepancy.

12.7.1 Hamming code

In FEC a return path is not used for requesting the retransmission of a message block in error, hence the name forward error correction. Several codes have been developed to suit applications requiring FEC. Those most commonly recognized have been based on the research of the mathematician Richard W. Hamming. These codes are referred to as Hamming codes. Hamming codes employ redundant bits that are inserted into the message stream for error correction. The positions of these bits are established and known by the transmitter and receiver beforehand. If the receiver detects an error in the message block, the Hamming bits are used to identify the position of the error. This position, known as the syndrome, is the underlying principle of the Hamming code.

12.7.2 Developing a Hamming Code

We will now develop a Hamming code for single-bit FEC. For simplicity, 10 data bits will be used. The number of Hamming bits depends on the total number of bits transmitted in the message stream, including the Hamming bits themselves. If n is equal to the total number of bits transmitted in a message stream and m is equal to the number of Hamming bits, then m is the smallest number governed by the equation

2^m ≥ n + 1


For a message of 10 data bits, m is equal to 4 and n is equal to 14 bits (10 + 4):

2^4 ≥ (10 + 4) + 1

If the syndrome is to indicate the position of the bit error, check bits, or Hamming bits, c0 through c3, serving as parity bits can be inserted into the message stream to perform a parity check based on the binary representation of each bit position. How is this possible? Note in Table 12-2 that the binary representations of the bit positions form an alternating bit pattern in the vertical direction. Each column, proceeding from the LSB to the MSB, alternates at one-half the rate of the

TABLE 12-2  Check Bits Can Be Used as Parity on Binary-Weighted Positions in a Message Stream

Bit position in message   Binary representation
 1                        0001
 2                        0010
 3                        0011
 4                        0100
 5                        0101
 6                        0110
 7                        0111
 8                        1000
 9                        1001
10                        1010
11                        1011
12                        1100
13                        1101
14                        1110

Check bit   Positions checked
c0          1, 3, 5, 7, 9, 11, 13
c1          2, 3, 6, 7, 10, 11, 14
c2          4, 5, 6, 7, 12, 13, 14
c3          8, 9, 10, 11, 12, 13, 14

previous column. The LSB alternates with every bit position, the next bit alternates every two bit positions, and so forth. To illustrate how the check bits are encoded, the 10-bit message 1101001110 is labeled m9 through m0, as illustrated in Figure 12-13. By inserting the check bits into the message, the length n is extended to 14 bits. For simplicity, bit positions 1, 2, 4, and 8 will be used for the check bits. Even or odd parity generation over the bit positions associated with each check bit can be performed by exclusive-ORing the individual bits of each group. For even parity, PE0 through PE3 can serve as weighted parity checks over the bit positions listed in Table 12-2. Exclusive-ORing these bit positions together with the data corresponding to the 14-bit message stream shown in Figure 12-13, we obtain the following:


Bit positions 13, 11, 9, 7, 5, 3, 1:
PE0 = 0 = m8 ⊕ m6 ⊕ m4 ⊕ m3 ⊕ m1 ⊕ m0 ⊕ c0

Bit positions 14, 11, 10, 7, 6, 3, 2:
PE1 = 0 = m9 ⊕ m6 ⊕ m5 ⊕ m3 ⊕ m2 ⊕ m0 ⊕ c1

Bit positions 14, 13, 12, 7, 6, 5, 4:
PE2 = 0 = m9 ⊕ m8 ⊕ m7 ⊕ m3 ⊕ m2 ⊕ m1 ⊕ c2

Bit positions 14, 13, 12, 11, 10, 9, 8:
PE3 = 0 = m9 ⊕ m8 ⊕ m7 ⊕ m6 ⊕ m5 ⊕ m4 ⊕ c3

To determine the values of the check bits c0 through c3, the equations above can be rearranged as follows:

m9 m8 m7 m6 m5 m4 m3 m2 m1 m0
 1  1  0  1  0  0  1  1  1  0        ← original bit stream, 10 bits

m9 m8 m7 m6 m5 m4 c3 m3 m2 m1 c2 m0 c1 c0
14 13 12 11 10  9  8  7  6  5  4  3  2  1    ← bit position

Transmitted bit stream, 14 bits

Figure 12-13 Check bits are inserted into a message stream for FEC.

c0 = m8 ⊕ m6 ⊕ m4 ⊕ m3 ⊕ m1 ⊕ m0 = 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 0
c1 = m9 ⊕ m6 ⊕ m5 ⊕ m3 ⊕ m2 ⊕ m0 = 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 0
c2 = m9 ⊕ m8 ⊕ m7 ⊕ m3 ⊕ m2 ⊕ m1 = 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 1 = 1
c3 = m9 ⊕ m8 ⊕ m7 ⊕ m6 ⊕ m5 ⊕ m4 = 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 = 1

Thus the check bits inserted into the message stream in positions 8, 4, 2, and 1 are

c3 = 1    c2 = 1    c1 = 0    c0 = 0

Let us now look at how a bit error can be identified and corrected by the weighted parity checks. Suppose that an error has occurred in the transmitted message stream: bit position 7 has been corrupted in transmission.

1 1 0 1 0 0 1 1 1 1 1 0 0 0        ← transmitted bit stream
              ↓ corrupted in transmission
1 1 0 1 0 0 1 0 1 1 1 0 0 0        ← received bit stream (error in bit position 7)

The receiver performs an even parity check over the same bit positions discussed above. Even parity should result for each parity check if there are no errors. Since a bit error has occurred, however, the syndrome (location of the error) will be identified by the binary number produced by the parity checks PE0 through PE3, as follows:

14 13 12 11 10  9  8  7  6  5  4  3  2  1
 1  1  0  1  0  0  1  0  1  1  1  0  0  0

Check 0:  positions 13, 11, 9, 7, 5, 3, 1:    PE0 = 1+1+0+0+1+0+0 = 3  (even parity failure) → 1
Check 1:  positions 14, 11, 10, 7, 6, 3, 2:   PE1 = 1+1+0+0+1+0+0 = 3  (even parity failure) → 1
Check 2:  positions 14, 13, 12, 7, 6, 5, 4:   PE2 = 1+1+0+0+1+1+1 = 5  (even parity failure) → 1
Check 3:  positions 14, 13, 12, 11, 10, 9, 8: PE3 = 1+1+0+1+0+0+1 = 4  (correct) → 0

syndrome = PE3 PE2 PE1 PE0 = 0111 = 7

The resulting syndrome is 0111, or bit position 7. This bit is simply inverted, and the parity checks will then result in 0000 (corrected). The check bits are removed from positions 1, 2, 4, and 8, thereby recovering the original message. One nice feature of this Hamming code is that once the message is encoded, there is no difference between the check bits and the original message bits; that is, the syndrome can just as well identify a check bit in error.
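The whole encode-detect-correct cycle above fits in a short sketch (ours, not the text's; it uses check bits at positions 1, 2, 4, and 8 with even parity, and reproduces the numbers of this section):

def hamming_encode(data):                 # data = [m9, ..., m0]
    frame = [0] * 15                       # positions 1..14; index 0 unused
    data_pos = [p for p in range(14, 0, -1) if p & (p - 1)]   # skip powers of two
    for pos, bit in zip(data_pos, data):                      # m9 -> position 14
        frame[pos] = bit
    for c in (1, 2, 4, 8):                 # even parity over positions with bit c set
        frame[c] = sum(frame[p] for p in range(1, 15) if p & c) % 2
    return frame

def syndrome(frame):
    """Sum of the failing weighted parity checks = position of a single error."""
    return sum(c for c in (1, 2, 4, 8)
               if sum(frame[p] for p in range(1, 15) if p & c) % 2)

frame = hamming_encode([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])   # the 10-bit message
frame[7] ^= 1                              # corrupt bit position 7, as in the text
pos = syndrome(frame)                      # -> 7
frame[pos] ^= 1                            # invert the flagged bit to correct it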


12.7.2.1 An Alternative Method

Now that we have established the principle behind a Hamming code, an alternative method for encoding and correcting an error will be given here. The same 10-bit message stream, 1101001110, will be used, and the number of Hamming bits, four, remains the same. The Hamming bits, however, can actually be placed anywhere in the transmitted message stream, provided that their positions are known by the transmitter and receiver. The procedure is as follows:

1. Compute the number of Hamming bits, m, required for a message of n bits:

2^m ≥ n + 1
2^4 ≥ (10 + 4) + 1

Original message stream: 1 1 0 1 0 0 1 1 1 0  (10 bits)

2. Insert the Hamming bits (H) into the original message stream; here they occupy positions 13, 10, 5, and 3:

14 13 12 11 10  9  8  7  6  5  4  3  2  1    ← bit position
 1  H  1  0  H  1  0  0  1  H  1  H  1  0    ← transmitted message stream

3. Express each bit position containing a 1 as a four-bit binary number and exclusive-OR them all together. Starting from the left, bit positions 14, 12, 9, 6, 4, and 2 contain 1s. The result will be the value of the Hamming bits:

  1110  (14)
⊕ 1100  (12)
⊕ 1001  (9)
⊕ 0110  (6)
⊕ 0100  (4)
⊕ 0010  (2)
------
  1011  Hamming bits

4. Place the value of the Hamming bits into the transmitted message stream shown in step 2:

14 13 12 11 10  9  8  7  6  5  4  3  2  1    ← bit position
 1  1  1  0  0  1  0  0  1  1  1  1  1  0    ← transmitted bit stream

5. Transmit the message. Let us now assume that bit position 6 is received in error:

 1  1  1  0  0  1  0  0  0  1  1  1  1  0    ← received bit stream


6. The Hamming bits are extracted from the received message stream and exclusive-ORed with the binary representations of the bit positions containing a 1. This yields the bit position in error, or the syndrome:

14 13 12 11 10  9  8  7  6  5  4  3  2  1
 1  1  1  0  0  1  0  0  0  1  1  1  1  0

  1011  extracted Hamming bits
⊕ 1110  (14)
⊕ 1100  (12)
⊕ 1001  (9)
⊕ 0100  (4)
⊕ 0010  (2)
------
  0110  syndrome equals bit position 6
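This position-XOR bookkeeping is also compact in code. The sketch below is ours, with the Hamming-bit positions 13, 10, 5, 3 assumed as in step 2:

H_POS = (13, 10, 5, 3)                            # assumed Hamming-bit positions

def position_xor(frame):
    """XOR of the position numbers of all data bits equal to 1 (frame[p], p = 1..14)."""
    acc = 0
    for p in range(1, 15):
        if p not in H_POS and frame[p]:
            acc ^= p
    return acc

def alt_syndrome(frame):
    h = 0
    for p in H_POS:
        h = (h << 1) | frame[p]                   # extracted Hamming bits, MSB first
    return h ^ position_xor(frame)                # nonzero -> bit position in error

rx = [0, 0,1,1,1,1,0,0,0,1,0,0,1,1,1]             # received stream from step 5 (p = 1..14)
print(alt_syndrome(rx))                           # -> 6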

To detect multiple bit errors, more elaborate FEC techniques are necessary. Additional redundancy must be built into the message stream, which further reduces the efficiency of the channel and lowers the throughput. Unlike ARQ, which is extremely reliable, even the best FEC techniques are not particularly reliable in cases where multiple bits are destroyed by error bursts. Generally, FEC is employed only in applications where retransmission is not feasible. The correction of multiple-bit errors is beyond the scope of this discussion.


This graduation project brought me to the point where theory came together with practice: while preparing it, I discovered the concepts of error control coding and error correction, and I gained great knowledge and experience in completing the project. Thank you for the time you gave me and for allowing me to learn.