Analogue and Digital Communication Assignment



Task 1

With the help of block diagrams, explain how digital information is modulated, transmitted and received in ASK and FSK modulation schemes. Show waveforms of the digital input, modulated signal and output.

Amplitude-Shift Keying (ASK)

i) Introduction

Like AM, ASK is also linear and sensitive to atmospheric noise, distortion and differing propagation conditions on different routes. Both the ASK modulation and demodulation processes are relatively inexpensive. The ASK technique is also commonly used to transmit digital data over optical fibre. For LED transmitters, binary 1 is represented by a short pulse of light and binary 0 by the absence of light. Laser transmitters normally have a fixed "bias" current that causes the device to emit a low light level. This low level represents binary 0, while a higher-amplitude lightwave represents binary 1.

The transmission of digital signals is increasing at a rapid rate. Low-frequency analogue signals are often converted to digital format (PAM) before transmission. The source signals are generally referred to as baseband signals. We can send analogue and digital signals directly over a medium. From electromagnetic theory, for efficient radiation of electrical energy an antenna must be at least of the order of a wavelength in size; c = fλ, where c is the velocity of light, f is the signal frequency and λ is the wavelength. For a 1 kHz audio signal the wavelength is 300 km, so an antenna of this size is not practical for efficient transmission.

ii) Modulation process

The low-frequency signal is often frequency-translated to a higher frequency range for efficient transmission; this process is called modulation. The use of a higher frequency range also reduces the required antenna size. In the modulation process the baseband signal constitutes the modulating signal, and the high-frequency carrier is a sinusoidal waveform. There are three basic ways of modulating a sine-wave carrier; for binary digital modulation of the carrier amplitude the scheme is binary amplitude-shift keying (BASK). Modulation also leads to the possibility of frequency multiplexing. In a frequency-multiplexed system, individual signals are transmitted over adjacent, non-overlapping frequency bands; they are therefore transmitted in parallel and simultaneously in time. If we operate at higher carrier frequencies, more bandwidth is available for frequency-multiplexing more signals.

Binary Amplitude-Shift Keying (BASK)

A binary amplitude-shift keying (BASK) signal can be defined by

s(t) = A m(t) cos 2πfct, 0 < t ≤ T

where A is a constant, m(t) = 1 or 0, fc is the carrier frequency and T is the bit duration. The signal has a power P = A²/2, so that A = √(2P).

Thus the equation can be written as

s(t) = √(2P) cos 2πfct

= √(PT) √(2/T) cos 2πfct

= √E √(2/T) cos 2πfct

where E = PT is the energy contained in a bit duration. If we take φ1(t) = √(2/T) cos 2πfct as the orthonormal basis function, the applicable signal space or constellation diagram of the BASK signal is as shown below.

Figure 11 BASK signal constellation diagram

Figure 12 shows the BASK signal sequence generated by the binary sequence 0 1 0 1 0 0 1. The amplitude of the carrier is switched, or keyed, by the binary signal m(t); this is sometimes called on-off keying (OOK).

Figure 12 (a) Binary modulating signal and (b) BASK signal
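
As a quick illustration, the on-off keyed waveform above can be generated numerically. This is a sketch, not part of the assignment: the carrier frequency fc = 4/T, amplitude A = 1 and 100 samples per bit are arbitrary choices.

```python
import math

def bask_waveform(bits, A=1.0, fc=4.0, T=1.0, samples_per_bit=100):
    """Sample s(t) = A*m(t)*cos(2*pi*fc*t) over each bit interval."""
    s = []
    for k, b in enumerate(bits):
        for i in range(samples_per_bit):
            t = (k + i / samples_per_bit) * T
            s.append(A * b * math.cos(2 * math.pi * fc * t))
    return s

# the binary sequence used in Figure 12
wave = bask_waveform([0, 1, 0, 1, 0, 0, 1])
```

During a 0 bit the carrier is absent (every sample is zero); during a 1 bit the full-amplitude carrier is transmitted.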

M-ary Amplitude-Shift Keying ( M -ASK)

An M-ary amplitude-shift keying (M-ASK) signal can be defined by

s(t) = Ai cos 2πfct, 0 ≤ t ≤ T

= 0, elsewhere

where Ai = A[2i − (M − 1)] for i = 0, 1, …, M − 1 and M ≥ 4. Here A is a constant, fc is the carrier frequency and T is the symbol duration. The signal has a power Pi = Ai²/2, so that Ai = √(2Pi). Thus the equation can be written as

s(t) = √(2Pi) cos 2πfct

= √(PiT) √(2/T) cos 2πfct

= √(Ei) √(2/T) cos 2πfct, 0 ≤ t ≤ T

where Ei = PiT is the energy of s(t) contained in a symbol duration, for i = 0, 1, …, M − 1. The figure shows the signal constellation diagrams of M-ASK and 4-ASK signals.

Figure 13 (a) M-ASK and (b) 4-ASK signal constellation diagrams

The figure below shows the 4-ASK signal sequence generated by the binary sequence 00 01 10 11

Figure 14 (a) binary sequence, (b) 4-ary signal and (c) 4-ASK signal
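
The level mapping Ai = A[2i − (M − 1)] can be checked with a few lines of code; the natural-binary dibit-to-index assignment below is an illustrative assumption (Gray mapping is also common).

```python
def mary_ask_levels(M, A=1.0):
    """Amplitude levels Ai = A*(2*i - (M - 1)) for i = 0..M-1."""
    return [A * (2 * i - (M - 1)) for i in range(M)]

def dibit_to_level(dibit, A=1.0):
    """Map a two-bit string to its 4-ASK amplitude, natural binary order."""
    return mary_ask_levels(4, A)[int(dibit, 2)]

levels = [dibit_to_level(d) for d in ('00', '01', '10', '11')]
```

For M = 4 this gives the four equally spaced levels −3A, −A, +A, +3A of the 4-ASK constellation.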

iii) Transmitting and Receiving process

Figure 15 ASK block diagram

Here is a diagram showing the ideal model of a transmission system using ASK modulation. It can be divided into three blocks: the first represents the transmitter, the second is a linear model of the effects of the channel, and the third shows the structure of the receiver. The following notation is used:

ht(t) is the carrier signal for the transmission; hc(t) is the impulse response of the channel; n(t) is the noise introduced by the channel; hr(t) is the filter at the receiver; L is the number of levels used for transmission; Ts is the time between the generation of two symbols.

Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A]; the difference between one voltage and the next is Δ = 2A/(L − 1). Considering the picture, the symbols v[n] are generated randomly by the source, then the impulse generator creates impulses with an area of v[n]. These impulses are sent to the filter ht to be transmitted through the channel; in other words, for each symbol a different carrier wave is sent with the corresponding amplitude. Out of the transmitter, the signal s(t) can be expressed in the form

s(t) = Σn v[n] ht(t − nTs)

In the receiver, after the filtering through hr(t), the signal is

z(t) = nr(t) + Σn v[n] g(t − nTs)

where we use the notation

nr(t) = n(t) hr(t) g(t) = ht(t) hc(t) hr(t)

where ∗ indicates the convolution between two signals. After the A/D conversion, the signal z[k] can be expressed in the form

z[k] = nr[k] + v[k]g[0] + Σn≠k v[n] g[k − n]

In this relationship the second term represents the symbol to be extracted. The others are unwanted: the first one is the effect of noise, the third one is due to intersymbol interference. If the filters are chosen so that g(t) satisfies the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so

z[k] = nr[k] + v[k]g[0]

and the transmission will be affected only by noise.

Frequency-Shift Keying (FSK)

i) Introduction

FSK (frequency-shift keying) is also known as frequency-shift modulation and frequency-shift signaling. In frequency-shift keying, a data signal is converted into a specific frequency or tone in order to transmit it over wire, cable, optical fibre or wireless media to a destination point; the modulating signal shifts the output frequency between predetermined levels. Technically, FSK has two classifications: non-coherent and coherent FSK. In non-coherent FSK the instantaneous frequency is shifted between two discrete values, named the mark and space frequencies respectively. In coherent FSK, on the other hand, there is no phase discontinuity in the output signal. The simplest FSK is binary FSK (BFSK), which uses a pair of discrete frequencies to transmit binary (0s and 1s) information. With this scheme, the 1 is called the mark frequency and the 0 is called the space frequency.

ii) Modulation

In frequency-shift keying the signals transmitted for marks (binary ones) and spaces (binary zeros) are respectively

s1(t) = A cos(ω1t + θc), 0 < t ≤ T

s2(t) = A cos(ω2t + θc), 0 < t ≤ T

This is called a discontinuous-phase FSK system because the phase of the signal is discontinuous at the switching times. A signal of this form can be generated by the following system.

Figure 16 FSK discontinuous phase

If the bit intervals and the phases of the signals can be determined (usually by the use of a phase-lock loop) then the signal can be decoded by two separate matched filters

Figure 17 Two separate filters

The first filter is matched to the signal s1(t) and the second to s2(t). Under the assumption that the signals are mutually orthogonal, the output of one of the matched filters will be E and the other zero (where E is the energy of the signal). Decoding of the bandpass signal can therefore be achieved by subtracting the outputs of the two filters and comparing the result to a threshold. If the signal s1(t) is present then the resulting output will be +E, and if s2(t) is present it will be −E. Since the noise variance at each filter output is Eη/2, the noise variance in the difference signal will be doubled, namely σ² = Eη. Since the overall output separation is 2E, the probability of error is given by eqn. 1.
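
The correlate-and-compare receiver described above can be sketched numerically. The tone frequencies (5/T and 6/T) and the sampling density are illustrative assumptions, chosen so that the two tones are orthogonal over a bit interval.

```python
import math

def tone(f, T=1.0, n=1000):
    """n samples of cos(2*pi*f*t) over one bit interval [0, T)."""
    return [math.cos(2 * math.pi * f * i * T / n) for i in range(n)]

def correlate(received, reference):
    """Discrete stand-in for a matched-filter output."""
    return sum(r * c for r, c in zip(received, reference))

def detect_bit(received, f_mark=5.0, f_space=6.0):
    """Subtract the two correlator outputs and compare with a zero threshold."""
    d = correlate(received, tone(f_mark)) - correlate(received, tone(f_space))
    return 1 if d > 0 else 0
```

A noiseless burst at the mark frequency drives the difference positive (decided as 1), and a burst at the space frequency drives it negative (decided as 0).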

Pe = Q(E/σ) = Q(√(E/η))    (eqn. 1)

The overall performance of a matched-filter receiver in this case is therefore the same as for ASK. The frequency spectrum of an FSK signal is difficult to obtain; this is a general characteristic of FM signals. However, some rules of thumb can be developed. Consider the case where the binary message consists of an alternating sequence of zeros and ones. If the two frequencies are each multiples of 1/T (e.g. f1 = m/T and f2 = n/T) and are synchronised in phase, then the FSK wave is a periodic function.

Figure 18 FSK signal wave

This can be viewed as the linear superposition of two OOK signals, one delayed by T seconds with respect to the other. Since the spectrum of an OOK signal is

S(ω) = (A/2)[M(ω − ωc) + M(ω + ωc)]    (eqn. 2)

where M(ω) is the transform of the baseband signal m(t), the spectrum of the FSK signal is the superposition of two of these spectra, one for ω1 = ωc − Δω and the other for ω2 = ωc + Δω. Nonsynchronous or envelope detection can also be performed for FSK signals; in this case the receiver takes the following form.

Figure 19 Receiver

Binary Frequency-Shift Keying (BFSK)

A binary frequency-shift keying (BFSK) signal can be defined by

s(t) = A cos 2πf0t or A cos 2πf1t, 0 < t ≤ T

depending on whether a binary 0 or a binary 1 is transmitted, where A is a constant, f0 and f1 are the transmitted frequencies and T is the bit duration. The signal has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos 2πf0t or √(2P) cos 2πf1t

= √(PT) √(2/T) cos 2πf0t or √(PT) √(2/T) cos 2πf1t

= √E √(2/T) cos 2πf0t or √E √(2/T) cos 2πf1t, 0 < t ≤ T

where E = PT is the energy contained in a bit duration. For orthogonality, f0 = m/T and f1 = n/T for integers n > m, and f1 − f0 must be an integer multiple of 1/(2T). We can take φ1(t) = √(2/T) cos 2πf0t and φ2(t) = √(2/T) cos 2πf1t as the orthonormal basis functions. The applicable signal constellation diagram of the orthogonal BFSK signal is shown below.
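
The orthogonality condition can be verified numerically: with T = 1, a tone spacing of 1/(2T) makes the inner product over a bit vanish, while an arbitrary spacing does not. The frequencies below are illustrative choices.

```python
import math

def inner_product(f0, f1, T=1.0, n=20000):
    """Riemann approximation of the integral of cos(2*pi*f0*t)*cos(2*pi*f1*t) over [0, T]."""
    dt = T / n
    return sum(math.cos(2 * math.pi * f0 * i * dt) *
               math.cos(2 * math.pi * f1 * i * dt) for i in range(n)) * dt

orthogonal = inner_product(2.0, 2.5)      # spacing 0.5 = 1/(2T): essentially zero
not_orthogonal = inner_product(2.0, 2.3)  # spacing 0.3: clearly nonzero
```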

Figure 110 Orthogonal BFSK signal constellation diagram

Figure 111 (a) Binary sequence (b) BFSK signal and (c) binary modulating and BASK signals

It can be seen that phase continuity is maintained at transitions. Further, the BFSK signal is the sum of two BASK signals generated by the two modulating signals m0(t) and m1(t). Therefore the Fourier transform of the BFSK signal s(t) is

S(f) = (A/2)[M0(f − f0) + M0(f + f0)] + (A/2)[M1(f − f1) + M1(f + f1)]

Figure 112 (a) Modulating signals (b) Spectrum of (a)

M-ary Frequency-Shift Keying ( M -FSK)

An M-ary frequency-shift keying (M-FSK) signal can be defined by

s(t) = A cos(2πfit + θ), 0 < t ≤ T

= 0, elsewhere

for i = 0, 1, …, M − 1. Here A is a constant, fi is the transmitted frequency, θ is the initial phase angle and T is the symbol duration. The signal has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos(2πfit + θ), 0 < t ≤ T

= √(PT) √(2/T) cos(2πfit + θ)

= √E √(2/T) cos(2πfit + θ)

where E = PT is the energy of s(t) contained in a symbol duration for i = 0 1 M - 1

Figure 113 M-ary orthogonal 3-FSK signal constellation diagram

Figure 114 4-FSK modulation (a) binary signal and (b) 4-FSK signal

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase-shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base-2) signal, to produce four different input conditions it takes more than a single input bit. With two bits there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits, called dibits. Each dibit code generates one of the four possible output phases, so for each two-bit dibit clocked into the modulator, a single output change occurs.

Figure 31 Phasor Diagram and Constellation Diagram

The figure above shows the constellation diagram for QPSK with Gray coding: each adjacent symbol differs by only one bit. Sometimes known as quaternary or quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.
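
A small script can check the Gray-coding property. The particular dibit-to-phase assignment below (00 → 45°, 01 → 135°, 11 → 225°, 10 → 315°) is one common convention, assumed here purely for illustration.

```python
import cmath, math

# assumed Gray-coded dibit-to-phase assignment (degrees)
GRAY_PHASES = {'00': 45, '01': 135, '11': 225, '10': 315}

def qpsk_symbol(dibit):
    """Unit-amplitude constellation point for one dibit."""
    return cmath.exp(1j * math.radians(GRAY_PHASES[dibit]))

def bit_difference(a, b):
    return sum(x != y for x, y in zip(a, b))

# walking around the circle, adjacent symbols differ in exactly one bit
ring = ['00', '01', '11', '10']
adjacent_ok = all(bit_difference(ring[i], ring[(i + 1) % 4]) == 1 for i in range(4))
```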

Figure 32 Four symbols that represents the four phases in QPSK

The figure above depicts the four symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared to a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

Basic Configuration of Quadrature Modulation Scheme

Figure 33 Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish the two signals: one is given by cos 2πfct and the other by sin 2πfct. The two carrier signals remain orthogonal over the period. A channel in which cos 2πfct is used as the carrier signal is generally called the in-phase channel, or Ich, and a channel in which sin 2πfct is used as the carrier signal is generally called the quadrature-phase channel, or Qch. dI(t) and dQ(t) are the data in the Ich and Qch respectively. Modulation schemes that use the Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in figure 33.

In the system shown above, the input digital data dk is first converted into parallel data with two channels, Ich and Qch; the data are represented as di(t) and dq(t). The conversion, or data allocation, is done using a mapping circuit block. The data allocated to the Ich is then filtered using a pulse-shaping filter. The pulse-shaped signal is converted to an analogue signal by a D/A converter and multiplied by a cos 2πfct carrier wave. The same process is carried out on the data allocated to the Qch, but it is multiplied by a sin 2πfct carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier frequency. In both the Ich and Qch channels, the downconverted signal is digitally sampled by an A/D converter and the digital data is fed to a DSP. In the DSP, the sampled data is filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronised and the transmitted digital data is recovered.

Figure 34 Mapping circuit function for QPSK

For the mapping function a simple circuit is used to allocate the data, as illustrated in figure 34. This mapping function basically allocates all even bits to the Ich and all odd bits to the Qch; demapping is just the opposite operation.
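
The even/odd allocation described above can be sketched in a few lines; the bit pattern is an arbitrary example.

```python
def qpsk_map(bits):
    """Allocate even-index bits to the Ich and odd-index bits to the Qch."""
    return bits[0::2], bits[1::2]

def qpsk_demap(ich, qch):
    """Demapping is the opposite operation: re-interleave the two channels."""
    out = []
    for i, q in zip(ich, qch):
        out.extend([i, q])
    return out

bits = [1, 0, 0, 1, 1, 1]
ich, qch = qpsk_map(bits)
```

Demapping the two channels recovers the original serial bit stream.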

Conclusion

It is clear that in order for QPSK to be useful for high-data-rate, wide-bandwidth systems in multipath fading channels, it is necessary to consider using diversity techniques, equalization and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider that a message source can generate M equally likely messages. Initially we represent each message by k binary digits, with 2^k = M; these k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits, so each message is expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are 2^n − 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes, designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords, C1 = 00000 and C2 = 00011. These have a binary-digit difference (or Hamming distance) of 2, in the last two digits. The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance, dmin. For block codes, dmin, the smallest difference between the digits of any two codewords in the complete code set, is the property which controls the error-correction performance. We can thus calculate the error-detecting and correcting power of a code from the minimum distance in bits between the codewords.
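
The distance calculation is mechanical and easy to code; the third codeword in the example set below is an arbitrary addition for illustration.

```python
def hamming_distance(c1, c2):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

def d_min(code_set):
    """Smallest pairwise Hamming distance over a complete code set."""
    return min(hamming_distance(a, b)
               for idx, a in enumerate(code_set)
               for b in code_set[idx + 1:])

dist = hamming_distance('00000', '00011')   # the C1, C2 example above
```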

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing all the individual error probabilities up to and including R errors in the block:

P(>R errors) = 1 − Σ(j=0 to R) P(j errors)

The probability of j errors in an n-digit codeword is

P(j errors) = nCj (Pe)^j (1 − Pe)^(n−j)

where

Pe is the probability of error in a single binary digit

n is the block length

nCj is the number of ways of choosing j error-digit positions within a length of n binary digits
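
The tail sum above is a binomial tail and can be evaluated directly; the values n = 7, R = 1 and Pe = 0.01 below are illustrative.

```python
from math import comb

def p_uncorrectable(n, R, pe):
    """Probability of more than R errors in an n-digit block."""
    return 1 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j) for j in range(R + 1))

p = p_uncorrectable(7, 1, 0.01)  # single-error-correcting code, 7-digit block
```

Here p is about 0.002: roughly one block in 500 suffers more errors than the code can correct.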

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1…CN which contain the all-zeros codeword (e.g. 00000) and exhibit a special property called closure. This property means that if any two valid codewords are subject to a bit-wise EX-OR operation, they will produce another valid codeword in the set. The closure property means that to find the minimum Hamming distance, all that is required is to compare all the remaining codewords in the set with the all-zeros codeword, instead of comparing all the possible pairs of codewords. The saving gets bigger the longer the code set: for example, a set with 100 codewords will require 99 comparisons for a group code, compared with 99 + 98 + … + 2 + 1 for a non-group code.
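
Closure is easy to test by exhaustive XOR; the even-parity code below is a standard small example, used here as an assumed illustration.

```python
def is_group_code(codewords):
    """Closure check: the bit-wise EX-OR of any two codewords is again a codeword."""
    words = set(codewords)
    return all(a ^ b in words for a in words for b in words)

even_parity = {0b000, 0b011, 0b101, 0b110}   # contains 000 and is closed under XOR
broken = {0b000, 0b011, 0b100}               # 011 XOR 100 = 111 is missing
```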

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently contains the assumption that the probability of a small number of errors, t, is greater than the probability of the larger number of errors, t + 1; that is, that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, with real, non-binary numbers from the receiver. The nearest Euclidean distance (nearest to these five-digit codewords in terms of a 5-D geometry) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. If we are trying to create a code to correct t errors, with a block length of n and k information digits, the upper bound is

2^k ≤ 2^n / (1 + n + nC2 + nC3 + … + nCt)

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations of the codewords are used for encoding and decoding, since the shifting of a codeword is equivalent to a modification of the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn-1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with the dyadic (2-operand) operations of addition and multiplication, which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner. A finite field is a field with a finite number q of elements, represented by Fq.) The polynomial over Fq of degree at most n − 1 corresponding to x is

x(D) = x0 + x1D + x2D^2 + … + xn-1D^(n-1)

Cyclic codes are extremely well suited to error correction because they can be designed to detect many combinations of likely errors, and the implementation of both encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is defined as a contiguous sequence of B bits in which the first and the last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction of error bursts of length equal to n − k + 1, the fraction being 1 − 2^−(n−k−1); a fraction of error bursts of length greater than n − k + 1, the fraction being 1 − 2^−(n−k); all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, if the generator polynomial g(X) for the code has an even number of non-zero coefficients.
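
CRC check bits are the remainder of a modulo-2 polynomial division, which the short routine below performs. The generator g(X) = X^3 + X + 1 and the four-bit message are hypothetical illustrations, not taken from the assignment.

```python
def crc_remainder(message, generator):
    """Divide message(X) * X^r by generator(X) over GF(2); return the r remainder bits."""
    r = len(generator) - 1
    dividend = list(message) + [0] * r        # append r zero positions for the check bits
    for i in range(len(message)):
        if dividend[i]:                       # subtract (XOR) the generator when the lead bit is 1
            for j, g in enumerate(generator):
                dividend[i + j] ^= g
    return dividend[-r:]

check_bits = crc_remainder([1, 1, 0, 1], [1, 0, 1, 1])   # g(X) = X^3 + X + 1
codeword = [1, 1, 0, 1] + check_bits
```

The transmitted codeword is the message followed by the check bits; a receiver repeats the division and flags an error if the remainder is nonzero.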

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways. The main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and is defined by

Constraint length L = k(m − 1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confusing with the lower-case k which represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length. That constraint length K, however, is equal to L + k with the definitions used in this paper. I will be referring to convolutional codes as (n, k, m) and not as (r, K).
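
A minimal shift-register encoder sketch for a (2,1,3) code follows, assuming the classic generator taps (1,1,1) and (1,0,1); both the taps and the input bits are illustrative choices, not from the assignment.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: two output bits per input bit."""
    state = [0, 0]                        # two memory registers, initially zero
    out = []
    for b in bits:
        window = [b] + state              # current input plus the stored bits
        out.append(sum(x * t for x, t in zip(window, g1)) % 2)   # first modulo-2 adder
        out.append(sum(x * t for x, t in zip(window, g2)) % 2)   # second modulo-2 adder
        state = [b] + state[:-1]          # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])
```

Each input bit produces n = 2 output bits, giving the code rate r = k/n = 1/2.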

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift-register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder. In the tree diagram, a solid line represents input information bit 0 and a dashed line represents input information bit 1. The corresponding output encoded bits are shown on the branches of the tree. An input information sequence defines a specific path through the tree diagram from left to right.

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder; the state information is stored in the shift registers. In the state diagram, the state information of the encoder is shown in circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zero state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently, a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and can be identified from the state diagram: a state diagram having a loop in which a nonzero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values; soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principle used to reduce the choices is this:

1. The errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N. Of these, only 2^kL are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to 2^kL surviving paths instead of checking all paths. The most common metric used is the Hamming distance between the received codeword and each allowable codeword.
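
A compact hard-decision Viterbi decoder for a rate-1/2 code makes the survivor-path idea concrete. The generator taps (1,1,1) and (1,0,1) and the test sequences are illustrative assumptions, not taken from the assignment.

```python
def viterbi_decode(received, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Hard-decision Viterbi decoding with the Hamming-distance branch metric."""
    INF = float('inf')
    metric = [0, INF, INF, INF]                    # 4 states; start in the all-zero state
    paths = [[], [], [], []]
    for k in range(0, len(received), 2):
        r = received[k:k + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s2 = s >> 1, s & 1                 # state bits, newest first
            for b in (0, 1):
                w = (b, s1, s2)
                o0 = sum(x * t for x, t in zip(w, g1)) % 2
                o1 = sum(x * t for x, t in zip(w, g2)) % 2
                ns = (b << 1) | s1                 # next state after shifting in b
                m = metric[s] + (o0 != r[0]) + (o1 != r[1])
                if m < new_metric[ns]:             # keep only the survivor into each node
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]        # best path over all end states
```

Decoding the noiseless sequence 11 10 00 01 recovers the information bits 1 0 1 1, and a single flipped channel bit early in the sequence is still corrected, consistent with nearest-neighbour decoding.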

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^kL distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity on the order of O(2^kL).

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is on the order of O(2^k · 2^m · (L + m)). This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor and not an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code; in many cases, however, they offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. Martin S., Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321

  • Viterbi decoding
Page 2: Analogue and Digital Communication assignment

Thus equation can be written as s(t) = radic(2P) cos 2πfct

= radic(PT)radic(2T)cos 2πfct

= radic(E)radic(2T)cos 2πfct

where E = P T is the energy contained in a bit duration If we take f1(t) = radic2T cos 2pfct as the orthonormal basis function the applicable signal space or constellation diagram of the BASK signals as below

Figure 11 BASK signal constellation diagram

The BASK signal sequence generated by the binary sequence 0 1 0 1 0 0 1 The amplitude of a carrier is switched or keyed by the binary signal m(t) This is sometimes called on-off keying (OOK)

Figure 12 (a) Binary modulating signal and (b) BASK signal

M-ary Amplitude-Shift Keying ( M -ASK)

An M-ary amplitude-shift keying (M-ASK) signal can be defined by

s(t) = Ai cos2πfct 0letle T

0 elsewhere

Where Ai = A[2i - (M - 1)] for i = 0 1 M - 1 and M gt 4 Here A is a constant fc is the carrier frequency and T is the symbol duration The signal has a power Pi = Ai

22 so that Ai = radic(2Pi) Thus equation can be written as

s(t) = radic(2Pi) cos 2πfct

= radic(PiT)radic(2T)cos 2πfct

= radic(Ei)radic(2T)cos 2πfct 0 lt t lt T

where Ei = PiT is the energy of s(t) contained in a symbol duration for i = 0 1

M - 1 The figure shows the signal constellation diagrams of M-ASK and 4-ASK

signals

Figure 13 (a) M-ASK and (b) 4-ASK signal constellation diagrams

The figure below shows the 4-ASK signal sequence generated by the binary sequence 00 01 10 11

Figure 14 (a) binary sequence, (b) 4-ary signal and (c) 4-ASK signal
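The dibit-to-amplitude mapping A_i = A[2i − (M − 1)] can be illustrated directly. A short sketch (function names are hypothetical; the first bit of each dibit is taken as the most significant, which is an assumption):

```python
def mask_amplitudes(A=1.0, M=4):
    """M-ASK amplitudes A_i = A*(2i - (M-1)) for i = 0..M-1."""
    return [A * (2 * i - (M - 1)) for i in range(M)]

def four_ask_levels(bits, A=1.0):
    """Group a bit stream into dibits 00, 01, 10, 11 and map each to its level."""
    amps = mask_amplitudes(A, 4)
    return [amps[2 * msb + lsb] for msb, lsb in zip(bits[0::2], bits[1::2])]

# The sequence from the figure: 00 01 10 11
levels = four_ask_levels([0, 0, 0, 1, 1, 0, 1, 1])  # -> [-3.0, -1.0, 1.0, 3.0]
```

For M = 4 the levels are ±A and ±3A, the equispaced points of Figure 13(b).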

iii) Transmitting and Receiving process

Figure 15 ASK block diagram

The figure shows the ideal model for a transmission system using ASK modulation. It can be divided into three blocks: the first represents the transmitter, the second is a linear model of the effects of the channel, and the third shows the structure of the receiver. The following notation is used:

h_t(t) is the carrier signal for the transmission, h_c(t) is the impulse response of the channel, n(t) is the noise introduced by the channel, h_r(t) is the filter at the receiver, L is the number of levels that are used for transmission, and T_s is the time between the generation of two symbols.

Different symbols are represented with different voltages. If the maximum allowed value for the voltage is A, then all the possible values are in the range [−A, A], and the difference between one voltage and the next is 2A/(L − 1). Considering the picture, the symbols v[n] are generated randomly by the source; the impulse generator then creates impulses with an area of v[n]. These impulses are sent to the filter h_t(t) and then through the channel. In other words, for each symbol a carrier wave with the corresponding amplitude is sent. At the output of the transmitter, the signal s(t) can be expressed in the form

s(t) = Σ_n v[n]·h_t(t − nT_s)

In the receiver, after the filtering through h_r(t), the signal is

z(t) = n_r(t) + Σ_n v[n]·g(t − nT_s)

where we use the notation

n_r(t) = n(t) * h_r(t), g(t) = h_t(t) * h_c(t) * h_r(t)

where * indicates the convolution between two signals. After the A/D conversion, the signal z[k] can be expressed in the form

z[k] = n_r[k] + v[k]·g[0] + Σ_{n≠k} v[n]·g[k − n]

In this relationship the second term represents the symbol to be extracted; the others are unwanted: the first is the effect of noise and the third is due to intersymbol interference. If the filters are chosen so that g(t) satisfies the Nyquist ISI criterion, then there will be no intersymbol interference and the value of the sum will be zero, so

z[k] = n_r[k] + v[k]·g[0]

and the transmission will be affected only by noise.

Frequency-Shift Keying (FSK)

i) Introduction

FSK (frequency-shift keying) is also known as frequency-shift modulation or frequency-shift signalling. In FSK, a data signal is converted into a specific frequency, or tone, in order to transmit it over wire, cable, optical fibre or wireless media to a destination point. The modulating signal shifts the output frequency between predetermined levels. Technically, FSK has two classifications: non-coherent and coherent FSK. In non-coherent FSK, the instantaneous frequency is shifted between two discrete values, named the mark and space frequencies respectively. In coherent FSK, on the other hand, there is no phase discontinuity in the output signal. The simplest FSK is binary FSK (BFSK), which uses a pair of discrete frequencies to transmit binary (0s and 1s) information. With this scheme, the 1 is called the mark frequency and the 0 is called the space frequency.

ii) Modulation

In frequency-shift keying the signals transmitted for marks (binary ones) and spaces (binary zeros) are respectively

s1(t) = A cos(ω1 t + θc), 0 < t ≤ T

s2(t) = A cos(ω2 t + θc), 0 < t ≤ T

This is called a discontinuous-phase FSK system, because the phase of the signal is discontinuous at the switching times. A signal of this form can be generated by the following system:

Figure 16 FSK discontinuous phase

If the bit intervals and the phases of the signals can be determined (usually by the use of a phase-locked loop), then the signal can be decoded by two separate matched filters:

Figure 17 Two separate filters

The first filter is matched to the signal s1(t) and the second to s2(t). Under the assumption that the signals are mutually orthogonal, the output of one of the matched filters will be E and the other zero (where E is the energy of the signal). Decoding of the bandpass signal can therefore be achieved by subtracting the outputs of the two filters and comparing the result to a threshold. If the signal s1(t) is present then the resulting output will be +E, and if s2(t) is present it will be −E. Since the noise variance at each filter output is Eη/2, the noise variance in the difference signal will be doubled, namely σ² = Eη. Since the overall output separation is 2E, the probability of error is

P_e = Q(E/σ) = Q(√(E/η))     (eqn 1)
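This error probability is easy to evaluate numerically. A minimal sketch, assuming the matched-filter result P_e = Q(√(E/η)) that follows from the 2E output separation and the σ² = Eη difference-noise variance stated above (function names are hypothetical):

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def fsk_matched_filter_pe(E, eta):
    """P_e = Q(sqrt(E/eta)) for orthogonal BFSK with matched-filter reception."""
    return q_func(sqrt(E / eta))

# Error probability falls steeply as the energy-to-noise ratio grows:
pe_low_snr = fsk_matched_filter_pe(1.0, 1.0)
pe_high_snr = fsk_matched_filter_pe(9.0, 1.0)
```

For E/η = 1 this gives P_e = Q(1) ≈ 0.159, while E/η = 9 gives Q(3) ≈ 1.3 × 10⁻³.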

The overall performance of a matched-filter receiver in this case is therefore the same as for ASK. The frequency spectrum of an FSK signal is difficult to obtain; this is a general characteristic of FM signals. However, some rules of thumb can be developed. Consider the case where the binary message consists of an alternating sequence of zeros and ones. If the two frequencies are each multiples of 1/T (e.g. f1 = m/T and f2 = n/T) and are synchronised in phase, then the FSK wave is a periodic function.

Figure 18 FSK signal wave

This can be viewed as the linear superposition of two OOK signals, one delayed by T seconds with respect to the other. Since the spectrum of an OOK signal is

S(ω) = ½[M(ω − ω_c) + M(ω + ω_c)]     (eqn 2)

where M(ω) is the transform of the baseband signal m(t), the spectrum of the FSK signal is the superposition of two of these spectra, one for ω1 = ωc − Δω and the other for ω2 = ωc + Δω. Non-synchronous, or envelope, detection can also be performed for FSK signals. In this case the receiver takes the following form:

Figure 19 Receiver

Binary Frequency-Shift Keying (BFSK)

A binary frequency-shift keying (BFSK) signal can be defined by

s(t) = A cos(2πf_0 t) for symbol 0 and A cos(2πf_1 t) for symbol 1, 0 ≤ t ≤ T

where A is a constant, f_0 and f_1 are the transmitted frequencies and T is the bit duration. The signal has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos(2πf_0 t) or √(2P) cos(2πf_1 t), 0 ≤ t ≤ T

= √(PT)·√(2/T) cos(2πf_0 t) or √(PT)·√(2/T) cos(2πf_1 t)

= √E·√(2/T) cos(2πf_0 t) or √E·√(2/T) cos(2πf_1 t)

where E = PT is the energy contained in a bit duration. For orthogonality, f_0 = m/T and f_1 = n/T for integers n > m, and f_1 − f_0 must be an integer multiple of 1/(2T). We can take φ1(t) = √(2/T) cos(2πf_0 t) and φ2(t) = √(2/T) cos(2πf_1 t) as the orthonormal basis functions. The applicable signal constellation diagram of the orthogonal BFSK signal is shown below.

Figure 110 Orthogonal BFSK signal constellation diagram
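The orthogonality condition above (carriers at integer multiples of 1/T) can be checked numerically. A sketch assuming NumPy, integrating the product of the two carriers over one bit period with a simple rectangle rule:

```python
import numpy as np

def carrier_inner_product(m, n, T=1.0, samples=100_000):
    """Approximate the integral of cos(2*pi*(m/T)*t) * cos(2*pi*(n/T)*t) over [0, T]
    by a rectangle rule on a uniform grid (accurate for full periods)."""
    t = np.arange(samples) * (T / samples)
    return float(np.mean(np.cos(2*np.pi*(m/T)*t) * np.cos(2*np.pi*(n/T)*t)) * T)

# Distinct integer multiples of 1/T integrate to ~0 (orthogonal);
# equal frequencies integrate to T/2 (each basis function has unit energy
# after the sqrt(2/T) scaling).
```

For example, carrier_inner_product(2, 3) is essentially zero, while carrier_inner_product(2, 2) is T/2.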

Figure 111 (a) Binary sequence (b) BFSK signal and (c) binary modulating and BASK signals

It can be seen that phase continuity is maintained at transitions. Further, the BFSK signal is the sum of two BASK signals generated by the two modulating signals m0(t) and m1(t). Therefore the Fourier transform of the BFSK signal s(t) is the sum of the transforms of the two BASK signals:

S(f) = (A/2)[M0(f − f0) + M0(f + f0)] + (A/2)[M1(f − f1) + M1(f + f1)]

Figure 112 (a) Modulating signals (b) Spectrum of (a)

M-ary Frequency-Shift Keying (M-FSK)

An M-ary frequency-shift keying (M-FSK) signal can be defined by

s(t) = A cos(2πf_i t + θ), 0 ≤ t ≤ T

= 0, elsewhere

for i = 0, 1, …, M − 1. Here A is a constant, f_i is the transmitted frequency, θ is the initial phase angle and T is the symbol duration. The signal has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos(2πf_i t + θ), 0 ≤ t ≤ T

= √(PT)·√(2/T) cos(2πf_i t + θ)

= √E·√(2/T) cos(2πf_i t + θ)

where E = PT is the energy of s(t) contained in a symbol duration, for i = 0, 1, …, M − 1.

Figure 245 M-ary orthogonal 3-FSK signal constellation diagram

Figure 246 4-FSK modulation (a) binary signal and (b) 4-FSK signal

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase-shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base-2) signal, it takes more than a single input bit to produce four different input conditions. With two bits there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits, called dibits. Each dibit code generates one of the four possible output phases, so for each two-bit dibit clocked into the modulator a single output change occurs.

Figure 31 Phasor Diagram and Constellation Diagram

The figure above shows the constellation diagram for QPSK with Gray coding: each adjacent symbol differs by only one bit. Sometimes known as quaternary or quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.

Figure 32 Four symbols that represents the four phases in QPSK

The figure above depicts the four symbols used to represent the four phases in QPSK. Analysis shows that QPSK may be used either to double the data rate compared with a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.
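The Gray-coded dibit-to-phase mapping can be sketched as follows. The specific angle assignment (45°, 135°, 225°, 315°) is an assumption for illustration; conventions vary between implementations, but any valid Gray mapping makes adjacent constellation points differ by one bit:

```python
import cmath, math

# Hypothetical Gray mapping: walking around the circle, each step flips one bit.
GRAY_QPSK = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}  # degrees

def qpsk_symbols(bits):
    """Pair the bit stream into dibits and emit unit-magnitude constellation points."""
    dibits = list(zip(bits[0::2], bits[1::2]))
    return [cmath.exp(1j * math.radians(GRAY_QPSK[d])) for d in dibits]

syms = qpsk_symbols([0, 0, 0, 1, 1, 1, 1, 0])  # one symbol per dibit
```

All four points lie on the unit circle, and each carries two bits, which is why QPSK doubles the bit rate of BPSK in the same bandwidth.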

Basic Configuration of Quadrature Modulation Scheme

Figure 33 Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish the two signals: one is given by cos 2πf_c t and the other by sin 2πf_c t. The two carrier signals remain orthogonal over a symbol period. A channel in which cos 2πf_c t is used as the carrier signal is generally called the in-phase channel, or Ich, and a channel in which sin 2πf_c t is used as the carrier signal is generally called the quadrature-phase channel, or Qch. Accordingly, d_I(t) and d_Q(t) are the data in the Ich and Qch respectively. Modulation schemes that use the Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in Figure 33.

In the system shown above, the input digital data d_k is first converted into parallel data on two channels, Ich and Qch, represented as d_i(t) and d_q(t). The conversion, or data allocation, is done using a mapping circuit block. The data allocated to the Ich is then filtered using a pulse-shaping filter, converted to an analogue signal by a D/A converter and multiplied by a cos 2πf_c t carrier wave. The same process is carried out on the data allocated to the Qch, except that it is multiplied by a sin 2πf_c t carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier frequency. In both the Ich and Qch, the downconverted signal is digitally sampled by an A/D converter and the digital data are fed to the DSP hardware (DSPH). In the DSPH, the sampled data are filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronised and the transmitted digital data recovered.

For the mapping function, a simple circuit is used to allocate the data as illustrated in the following figure. This mapping function basically allocates all even bits to the Ich and all odd bits to the Qch; demapping is just the opposite operation.

Figure 34 Mapping circuit function for QPSK
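The even/odd allocation described above can be sketched in two small functions (a sketch; zero-based indexing of the bit stream is an assumption):

```python
def qpsk_map(bits):
    """Allocate even-indexed bits to the Ich and odd-indexed bits to the Qch."""
    return bits[0::2], bits[1::2]

def qpsk_demap(ich, qch):
    """Demapping is the opposite operation: re-interleave the two channels."""
    out = []
    for i, q in zip(ich, qch):
        out.extend([i, q])
    return out

ich, qch = qpsk_map([1, 0, 1, 1, 0, 0])
```

Mapping followed by demapping recovers the original bit stream exactly.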

Conclusion: it is clear that in order for QPSK to be useful for high-data-rate, wide-bandwidth systems in multipath fading channels, it is necessary to consider using diversity techniques, equalisation and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider a message source that can generate M equally likely messages. Initially we represent each message by k binary digits, with 2^k = M. These k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits, so that each message is expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are therefore 2^n − 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes and are designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords, C1 = 00000 and C2 = 00011. These have a binary digit difference (or Hamming distance) of 2, in the last two digits. The smallest difference in binary digits between any two codewords in the complete code set is known as the minimum Hamming distance, dmin. For block codes, dmin is the property which controls the error-correction performance: we can calculate the error-detecting and error-correcting power of a code from the minimum distance in bits between the codewords.
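The distance computation described above is a one-liner; finding dmin is then a minimum over all distinct pairs. A minimal sketch (function names are hypothetical):

```python
from itertools import combinations

def hamming(c1, c2):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

def d_min(codewords):
    """Minimum Hamming distance over all distinct pairs in the code set."""
    return min(hamming(a, b) for a, b in combinations(codewords, 2))

# The example pair from the text:
d = hamming("00000", "00011")  # -> 2
```

For a set such as {00000, 00011, 11100} the pairwise distances are 2, 3 and 5, so dmin = 2.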

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing all the individual error probabilities up to and including R errors in the block:

P(> R errors) = 1 − Σ_{j=0}^{R} P(j errors)

The probability of exactly j errors in an n-digit codeword is

P(j errors) = (Pe)^j (1 − Pe)^(n−j) × nCj

where:

Pe is the probability of error in a single binary digit

n is the block length

nCj is the number of ways of choosing j error-digit positions within n binary digits
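The block error probability above translates directly into code. A sketch (the pe, n and R values in the example are illustrative, not taken from the text):

```python
from math import comb

def p_uncorrectable(pe, n, R):
    """P(more than R errors in an n-digit block)
    = 1 - sum_{j=0}^{R} nCj * pe^j * (1 - pe)^(n - j)."""
    return 1 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j) for j in range(R + 1))

# e.g. a single-error-correcting code (R = 1) on 7-bit blocks with Pe = 0.01:
p = p_uncorrectable(0.01, 7, 1)
```

Here the raw per-bit error rate of 10⁻² becomes a block failure rate of about 2 × 10⁻³, illustrating the coding gain.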

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1…CN which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure. This property means that if any two valid codewords are subjected to a bit-wise exclusive-OR (XOR) operation, they produce another valid codeword in the set. The closure property means that to find the minimum Hamming distance (see below), all that is required is to compare every other codeword in the set with the all-zeros codeword, instead of comparing all the possible pairs of codewords. The saving gets bigger the longer the code set: for example, a set with 100 codewords requires 99 comparisons for a group code, compared with 99 + 98 + … + 2 + 1 pairwise comparisons for a non-group code.
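The closure property is mechanically checkable. A small sketch (the even-weight length-3 code used in the example is illustrative):

```python
def is_group_code(codewords):
    """Check closure: the XOR of any two codewords must be another codeword."""
    words = {int(c, 2) for c in codewords}
    return all((a ^ b) in words for a in words for b in words)

# The even-weight length-3 code contains 000 and is closed under XOR:
closed = is_group_code(["000", "011", "101", "110"])
```

Note that closure forces the all-zeros word to be present, since any codeword XORed with itself gives zero.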

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently assumes that the probability of a small number of errors, t, is greater than the probability of the larger number of errors, t + 1, i.e. that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, with real (non-binary) numbers from the receiver. The nearest Euclidean distance (nearest to the five-digit codewords in terms of a 5-D geometry) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes, sometimes called the sphere-packing bound, applies when we are trying to create a code to correct t errors with a block length of n and k information digits. The upper bound is given by the Hamming bound:

2^k ≤ 2^n / (1 + n + nC2 + nC3 + … + nCt)
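The bound can be checked numerically for candidate (n, k, t) triples. A sketch (noting that 1 + n = nC0 + nC1, so the denominator is the sum of nCj for j = 0..t):

```python
from math import comb

def satisfies_hamming_bound(n, k, t):
    """Sphere-packing bound: 2^k * sum_{j=0}^{t} nCj <= 2^n."""
    return 2**k * sum(comb(n, j) for j in range(t + 1)) <= 2**n

# The (7,4) single-error-correcting Hamming code meets the bound with equality:
ok = satisfies_hamming_bound(7, 4, 1)       # 16 * 8 == 128
too_many = satisfies_hamming_bound(7, 5, 1)  # 32 * 8 > 128: impossible
```

Codes that meet the bound with equality, like the (7,4) Hamming code, are called perfect codes.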

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations of the codewords are used for encoding and decoding, since shifting a codeword is equivalent to modifying the exponents of a polynomial. Specifically, let x = (x0, x1, …, x_{n−1}) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with two dyadic (two-operand) operations, addition and multiplication, which are defined for all pairs of field elements in F and behave in an arithmetically consistent manner. A finite field is a field with a finite number q of elements, denoted Fq.) The polynomial over Fq of degree at most n − 1 corresponding to x is

x(D) = x0 + x1 D + x2 D² + … + x_{n−1} D^{n−1}

Cyclic codes are extremely well suited to error correction because they can be designed to detect many combinations of likely errors, and the implementation of both the encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is a contiguous sequence of B bits in which the first and last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction 1 − 2^−(n−k−1) of error bursts of length exactly n − k + 1; a fraction 1 − 2^−(n−k) of error bursts of length greater than n − k + 1; all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, provided the generator polynomial g(X) for the code has an even number of non-zero coefficients.

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways. The main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into the shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional-code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and is defined by

Constraint length L = k(m − 1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confusing with the lower-case k that represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length. The constraint length K, however, is equal to L − 1 as defined in this paper. Convolutional codes will be referred to here as (n, k, m) and not as (r, K).
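The shift-register-and-adder structure just described can be sketched directly. A minimal rate-1/2, m = 2 encoder; the generator taps (the common (7, 5)-octal pair) are an illustrative assumption, not taken from the text:

```python
def conv_encode(bits, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder with m = 2 memory registers.
    Each tuple in g gives the taps of one modulo-2 adder, applied to
    [current input bit, newest memory bit, oldest memory bit]."""
    state = [0, 0]                      # shift-register contents, newest first
    out = []
    for b in bits:
        window = [b] + state
        for taps in g:                  # one modulo-2 adder per output bit
            out.append(sum(t & w for t, w in zip(taps, window)) % 2)
        state = [b] + state[:-1]        # shift the new bit into the register
    return out

encoded = conv_encode([1, 0, 1, 1])  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces n = 2 output bits, giving the rate r = k/n = 1/2.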

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connections of the shift-register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers. In the state diagram, the state information of the encoder is shown in the circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted as x/c, represents the input information bit x and the output encoded bits c. It is customary to begin convolutional encoding from the all-zero state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently, a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and can be identified from the state diagram: a state diagram having a loop in which a non-zero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values; soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principle used to reduce the choices is this:

1 Errors occur infrequently; the probability of error is small.

2 The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric is kept and the other discarded. The paths selected are called the survivors.

For an N-bit received sequence, the total number of possible sequences is 2^N. Of these, only 2^(kL) are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to 2^(kL) surviving paths instead of checking all paths. The most common metric used is the Hamming distance: the number of bit positions in which the received codeword differs from an allowable codeword.
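The survivor-pruning procedure above can be sketched compactly. A hard-decision Viterbi decoder for an illustrative rate-1/2, m = 2 code (the (7, 5)-octal generator taps are an assumption, not taken from the text), using the Hamming-distance branch metric and keeping one survivor per state:

```python
def viterbi_decode(received, m=2, g=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding of a rate-1/2 convolutional code.
    State s packs the register contents as bits [newest ... oldest]."""
    n_states = 2 ** m
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            state_bits = [(s >> j) & 1 for j in range(m)]
            for b in (0, 1):                  # try both input bits
                window = [b] + state_bits
                out = [sum(t & w for t, w in zip(taps, window)) % 2 for taps in g]
                branch = sum(o != rb for o, rb in zip(out, r))  # Hamming metric
                nxt = ((s << 1) | b) & (n_states - 1)           # shift b in
                if metric[s] + branch < new_metric[nxt]:        # keep survivor
                    new_metric[nxt] = metric[s] + branch
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

decoded = viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1])  # -> [1, 0, 1, 1]
```

The example input is the encoding of 1011 under the same taps, and the surviving path with the lowest accumulated Hamming metric recovers it.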

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity of order O[2^(kL)].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is of order O[(2^k)(2^m)(L + m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor and not an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code, but in many cases they offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. S. Martin, Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321

  • Viterbi decoding
Page 3: Analogue and Digital Communication assignment

s(t) = radic(2Pi) cos 2πfct

= radic(PiT)radic(2T)cos 2πfct

= radic(Ei)radic(2T)cos 2πfct 0 lt t lt T

where Ei = PiT is the energy of s(t) contained in a symbol duration for i = 0 1

M - 1 The figure shows the signal constellation diagrams of M-ASK and 4-ASK

signals

Figure 13 (a) M-ASK and (b) 4-ASK signal constellation diagrams

The figure below shows the 4-ASK signal sequence generated by the binary sequence 00 01 10 11

Figure 14 (a) binary sequence (b) 4-ary signal and (b) 4-ASK signal

iii) Transmitting and Receiving process

Figure 15 ASK block diagram

Here is a diagram showing the ideal model for a transmission system using an ASK modulation It can be divided into three blocks The first one represents the transmitter the second one is a linear model of the effects of the channel the third one shows the structure of the receiver The following notation is used

Ht(t) is the carrier signal for the transmission Hc(t) is the impulse response of the channel n(t) is the noise introduced by the channel Hr(t) is the filter at the receiver L is the number of levels that are used for transmission Ts is the time between the generation of two symbol

Different symbols are represented with different voltages If the maximum allowed value for the voltage is A then all the possible values are in the range[-A A] and they are given by the difference between one voltage and the other is Considering the picture the symbols v(n) are generated randomly by the sources then the impulse generator creates impulses with an area of v(n) These impulses are sent to the filterht to be sent through thechannel In other words for each symbol a different carrier wave is sent with the relative amplitude Out of the transmitter the signal s(t) can be expressed in the form In the receiver after the filtering throughhr (t) the signal is where we use the notation

nr(t) = n(t) hr(t) g(t) = ht(t) hc(t) hr(t)

where indicates the convolution between two signals After the AD conversion the signalz[k] can be expressed in the form In this relationship the second term represents the symbol to be extracted The others are unwanted the first one is the effect of noise the second one is due to the intersymbol interferenceIf the filters are chosen so thatg(t) will satisfy the Nyquist ISI criterionthen there will be no intersymbol interference and the value of the sumwill be zero so

z[k] = nr[k] + v[k]g[0] the transmission will be affected only by noise

Frequency-Shift Keying (FSK)

i) Introduction

FSK (Frequency Shift Keying) is also known as frequency shift modulation and frequency shift signaling Frequency Shift Keying is a data signal converted into a specific frequency or tone in order to transmit it over wire cable optical fiber or wireless media to a destination point In Frequency Shift Keying the modulating signals shift the output frequency between predetermined levels Technically FSK has two classifications the non-coherent and coherent FSK In non-coherent FSK the instantaneous frequency is shifted between two discrete values named mark and space frequency respectively On the other hand in coherent Frequency Shift Keying or binary FSK there is no phase discontinuity in the output signal The simplest FSK is binary FSK (BFSK) BFSK literally implies using a pair of discrete frequencies to transmit binary (0s and 1s) information With this scheme the 1 is called the mark frequency and the 0 is called the space frequency

ii) Modulation

In frequency-shift keying the signals transmitted for marks (binary ones) and spaces (binary zeros) are respectively

s1(t)= A cos(ω1t + θc) 0 lt t le T

s2(t)= A cos(ω2t + θc) 0 lt t le T

This is called a discontinuous phase FSK system because the phase of the signal is discontinuous at the switching times A signal of this form can be generated by the following system

Figure 16 FSK discontinuous phase

If the bit intervals and the phases of the signals can be determined (usually by the use of a phase-lock loop) then the signal can be decoded by two separate matched filters

Figure 17 Two separate filters

The first filter is matched to the signal s1(t) and the second to s2(t) Under the assumption that the signals are mutually orthogonal the output of one of the matched filters will be E and the other zero (where E is the energy of the signal) Decoding of the bandpass signal can therefore be achieved by subtracting the outputs of the two filters and comparing the result to a threshold If the signal s1(t) is present then the resulting output will be +E and if s2(t) is present it will be -E Since the noise variance at each filter output is Eᶯ2the noise in the difference signal will be doubled namely σ2= Eᶯ Since the overall output variation is 2E the probability of error is on eqn1

eqn1

The overall performance of a matched filter receiver in this case is therefore the same as for ASK The frequency spectrum of an FSK signal is difficult to obtain This is a general characteristic of FM signals However some rules of thumb can be developed Consider the case where the binary message consists of an alternating sequence of zeros and ones If the two frequencies are each multiples of 1T (eg f1 = mT and f2 =nT ) and are synchronised in phase then the FSK wave is a periodic function

Figure 18 FSK signal wave

This can be viewed as the linear superposition of two OOK signals one delayed by T seconds with respect to the other Since the spectrum of an OOK signal is on eqn2

eqn2

where M(ω) is the transform of the baseband signal m(t) the spectrum of the FSK signal is

the superposition of two of these spectra one for ω1 = ωc ndash Δω and the other for ω2 =ωc +

Δω Nonsynchronous or envelope detection can be performed for FSK signals In this case

the receiver takes these following form

Figure 19 Receiver

Binary Frequency-Shift Keying (BFSK)

A binary frequency-shift keying (BFSK) signal can be defined by

s(t) =A cos 2f 0 t 0 lt t lt T A cos 2f 1 t elsewhere

where A is a constant f0 and f1 are the transmitted frequencies and T is the bit duration The signal has a power P = A22 so that A = radic2P Thus equation can be written as

s(t) =radic2P cos 2f 0 t 0 lt t lt T radic2P cos 2f 1 t elsewhere

radicPT radic2T cos 2f 0 t 0 lt t lt T radicPT radic2T cos 2f 1 t elsewhere

radicE radic2T cos 2f 0 t 0 lt t lt T radicE radic2T cos 2f 1 t elsewhere

where E = PT is the energy contained in a bit duration For orthogonality f0 = mTand f1 = nT for integer n gt integer m and f1 - f0 must be an integer multiple of12T We can take ϕ1(t) =radic2T cos 2 f0t and ϕ 2( t) = 2T sin 2 f1t as theorthonormal basis functions The applicable signal constellation diagram of theorthogonal BFSK signal is shown in below

Figure 110 Orthogonal BFSK signal constellation diagram

Figure 111 (a) Binary sequence (b) BFSK signal and (c) binary modulating and BASK signals

It can be seen that phase continuity is maintained at transitions Further the BFSK signal is the sum of two BASK signals generated by two modulating signals m0(t) and m1(t) Therefore the Fourier transform of the BFSK signal s(t) is S(f) = A2 M1(f - f1) + A2 M1(f + f1)

Figure 112 (a) Modulating signals (b) Spectrum of (a)

M-ary Frequency-Shift Keying ( M -FSK)

An M-ary frequency-shift keying (M-FSK) signal can be defined by

    s(t) = A cos(2πfit + θ),  0 < t ≤ T,  and 0 elsewhere

for i = 0, 1, …, M − 1. Here A is a constant, fi is the transmitted frequency, θ is the initial phase angle, and T is the symbol duration. The signal has power P = A²/2, so that A = √(2P). Thus the equation can be written as

    s(t) = √(2P) cos(2πfit + θ)
         = √(PT) √(2/T) cos(2πfit + θ)
         = √E √(2/T) cos(2πfit + θ),  0 < t ≤ T

where E = PT is the energy of s(t) contained in a symbol duration, for i = 0, 1, …, M − 1.

Figure 2.45 M-ary orthogonal FSK (3-FSK) signal constellation diagram

Figure 2.46 4-FSK modulation: (a) binary signal and (b) 4-FSK signal
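The M-FSK definition above can be sketched in a few lines. The frequency assignment fi = fbase + i·Δf and all parameter values are assumptions made for the example, not taken from the text:

```python
import numpy as np

def mfsk_waveform(symbols, M=4, fbase=1e3, df=500.0, T=1e-3, P=1.0,
                  fs=1e5, theta=0.0):
    """M-FSK sketch: symbol i (0 <= i < M) is sent as A*cos(2*pi*fi*t + theta)
    over one symbol duration T, with A = sqrt(2P) and fi = fbase + i*df."""
    assert all(0 <= i < M for i in symbols)
    A = np.sqrt(2 * P)
    N = int(T * fs)                      # samples per symbol
    t = np.arange(N) / fs
    return np.concatenate([A * np.cos(2 * np.pi * (fbase + i * df) * t + theta)
                           for i in symbols])

s = mfsk_waveform([0, 3, 1, 2])          # a 4-FSK symbol sequence
print(len(s))                            # 4 symbols, 100 samples each
```

Each symbol interval contains a single tone, which is what Figure 2.46(b) shows for 4-FSK.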

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase-shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base-2) signal, more than a single input bit is needed to produce four different input conditions. With two bits there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits, called dibits. Each dibit code generates one of the four possible output phases, so for each two-bit dibit clocked into the modulator a single output change occurs.
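The dibit-to-phase mapping can be sketched as follows. The specific Gray-coded angles (45°, 135°, 225°, 315°) are one common convention, assumed here for illustration rather than taken from the text:

```python
import numpy as np

# Gray-coded dibit -> carrier phase (degrees); one common convention,
# assumed for this sketch.
PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

def qpsk_symbols(bits):
    """Group the bit stream into dibits and map each dibit to a
    unit-amplitude complex symbol exp(j*phase)."""
    assert len(bits) % 2 == 0, "QPSK needs an even number of bits"
    dibits = zip(bits[0::2], bits[1::2])
    return [np.exp(1j * np.deg2rad(PHASES[d])) for d in dibits]

syms = qpsk_symbols([0, 0, 1, 1, 0, 1])
print([int(round(np.degrees(np.angle(s)) % 360)) for s in syms])  # [45, 225, 135]
```

Six input bits produce three symbols, i.e. one output phase change per dibit, as described above.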

Figure 3.1 Phasor diagram and constellation diagram

The figure above shows the constellation diagram for QPSK with Gray coding: each pair of adjacent symbols differs by only one bit. Sometimes known as quaternary or quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.

Figure 3.2 The four symbols that represent the four phases in QPSK

The figure above depicts the four symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared with a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

Basic Configuration of Quadrature Modulation Scheme

Figure 3.3 Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish the two signals: one is given by cos 2πfct and the other by sin 2πfct. The two carrier signals remain orthogonal over a period. A channel in which cos 2πfct is used as the carrier signal is generally called the in-phase channel, or Ich, and a channel in which sin 2πfct is used as the carrier signal is generally called the quadrature-phase channel, or Qch. Accordingly, dI(t) and dQ(t) are the data in Ich and Qch respectively. Modulation schemes that use Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in Figure 3.3.

In the system shown above, the input digital data dk is first converted into parallel data in the two channels, Ich and Qch. The data are represented as di(t) and dq(t). The conversion, or data allocation, is done by a mapping circuit block. The data allocated to Ich are then filtered by a pulse-shaping filter in Ich. The pulse-shaped signal is converted into an analog signal by a D/A converter and multiplied by a cos 2πfct carrier wave. The same process is carried out on the data allocated to Qch, except that they are multiplied by a sin 2πfct carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier frequency. In both the Ich and Qch channels, the downconverted signal is digitally sampled by an A/D converter and the digital data are fed to a DSPH. In the DSPH, the sampled data are filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronized and the transmitted digital data are recovered.

For the mapping function, a simple circuit is used to allocate the data, as illustrated in Figure 3.4. This mapping function basically allocates all even bits to Ich and all odd bits to Qch; demapping is just the opposite operation.

Figure 3.4 Mapping circuit function for QPSK
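The even/odd allocation performed by the mapping circuit, and its inverse, can be sketched directly (a behavioural model only, not a circuit description):

```python
def qpsk_map(bits):
    """Split the serial bit stream: even-indexed bits go to Ich,
    odd-indexed bits go to Qch."""
    ich, qch = bits[0::2], bits[1::2]
    return ich, qch

def qpsk_demap(ich, qch):
    """Demapping is the opposite operation: re-interleave the channels."""
    out = []
    for i, q in zip(ich, qch):
        out += [i, q]
    return out

bits = [1, 0, 0, 1, 1, 1]
ich, qch = qpsk_map(bits)
print(ich, qch)                      # [1, 0, 1] [0, 1, 1]
assert qpsk_demap(ich, qch) == bits  # demapping recovers the original stream
```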

Conclusion: it is clear that, in order for QPSK to be useful for high-data-rate, wide-bandwidth systems in multipath fading channels, it is necessary to consider using diversity techniques, equalization and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider a message source that can generate M equally likely messages. Initially we represent each message by k binary digits, with 2^k = M. These k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits. Each message has thus been expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are therefore 2^n − 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes, designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords C1 = 00000 and C2 = 00011. These have a binary digit difference (or Hamming distance) of 2, in the last two digits. The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance, dmin. For block codes, dmin (the smallest difference between the digits of any two codewords in the complete code set) is the property which controls the error-correction performance. We can thus calculate the error-detecting and error-correcting power of a code from the minimum distance in bits between its codewords.
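The Hamming distance, and the minimum distance over a code set, can be computed with a short sketch (the three-codeword set is an invented example):

```python
from itertools import combinations

def hamming(c1, c2):
    """Number of bit positions in which two equal-length codewords differ."""
    assert len(c1) == len(c2)
    return sum(a != b for a, b in zip(c1, c2))

def dmin(codewords):
    """Minimum Hamming distance over all distinct pairs in the code set."""
    return min(hamming(a, b) for a, b in combinations(codewords, 2))

print(hamming("00000", "00011"))          # 2, as in the example above
print(dmin(["00000", "00011", "11100"]))  # smallest pairwise distance: 2
```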

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing the individual error probabilities up to and including R errors in the block:

    P(> R errors) = 1 − Σ (j = 0 to R) P(j errors)

The probability of j errors in an n-digit codeword is

    P(j errors) = nCj (Pe)^j (1 − Pe)^(n−j)

where

Pe is the probability of error in a single binary digit,

n is the block length, and

nCj is the number of ways of choosing the j error-digit positions within the n binary digits.
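The block error probability above is easy to evaluate numerically. The parameter values (n = 7, R = 1, Pe = 0.01) are illustrative choices, not figures from the text:

```python
from math import comb

def p_block_uncorrectable(n, R, pe):
    """Probability of more than R errors in an n-digit block:
    1 - sum over j = 0..R of nCj * pe^j * (1 - pe)^(n - j)."""
    return 1 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j)
                   for j in range(R + 1))

# e.g. a 7-digit block with single-error correction (R = 1) and Pe = 0.01
print(round(p_block_uncorrectable(7, 1, 0.01), 6))   # ~0.002031
```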

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1, …, CN which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure. This property means that if any two valid codewords are combined by a bitwise exclusive-OR (XOR) operation, they produce another valid codeword in the set. The closure property means that, to find the minimum Hamming distance, all that is required is to compare every remaining codeword with the all-zeros codeword, instead of comparing all possible pairs of codewords. The saving gets bigger the longer the code set: for example, a set of 100 codewords requires 100 comparisons for a group code, compared with 99 + 98 + … + 2 + 1 pairwise comparisons for a non-group code.
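The closure property, and the resulting dmin shortcut, can be checked on a small example (the even-weight length-3 code below is a standard illustration, not a code from the text):

```python
from itertools import combinations

def xor(a, b):
    """Bitwise XOR of two codeword strings."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

def is_group_code(codewords):
    """Closure check: the XOR of any two codewords is again a codeword,
    and the all-zeros codeword is in the set."""
    cs = set(codewords)
    return "0" * len(codewords[0]) in cs and all(
        xor(a, b) in cs for a, b in combinations(cs, 2))

code = ["000", "011", "101", "110"]   # even-weight code: a classic group code
print(is_group_code(code))            # True

# For a group code, dmin equals the minimum weight of the non-zero codewords
# (i.e. the distance from the all-zeros codeword).
dmin = min(c.count("1") for c in code if c != "000")
print(dmin)                           # 2
```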

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently contains the assumption that the probability of a small number of errors, t, is greater than the probability of the larger number of errors t + 1, i.e. that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, with real (non-binary) numbers from the receiver. The nearest Euclidean distance (nearest to the five-digit codewords in terms of a 5-D geometry) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. Suppose we are trying to create a code to correct t errors with a block length of n and k information digits. The upper bound, as given by the Hamming bound, is

    2^k ≤ 2^n / (1 + n + nC2 + nC3 + … + nCt)
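The bound can be checked mechanically. The (7, 4) single-error-correcting case below is the standard example (it meets the bound with equality):

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Check 2^k <= 2^n / sum over i = 0..t of nCi (sphere-packing bound),
    rearranged to avoid division."""
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

print(hamming_bound_ok(7, 4, 1))   # True: 16 * (1 + 7) = 128 = 2^7 exactly
print(hamming_bound_ok(7, 5, 1))   # False: too many information digits for t = 1
```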

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations are used for the codewords in encoding and decoding, since the shifting of a codeword is equivalent to a modification of the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn−1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with dyadic (two-operand) operations of addition and multiplication, which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner. A finite field is a field with a finite number q of elements, denoted Fq.) The corresponding polynomial over Fq, of degree at most n − 1, is x(D) = x0 + x1D + x2D² + … + xn−1D^(n−1).

Cyclic codes are extremely well suited to error detection because they can be designed to detect many combinations of likely errors, and the implementation of both encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is a contiguous sequence of B bits in which the first and the last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction of error bursts of length equal to n − k + 1, the fraction being 1 − 2^−(n−k−1); a fraction of error bursts of length greater than n − k + 1, the fraction being 1 − 2^−(n−k); all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, provided the generator polynomial g(X) for the code has an even number of non-zero coefficients.
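CRC encoding is modulo-2 polynomial division: append n − k zero bits to the data, divide by g(X), and append the remainder as the check bits. A minimal sketch, using the small generator g(X) = X³ + X + 1 as an illustrative (assumed) choice:

```python
def mod2_div_remainder(bits, gen):
    """Modulo-2 (XOR) polynomial division; returns the remainder bits."""
    reg = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if reg[i]:                         # leading bit set: subtract (XOR) g
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-(len(gen) - 1):]

def crc_encode(data, gen):
    """Append len(gen)-1 zeros, divide by g, append the remainder."""
    padded = data + [0] * (len(gen) - 1)
    return data + mod2_div_remainder(padded, gen)

gen = [1, 0, 1, 1]                         # g(X) = X^3 + X + 1 (illustrative)
codeword = crc_encode([1, 0, 1, 1, 0, 1], gen)
print(codeword)                            # data followed by 3 check bits

print(mod2_div_remainder(codeword, gen))   # [0, 0, 0]: no detected errors
corrupted = codeword[:]
corrupted[2] ^= 1                          # inject a single bit error
print(mod2_div_remainder(corrupted, gen))  # non-zero remainder: error detected
```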

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways. The main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input to the shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and is defined by L = k(m − 1).

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confused with the lower-case k that represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length; that K is equal to L + 1 as defined here. We will refer to convolutional codes as (n, k, m) and not as (r, K).

Encoder Representations

1) Generator Representation

The generator representation shows the hardware connections of the shift-register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.
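As a sketch of the generator representation, consider a rate-1/2 encoder with generator vectors (111) and (101), the classic small textbook example (assumed here for illustration, not necessarily a code discussed elsewhere in this assignment):

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder sketch. The generator vectors give the
    shift-register taps: 1 = connection to the modulo-2 adder, 0 = none.
    m = len(g)-1 zero tail bits are appended to flush the registers."""
    m = len(g1) - 1
    state = [0] * m                       # shift-register contents
    out = []
    for b in bits + [0] * m:              # data bits plus flushing tail
        window = [b] + state              # current bit followed by the registers
        out.append(sum(x & g for x, g in zip(window, g1)) % 2)
        out.append(sum(x & g for x, g in zip(window, g2)) % 2)
        state = window[:-1]               # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))   # -> 11 10 00 01 01 11
```

Input 1011 yields the output pairs 11 10 00 01 01 11, the well-known result for this generator pair.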

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers. In the state diagram the states are shown in circles, and each new input information bit causes a transition from one state to another. The path information between the states, denoted x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zero state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently, a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and can be identified from the state diagram: a state diagram having a loop in which a non-zero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values, while soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principle used to reduce the choices is this:

1. Errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N. Of these, only 2^(kL) are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the surviving paths, one per encoder state, instead of checking all paths. The most common metric used is the Hamming distance: the number of bit positions in which the received bits and the allowable codeword differ.
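The survivor-path procedure described above can be sketched as a hard-decision Viterbi decoder. The rate-1/2 generators (111, 101) and the test sequence are illustrative assumptions, not taken from the text:

```python
from itertools import product

G1, G2 = (1, 1, 1), (1, 0, 1)    # generator taps of a small rate-1/2 code
M = len(G1) - 1                  # encoder memory: 2 bits -> 4 states

def branch(state, b):
    """Output bit pair and next state for input bit b from a given state."""
    window = (b,) + state
    o1 = sum(x & g for x, g in zip(window, G1)) % 2
    o2 = sum(x & g for x, g in zip(window, G2)) % 2
    return (o1, o2), window[:-1]

def viterbi_decode(received):
    """Keep, per state, the survivor path with the smallest accumulated
    Hamming distance to the received bit pairs."""
    states = list(product([0, 1], repeat=M))
    start = (0,) * M                         # encoding begins in the all-zero state
    paths = {s: (0, []) if s == start else (float("inf"), []) for s in states}
    for i in range(0, len(received), 2):
        r = tuple(received[i:i + 2])
        new = {s: (float("inf"), []) for s in states}
        for s in states:
            cost, bits = paths[s]
            if cost == float("inf"):
                continue
            for b in (0, 1):
                out, nxt = branch(s, b)
                d = cost + sum(x != y for x, y in zip(out, r))
                if d < new[nxt][0]:          # keep the better survivor
                    new[nxt] = (d, bits + [b])
        paths = new
    return paths[start][1][:-M]              # drop the M flushing tail bits

rx = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]    # encoding of 1 0 1 1 (with tail)
rx[3] ^= 1                                   # inject one channel bit error
print(viterbi_decode(rx))                    # recovers [1, 0, 1, 1]
```

Despite the injected error, the survivor with the smallest Hamming metric is the transmitted path, so the data bits are recovered correctly.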

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity of order O[2^(kL)].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage is 2^m. Therefore the complexity of the Viterbi algorithm is of order O[(2^k)(2^m)(L + m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear rather than an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code, but they generally offer greater simplicity of implementation than a block code of equal power.




Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways Also the main decoding strategy for convolutional codes based on the Viterbi Algorithm will bedescribed A firm understanding of convolutional codes is an important prerequisite tothe understanding of turbo codes

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bit

k = number of inputs bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers The information bits are input into shift registers and the output encoded bits are obtainedby modulo-2 addition of the input information bits and the contents of the shift registersThe connections to the modulo-2 adders were developed heuristically with no algebraic or combinatorial foundation The code rate r for a convolutional code is defined as r = kn Often the manufacturers of convolutional code chips specify the code by parameters (nkL) The quantity L is called the constraint length of the code and is defined by Constraint Length L = k (m-1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bit s The constraint length L is also referred to by t he capital letter K which can be confusing wit h the lower ca se k which represents the number of input bits In some books K is defined as equal to product the of k and m Often in commercial spec the codes are specified by (r K) where r = the code rate kn and K is the constraint length The constraint length K however is equal t o L œ 1 as defined in this paper I will be referring t o convolutional codes as (nkm) and not as (rK)

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code the input information sequence contains kLbits where k is the number of parallel information bits at one time interval and L is thenumber of time intervals This results in L+m stages in the trellis diagram There areexactly 2kL distinct paths in the trellis diagram and as a result an exhaustive search forthe ML sequence would have a computational complexity on the order of O[2kL]

TheViterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis At each node (state) of the trellis there are 2k calculations Thenumber of nodes per stage in the trellis is 2m Therefore the complexity of the Viterbialgorithm is on the order of O[(2k)(2m)(L+m)] This significantly reduces the number ofcalculations required to implement the ML decoding because the number of time intervalsL is now a linear factor and not an exponent factor in the complexity However therewill be an exponential increase in complexity if either k or m increases

Conclusion

Fundamentally convolutional codes do not offer more protection against noise than an equivalent block code In many cases they generally offer greater simplicity of implementation over a block code of equal power

REFERENCES

1 Herbert Taub Schilling Principle of Communication System Mcgraw-Hill 2002

2 Martin S Analog and Digital Communication System Prentice Hall 2001

3 P M Grant and D G Mcruickshank I A Glover and P M Grant Digital Communications Pearson Education 2009

4 V Pless Introduction to the Theory of Error-Correcting Codes 3rd ed New York John Wiley amp Sons 1998

5 Tomasi W Electronic Communication Systems Fundamentals Through Advanced Prentice Hall 2004

6 Lee L H C Error-Control Block Codes for Communications Engineers ArtechHouse 2000

7 httpwwwscribdcomdoc35139573Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8 httpwwwwikipediaorg

9 MacQuarie University Lecturer Notes httpwwwelecmqeduau~clfiles_pdfelec321

  • Viterbi decoding
Page 5: Analogue and Digital Communication assignment

Frequency-Shift Keying (FSK)

i) Introduction

FSK (frequency-shift keying) is also known as frequency-shift modulation or frequency-shift signalling. In FSK a data signal is converted into a specific frequency, or tone, in order to transmit it over wire, cable, optical fibre or wireless media to a destination point. The modulating signal shifts the output frequency between predetermined levels. Technically, FSK has two classifications: non-coherent and coherent FSK. In non-coherent FSK the instantaneous frequency is shifted between two discrete values, named the mark and space frequencies respectively. In coherent FSK, on the other hand, there is no phase discontinuity in the output signal. The simplest FSK is binary FSK (BFSK), which uses a pair of discrete frequencies to transmit binary (0s and 1s) information. With this scheme the 1 is called the mark frequency and the 0 is called the space frequency.

ii) Modulation

In frequency-shift keying the signals transmitted for marks (binary ones) and spaces (binary zeros) are respectively

s1(t) = A cos(ω1t + θc), 0 < t ≤ T

s2(t) = A cos(ω2t + θc), 0 < t ≤ T

This is called a discontinuous-phase FSK system, because the phase of the signal is discontinuous at the switching times. A signal of this form can be generated by the following system.

Figure 16 FSK discontinuous phase

If the bit intervals and the phases of the signals can be determined (usually by the use of a phase-locked loop), then the signal can be decoded by two separate matched filters.

Figure 17 Two separate filters

The first filter is matched to the signal s1(t) and the second to s2(t). Under the assumption that the signals are mutually orthogonal, the output of one of the matched filters will be E and the other zero (where E is the energy of the signal). Decoding of the bandpass signal can therefore be achieved by subtracting the outputs of the two filters and comparing the result to a threshold. If the signal s1(t) is present then the resulting output will be +E, and if s2(t) is present it will be −E. Since the noise variance at each filter output is Eη/2, the noise variance in the difference signal will be doubled, namely σ² = Eη. Since the overall output variation is 2E, the probability of error is given by eqn 1.

eqn1
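The subtract-and-compare decision described above can be sketched numerically. The bit duration, tone frequencies and noiseless channel below are illustrative assumptions, not values from the text:

```python
import math

# Sketch of the two-correlator FSK receiver: T = 1 s, tones at 2 Hz and 3 Hz
# (integer multiples of 1/T, hence orthogonal), noiseless channel.
T, N = 1.0, 1000
f0, f1 = 2.0, 3.0
dt = T / N
ts = [i * dt for i in range(N)]

def correlate(rx, f):
    # Correlator (matched-filter) output for the tone at frequency f
    return sum(r * math.cos(2 * math.pi * f * t) for r, t in zip(rx, ts)) * dt

def detect(bit):
    f_tx = f1 if bit else f0
    rx = [math.cos(2 * math.pi * f_tx * t) for t in ts]
    # Subtract the two correlator outputs and compare with a zero threshold
    return 1 if correlate(rx, f1) - correlate(rx, f0) > 0 else 0

print([detect(b) for b in (0, 1, 1, 0)])  # → [0, 1, 1, 0]
```

With the matched tone the correlator output is about E = 0.5, and about zero for the other tone, so the sign of the difference recovers the bit.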

The overall performance of a matched-filter receiver in this case is therefore the same as for ASK. The frequency spectrum of an FSK signal is difficult to obtain; this is a general characteristic of FM signals. However, some rules of thumb can be developed. Consider the case where the binary message consists of an alternating sequence of zeros and ones. If the two frequencies are each multiples of 1/T (e.g. f1 = m/T and f2 = n/T) and are synchronised in phase, then the FSK wave is a periodic function.

Figure 18 FSK signal wave

This can be viewed as the linear superposition of two OOK signals, one delayed by T seconds with respect to the other. Since the spectrum of an OOK signal is given by eqn 2,

eqn2

where M(ω) is the transform of the baseband signal m(t), the spectrum of the FSK signal is the superposition of two of these spectra, one for ω1 = ωc − Δω and the other for ω2 = ωc + Δω. Nonsynchronous or envelope detection can be performed for FSK signals. In this case the receiver takes the following form.

Figure 19 Receiver

Binary Frequency-Shift Keying (BFSK)

A binary frequency-shift keying (BFSK) signal can be defined by

s(t) = A cos(2πf0t) or A cos(2πf1t), 0 < t ≤ T

depending on which binary symbol is transmitted, where A is a constant, f0 and f1 are the transmitted frequencies and T is the bit duration. The signal has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos(2πfit) = √(PT) · √(2/T) cos(2πfit) = √E · √(2/T) cos(2πfit), 0 < t ≤ T, i = 0, 1

where E = PT is the energy contained in a bit duration. For orthogonality, f0 = m/T and f1 = n/T for integers n > m, and f1 − f0 must be an integer multiple of 1/(2T). We can take φ1(t) = √(2/T) cos(2πf0t) and φ2(t) = √(2/T) cos(2πf1t) as the orthonormal basis functions. The signal constellation diagram of the orthogonal BFSK signal is shown below.

Figure 110 Orthogonal BFSK signal constellation diagram
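The orthogonality condition above, f1 − f0 an integer multiple of 1/(2T), can be checked numerically. The tone values here are illustrative assumptions:

```python
import math

# Numerical inner product of two FSK tones over one bit period T.
# Tones spaced by a multiple of 1/(2T) are orthogonal; others are not.
T, N = 1.0, 10_000
dt = T / N

def inner(fa, fb):
    return sum(math.cos(2 * math.pi * fa * i * dt) *
               math.cos(2 * math.pi * fb * i * dt) for i in range(N)) * dt

print(abs(inner(2.0, 2.5)) < 1e-2)   # True: spacing 0.5 = 1/(2T)
print(abs(inner(2.0, 2.3)) > 1e-1)   # True: spacing 0.3 is not a multiple
```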

Figure 111 (a) Binary sequence (b) BFSK signal and (c) binary modulating and BASK signals

It can be seen that phase continuity is maintained at transitions. Further, the BFSK signal is the sum of two BASK signals generated by two modulating signals m0(t) and m1(t). Therefore the Fourier transform of the BFSK signal s(t) is

S(f) = (A/2)[M0(f − f0) + M0(f + f0)] + (A/2)[M1(f − f1) + M1(f + f1)]

Figure 112 (a) Modulating signals (b) Spectrum of (a)

M-ary Frequency-Shift Keying (M-FSK)

An M-ary frequency-shift keying (M-FSK) signal can be defined by

s(t) = A cos(2πfit + θ), 0 < t ≤ T, and 0 elsewhere,

for i = 0, 1, …, M − 1. Here A is a constant, fi is the transmitted frequency, θ is the initial phase angle and T is the symbol duration. It has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos(2πfit + θ) = √(PT) · √(2/T) cos(2πfit + θ) = √E · √(2/T) cos(2πfit + θ), 0 < t ≤ T

where E = PT is the energy of s(t) contained in a symbol duration, for i = 0, 1, …, M − 1.

Figure 113 M-ary orthogonal 3-FSK signal constellation diagram

Figure 114 4-FSK modulation (a) binary signal and (b) 4-FSK signal

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base-2) signal, it takes more than a single input bit to produce four different input conditions. With two bits there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits, called dibits. Each dibit code generates one of the four possible output phases, so for each dibit clocked into the modulator a single output change occurs.

Figure 31 Phasor Diagram and Constellation Diagram

The figure above shows the constellation diagram for QPSK with Gray coding; each adjacent symbol differs by only one bit. Sometimes known as quaternary or quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.

Figure 32 Four symbols that represents the four phases in QPSK

The figure above depicts the four symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared with a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

Basic Configuration of Quadrature Modulation Scheme

Figure 33 Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish the two signals: one is given by cos 2πfct and the other by sin 2πfct. The two carrier signals remain orthogonal over a period. A channel in which cos 2πfct is used as the carrier signal is generally called the in-phase channel, or Ich, and a channel in which sin 2πfct is used as the carrier signal is generally called the quadrature-phase channel, or Qch. Accordingly, dI(t) and dQ(t) are the data in Ich and Qch respectively. Modulation schemes that use Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in figure 33.

In the system shown above, the input digital data dk is first converted into parallel data with two channels, Ich and Qch. The data are represented as di(t) and dq(t). The conversion, or data allocation, is done using a mapping circuit block. The data allocated to Ich is then filtered using a pulse-shaping filter in Ich. The pulse-shaped signal is converted to an analog signal by a D/A converter and multiplied by a cos 2πfct carrier wave. The same process is carried out on the data allocated to Qch, but it is multiplied by a sin 2πfct carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier frequency. In both the Ich and Qch channels the downconverted signal is digitally sampled by an A/D converter and the digital data is fed to a DSP. There the sampled data is filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronised and the transmitted digital data is recovered.

Figure 34 Mapping circuit function for QPSK

For the mapping function a simple circuit is used to allocate the data, as illustrated in figure 34. This mapping function basically allocates all even bits to Ich and all odd bits to Qch; demapping is just the opposite operation.
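The even/odd-bit allocation described above can be sketched as follows; the Gray mapping with ±1 levels is an illustrative assumption:

```python
# Sketch of a QPSK mapping block: even-indexed bits go to Ich, odd-indexed
# bits to Qch, each bit mapped to a +/-1 level (one of four phases per dibit).
def qpsk_map(bits):
    level = lambda b: 1 if b else -1
    return [(level(bits[i]), level(bits[i + 1]))
            for i in range(0, len(bits) - 1, 2)]

def qpsk_demap(symbols):
    bits = []
    for i_lvl, q_lvl in symbols:
        bits += [1 if i_lvl > 0 else 0, 1 if q_lvl > 0 else 0]
    return bits

data = [0, 0, 0, 1, 1, 0, 1, 1]
syms = qpsk_map(data)
print(syms)                          # [(-1, -1), (-1, 1), (1, -1), (1, 1)]
print(qpsk_demap(syms) == data)      # True: demapping is the inverse
```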

Conclusion: it is clear that in order for QPSK to be useful for high-data-rate, wide-bandwidth systems in multipath fading channels, it is necessary to consider using diversity techniques, equalization and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider that a message source can generate M equally likely messages. Then initially we represent each message by k binary digits, with 2^k = M. These k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits. Thus each message has been expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are 2^n − 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes, and are designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords C1 = 00000 and C2 = 00011. These have a binary digit difference (or Hamming distance) of 2, in the last two digits. The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance, dmin. For block codes, dmin, the smallest difference between the digits of any two codewords in the complete code set, is the property which controls the error-correction performance. We can thus calculate the error-detecting and correcting power of a code from the minimum distance in bits between the codewords.
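As a sketch, the Hamming distance of the example above, and dmin over a small code set (the third codeword is invented for the example), can be computed as:

```python
from itertools import combinations

# Hamming distance between two codewords, and d_min over a code set.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def d_min(code):
    return min(hamming(a, b) for a, b in combinations(code, 2))

print(hamming("00000", "00011"))            # → 2, as in the example above
print(d_min(["00000", "00011", "11100"]))   # → 2
```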

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing all the individual error probabilities up to and including R errors in the block:

P(> R errors) = 1 − Σ (j = 0 to R) (Pe)^j (1 − Pe)^(n−j) · nCj

where the probability of exactly j errors in an n-digit codeword is

P(j errors) = (Pe)^j (1 − Pe)^(n−j) · nCj

Pe is the probability of error in a single binary digit,

n is the block length, and

nCj is the number of ways of choosing j error digit positions within n binary digits.
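A short sketch of the block-error expression above; the values of Pe, n and R are illustrative:

```python
import math

# P(> R errors) = 1 - sum_{j=0}^{R} nCj * Pe^j * (1 - Pe)^(n - j)
def p_uncorrectable(pe, n, r):
    p_ok = sum(math.comb(n, j) * pe**j * (1 - pe)**(n - j)
               for j in range(r + 1))
    return 1 - p_ok

# e.g. a code of block length n = 7 correcting R = 1 error, with Pe = 0.01
print(p_uncorrectable(0.01, 7, 1))  # ≈ 0.002, dominated by double errors
```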

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1 … CN which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure. This property means that if any two valid codewords are subjected to a bit-wise exclusive-OR (XOR) operation, they will produce another valid codeword in the set. The closure property means that to find the minimum Hamming distance, all that is required is to compare all the remaining codewords in the set with the all-zeros codeword, instead of comparing all the possible pairs of codewords. The saving gets bigger the longer the code set: a code set with 100 codewords requires 99 comparisons for a group code, compared with 99 + 98 + … + 2 + 1 pairwise comparisons for a non-group code.
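The closure property and the resulting shortcut can be illustrated with a toy 5-bit code set (the codewords are invented for the example):

```python
# Toy 5-bit code set containing the all-zeros word.
code = {"00000", "00011", "01100", "01111"}

def xor(a, b):
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

# Closure: the XOR of any two codewords is again a codeword.
print(all(xor(a, b) in code for a in code for b in code))   # → True

# Hence d_min can be found by comparing only against the all-zeros codeword,
# i.e. d_min equals the smallest weight of a non-zero codeword.
print(min(w.count("1") for w in code if w != "00000"))      # → 2
```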

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently contains the assumption that the probability of a small number of errors, t, is greater than the probability of the larger number of t + 1 errors, i.e. that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, with real (non-binary) numbers from the receiver. The nearest Euclidean distance (nearest to these 5-digit codewords in terms of a 5-D geometry) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.
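A minimal hard-decision nearest-neighbour decoder; the code set is chosen for illustration:

```python
# Pick the codeword with the smallest Hamming distance to the received word.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nn_decode(received, codewords):
    return min(codewords, key=lambda c: hamming(received, c))

code = ["00000", "00111", "11100", "11011"]
print(nn_decode("00101", code))   # → "00111": a single bit error corrected
```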

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. If we are trying to create a code to correct t errors, with a block length of n and k information digits, the upper bound is given by the Hamming bound:

2^k ≤ 2^n / (1 + n + nC2 + nC3 + … + nCt)
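The bound can be checked directly; the (n, k, t) = (7, 4, 1) values correspond to the Hamming code, which meets it with equality:

```python
import math

# Sphere-packing check: 2^k * sum_{i=0}^{t} nCi <= 2^n
def within_hamming_bound(n, k, t):
    return 2**k * sum(math.comb(n, i) for i in range(t + 1)) <= 2**n

print(within_hamming_bound(7, 4, 1))   # → True (the (7,4) code is perfect)
print(within_hamming_bound(7, 5, 1))   # → False (too many information digits)
```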

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations are used for the codewords in encoding and decoding, since the shifting of a codeword is equivalent to a modification of the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn−1) denote a codeword with elements in a finite field Fq. (A field is an algebraic system formed by a collection of elements F, together with dyadic (2-operand) operations, addition and multiplication, which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner. A finite field is a field with a finite number q of elements.) The polynomial over Fq of degree at most n − 1 corresponding to x is x(D) = x0 + x1D + x2D² + … + xn−1D^(n−1).

Cyclic codes are extremely well suited for error correction, because they can be designed to detect many combinations of likely errors, and the implementation of both encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is a contiguous sequence of B bits in which the first and the last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction 1 − 2^−(n−k−1) of error bursts of length equal to n − k + 1; a fraction 1 − 2^−(n−k) of error bursts of length greater than n − k + 1; all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, provided the generator polynomial g(X) for the code has an even number of non-zero coefficients.
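A bit-level CRC sketch, assuming the generator x^4 + x + 1 (any degree n − k generator with a non-zero constant term behaves the same way for bursts of length at most n − k):

```python
# Modulo-2 polynomial division and CRC encoding; the generator x^4 + x + 1
# (bits [1,0,0,1,1]) is an assumed example, not one named in the text.
def mod2_div(bits, gen):
    bits = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if bits[i]:                      # eliminate the leading term
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return bits[-(len(gen) - 1):]        # remainder: n - k bits

def crc_encode(msg, gen):
    # Append the remainder of msg * x^(n-k) divided by gen
    return msg + mod2_div(msg + [0] * (len(gen) - 1), gen)

gen = [1, 0, 0, 1, 1]                    # x^4 + x + 1, so n - k = 4
cw = crc_encode([1, 0, 1, 1, 0, 1], gen)
print(mod2_div(cw, gen))                 # → [0, 0, 0, 0]: valid codeword

corrupted = list(cw)
for p in (3, 4, 5):                      # burst of length 3 <= n - k
    corrupted[p] ^= 1
print(mod2_div(corrupted, gen) != [0, 0, 0, 0])   # → True: burst detected
```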

Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways. The main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and is defined by L = k(m − 1).

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confusing with the lower-case k, which represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length. The constraint length K, however, is equal to L + 1 as defined in this paper. I will be referring to convolutional codes as (n, k, m) and not as (r, K).

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers. In the state diagram the states of the encoder are shown in circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zero state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and can be identified from the state diagram: a state diagram having a loop in which a nonzero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values, while soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principle used to reduce the choices is this:

1 The errors occur infrequently; the probability of error is small.

2 The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N. Of these, only 2^(kL) are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the 2^(kL) surviving paths instead of checking all paths. The most common metric used is the Hamming distance metric, the number of bit positions in which the received word and a candidate codeword differ; equivalently, a correlation (dot-product) metric can be used.
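The survivor-path pruning described above can be sketched for the common rate-1/2, m = 2 code with generator taps 7 and 5 (octal); the code choice is an illustrative assumption, not one named in the text:

```python
# Minimal encoder and Viterbi decoder for the (2, 1, 2) code with generators
# 7 and 5 (octal); the register holds [current bit, s1, s0].
G = [0b111, 0b101]

def encode(bits):
    state = 0                            # two memory bits
    out = []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                 # newest bit enters the register
    return out

def viterbi(rx):
    n_states = 4
    INF = float("inf")
    metrics = [0] + [INF] * (n_states - 1)   # encoding starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(rx), 2):
        new_m = [INF] * n_states
        new_p = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                exp = [bin(reg & g).count("1") & 1 for g in G]
                dist = (exp[0] != rx[i]) + (exp[1] != rx[i + 1])
                m = metrics[s] + dist        # accumulated Hamming distance
                if m < new_m[ns]:            # keep the survivor (lower metric)
                    new_m[ns] = m
                    new_p[ns] = paths[s] + [b]
        metrics, paths = new_m, new_p
    return paths[min(range(n_states), key=lambda s: metrics[s])]

data = [1, 0, 1, 1, 0, 0]    # last two zeros flush the encoder back to state 0
rx = encode(data)
rx[3] ^= 1                   # inject a single channel bit error
print(viterbi(rx))           # recovers [1, 0, 1, 1, 0, 0] despite the error
```

At each trellis node only the lower-metric (fewer accumulated bit differences) entering path survives, which is the pruning that keeps the search linear in the sequence length.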

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity on the order of O[2^(kL)].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations. The number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is on the order of O[(2^k)(2^m)(L + m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor and not an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code, but in many cases they offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. L. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. Martin S., Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321

  • Viterbi decoding
Page 6: Analogue and Digital Communication assignment

The first filter is matched to the signal s1(t) and the second to s2(t). Under the assumption that the signals are mutually orthogonal, the output of one of the matched filters will be E and the other zero (where E is the energy of the signal). Decoding of the bandpass signal can therefore be achieved by subtracting the outputs of the two filters and comparing the result to a threshold. If the signal s1(t) is present then the resulting output will be +E, and if s2(t) is present it will be -E. Since the noise variance at each filter output is Eη/2, the noise in the difference signal will be doubled, namely σ² = Eη. Since the overall output variation is 2E, the probability of error is

Pe = Q(E/σ) = Q(√(E/η))     (eqn 1)

The overall performance of a matched-filter receiver in this case is therefore the same as for ASK. The frequency spectrum of an FSK signal is difficult to obtain; this is a general characteristic of FM signals. However, some rules of thumb can be developed. Consider the case where the binary message consists of an alternating sequence of zeros and ones. If the two frequencies are each multiples of 1/T (e.g. f1 = m/T and f2 = n/T) and are synchronised in phase, then the FSK wave is a periodic function.
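The matched-filter error probability above can be evaluated numerically. The sketch below is illustrative, not from the text: it assumes the standard Gaussian tail function Q(x) = ½ erfc(x/√2) and treats E/η as a plain ratio.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def fsk_matched_filter_pe(E, eta):
    """Pe for orthogonal FSK with matched filters: the difference output
    is +/-E, the noise variance is sigma^2 = E*eta, threshold at zero."""
    sigma = math.sqrt(E * eta)
    return q_func(E / sigma)          # = Q(sqrt(E/eta))

print(fsk_matched_filter_pe(10.0, 1.0))   # small, on the order of 1e-3 for E/eta = 10
```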

Figure 18 FSK signal wave

This can be viewed as the linear superposition of two OOK signals, one delayed by T seconds with respect to the other. Since the spectrum of an OOK signal is

S(ω) = (1/2)[M(ω − ωc) + M(ω + ωc)]     (eqn 2)

where M(ω) is the transform of the baseband signal m(t), the spectrum of the FSK signal is the superposition of two of these spectra, one for ω1 = ωc − Δω and the other for ω2 = ωc + Δω. Nonsynchronous or envelope detection can also be performed for FSK signals. In this case the receiver takes the following form.

Figure 19 Receiver

Binary Frequency-Shift Keying (BFSK)

A binary frequency-shift keying (BFSK) signal can be defined by

s(t) = A cos 2πf0t or A cos 2πf1t, 0 < t < T, according to the data bit

where A is a constant, f0 and f1 are the transmitted frequencies, and T is the bit duration. The signal has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos 2πfit, 0 < t < T, i = 0 or 1

= √(PT) √(2/T) cos 2πfit, 0 < t < T

= √E √(2/T) cos 2πfit, 0 < t < T

where E = PT is the energy contained in a bit duration. For orthogonality, f0 = m/T and f1 = n/T for integers n > m, and f1 − f0 must be an integer multiple of 1/(2T). We can take ϕ1(t) = √(2/T) cos 2πf0t and ϕ2(t) = √(2/T) cos 2πf1t as the orthonormal basis functions. The signal constellation diagram of the orthogonal BFSK signal is shown below.
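The orthogonality condition (tone spacing a multiple of 1/(2T)) can be checked numerically by correlating the two tones over one bit period. A small sketch; the tone frequencies, bit period and sample count are arbitrary illustrative choices:

```python
import math

def tone_correlation(f0, f1, T, n=100000):
    """Midpoint-rule integral of cos(2*pi*f0*t) * cos(2*pi*f1*t) over 0..T."""
    dt = T / n
    return sum(math.cos(2 * math.pi * f0 * (i + 0.5) * dt) *
               math.cos(2 * math.pi * f1 * (i + 0.5) * dt)
               for i in range(n)) * dt

T = 1e-3
print(abs(tone_correlation(2000, 2500, T)))  # spacing 0.5/T: essentially zero
print(abs(tone_correlation(2000, 2250, T)))  # spacing 0.25/T: clearly nonzero
```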

Figure 110 Orthogonal BFSK signal constellation diagram

Figure 111 (a) Binary sequence (b) BFSK signal and (c) binary modulating and BASK signals

It can be seen that phase continuity is maintained at transitions. Further, the BFSK signal is the sum of two BASK signals generated by two modulating signals m0(t) and m1(t). Therefore the Fourier transform of the BFSK signal s(t) is

S(f) = (A/2)[M0(f − f0) + M0(f + f0)] + (A/2)[M1(f − f1) + M1(f + f1)]

Figure 112 (a) Modulating signals (b) Spectrum of (a)

M-ary Frequency-Shift Keying (M-FSK)

An M-ary frequency-shift keying (M-FSK) signal can be defined by

s(t) = A cos(2πfit + θ), 0 < t < T; 0 elsewhere, for i = 0, 1, …, M − 1. Here A is a constant, fi is the transmitted frequency, θ is the initial phase angle, and T is the symbol duration. It has a power P = A²/2, so that A = √(2P). Thus the equation can be written as

s(t) = √(2P) cos(2πfit + θ), 0 < t < T

= √(PT) √(2/T) cos(2πfit + θ), 0 < t < T

= √E √(2/T) cos(2πfit + θ), 0 < t < T

where E = PT is the energy of s(t) contained in a symbol duration, for i = 0, 1, …, M − 1.

Figure 245 M-ary orthogonal 3-FSK signal constellation diagram

Figure 246 4-FSK modulation (a) binary signal and (b) 4-FSK signal

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base 2) signal, it takes more than a single input bit to produce four different input conditions. With two bits, there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits called dibits. Each dibit code generates one of the four possible output phases. Thus, for each two-bit dibit clocked into the modulator, a single output change occurs.
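The dibit grouping can be illustrated with a small mapping table. The particular Gray-coded phase assignment below (45°, 135°, 225°, 315°) is a common convention assumed for illustration, not one taken from the figure:

```python
# Assumed Gray-coded dibit-to-phase assignment (degrees); real modulators
# may use a different rotation of the same constellation.
DIBIT_PHASE = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

def qpsk_phases(bits):
    """Group the bit stream into dibits and map each dibit to a phase."""
    dibits = zip(bits[0::2], bits[1::2])
    return [DIBIT_PHASE[d] for d in dibits]

print(qpsk_phases([0, 0, 0, 1, 1, 1, 1, 0]))  # [45, 135, 225, 315]
```

Note that adjacent phases differ in exactly one bit, which is the Gray-coding property mentioned below.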

Figure 31 Phasor Diagram and Constellation Diagram

The figure above shows the constellation diagram for QPSK with Gray coding: each adjacent symbol differs by only one bit. Sometimes known as quaternary or quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.

Figure 32 Four symbols that represents the four phases in QPSK

The figure above depicts the 4 symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared to a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

Basic Configuration of Quadrature Modulation Scheme

Figure 33 Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish the two signals: one is given by cos 2πfct and the other by sin 2πfct. The two carrier signals remain orthogonal over each symbol period. A channel in which cos 2πfct is used as the carrier signal is generally called the in-phase channel, or Ich, and a channel in which sin 2πfct is used as the carrier signal is generally called the quadrature-phase channel, or Qch. Therefore dI(t) and dQ(t) are the data in Ich and Qch respectively. Modulation schemes that use Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in figure 33.

In the system shown above, the input digital data dk is first converted into parallel data with two channels, Ich and Qch. The data are represented as di(t) and dq(t). The conversion, or data allocation, is done using a mapping circuit block. The data allocated to Ich is then filtered using a pulse-shaping filter. The pulse-shaped signal is converted into an analog signal by a D/A converter and multiplied by a cos 2πfct carrier wave. The same process is carried out on the data allocated to Qch, but it is multiplied by a sin 2πfct carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier frequency. In both the Ich and Qch channels, the downconverted signal is digitally sampled by an A/D converter and the digital data is fed to a DSPH. In the DSPH the sampled data is filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronized and the transmitted digital data is recovered.
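The modulate/demodulate chain above can be sketched with a single-symbol loop-back. The carrier frequency, symbol period and sample count below are arbitrary illustrative values, and the pulse shaping, filtering and synchronization stages are omitted:

```python
import math

FC, T, N = 1000.0, 0.01, 1000        # illustrative carrier freq, symbol time, samples

def qpsk_symbol(di, dq):
    """One symbol: Ich data on cos 2*pi*fc*t plus Qch data on sin 2*pi*fc*t."""
    dt = T / N
    return [di * math.cos(2 * math.pi * FC * (i + 0.5) * dt) +
            dq * math.sin(2 * math.pi * FC * (i + 0.5) * dt) for i in range(N)]

def qpsk_detect(samples):
    """Coherent detection: correlate with each carrier and take the sign."""
    dt = T / N
    ci = sum(s * math.cos(2 * math.pi * FC * (i + 0.5) * dt)
             for i, s in enumerate(samples))
    cq = sum(s * math.sin(2 * math.pi * FC * (i + 0.5) * dt)
             for i, s in enumerate(samples))
    return (1 if ci > 0 else -1, 1 if cq > 0 else -1)

for sym in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    assert qpsk_detect(qpsk_symbol(*sym)) == sym
print("all four symbols recovered")
```

The loop-back works because the carrier spans an integer number of cycles per symbol (FC * T = 10), so the cos/sin correlations separate the two channels cleanly.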

Figure 34 Mapping circuit function for QPSK

For the mapping function, a simple circuit is used to allocate the data, as illustrated in figure 34. This mapping function basically allocates all even bits to Ich and all odd bits to Qch, and demapping is just the opposite operation.
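The even/odd allocation itself is straightforward to express; a minimal sketch (function names are illustrative):

```python
def qpsk_map(bits):
    """Allocate even-indexed bits to Ich and odd-indexed bits to Qch."""
    return bits[0::2], bits[1::2]

def qpsk_demap(ich, qch):
    """Demapping is the opposite operation: re-interleave the two channels."""
    out = []
    for i, q in zip(ich, qch):
        out += [i, q]
    return out

bits = [1, 0, 0, 1, 1, 1]
ich, qch = qpsk_map(bits)
print(ich, qch)                      # [1, 0, 1] [0, 1, 1]
print(qpsk_demap(ich, qch) == bits)  # True
```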

Conclusion

It is clear that in order for QPSK to be useful for high data rate, wide bandwidth systems in multipath fading channels, it is necessary to consider using diversity techniques, equalization and adaptive antennas. The approach for performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider that a message source can generate M equally likely messages. Then initially we represent each message by k binary digits, with 2^k = M. These k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits. Thus each message has been expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are 2^n − 2^k possible n-bit words which do not represent possible messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes, and are designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords C1 = 00000 and C2 = 00011. These have a binary digit difference (or Hamming distance) of 2, in the last two digits. The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance dmin. For block codes, the minimum Hamming distance dmin, the smallest difference between the digits of any two codewords in the complete code set, is the property which controls the error-correction performance. We can thus calculate the error-detecting and correcting power of a code from the minimum distance in bits between the codewords.
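The Hamming distance and dmin of a code set can be computed directly. A sketch using the two five-digit codewords from the example, padded out with two extra illustrative words:

```python
from itertools import combinations

def hamming(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def d_min(code):
    """Minimum Hamming distance over all distinct codeword pairs."""
    return min(hamming(a, b) for a, b in combinations(code, 2))

code = ["00000", "00011", "11100", "11111"]   # last two words are illustrative
print(hamming("00000", "00011"))  # 2, as in the text
print(d_min(code))                # 2
```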

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing all the individual error probabilities up to and including R errors in the block:

P(> R errors) = 1 − Σ (j = 0 to R) nCj (Pe)^j (1 − Pe)^(n−j)

The probability of j errors in an n-digit codeword is

P(j errors) = nCj (Pe)^j (1 − Pe)^(n−j)

where

Pe is the probability of error in a single binary digit

n is the block length

nCj is the number of ways of choosing j error digit positions within n binary digits
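The block-error expression can be evaluated in a few lines, with `math.comb` supplying nCj. The block parameters below are illustrative:

```python
from math import comb

def p_uncorrectable(pe, n, R):
    """P(> R errors) = 1 - sum_{j=0}^{R} nCj * pe^j * (1-pe)^(n-j)."""
    return 1 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j) for j in range(R + 1))

# Illustrative: n = 7 block, single-error correction (R = 1), Pe = 1e-3
print(p_uncorrectable(1e-3, 7, 1))   # on the order of 2e-5
```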

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1…CN which contain the all-zeros codeword (e.g. 00000) and exhibit a special property called closure. This property means that if any two valid codewords are subjected to a bitwise exclusive-OR (XOR) operation, they will produce another valid codeword in the set. The closure property means that to find the minimum Hamming distance, all that is required is to compare all the remaining codewords in the set with the all-zeros codeword, instead of comparing all possible pairs of codewords. The saving gets bigger the larger the code set. For example, a code set with 100 codewords will require 99 comparisons for a group code design, compared with 99 + 98 + … + 2 + 1 = 4950 for a non-group code.
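Closure under bitwise XOR is easy to verify mechanically. The four-word set below is an illustrative group code, not one taken from the text:

```python
def is_group_code(code):
    """A group code contains the all-zeros word and is closed under XOR."""
    s = set(code)
    return 0 in s and all((a ^ b) in s for a in s for b in s)

group = {0b00000, 0b00011, 0b11000, 0b11011}
print(is_group_code(group))                        # True
print(is_group_code({0b00000, 0b00011, 0b00101}))  # False: 00011^00101 = 00110 missing
```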

Nearest neighbour decoding

Nearest neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently contains the assumption that the probability of a small number of t errors is greater than the probability of the larger number of t + 1 errors, i.e. that Pe is small.

Nearest neighbour decoding can also be done on a soft-decision basis, with real non-binary numbers from the receiver. The nearest Euclidean distance (nearest codeword in terms of an n-dimensional geometry, for n-digit codewords) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. Suppose we are trying to create a code to correct t errors with a block length of n and k information digits. The upper bound on the performance of block codes, as given by the Hamming bound, is

2^k ≤ 2^n / (1 + nC1 + nC2 + nC3 + … + nCt)
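The bound is easy to check for concrete parameters. For example, the (7, 4) single-error-correcting Hamming code meets it with equality (a perfect code):

```python
from math import comb

def satisfies_hamming_bound(n, k, t):
    """2^k <= 2^n / sum_{i=0}^{t} nCi, rearranged to avoid division."""
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

print(satisfies_hamming_bound(7, 4, 1))  # True, with equality: 16 * 8 = 128
print(satisfies_hamming_bound(7, 5, 1))  # False: no such code can exist
```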

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic shift property. For convenience, polynomial representations are used for the codewords in encoding and decoding, since the shifting of a codeword is equivalent to a modification of the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn−1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with dyadic (2-operand) operations of addition and multiplication, which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner. A finite field is a field with a finite number q of elements, and it can be represented by Fq.) The corresponding polynomial over Fq of degree at most n − 1 is x(D) = x0 + x1D + x2D² + … + xn−1D^(n−1).

Cyclic codes are extremely well suited to error correction because they can be designed to detect many combinations of likely errors, and the implementation of both encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is a contiguous sequence of B bits in which the first and the last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction of error bursts of length equal to n − k + 1, the fraction being 1 − 2^−(n−k−1); a fraction of error bursts of length greater than n − k + 1, the fraction being 1 − 2^−(n−k); all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, if the generator polynomial g(X) for the code has an even number of non-zero coefficients.
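CRC encoding is modulo-2 polynomial division. A sketch with an assumed generator g(X) = X³ + X + 1, chosen for illustration rather than taken from the text:

```python
def crc_remainder(message, gen):
    """Append len(gen)-1 zeros, then divide modulo 2 by the generator taps."""
    bits = list(message) + [0] * (len(gen) - 1)
    for i in range(len(bits) - len(gen) + 1):
        if bits[i]:
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return bits[-(len(gen) - 1):]

GEN = [1, 0, 1, 1]                 # g(X) = X^3 + X + 1, assumed for illustration
parity = crc_remainder([1, 1, 0, 1], GEN)
print(parity)                      # [0, 0, 1]
# A corrupted message yields a different remainder, so the error is detected:
print(crc_remainder([1, 1, 1, 1], GEN))  # [1, 1, 1]
```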

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways. Also, the main decoding strategy for convolutional codes, based on the Viterbi algorithm, will be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by parameters (n, k, L). The quantity L is called the constraint length of the code and is defined by

Constraint length L = k(m − 1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confusing with the lower-case k, which represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length. That constraint length K, however, is equal to L + 1 as defined here. In this document convolutional codes will be referred to as (n, k, m) and not as (r, K).

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.
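A generator representation can be turned directly into an encoder. The sketch below uses the classic rate-1/2 tap vectors (1,1,1) and (1,0,1) with two memory registers; these taps are a standard textbook example, not ones specified in this document:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: two modulo-2 adders tapped off the
    current bit and two shift-register stages (a '1' tap = a connection)."""
    state = [0, 0]                     # the two memory registers
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(w * t for w, t in zip(window, g1)) % 2)
        out.append(sum(w * t for w, t in zip(window, g2)) % 2)
        state = [b] + state[:-1]       # shift the registers
    return out

print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```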

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder. In the tree diagram, a solid line represents input information bit 0 and a dashed line represents input information bit 1. The corresponding output encoded bits are shown on the branches of the tree. An input information sequence defines a specific path through the tree diagram, from left to right.

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder. The state information of a convolutional encoder is stored in the shift registers. In the state diagram, the state information of the encoder is shown in the circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zero state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently, a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and can be identified from the state diagram: a state diagram having a loop in which a nonzero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values. Soft-decision decoding uses multi-bit quantization on the received channel values. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principle used to reduce the choices is this:

1. The errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node. Then the path with the better metric (the smaller Hamming distance) is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N. Of these, only 2^kL are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the surviving paths instead of checking all 2^kL possibilities. The most common metric used is the Hamming distance, obtained by comparing the received codeword with each allowable codeword bit by bit.
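A minimal hard-decision Viterbi decoder with the Hamming-distance metric can be written for a small rate-1/2, two-register code. The taps (1,1,1) and (1,0,1) are a standard textbook example, assumed here rather than taken from the text:

```python
def viterbi_decode(rx, gens=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding, Hamming-distance path metric,
    for a rate-1/2 code with two memory registers (4 trellis states)."""
    def branch_out(prev, b):
        window = (b,) + prev
        return tuple(sum(w * t for w, t in zip(window, g)) % 2 for g in gens)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # start all-zero
    path = {s: [] for s in states}
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metric, new_path = {}, {}
        for s in states:
            best, arg = INF, None
            for prev in states:
                for b in (0, 1):
                    if (b, prev[0]) != s:          # is prev -> s a valid transition?
                        continue
                    out = branch_out(prev, b)
                    d = metric[prev] + sum(a != c for a, c in zip(out, r))
                    if d < best:
                        best, arg = d, (prev, b)   # keep the survivor
            new_metric[s] = best
            new_path[s] = path[arg[0]] + [arg[1]] if arg else []
        metric, path = new_metric, new_path
    return path[min(states, key=metric.get)]

rx = [1, 1,  1, 1,  0, 0,  0, 1]   # encoding of 1 0 1 1 with one bit flipped
print(viterbi_decode(rx))          # [1, 0, 1, 1]: the single error is corrected
```

At each stage only one survivor per state is retained, which is exactly the pruning described above.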

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^kL distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity on the order of O(2^kL).

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations. The number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is on the order of O(2^k · 2^m · (L + m)). This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor and not an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. In many cases, however, they offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. Martin S., Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321


The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bit s The constraint length L is also referred to by t he capital letter K which can be confusing wit h the lower ca se k which represents the number of input bits In some books K is defined as equal to product the of k and m Often in commercial spec the codes are specified by (r K) where r = the code rate kn and K is the constraint length The constraint length K however is equal t o L œ 1 as defined in this paper I will be referring t o convolutional codes as (nkm) and not as (rK)

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code the input information sequence contains kLbits where k is the number of parallel information bits at one time interval and L is thenumber of time intervals This results in L+m stages in the trellis diagram There areexactly 2kL distinct paths in the trellis diagram and as a result an exhaustive search forthe ML sequence would have a computational complexity on the order of O[2kL]

TheViterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis At each node (state) of the trellis there are 2k calculations Thenumber of nodes per stage in the trellis is 2m Therefore the complexity of the Viterbialgorithm is on the order of O[(2k)(2m)(L+m)] This significantly reduces the number ofcalculations required to implement the ML decoding because the number of time intervalsL is now a linear factor and not an exponent factor in the complexity However therewill be an exponential increase in complexity if either k or m increases

Conclusion

Fundamentally convolutional codes do not offer more protection against noise than an equivalent block code In many cases they generally offer greater simplicity of implementation over a block code of equal power

REFERENCES

1 Herbert Taub Schilling Principle of Communication System Mcgraw-Hill 2002

2 Martin S Analog and Digital Communication System Prentice Hall 2001

3 P M Grant and D G Mcruickshank I A Glover and P M Grant Digital Communications Pearson Education 2009

4 V Pless Introduction to the Theory of Error-Correcting Codes 3rd ed New York John Wiley amp Sons 1998

5 Tomasi W Electronic Communication Systems Fundamentals Through Advanced Prentice Hall 2004

6 Lee L H C Error-Control Block Codes for Communications Engineers ArtechHouse 2000

7 httpwwwscribdcomdoc35139573Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8 httpwwwwikipediaorg

9 MacQuarie University Lecturer Notes httpwwwelecmqeduau~clfiles_pdfelec321


Figure 1.11 (a) Binary sequence, (b) BFSK signal and (c) binary modulating and BASK signals

It can be seen that phase continuity is maintained at the transitions. Further, the BFSK signal is the sum of two BASK signals generated by the two modulating signals m0(t) and m1(t). Therefore the Fourier transform of the BFSK signal s(t) is S(f) = (A/2)[M1(f − f1) + M1(f + f1)] + (A/2)[M0(f − f0) + M0(f + f0)]

Figure 1.12 (a) Modulating signals and (b) spectrum of (a)
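As a rough sketch of this decomposition (carrier frequencies, bit duration and sampling rate are all assumed example values), a BFSK waveform can be built as the sum of two BASK waveforms driven by complementary modulating sequences:

```python
import math

def bask(bits, freq, amp=1.0, fs=1000, bit_time=0.01):
    """On-off keyed carrier: the carrier is present only where the bit is 1."""
    n = int(fs * bit_time)                # samples per bit
    return [amp * b * math.cos(2 * math.pi * freq * (i / fs))
            for k, b in enumerate(bits) for i in range(k * n, (k + 1) * n)]

bits = [1, 0, 1, 1, 0]
inv  = [1 - b for b in bits]              # complementary sequence m0(t)

# BFSK = BASK of m1(t) at f1 plus BASK of m0(t) at f0 (example tones)
f1, f0 = 200.0, 100.0
bfsk = [a + b for a, b in zip(bask(bits, f1), bask(inv, f0))]

# Exactly one of the two carriers is active at any instant,
# so the envelope never exceeds the single-carrier amplitude
assert max(abs(s) for s in bfsk) <= 1.0 + 1e-9
```

Because m0(t) and m1(t) are complementary, only one BASK component is non-zero at a time, which is why their sum is a constant-envelope BFSK signal.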

M-ary Frequency-Shift Keying (M-FSK)

An M-ary frequency-shift keying (M-FSK) signal can be defined by

s(t) = A cos(2πfit + θ), 0 ≤ t ≤ T, and 0 elsewhere, for i = 0, 1, …, M − 1. Here A is a constant, fi is the transmitted frequency, θ is the initial phase angle and T is the symbol duration. The signal has power P = A²/2, so that A = √(2P). Thus equation (2.46) can be written as

s(t) = √(2P) cos(2πfit + θ), 0 ≤ t ≤ T

= √(PT) √(2/T) cos(2πfit + θ), 0 ≤ t ≤ T

= √E √(2/T) cos(2πfit + θ), 0 ≤ t ≤ T

where E = PT is the energy of s(t) contained in a symbol duration, for i = 0, 1, …, M − 1
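The relation E = PT can be checked numerically. In the sketch below, P, T, fi and θ are assumed example values, with fi chosen to give a whole number of carrier cycles per symbol:

```python
import math

P, T = 2.0, 1e-3            # power and symbol duration (assumed values)
A = math.sqrt(2 * P)        # amplitude from P = A^2 / 2
fi = 5000.0                 # fi * T = 5, an integer number of cycles
theta = 0.3

N = 100000
dt = T / N
# Riemann-sum approximation of the symbol energy  E = integral of s(t)^2 dt
E = sum((A * math.cos(2 * math.pi * fi * (n + 0.5) * dt + theta)) ** 2
        for n in range(N)) * dt

assert abs(E - P * T) < 1e-6 * P * T    # E = PT, as in the derivation above
```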

Figure 2.45 M-ary orthogonal 3-FSK signal constellation diagram

Figure 2.46 4-FSK modulation: (a) binary signal and (b) 4-FSK signal

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase-shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base-2) signal, producing four different input conditions takes more than a single input bit. With two bits there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits, called dibits. Each dibit code generates one of the four possible output phases, so for each two-bit dibit clocked into the modulator a single output change occurs.

Figure 3.1 Phasor diagram and constellation diagram

The figure above shows the constellation diagram for QPSK with Gray coding: each pair of adjacent symbols differs by only one bit. Sometimes known as quaternary PSK, quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.
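The one-bit-per-step property of Gray coding can be checked with a short script. The dibit-to-phase table below is a typical assignment, assumed for illustration rather than taken from the figure:

```python
import math

# Hypothetical Gray mapping: dibit -> carrier phase (four points on a circle)
gray_map = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
            (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}

# Walk around the constellation circle in phase order: each step to the
# adjacent point changes exactly one bit of the dibit.
order = sorted(gray_map, key=gray_map.get)
for a, b in zip(order, order[1:] + order[:1]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```

This is why Gray coding is preferred: a phase error into an adjacent decision region costs only one bit error, not two.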

Figure 3.2 The four symbols that represent the four phases in QPSK

The figure above depicts the four symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared with a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

Basic Configuration of Quadrature Modulation Scheme

Figure 3.3 Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish the two signals: one is given by cos 2πfct and the other by sin 2πfct, and the two carriers remain orthogonal over a period. A channel in which cos 2πfct is used as the carrier signal is generally called the in-phase channel, or Ich, and a channel in which sin 2πfct is used as the carrier signal is generally called the quadrature-phase channel, or Qch. Therefore dI(t) and dQ(t) are the data in the Ich and Qch respectively. Modulation schemes that use the Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in Figure 3.3.

In the system shown above, the input digital data dk are first converted into parallel data in two channels, Ich and Qch; the data are represented as di(t) and dq(t). The conversion, or data allocation, is done by a mapping circuit block. The data allocated to the Ich are then filtered by a pulse-shaping filter, converted to an analog signal by a D/A converter and multiplied by a cos 2πfct carrier wave. The same process is carried out on the data allocated to the Qch, except that they are multiplied by a sin 2πfct carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier frequency. In both the Ich and Qch, the downconverted signal is digitally sampled by an A/D converter and the digital data are fed to a DSP, where the sampled data are filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronized and the transmitted digital data are recovered.
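The I/Q processing described above can be sketched end to end. All parameters below (carrier frequency, sampling rate, symbol duration) are illustrative, even bits go to the Ich and odd bits to the Qch, and an ideal correlator stands in for the BPF/DSP receiver chain:

```python
import math

fc, fs, Tsym = 1000.0, 10000.0, 0.01   # carrier, sample rate, symbol time (assumed)
n = int(fs * Tsym)                     # samples per symbol

def qpsk_modulate(bits):
    """Even bits -> Ich (cos), odd bits -> Qch (sin); NRZ levels +/-1."""
    di = [1 - 2 * b for b in bits[0::2]]
    dq = [1 - 2 * b for b in bits[1::2]]
    s = []
    for k in range(len(di)):
        for i in range(n):
            t = (k * n + i) / fs
            s.append(di[k] * math.cos(2 * math.pi * fc * t)
                     - dq[k] * math.sin(2 * math.pi * fc * t))
    return s

def qpsk_demodulate(s, nsym):
    """Correlate each symbol interval with the two orthogonal carriers."""
    bits = []
    for k in range(nsym):
        seg = range(k * n, (k + 1) * n)
        i_corr = sum(s[j] * math.cos(2 * math.pi * fc * j / fs) for j in seg)
        q_corr = sum(s[j] * -math.sin(2 * math.pi * fc * j / fs) for j in seg)
        bits += [0 if i_corr > 0 else 1, 0 if q_corr > 0 else 1]
    return bits

data = [0, 1, 1, 1, 0, 0, 1, 0]
assert qpsk_demodulate(qpsk_modulate(data), len(data) // 2) == data
```

The roundtrip works because cos 2πfct and sin 2πfct are orthogonal over a symbol interval containing a whole number of cycles, so each correlator responds only to its own channel.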

Figure 3.4 Mapping circuit function for QPSK

For the mapping function, a simple circuit is used to allocate the data, as illustrated in Figure 3.4. This mapping function allocates all even bits to the Ich and all odd bits to the Qch; demapping is just the opposite operation.

Conclusion

It is clear that for QPSK to be useful in high-data-rate, wide-bandwidth systems over multipath fading channels, it is necessary to consider diversity techniques, equalization and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider a message source that can generate M equally likely messages. Initially we represent each message by k binary digits, with 2^k = M; these k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits, so that each message is expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are therefore 2^n − 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes, designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords, C1 = 00000 and C2 = 00011. These differ in two binary digits (the last two), i.e. they have a Hamming distance of 2. The smallest distance in binary digits between any two codewords in the complete code set is known as the minimum Hamming distance, dmin, and for block codes it is the property which controls the error-correction performance. We can thus calculate the error-detecting and error-correcting power of a code from the minimum distance in bits between its codewords.
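A minimal script makes the dmin computation concrete. The code set below is illustrative, chosen only to include the two codewords from the example:

```python
from itertools import combinations

def hamming(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

code = ["00000", "00011", "11100", "11111"]   # small illustrative code set
dmin = min(hamming(a, b) for a, b in combinations(code, 2))

assert hamming("00000", "00011") == 2         # the example pair above
assert dmin == 2
# A code with minimum distance d detects d-1 errors and corrects (d-1)//2
detect, correct = dmin - 1, (dmin - 1) // 2
assert (detect, correct) == (1, 0)
```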

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing all the individual error probabilities up to and including R errors in the block:

P(> R errors) = 1 − Σ (from j = 0 to R) P(j errors)

The probability of exactly j errors in an n-digit codeword is

P(j errors) = nCj (Pe)^j (1 − Pe)^(n−j)

where Pe is the probability of error in a single binary digit,

n is the block length, and

nCj is the number of ways of choosing j error-digit positions within the n binary digits.
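These formulas can be evaluated directly. The sketch below uses a (7, 4) single-error-correcting code and a channel error probability chosen purely as example values:

```python
from math import comb

def p_uncorrectable(pe, n, R):
    """P(more than R errors) = 1 - sum_{j=0}^{R} nCj pe^j (1-pe)^(n-j)."""
    return 1 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j) for j in range(R + 1))

# Example: a single-error-correcting (7, 4) code on a channel with Pe = 0.01
p = p_uncorrectable(0.01, 7, 1)
assert 0 < p < 0.01                  # far below the raw bit error rate
# Sanity check against the R = 0 case, P(at least one error) = 1 - (1-Pe)^n
assert abs(p_uncorrectable(0.01, 7, 0) - (1 - 0.99**7)) < 1e-12
```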

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1…CN which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure: if any two valid codewords are combined by a bit-wise exclusive-OR (XOR) operation, they produce another valid codeword in the set. The closure property means that, to find the minimum Hamming distance (see below), all that is required is to compare the remaining codewords in the set with the all-zeros codeword, instead of comparing all possible pairs of codewords. The saving grows with the size of the code set: a set with 100 codewords requires only 99 comparisons for a group code, compared with 99 + 98 + … + 2 + 1 = 4950 pairwise comparisons for a non-group code.
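The closure property and the resulting shortcut can be demonstrated on a toy group code; the four codewords below are illustrative:

```python
from itertools import combinations

# A small 5-bit group code: contains 00000 and is closed under bitwise XOR
code = {0b00000, 0b00011, 0b01100, 0b01111}

# Closure: the XOR of any two codewords is again a codeword
assert all((a ^ b) in code for a, b in combinations(code, 2))

# For a group code, dmin equals the minimum weight of the nonzero codewords,
# so N-1 weight checks replace N(N-1)/2 pairwise comparisons.
dmin_pairwise = min(bin(a ^ b).count("1") for a, b in combinations(code, 2))
dmin_weights  = min(bin(c).count("1") for c in code if c != 0)
assert dmin_pairwise == dmin_weights == 2
```

The shortcut works because the distance between any pair equals the weight of their XOR, which by closure is itself a codeword.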

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently contains the assumption that the probability of a small number of errors, t, is greater than the probability of the larger number t + 1, i.e. that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, using real (non-binary) numbers from the receiver. The nearest Euclidean distance (e.g. the nearest of the five-digit codewords in terms of a 5-D geometry) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. If we are trying to create a code to correct t errors, with a block length of n and k information digits, the Hamming bound states that

2^k ≤ 2^n / (1 + n + nC2 + nC3 + … + nCt)
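A quick numerical check of the bound, using the (7, 4) single-error-correcting Hamming code as an example (for which the bound holds with equality):

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Check 2^k <= 2^n / (nC0 + nC1 + ... + nCt), rearranged to avoid division."""
    return 2**k * sum(comb(n, j) for j in range(t + 1)) <= 2**n

assert hamming_bound_ok(7, 4, 1)        # (7,4), t=1: 16 * (1+7) = 128 = 2^7
assert not hamming_bound_ok(7, 5, 1)    # too many information bits for t = 1
```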

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations of the codewords are used for encoding and decoding, since shifting a codeword is equivalent to modifying the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn−1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with dyadic (two-operand) operations of addition and multiplication, defined for all pairs of field elements in F and behaving in an arithmetically consistent manner; a finite field is a field with a finite number q of elements, denoted Fq.) The corresponding polynomial over Fq, of degree at most n − 1, is x(D) = x0 + x1D + x2D² + … + xn−1D^(n−1).

Cyclic codes are well suited to error correction because they can be designed to detect many combinations of likely errors, and the implementation of both the encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is a contiguous sequence of B bits in which the first and last bits, and any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction 1 − 2^−(n−k−1) of error bursts of length exactly n − k + 1; a fraction 1 − 2^−(n−k) of error bursts of length greater than n − k + 1; all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, provided the generator polynomial g(X) of the code has an even number of non-zero coefficients.
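A CRC is computed by modulo-2 polynomial long division. The generator and data bits below are illustrative example values, not drawn from any particular standard:

```python
def mod2_div(bits, gen):
    """Remainder of polynomial division over GF(2) (long division with XOR)."""
    reg = list(bits)
    for i in range(len(bits) - len(gen) + 1):
        if reg[i]:                       # XOR the generator in when the lead bit is 1
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-(len(gen) - 1):]         # the n-k remainder bits

gen  = [1, 0, 1, 1]                      # x^3 + x + 1 (illustrative generator)
data = [1, 1, 0, 1, 0, 0, 1, 1]
crc  = mod2_div(data + [0, 0, 0], gen)   # append n-k = 3 zeros, take remainder
codeword = data + crc

# A clean codeword leaves zero remainder; a corrupted one generally does not
assert mod2_div(codeword, gen) == [0, 0, 0]
corrupted = codeword[:]; corrupted[2] ^= 1
assert mod2_div(corrupted, gen) != [0, 0, 0]
```

The receiver simply repeats the division on the received word and flags an error whenever the remainder (the syndrome) is non-zero.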

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways. The main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into the shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and is defined by L = k(m − 1).

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confusing with the lower-case k that represents the number of input bits. In some books K is defined as the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r = k/n is the code rate and K is the constraint length; this K, however, is equal to L + 1 as defined here. In this report, convolutional codes are referred to as (n, k, m) rather than (r, K).
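As a sketch of such an encoder, the following implements a rate-1/2 (n, k, m) = (2, 1, 2) code. The generator taps (7 and 5 in octal) are a common textbook choice, assumed here rather than taken from this document:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """(n, k, m) = (2, 1, 2) encoder: one input bit in, two output bits out,
    two memory registers, so the code rate is r = k/n = 1/2."""
    s1 = s0 = 0                      # shift-register (memory) contents
    out = []
    for b in bits:
        # each output bit is a modulo-2 sum of the input and the tapped registers
        out.append((b * g1[0] + s1 * g1[1] + s0 * g1[2]) % 2)
        out.append((b * g2[0] + s1 * g2[1] + s0 * g2[2]) % 2)
        s1, s0 = b, s1               # shift the new bit into the register
    return out

encoded = conv_encode([1, 0, 1, 1])
assert encoded == [1, 1, 1, 0, 0, 0, 0, 1]
assert len(encoded) == 2 * 4         # rate 1/2: two output bits per input bit
```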

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift-register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers. In the state diagram the states of the encoder are shown in circles, and each new input information bit causes a transition from one state to another. The path information between the states, denoted x/c, represents the input information bit x and the output encoded bits c. It is customary to begin convolutional encoding from the all-zeros state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently, a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and it can be identified from the state diagram: a state diagram having a loop in which a non-zero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values, while soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principle used to reduce the choices is this:

1. Errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric (the smaller Hamming distance) is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N; of these, only 2^(kL) are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the 2^(kL) surviving paths instead of checking all paths. The most common metric used is the Hamming distance: the number of bit positions in which the received word differs from the candidate codeword.
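A compact hard-decision Viterbi decoder for an illustrative rate-1/2, m = 2 code (generator taps 7 and 5 in octal, an assumed textbook example) shows the metric and survivor selection in action; a matching encoder is included to produce test sequences:

```python
def encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Reference rate-1/2, m=2 encoder used to produce test sequences."""
    s1 = s0 = 0
    out = []
    for b in bits:
        out += [(b * g1[0] + s1 * g1[1] + s0 * g1[2]) % 2,
                (b * g2[0] + s1 * g2[1] + s0 * g2[2]) % 2]
        s1, s0 = b, s1
    return out

def viterbi_decode(received, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Hard-decision Viterbi decoding: the path metric is the Hamming
    distance between the received bits and each branch's output bits."""
    INF = float("inf")
    metric = [0, INF, INF, INF]            # 2^m = 4 states; start all-zeros
    paths = [[], [], [], []]
    for k in range(0, len(received), 2):
        r = received[k:k + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            s1, s0 = s >> 1, s & 1         # register contents for state s
            for b in (0, 1):               # hypothesised input bit
                c0 = (b * g1[0] + s1 * g1[1] + s0 * g1[2]) % 2
                c1 = (b * g2[0] + s1 * g2[1] + s0 * g2[2]) % 2
                ns = (b << 1) | s1         # next state after shifting b in
                m = metric[s] + (c0 != r[0]) + (c1 != r[1])
                if m < new_metric[ns]:     # keep the survivor (smaller distance)
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=metric.__getitem__)]

msg = [1, 0, 1, 1]
code = encode(msg)
assert viterbi_decode(code) == msg         # error-free channel
noisy = code[:]; noisy[3] ^= 1
assert viterbi_decode(noisy) == msg        # a single channel error is corrected
```

Note how only one survivor per state is retained at each step, which is exactly the pruning that keeps the complexity linear in the sequence length.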

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity of order O(2^(kL)).

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage is 2^m. Therefore the complexity of the Viterbi algorithm is of order O(2^k · 2^m · (L + m)). This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear rather than an exponential factor in the complexity. However, the complexity still increases exponentially if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code. However, in many cases they offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. L. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. M. S. Roden, Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321


Nearest neighbour decoding

Nearest neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted This inherently contains the assumption that the probability of a small number of terrors is greater than the probability of the larger number of t+1 errors that Pe is small

Nearest neighbour decoding can also be done on a soft decision basis with real non-binary numbers from the receiver The nearest Euclidean distance (nearest to these 5 codewords in terms of a 5-D geometry) is then used and this gives a considerable performance increase over the hard decision decoding described here

Hamming boundThis defines mathematically the error correcting performance of a block code The

upper bound on the performance of block codes is given by the Hamming bound some times called the sphere packing bound If we are trying to create a code to correct t errors with a block length of n with k information digits The upper bound on the performance of block codes as given by the Hamming Bound

2k 2 n 1 + n + nC2 + nC3

++ nCt

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic shift operation For convenience polynomial representations are used for the code words for encoding and decoding since the shifting of a code word is equivalent to a modification of the exponential of a polynomial Specifically if x = (x0 x1xn-1) denotes a code word with elements in a finite field-(a field is an algebraic system formed by collection of elements F together with dyadic(2-operand) operations + and multiplication which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner A finite field is a field with finite number q of elements and it can be represented by Fq) A polynomial over Fq of degree at most n-1 is given as x(D) = x0 + x1D1 + x2D2 ++xn-1Dn-1

Cyclic codes are extremely suited for error correction because they can be designed to detect many combination of likely errors and the implementation of both encoding and

error-detecting circuits are practical A cyclic code used for error-detection is known as cyclic redundancy check (CRC) code

An error burst of length B in an n-bit received word as a contagious sequence of B bits in which the first and the last bits or any number of intermediate bits are received in error Binary (n k) CRC codes are capable of detecting the all error bursts of length n ndash k or less a fraction of error bursts of length equal to n-k+1 the fraction equals 1-2-(n-k-1) a fraction of error bursts of length greater than n-k+1 the fraction equals to 1-2-(n-k-1) all combination of dmin ndash 1 (or fewer) errors and all error patterns with an odd number of errors if the generation polynomial g(X) for the code as an even number of non-zero coefficients

Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways Also the main decoding strategy for convolutional codes based on the Viterbi Algorithm will bedescribed A firm understanding of convolutional codes is an important prerequisite tothe understanding of turbo codes

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bit

k = number of inputs bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers The information bits are input into shift registers and the output encoded bits are obtainedby modulo-2 addition of the input information bits and the contents of the shift registersThe connections to the modulo-2 adders were developed heuristically with no algebraic or combinatorial foundation The code rate r for a convolutional code is defined as r = kn Often the manufacturers of convolutional code chips specify the code by parameters (nkL) The quantity L is called the constraint length of the code and is defined by Constraint Length L = k (m-1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bit s The constraint length L is also referred to by t he capital letter K which can be confusing wit h the lower ca se k which represents the number of input bits In some books K is defined as equal to product the of k and m Often in commercial spec the codes are specified by (r K) where r = the code rate kn and K is the constraint length The constraint length K however is equal t o L œ 1 as defined in this paper I will be referring t o convolutional codes as (nkm) and not as (rK)

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code the input information sequence contains kLbits where k is the number of parallel information bits at one time interval and L is thenumber of time intervals This results in L+m stages in the trellis diagram There areexactly 2kL distinct paths in the trellis diagram and as a result an exhaustive search forthe ML sequence would have a computational complexity on the order of O[2kL]

TheViterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis At each node (state) of the trellis there are 2k calculations Thenumber of nodes per stage in the trellis is 2m Therefore the complexity of the Viterbialgorithm is on the order of O[(2k)(2m)(L+m)] This significantly reduces the number ofcalculations required to implement the ML decoding because the number of time intervalsL is now a linear factor and not an exponent factor in the complexity However therewill be an exponential increase in complexity if either k or m increases

Conclusion

Fundamentally convolutional codes do not offer more protection against noise than an equivalent block code In many cases they generally offer greater simplicity of implementation over a block code of equal power

REFERENCES

1 Herbert Taub Schilling Principle of Communication System Mcgraw-Hill 2002

2 Martin S Analog and Digital Communication System Prentice Hall 2001

3 P M Grant and D G Mcruickshank I A Glover and P M Grant Digital Communications Pearson Education 2009

4 V Pless Introduction to the Theory of Error-Correcting Codes 3rd ed New York John Wiley amp Sons 1998

5 Tomasi W Electronic Communication Systems Fundamentals Through Advanced Prentice Hall 2004

6 Lee L H C Error-Control Block Codes for Communications Engineers ArtechHouse 2000

7 httpwwwscribdcomdoc35139573Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8 httpwwwwikipediaorg

9 MacQuarie University Lecturer Notes httpwwwelecmqeduau~clfiles_pdfelec321

  • Viterbi decoding
Page 10: Analogue and Digital Communication assignment

Task 3

Draw modulator and receiver block diagram of a QPSK modulation scheme

Quaternary phase shift keying (QPSK), or quadrature PSK as it is sometimes called, is another form of angle-modulated, constant-amplitude digital modulation. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions, and because the digital input to a QPSK modulator is a binary (base-2) signal, producing four different input conditions takes more than a single input bit. With two bits there are four possible conditions: 00, 01, 10 and 11. Therefore, with QPSK the binary input data are combined into groups of two bits, called dibits. Each dibit code generates one of the four possible output phases, so for each dibit clocked into the modulator a single output change occurs.

Figure 3.1: Phasor diagram and constellation diagram

The figure above shows the constellation diagram for QPSK with Gray coding; each adjacent symbol differs by only one bit. Sometimes known as quaternary PSK, quadriphase PSK or 4-PSK, QPSK uses four points on the constellation diagram, equispaced around a circle. With four phases, QPSK can encode two bits per symbol.
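
As a concrete illustration of the dibit-to-phase mapping, the sketch below implements a Gray-coded QPSK symbol mapper in Python. The phase angles of 45, 135, 225 and 315 degrees and the function name are illustrative assumptions, not taken from the figures:

```python
import cmath

# Hypothetical Gray-coded QPSK mapping: each dibit selects one of four
# equispaced unit-magnitude phases; adjacent phases differ in exactly one bit.
GRAY_MAP = {
    (0, 0): cmath.exp(1j * cmath.pi / 4),    # 45 degrees
    (0, 1): cmath.exp(3j * cmath.pi / 4),    # 135 degrees
    (1, 1): cmath.exp(5j * cmath.pi / 4),    # 225 degrees
    (1, 0): cmath.exp(7j * cmath.pi / 4),    # 315 degrees
}

def qpsk_symbols(bits):
    """Group a bit sequence into dibits and map each to a unit-energy symbol."""
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = qpsk_symbols([0, 0, 1, 1, 1, 0])
print([round(s.real, 3) + round(s.imag, 3) * 1j for s in symbols])
```

Note that walking the circle in 90-degree steps changes only one bit at a time, which is exactly the Gray-coding property described above.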

Figure 3.2: The four symbols that represent the four phases in QPSK

The figure above depicts the four symbols used to represent the four phases in QPSK. Analysis shows that this may be used either to double the data rate compared with a BPSK system while maintaining the bandwidth of the signal, or to maintain the data rate of BPSK but halve the bandwidth needed.

Basic Configuration of Quadrature Modulation Scheme

Figure 3.3: Basic configuration of QPSK

A QPSK signal is generated from two BPSK signals. Two orthogonal carrier signals are used to distinguish them: one is given by cos 2πfct and the other by sin 2πfct, and the two carriers remain orthogonal over the span of a carrier period. The channel in which cos 2πfct is used as the carrier signal is generally called the in-phase channel, or Ich, and the channel in which sin 2πfct is used as the carrier signal is generally called the quadrature-phase channel, or Qch. Accordingly, dI(t) and dQ(t) are the data in Ich and Qch respectively. Modulation schemes that use Ich and Qch are called quadrature modulation schemes. The basic configuration is shown in Figure 3.3.

In the system shown above, the input digital data dk are first converted into parallel data on two channels, Ich and Qch, represented as di(t) and dq(t). The conversion, or data allocation, is done by a mapping circuit block. The data allocated to Ich are then filtered by a pulse-shaping filter, converted into an analogue signal by a D/A converter and multiplied by a cos 2πfct carrier wave. The same process is carried out on the data allocated to Qch, except that they are multiplied by a sin 2πfct carrier wave instead. The Ich and Qch signals are then added and transmitted over the air.
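
The transmit path described above can be sketched as follows. This toy Python model (hypothetical names, rectangular pulses instead of a real pulse-shaping filter, and an arbitrary carrier of 4 cycles per symbol) forms s(t) = di(t)cos(2πfct) + dq(t)sin(2πfct):

```python
import math

def qpsk_waveform(di, dq, fc=4.0, samples_per_symbol=100):
    """Toy quadrature modulator: di and dq are ±1 symbol streams for Ich and
    Qch (pulse shaping omitted); fc is in cycles per symbol period."""
    s = []
    for k, (i_sym, q_sym) in enumerate(zip(di, dq)):
        for n in range(samples_per_symbol):
            t = k + n / samples_per_symbol      # time in symbol periods
            s.append(i_sym * math.cos(2 * math.pi * fc * t)
                     + q_sym * math.sin(2 * math.pi * fc * t))
    return s

wave = qpsk_waveform([+1, -1], [+1, +1])
print(len(wave))  # 200 samples for two symbols
```

The envelope of the sum never exceeds sqrt(2), which reflects the constant-amplitude nature of QPSK noted earlier.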

At the receiver, the received wave passes through a BPF to eliminate any spurious signals. It is then downconverted to baseband by multiplying by the RF carrier. In both the Ich and Qch channels, the downconverted signal is sampled by an A/D converter and the digital data are fed to a DSP, where the sampled data are filtered with a pulse-shaping filter to eliminate ISI. The signals are then synchronized and the transmitted digital data are recovered.
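
The receive path can be sketched in the same toy model (again assuming perfect carrier and symbol synchronization, no noise and no pulse shaping): multiplying by each carrier and integrating over a symbol period recovers the Ich and Qch data:

```python
import math

def qpsk_demod(s, fc=4.0, samples_per_symbol=100):
    """Toy coherent QPSK detector: correlate against the two quadrature
    carriers and integrate-and-dump once per symbol period."""
    n_sym = len(s) // samples_per_symbol
    di, dq = [], []
    for k in range(n_sym):
        acc_i = acc_q = 0.0
        for n in range(samples_per_symbol):
            t = k + n / samples_per_symbol
            x = s[k * samples_per_symbol + n]
            acc_i += x * 2 * math.cos(2 * math.pi * fc * t)   # Ich correlator
            acc_q += x * 2 * math.sin(2 * math.pi * fc * t)   # Qch correlator
        di.append(1 if acc_i > 0 else -1)
        dq.append(1 if acc_q > 0 else -1)
    return di, dq

# Build a test wave exactly as the transmitter paragraph describes:
# s(t) = di(t)cos(2*pi*fc*t) + dq(t)sin(2*pi*fc*t)
s = [i * math.cos(2 * math.pi * 4.0 * (k + n / 100))
     + q * math.sin(2 * math.pi * 4.0 * (k + n / 100))
     for k, (i, q) in enumerate(zip([+1, -1, +1], [+1, +1, -1]))
     for n in range(100)]
print(qpsk_demod(s))  # recovers ([1, -1, 1], [1, 1, -1])
```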

Figure 3.4: Mapping circuit function for QPSK

For the mapping function, a simple circuit is used to allocate the data, as illustrated in Figure 3.4. This mapping function basically allocates all even bits to Ich and all odd bits to Qch; demapping is just the opposite operation.

Conclusion: It is clear that for QPSK to be useful in high-data-rate, wide-bandwidth systems over multipath fading channels, it is necessary to consider diversity techniques, equalization and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider a message source that can generate M equally likely messages. Initially we represent each message by k binary digits, with 2^k = M; these k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits, so that each message is expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k, so there are 2^n - 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n - k) redundant bits to form a codeword are called block codes, and are designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords, C1 = 00000 and C2 = 00011. These have a binary-digit difference (or Hamming distance) of 2, in the last two digits. The smallest distance in binary digits between any two codewords in the complete code set is known as the minimum Hamming distance, dmin. For block codes, dmin is the property which controls the error-correction performance: we can calculate the error-detecting and error-correcting power of a code from the minimum distance in bits between its codewords.
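
The Hamming-distance calculation, and the search for dmin over a code set, can be sketched as follows (the helper names and the four-codeword set are illustrative):

```python
from itertools import combinations

def hamming(a, b):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def d_min(codewords):
    """Minimum Hamming distance over all distinct pairs in the code set."""
    return min(hamming(a, b) for a, b in combinations(codewords, 2))

print(hamming("00000", "00011"))  # 2, as in the example above
print(d_min(["00000", "00011", "11100", "11111"]))  # 2
```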

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing the individual probabilities of up to and including R errors in the block and subtracting the total from one:

P(>R errors) = 1 - Σ (j = 0 to R) nCj (Pe)^j (1 - Pe)^(n-j)

The probability of exactly j errors in an n-digit codeword is

P(j errors) = nCj (Pe)^j (1 - Pe)^(n-j)

where Pe is the probability of error in a single binary digit, n is the block length, and nCj is the number of ways of choosing j error-digit positions within n binary digits.
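
A short numeric check of this block-error expression (hypothetical function name; the n = 7, R = 1, Pe = 0.01 values are just an example, matching a single-error-correcting code of block length 7):

```python
from math import comb

def p_uncorrectable(n, R, pe):
    """Probability of more than R errors in an n-digit block:
    1 - sum_{j=0}^{R} nCj * pe^j * (1-pe)^(n-j)."""
    return 1.0 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j)
                     for j in range(R + 1))

print(p_uncorrectable(7, 1, 0.01))  # about 0.00203
```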

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1, ..., CN which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure. Closure means that if any two valid codewords are subjected to a bit-wise EX-OR operation, they produce another valid codeword in the set. The closure property means that to find the minimum Hamming distance (see below), all that is required is to compare the remaining codewords in the set with the all-zeros codeword, instead of comparing all possible pairs of codewords. The saving gets bigger the longer the code set: for example, a set with 100 codewords requires 99 comparisons for a group code, compared with 99 + 98 + ... + 2 + 1 = 4950 for a non-group code.
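
The closure test, and the resulting shortcut for dmin, can be sketched as follows (the three-bit even-parity code set used here is only an example):

```python
from itertools import combinations

def is_group_code(codewords):
    """Check the group-code properties: the all-zeros word is present and
    the XOR of any two codewords is itself in the set (closure)."""
    words = {int(c, 2) for c in codewords}
    return 0 in words and all((a ^ b) in words
                              for a, b in combinations(words, 2))

def d_min_group(codewords):
    """For a group code, dmin equals the minimum weight (number of 1s)
    of the nonzero codewords, i.e. distance from the all-zeros word."""
    return min(c.count("1") for c in codewords if int(c, 2) != 0)

code = ["000", "011", "101", "110"]            # even-parity set, closed under XOR
print(is_group_code(code))                     # True
print(is_group_code(["000", "011", "101", "111"]))  # False: 011 ^ 101 = 110 missing
print(d_min_group(code))                       # 2
```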

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently contains the assumption that the probability of a small number of errors, t, is greater than the probability of the larger number of t + 1 errors, i.e. that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, with real (non-binary) numbers from the receiver. The nearest Euclidean distance (nearest to the codewords in terms of an n-dimensional geometry, e.g. 5-D for five-digit codewords) is then used, and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. If we are trying to create a code that corrects t errors, with a block length of n and k information digits, the Hamming bound requires

2^k <= 2^n / (1 + n + nC2 + nC3 + ... + nCt)
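
A quick way to exercise the bound (illustrative function name; the (7, 4) single-error-correcting code is the classic case that meets it with equality):

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """True if 2^k <= 2^n / sum_{i=0}^{t} nCi, i.e. a t-error-correcting
    (n, k) block code is not ruled out by the sphere-packing bound."""
    return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

print(hamming_bound_ok(7, 4, 1))  # True: 2^4 * (1 + 7) = 2^7 exactly
print(hamming_bound_ok(7, 5, 1))  # False: no (7,5) single-error-correcting code
```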

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations of the codewords are used for encoding and decoding, since a cyclic shift of a codeword is equivalent to a modification of the exponents of a polynomial. Specifically, let x = (x0, x1, ..., xn-1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with two dyadic (two-operand) operations, addition and multiplication, which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner; a finite field is a field with a finite number q of elements, denoted Fq.) The polynomial over Fq of degree at most n - 1 corresponding to x is then x(D) = x0 + x1D + x2D^2 + ... + xn-1D^(n-1).

Cyclic codes are extremely well suited to error detection, because they can be designed to detect many combinations of likely errors and the implementation of both the encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is a contiguous sequence of B bits in which the first and last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n - k or less; a fraction of error bursts of length equal to n - k + 1, the fraction being 1 - 2^-(n-k-1); a fraction of error bursts of length greater than n - k + 1, the fraction being 1 - 2^-(n-k); all combinations of dmin - 1 (or fewer) errors; and all error patterns with an odd number of errors, provided the generator polynomial g(X) of the code has an even number of non-zero coefficients.
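
CRC generation is ordinary long division over GF(2). The sketch below (hypothetical function name; the generator x^3 + x + 1, i.e. bit pattern 1011, is just an illustrative choice) appends n - k zeros to the data and keeps the remainder as the check bits:

```python
def crc_remainder(data_bits, gen_bits):
    """Divide the data (followed by n-k zeros) by the generator polynomial
    over GF(2); the remainder forms the CRC check bits."""
    r = len(gen_bits) - 1
    reg = list(data_bits) + [0] * r          # append n-k zeros
    for i in range(len(data_bits)):
        if reg[i]:                           # leading bit set: subtract (XOR) generator
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return reg[-r:]

data = [1, 0, 0, 1]
check = crc_remainder(data, [1, 0, 1, 1])
print(check)                                 # [1, 1, 0] -> codeword 1001110
```

If the full codeword (data plus check bits) is divided by the same generator at the receiver, the remainder is zero when no detectable error has occurred.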

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways, and the main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to understanding turbo codes.

Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into the shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L), where the quantity L, called the constraint length of the code, is defined as L = k(m - 1).

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also often referred to by the capital letter K, which can be confusing with the lower-case k that represents the number of input bits; in some books K is defined as the product of k and m. In commercial specifications the codes are often given by the parameters (r, K), where r is the code rate k/n and K is the constraint length. That constraint length K, however, is not the same quantity as the L defined in this paper. Convolutional codes will be referred to here as (n, k, m) and not as (r, K).
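
A minimal rate-1/2 encoder of this kind can be sketched as follows; the generator taps 111 and 101 (octal 7, 5) are a common textbook choice, not one specified in this paper:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Sketch of a rate-1/2 convolutional encoder with generator taps 111 and
    101 (octal 7,5). The registers start in the all-zeros state; each input
    bit yields two output bits by modulo-2 sums over the tapped positions."""
    state = [0] * (len(g1) - 1)        # shift-register memory
    out = []
    for b in bits:
        window = [b] + state           # current bit plus register contents
        out.append(sum(x * t for x, t in zip(window, g1)) % 2)
        out.append(sum(x * t for x, t in zip(window, g2)) % 2)
        state = window[:-1]            # shift the register by one position
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```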

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connections of the shift-register taps to the modulo-2 adders. A generator vector represents the positions of the taps for one output: a "1" represents a connection and a "0" represents no connection.

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder. In the tree diagram, a solid line represents input information bit 0 and a dashed line represents input information bit 1. The corresponding output encoded bits are shown on the branches of the tree. An input information sequence defines a specific path through the tree diagram, from left to right.

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers. In the state diagram the states of the encoder are shown in circles, and each new input information bit causes a transition from one state to another. The path information between the states, denoted x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zeros state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram: it shows all possible state transitions at each time step. Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and it can be identified from the state diagram: a state diagram having a loop in which a nonzero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization of the received channel values, while soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principles used to reduce the choices are these:

1. Errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge at one node; then the path with the better metric is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N, of which only 2^(kL) are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the surviving paths, one per trellis state, instead of checking all paths. The most common metric used is the Hamming distance: the number of bit positions in which the received word and the candidate codeword differ.
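
A compact hard-decision Viterbi decoder for the same illustrative rate-1/2 code (taps 111/101, a common textbook choice) might look like this; it is a teaching sketch, not a production decoder, and the trellis is left unterminated:

```python
def viterbi_decode(received, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Hard-decision Viterbi sketch for a rate-1/2 code with taps 111/101.
    States are the register bits; at each stage only the survivor with the
    smallest accumulated Hamming metric is kept per state."""
    m = len(g1) - 1
    n_states = 2 ** m

    def step(state_bits, b):
        """Branch outputs and next-state bits for input bit b."""
        window = [b] + state_bits
        o1 = sum(x * t for x, t in zip(window, g1)) % 2
        o2 = sum(x * t for x, t in zip(window, g2)) % 2
        return (o1, o2), window[:-1]

    INF = float("inf")
    metrics = {s: (0 if s == 0 else INF) for s in range(n_states)}  # start all-zeros
    paths = {s: [] for s in range(n_states)}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = {s: INF for s in range(n_states)}
        new_paths = {}
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            bits = [(s >> (m - 1 - j)) & 1 for j in range(m)]
            for b in (0, 1):
                out, nxt_bits = step(bits, b)
                nxt = sum(x << (m - 1 - j) for j, x in enumerate(nxt_bits))
                metric = metrics[s] + sum(a != c for a, c in zip(out, r))
                if metric < new_metrics[nxt]:       # keep the better survivor
                    new_metrics[nxt] = metric
                    new_paths[nxt] = paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return paths[best]

# 1,0,1,1 encodes to 1,1,1,0,0,0,0,1 with this code; flip one channel bit
rx = [1, 1, 0, 0, 0, 0, 0, 1]
print(viterbi_decode(rx))  # [1, 0, 1, 1] despite the channel error
```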

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity of order O[2^(kL)].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is of order O[(2^k)(2^m)(L + m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear rather than an exponential factor in the complexity. However, there will still be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes offer no more protection against noise than an equivalent block code; in many cases, however, they offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. L. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. M. S. Roden, Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321

  • Viterbi decoding
Page 11: Analogue and Digital Communication assignment

Figure 32 Four symbols that represents the four phases in QPSK

Figure above shows depicts the 4 symbols used to represent the four phases in QPSK Analysis shows that this may be used either to double the data rate compared to a BPSK system while maintaining the bandwidth of the signal or to maintain the data-rate of BPSK but halve the bandwidth needed

Basic Configuration of Quadrature Modulation Scheme

Figure 33 Basic configuration of QPSKQPSK signal is generated by two BPSK signal Two orthogonal carrier signals are

used to distinguish the two signals One is given by cos 2fct and the other is given by sin

2fct The two carrier signals remain orthogonal in the area of a period A channel in which cos2fct is used as a carrier signal is generally called an inphase channel or Ich and a channel in which sin 2fct is used as a carrier signal is generally called quadrature-phase channel or Qch Therefore d (t) I and d (t) Q are the data in Ich and Qch respectively Modulation schemes that use Ich and Qch are called quadrature modulation schemes The basic confiuration is shown in figure 33

In the system shown above the input digital data dk is first converted into parallel data with two channels Ich and Qch The data are represented as di(t) and dq(t) The conversion or data allocation is done using a mapping circuit block Then the data allocated to Ich is filtered using a pulse shaping filter in Ich The pulse shaped signal is converted in analog signal by a DA converter and multiplied by a cos 2fct carrier wave The same process is carried out on the data allocated to Qch but it is multiplied by a f t c sin 21048659 carrier wave instead Then the Ich and Qch signals are added and transmitted to the air

At the receiver the received wave passes through a BPF to eliminate any sprurious signals Then it is downconverted to the baseband by multiplying by the RF carrier frequency Then in both the Ich and Qch channels the downcoverted signal is digitally sampled by an AD converters and the digital data is fed to a DSPH In the DSPH the sampled data is filtered with pulse shaping filter to eliminate ISI The signals are then synchronized and the transmitted digital data is recovered

Figure 34 Mapping circuit function for QPSKFor the mapping function a simple circuit is used to allocate the data as illustrated in the following figure This mapping function basically allocates all even bits to Ich and all odd bits to Qch and demapping is just the opposite operation

Conclusion: It is clear that for QPSK to be useful in high-data-rate, wide-bandwidth systems over multipath fading channels, it is necessary to consider diversity techniques, equalisation and adaptive antennas. The approach to performance analysis presented here could be used as a first step in the design process of such more complex systems.

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider a message source that can generate M equally likely messages. Initially we represent each message by k binary digits, with 2^k = M; these k bits are the information-bearing bits. Next we add to each k-bit message r redundant bits, so that each message is expanded into a codeword of length n bits, with n = k + r. The total number of possible n-bit codewords is 2^n, while the total number of possible messages is 2^k. There are therefore 2^n − 2^k possible n-bit words which do not represent valid messages.

Codes formed by taking a block of k information bits and adding r (= n − k) redundant bits to form a codeword are called block codes and are designated (n, k) codes.

The Hamming Distance dmin

Consider two distinct five-digit codewords C1 = 00000 and C2 = 00011. These differ in two binary digits (the last two), i.e. their Hamming distance is 2. The smallest Hamming distance between any two codewords in the complete code set is known as the minimum Hamming distance, dmin. For block codes, dmin is the property that controls the error-correction performance: we can calculate the error-detecting and error-correcting power of a code from the minimum distance in bits between its codewords.
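The Hamming distance between two codewords, and the minimum distance over a whole code set, can be computed directly. A short sketch using the example codewords above:

```python
from itertools import combinations

def hamming_distance(c1, c2):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

def d_min(code):
    """Minimum Hamming distance over all distinct pairs in the code set."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

print(hamming_distance("00000", "00011"))          # 2, as in the example
print(d_min(["00000", "00011", "11100", "11111"])) # 2 for this small set
```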

Block error probability and correction capability

If we have an error-correcting code which can correct R errors, then the probability of a codeword not being correctable is the probability of having more than R errors in n digits. We can calculate this probability by summing all the individual error probabilities up to and including R errors in the block:

P(> R errors) = 1 − Σ (j = 0 to R) P(j errors)

The probability of exactly j errors in an n-digit codeword is:

P(j errors) = nCj (Pe)^j (1 − Pe)^(n−j)

where:

Pe is the probability of error in a single binary digit,

n is the block length, and

nCj is the number of ways of choosing j error-digit positions within n binary digits.
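The two formulas above can be evaluated directly; a sketch using `math.comb` for nCj, with an illustrative (not source-specified) choice of Pe, n and R:

```python
from math import comb

def p_uncorrectable(pe, n, R):
    """P(> R errors) = 1 - sum_{j=0}^{R} nCj * Pe^j * (1 - Pe)^(n - j)."""
    return 1.0 - sum(comb(n, j) * pe**j * (1 - pe)**(n - j)
                     for j in range(R + 1))

# Example: a single-error-correcting code of block length 7 on a
# channel with Pe = 0.01 (assumed illustrative values):
print(p_uncorrectable(0.01, 7, 1))
```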

Group codes

Group codes are a special kind of block code. They comprise a set of codewords C1 … CN which contains the all-zeros codeword (e.g. 00000) and exhibits a special property called closure: if any two valid codewords are combined by a bit-wise XOR operation, they produce another valid codeword in the set. The closure property means that, to find the minimum Hamming distance, all that is required is to compare each of the remaining codewords with the all-zeros codeword, instead of comparing all possible pairs of codewords. The saving gets bigger the larger the code set: for example, a set with 100 codewords requires 99 comparisons for a group code, compared with 99 + 98 + … + 2 + 1 = 4950 for a non-group code.
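The closure property, and the resulting shortcut (for a group code, dmin equals the minimum weight of the non-zero codewords), can be demonstrated on a small code set. The even-parity code below is an assumed example, not one from the text:

```python
def is_group_code(code):
    """Check closure: the XOR of any two codewords is again in the code."""
    codeset = set(code)
    return all(
        format(int(a, 2) ^ int(b, 2), f"0{len(a)}b") in codeset
        for a in code for b in code
    )

def d_min_group(code):
    """For a group code, dmin is the smallest non-zero codeword weight."""
    return min(w.count("1") for w in code if set(w) != {"0"})

# Even-parity length-3 code: closed under XOR and contains 000
parity_code = ["000", "011", "101", "110"]
print(is_group_code(parity_code))            # True
print(is_group_code(["000", "011", "100"]))  # False: 011 XOR 100 = 111 missing
print(d_min_group(parity_code))              # 2
```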

Nearest neighbour decoding

Nearest-neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted. This inherently assumes that the probability of a small number of errors t is greater than the probability of the larger number t + 1, i.e. that Pe is small.

Nearest-neighbour decoding can also be done on a soft-decision basis, using real (non-binary) values from the receiver. The nearest Euclidean distance is then used (e.g. nearest to the five-digit codewords in terms of a 5-D geometry), and this gives a considerable performance increase over the hard-decision decoding described here.

Hamming bound

This defines mathematically the error-correcting performance of a block code. Suppose we are trying to create a code that corrects t errors, with block length n and k information digits. The upper bound on the performance of block codes, sometimes called the sphere-packing bound, is given by the Hamming bound:

2^k ≤ 2^n / (1 + n + nC2 + nC3 + … + nCt)
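The bound can be checked numerically for candidate (n, k, t) triples. A minimal sketch (the (7, 4) example is a standard illustration, not taken from the text):

```python
from math import comb

def hamming_bound_holds(n, k, t):
    """Check 2^k <= 2^n / sum_{i=0}^{t} nCi, the sphere-packing bound."""
    spheres = sum(comb(n, i) for i in range(t + 1))   # 1 + n + ... + nCt
    return 2**k * spheres <= 2**n

print(hamming_bound_holds(7, 4, 1))  # True: met with equality (a perfect code)
print(hamming_bound_holds(7, 5, 1))  # False: too many codewords to correct 1 error
```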

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations of the codewords are used for encoding and decoding, since shifting a codeword is equivalent to modifying the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn−1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with two dyadic (two-operand) operations, addition and multiplication, which are defined for all pairs of field elements in F and behave in an arithmetically consistent manner. A finite field is a field with a finite number q of elements, denoted Fq.) The corresponding polynomial over Fq, of degree at most n − 1, is x(D) = x0 + x1D + x2D^2 + … + xn−1D^(n−1).

Cyclic codes are well suited to error detection and correction because they can be designed to detect many combinations of likely errors, and the implementation of both the encoding and error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is defined as a contiguous sequence of B bits in which the first and last bits, and any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting:

all error bursts of length n − k or less;

a fraction of error bursts of length equal to n − k + 1, the fraction being 1 − 2^−(n−k−1);

a fraction of error bursts of length greater than n − k + 1, the fraction being 1 − 2^−(n−k);

all combinations of dmin − 1 (or fewer) errors; and

all error patterns with an odd number of errors, if the generator polynomial g(X) of the code has an even number of non-zero coefficients.
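CRC encoding and checking amount to polynomial long division over GF(2). The sketch below uses an assumed small generator, g(X) = X^3 + X + 1 (tap pattern 1011), purely for illustration:

```python
def _divide(bits, gen):
    """Remainder of GF(2) polynomial division of bits by gen (MSB first)."""
    r = list(bits)
    for i in range(len(r) - len(gen) + 1):
        if r[i]:
            for j, g in enumerate(gen):
                r[i + j] ^= g          # subtract (XOR) the shifted generator
    return r[-(len(gen) - 1):]         # the r = n - k remainder bits

def crc_encode(msg, gen):
    """Append deg(g) check bits so the whole codeword is divisible by g(X)."""
    return msg + _divide(msg + [0] * (len(gen) - 1), gen)

def crc_ok(received, gen):
    """An error-free (or undetected-error) word leaves a zero remainder."""
    return all(b == 0 for b in _divide(received, gen))

GEN = [1, 0, 1, 1]                     # assumed generator g(X) = X^3 + X + 1
cw = crc_encode([1, 0, 1, 1, 0, 1], GEN)
print(crc_ok(cw, GEN))                 # True: valid codeword
cw[2] ^= 1                             # introduce a single-bit error
print(crc_ok(cw, GEN))                 # False: the error is detected
```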

Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways. Also, the main decoding strategy for convolutional codes, based on the Viterbi algorithm, will be described. A firm understanding of convolutional codes is an important prerequisite to the understanding of turbo codes.

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are input into shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L), where the quantity L, called the constraint length of the code, is defined as L = k(m − 1).

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confusing with the lower-case k that represents the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are given as (r, K), where r is the code rate k/n and K is the constraint length; that K differs slightly from the L defined here. We will refer to convolutional codes as (n, k, m) and not as (r, K).

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders. A generator vector represents the position of the taps for an output: a "1" represents a connection and a "0" represents no connection.
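A generator-vector description maps directly to code. The sketch below implements a small rate-1/2 encoder with generator vectors 111 and 101 (an assumed example, not one specified in the text); each vector lists the taps feeding one modulo-2 adder:

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 convolutional encoder; each generator vector gives the
    taps (1 = connected) from [input, register contents] to one adder."""
    m = len(gens[0]) - 1            # number of memory registers
    state = [0] * m
    out = []
    for b in bits:
        window = [b] + state        # current input followed by the registers
        for g in gens:
            out.append(sum(x & t for x, t in zip(window, g)) % 2)
        state = window[:-1]         # shift the register contents
    return out

print(conv_encode([1, 0, 1, 1]))    # [1, 1, 1, 0, 0, 0, 0, 1]
```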

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder, which is stored in the shift registers. In the state diagram, the states of the encoder are shown in circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zeros state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and it can be identified from the state diagram: a state diagram having a loop in which a non-zero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values, while soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principles used to reduce the choices are these:

1. Errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit sequence, the total number of possible received sequences is 2^N. Of these, only 2^kL are valid. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the 2^kL surviving paths instead of checking all paths. The most common metric used is the Hamming distance between the received sequence and the candidate codeword sequences, which the decoder seeks to minimise.
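The survivor-path procedure can be sketched as a hard-decision Viterbi decoder for a small rate-1/2 code with generator vectors 111 and 101 (an assumed example). At every state it keeps only the path with the smallest accumulated Hamming-distance metric:

```python
def viterbi_decode(received, gens=((1, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding: keep, at every trellis state, only
    the survivor path with the smallest accumulated Hamming distance."""
    m = len(gens[0]) - 1
    n_states = 2 ** m

    def step(state, bit):
        """Next state and output block for one trellis transition."""
        window = [bit] + [(state >> i) & 1 for i in range(m)]
        out = tuple(sum(x & t for x, t in zip(window, g)) % 2 for g in gens)
        nxt = 0
        for i, x in enumerate(window[:-1]):
            nxt |= x << i
        return nxt, out

    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start from the all-zeros state
    paths = [[] for _ in range(n_states)]
    n_out = len(gens)
    for k in range(0, len(received), n_out):
        block = tuple(received[k:k + n_out])
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for bit in (0, 1):
                nxt, out = step(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(block, out))
                if d < new_metric[nxt]:       # keep only the better survivor
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

# The bit stream [1, 0, 1, 1] encodes to [1, 1, 1, 0, 0, 0, 0, 1] with
# these generators; decoding recovers it even with one channel bit error:
print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))  # [1, 0, 1, 1]
print(viterbi_decode([0, 1, 1, 0, 0, 0, 0, 1]))  # [1, 0, 1, 1]
```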

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^kL distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity on the order of O[2^kL].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is on the order of O[(2^k)(2^m)(L + m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor and not an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code, but they generally offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. L. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. Martin S., Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321

  • Viterbi decoding
Page 12: Analogue and Digital Communication assignment

2fct The two carrier signals remain orthogonal in the area of a period A channel in which cos2fct is used as a carrier signal is generally called an inphase channel or Ich and a channel in which sin 2fct is used as a carrier signal is generally called quadrature-phase channel or Qch Therefore d (t) I and d (t) Q are the data in Ich and Qch respectively Modulation schemes that use Ich and Qch are called quadrature modulation schemes The basic confiuration is shown in figure 33

In the system shown above the input digital data dk is first converted into parallel data with two channels Ich and Qch The data are represented as di(t) and dq(t) The conversion or data allocation is done using a mapping circuit block Then the data allocated to Ich is filtered using a pulse shaping filter in Ich The pulse shaped signal is converted in analog signal by a DA converter and multiplied by a cos 2fct carrier wave The same process is carried out on the data allocated to Qch but it is multiplied by a f t c sin 21048659 carrier wave instead Then the Ich and Qch signals are added and transmitted to the air

At the receiver the received wave passes through a BPF to eliminate any sprurious signals Then it is downconverted to the baseband by multiplying by the RF carrier frequency Then in both the Ich and Qch channels the downcoverted signal is digitally sampled by an AD converters and the digital data is fed to a DSPH In the DSPH the sampled data is filtered with pulse shaping filter to eliminate ISI The signals are then synchronized and the transmitted digital data is recovered

Figure 34 Mapping circuit function for QPSKFor the mapping function a simple circuit is used to allocate the data as illustrated in the following figure This mapping function basically allocates all even bits to Ich and all odd bits to Qch and demapping is just the opposite operation

Conclusion It is clear that in order for QPSK to be useful for high data rate wide bandwidth systemsin multipath fading channels it is necessary to consider using diversity techniques equalization adaptive antennas The approach for performance analysis presented here could be used as a first step in the design process of such more complex systems

Task 5

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider that a message source can generate M equally likely messages Then initially we represent each message by k binary digits with 2k =M These k bits are the information bearing bits Next we add to each k bit massage r redundant bit Thus each massage has been expanded into a codeword of length n bits with n = k + r The total number of possible n bit codewords is 2n while the total number of possible messages is 2k There are 2k - 2n possible n bit words which do not represent possible messages

Codes formed by taking a block of k information bits and adding r (= n - k) redundant bits to form a codeword are called Block Codes and designated (nk) codes

The Hamming Distance dmin

Consider two distinct five digit codewords C1 = 00000 and C2 = 00011 These have a binary digit difference (or Hamming distance) of 2 in the last two digits The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance dmin For block codes the minimum Hamming distance or the smallest difference between the digits for any two codewords in the complete code set dmin is the property which controls the error correction performance We can thus calculate the error detecting and correcting power of a code from the minimum distance in bits between the codewords

Block error probability and correction capability

If we have an error correcting code which can correct R errors than the probability of a codeword not being correctable is the probability of having more than R errors in n digits We can this calculate this probability by summing all the induvidual error probabilities up to and including R errors in the block

P(gtRrsquo errors) = 1 -

The probability of j errors in n digit codeword is

P(gtRrsquo errors) = (Pe)j (1-Pe)n-j x nCj

Pe is the probitlity of error in a single binary digit

n is the block length

nCj is the number of ways of choosing j error digits positoins with in length n binary digts

Group codes

Group codes are a special kind of block codes They comprise a set of codewords C1CN which contain the all zeros codeword (eg 00000) and exhibit a special property called closure This property means that if any two valid codewords are subject to a bit wise EX OR operation then they will produce another valid codeword in the set The closure property means that to find the minimum Hamming distance see below all that is required is to compare all the remaining codewords in the set with the all zeros codeword instead of comparing all the possible pairs of codewords The saving gets bigger the longer the codeword For example a code set with 100 codewords will require 100 comparisons for a Group code design compared with 100+99+98+ +2+1 for a non-group code

Nearest neighbour decoding

Nearest neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted This inherently contains the assumption that the probability of a small number of terrors is greater than the probability of the larger number of t+1 errors that Pe is small

Nearest neighbour decoding can also be done on a soft decision basis with real non-binary numbers from the receiver The nearest Euclidean distance (nearest to these 5 codewords in terms of a 5-D geometry) is then used and this gives a considerable performance increase over the hard decision decoding described here

Hamming boundThis defines mathematically the error correcting performance of a block code The

upper bound on the performance of block codes is given by the Hamming bound some times called the sphere packing bound If we are trying to create a code to correct t errors with a block length of n with k information digits The upper bound on the performance of block codes as given by the Hamming Bound

2k 2 n 1 + n + nC2 + nC3

++ nCt

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic shift operation For convenience polynomial representations are used for the code words for encoding and decoding since the shifting of a code word is equivalent to a modification of the exponential of a polynomial Specifically if x = (x0 x1xn-1) denotes a code word with elements in a finite field-(a field is an algebraic system formed by collection of elements F together with dyadic(2-operand) operations + and multiplication which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner A finite field is a field with finite number q of elements and it can be represented by Fq) A polynomial over Fq of degree at most n-1 is given as x(D) = x0 + x1D1 + x2D2 ++xn-1Dn-1

Cyclic codes are extremely suited for error correction because they can be designed to detect many combination of likely errors and the implementation of both encoding and

error-detecting circuits are practical A cyclic code used for error-detection is known as cyclic redundancy check (CRC) code

An error burst of length B in an n-bit received word as a contagious sequence of B bits in which the first and the last bits or any number of intermediate bits are received in error Binary (n k) CRC codes are capable of detecting the all error bursts of length n ndash k or less a fraction of error bursts of length equal to n-k+1 the fraction equals 1-2-(n-k-1) a fraction of error bursts of length greater than n-k+1 the fraction equals to 1-2-(n-k-1) all combination of dmin ndash 1 (or fewer) errors and all error patterns with an odd number of errors if the generation polynomial g(X) for the code as an even number of non-zero coefficients

Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways Also the main decoding strategy for convolutional codes based on the Viterbi Algorithm will bedescribed A firm understanding of convolutional codes is an important prerequisite tothe understanding of turbo codes

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bit

k = number of inputs bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers The information bits are input into shift registers and the output encoded bits are obtainedby modulo-2 addition of the input information bits and the contents of the shift registersThe connections to the modulo-2 adders were developed heuristically with no algebraic or combinatorial foundation The code rate r for a convolutional code is defined as r = kn Often the manufacturers of convolutional code chips specify the code by parameters (nkL) The quantity L is called the constraint length of the code and is defined by Constraint Length L = k (m-1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bit s The constraint length L is also referred to by t he capital letter K which can be confusing wit h the lower ca se k which represents the number of input bits In some books K is defined as equal to product the of k and m Often in commercial spec the codes are specified by (r K) where r = the code rate kn and K is the constraint length The constraint length K however is equal t o L œ 1 as defined in this paper I will be referring t o convolutional codes as (nkm) and not as (rK)

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code the input information sequence contains kLbits where k is the number of parallel information bits at one time interval and L is thenumber of time intervals This results in L+m stages in the trellis diagram There areexactly 2kL distinct paths in the trellis diagram and as a result an exhaustive search forthe ML sequence would have a computational complexity on the order of O[2kL]

TheViterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis At each node (state) of the trellis there are 2k calculations Thenumber of nodes per stage in the trellis is 2m Therefore the complexity of the Viterbialgorithm is on the order of O[(2k)(2m)(L+m)] This significantly reduces the number ofcalculations required to implement the ML decoding because the number of time intervalsL is now a linear factor and not an exponent factor in the complexity However therewill be an exponential increase in complexity if either k or m increases

Conclusion

Fundamentally convolutional codes do not offer more protection against noise than an equivalent block code In many cases they generally offer greater simplicity of implementation over a block code of equal power

REFERENCES

1 Herbert Taub Schilling Principle of Communication System Mcgraw-Hill 2002

2 Martin S Analog and Digital Communication System Prentice Hall 2001

3 P M Grant and D G Mcruickshank I A Glover and P M Grant Digital Communications Pearson Education 2009

4 V Pless Introduction to the Theory of Error-Correcting Codes 3rd ed New York John Wiley amp Sons 1998

5 Tomasi W Electronic Communication Systems Fundamentals Through Advanced Prentice Hall 2004

6 Lee L H C Error-Control Block Codes for Communications Engineers ArtechHouse 2000

7 httpwwwscribdcomdoc35139573Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8 httpwwwwikipediaorg

9 MacQuarie University Lecturer Notes httpwwwelecmqeduau~clfiles_pdfelec321

  • Viterbi decoding
Page 13: Analogue and Digital Communication assignment

Discuss Block codes and Convolutional codes in detail and evaluate their performance

Block codes

Consider that a message source can generate M equally likely messages Then initially we represent each message by k binary digits with 2k =M These k bits are the information bearing bits Next we add to each k bit massage r redundant bit Thus each massage has been expanded into a codeword of length n bits with n = k + r The total number of possible n bit codewords is 2n while the total number of possible messages is 2k There are 2k - 2n possible n bit words which do not represent possible messages

Codes formed by taking a block of k information bits and adding r (= n - k) redundant bits to form a codeword are called Block Codes and designated (nk) codes

The Hamming Distance dmin

Consider two distinct five digit codewords C1 = 00000 and C2 = 00011 These have a binary digit difference (or Hamming distance) of 2 in the last two digits The minimum distance in binary digits between any two codewords is known as the minimum Hamming distance dmin For block codes the minimum Hamming distance or the smallest difference between the digits for any two codewords in the complete code set dmin is the property which controls the error correction performance We can thus calculate the error detecting and correcting power of a code from the minimum distance in bits between the codewords

Block error probability and correction capability

If we have an error correcting code which can correct R errors than the probability of a codeword not being correctable is the probability of having more than R errors in n digits We can this calculate this probability by summing all the induvidual error probabilities up to and including R errors in the block

P(gtRrsquo errors) = 1 -

The probability of j errors in n digit codeword is

P(gtRrsquo errors) = (Pe)j (1-Pe)n-j x nCj

Pe is the probitlity of error in a single binary digit

n is the block length

nCj is the number of ways of choosing j error digits positoins with in length n binary digts

Group codes

Group codes are a special kind of block codes They comprise a set of codewords C1CN which contain the all zeros codeword (eg 00000) and exhibit a special property called closure This property means that if any two valid codewords are subject to a bit wise EX OR operation then they will produce another valid codeword in the set The closure property means that to find the minimum Hamming distance see below all that is required is to compare all the remaining codewords in the set with the all zeros codeword instead of comparing all the possible pairs of codewords The saving gets bigger the longer the codeword For example a code set with 100 codewords will require 100 comparisons for a Group code design compared with 100+99+98+ +2+1 for a non-group code

Nearest neighbour decoding

Nearest neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted This inherently contains the assumption that the probability of a small number of terrors is greater than the probability of the larger number of t+1 errors that Pe is small

Nearest neighbour decoding can also be done on a soft decision basis with real non-binary numbers from the receiver The nearest Euclidean distance (nearest to these 5 codewords in terms of a 5-D geometry) is then used and this gives a considerable performance increase over the hard decision decoding described here

Hamming bound

This defines mathematically the error-correcting performance of a block code. The upper bound on the performance of block codes is given by the Hamming bound, sometimes called the sphere-packing bound. Suppose we are trying to create a code to correct t errors, with a block length of n and k information digits. The Hamming bound then requires

2^k × (1 + n + nC2 + nC3 + … + nCt) ≤ 2^n
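The bound can be checked numerically. As an example (not worked in the text), the (7,4) single-error-correcting Hamming code meets it with equality, making it a "perfect" code:

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Sphere-packing bound: 2^k * sum_{j=0}^{t} C(n, j) <= 2^n."""
    return 2**k * sum(comb(n, j) for j in range(t + 1)) <= 2**n

print(hamming_bound_ok(7, 4, 1))   # 16 * 8 = 128 = 2^7: bound met exactly
print(hamming_bound_ok(7, 5, 1))   # 32 * 8 > 128: no such code can exist
```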

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic-shift property. For convenience, polynomial representations of the codewords are used for encoding and decoding, since shifting a codeword is equivalent to modifying the exponents of a polynomial. Specifically, let x = (x0, x1, …, xn−1) denote a codeword with elements in a finite field. (A field is an algebraic system formed by a collection of elements F together with the dyadic (two-operand) operations of addition and multiplication, defined for all pairs of field elements in F and behaving in an arithmetically consistent manner. A finite field is a field with a finite number q of elements, denoted Fq.) The corresponding polynomial over Fq, of degree at most n−1, is x(D) = x0 + x1·D + x2·D^2 + … + xn−1·D^(n−1).

Cyclic codes are well suited to error control because they can be designed to detect many combinations of likely errors, and the implementation of both the encoding and the error-detecting circuits is practical. A cyclic code used for error detection is known as a cyclic redundancy check (CRC) code.

An error burst of length B in an n-bit received word is defined as a contiguous sequence of B bits in which the first and last bits, or any number of intermediate bits, are received in error. Binary (n, k) CRC codes are capable of detecting: all error bursts of length n − k or less; a fraction 1 − 2^−(n−k−1) of error bursts of length exactly n − k + 1; a fraction 1 − 2^−(n−k) of error bursts of length greater than n − k + 1; all combinations of dmin − 1 (or fewer) errors; and all error patterns with an odd number of errors, provided the generator polynomial g(X) for the code has an even number of non-zero coefficients.
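The CRC computation itself is polynomial long division over GF(2). A minimal sketch, using the generator g(X) = X^3 + X + 1 as an assumed illustrative choice (the text does not specify one):

```python
def crc_remainder(data_bits, gen_bits):
    """Divide the data (followed by n-k zeros) by the generator
    polynomial over GF(2); the remainder is the CRC check field."""
    r = len(gen_bits) - 1                 # degree of g(X) = n - k
    reg = list(data_bits) + [0] * r       # append r zeros to the data
    for i in range(len(data_bits)):
        if reg[i]:                        # XOR the generator in wherever
            for j, g in enumerate(gen_bits):   # a leading 1 remains
                reg[i + j] ^= g
    return reg[-r:]

# g(X) = X^3 + X + 1  ->  taps 1011
print(crc_remainder([1, 1, 0, 1], [1, 0, 1, 1]))   # remainder [0, 0, 1]
```

The transmitter appends the remainder to the data; the receiver repeats the division and flags an error if its remainder is non-zero.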

Convolutional code

Encoder structure

The encoder can be represented in many different but equivalent ways. The main decoding strategy for convolutional codes, based on the Viterbi algorithm, will also be described. A firm understanding of convolutional codes is an important prerequisite to understanding turbo codes.

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bits

k = number of input bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers. The information bits are fed into the shift registers, and the output encoded bits are obtained by modulo-2 addition of the input information bits and the contents of the shift registers. The connections to the modulo-2 adders were developed heuristically, with no algebraic or combinatorial foundation. The code rate r for a convolutional code is defined as r = k/n. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L). The quantity L is called the constraint length of the code and is defined as L = k(m − 1).

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which is easily confused with the lower-case k that denotes the number of input bits. In some books K is defined as equal to the product of k and m. Often in commercial specifications the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length, which differs by one from L as defined here. Convolutional codes will be referred to here as (n, k, m) and not as (r, K).
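The shift-register-plus-modulo-2-adder structure can be sketched for a rate-1/2 code with m = 2 memory registers; the generator taps (1,1,1) and (1,0,1) (the common (7,5)-octal pair) are an assumed example, not taken from the text:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: each output pair is the modulo-2
    sum of the current input bit and the shift-register contents,
    selected by the taps in the generator vectors."""
    state = [0, 0]                       # m = 2 memory registers, start at zero
    out = []
    for b in bits:
        window = [b] + state             # current bit plus register contents
        out.append(sum(x * t for x, t in zip(window, g1)) % 2)
        out.append(sum(x * t for x, t in zip(window, g2)) % 2)
        state = [b] + state[:-1]         # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```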

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connections of the shift-register taps to the modulo-2 adders. A generator vector represents the positions of the taps for an output: a "1" represents a connection and a "0" represents no connection.

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder. The state information of a convolutional encoder is stored in the shift registers. In the state diagram, the state information of the encoder is shown in the circles. Each new input information bit causes a transition from one state to another. The path information between the states, denoted as x/c, represents input information bit x and output encoded bits c. It is customary to begin convolutional encoding from the all-zeros state.

4) Trellis Diagram Representation

The trellis diagram is basically a redrawing of the state diagram. It shows all possible state transitions at each time step. Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (x/c).

Catastrophic Convolutional code

A catastrophic convolutional code causes a large number of bit errors when only a small number of channel bit errors is received. This type of code needs to be avoided, and it can be identified from the state diagram: a state diagram having a loop in which a non-zero information sequence corresponds to an all-zero output sequence identifies a catastrophic convolutional code.

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used on the received bits. Hard-decision decoding uses 1-bit quantization on the received channel values, while soft-decision decoding uses multi-bit quantization. For ideal soft-decision decoding (infinite-bit quantization), the received channel values are used directly in the channel decoder.
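The two quantization styles can be sketched as follows; the threshold and the number of quantization levels are illustrative assumptions:

```python
def hard_decision(samples):
    """1-bit quantization: keep only the sign of each received value."""
    return [1 if s > 0 else 0 for s in samples]

def soft_decision(samples, bits=3):
    """Multi-bit quantization: map each value in [-1, 1] onto 2**bits
    levels, preserving how confident the receiver is in each bit."""
    levels = 2 ** bits
    return [min(levels - 1, max(0, int((s + 1) / 2 * levels))) for s in samples]

rx = [0.9, -0.2, 0.1, -1.0]        # noisy channel values for bits 1, 0, 1, 0
print(hard_decision(rx))           # -> [1, 0, 1, 0]: confidence discarded
print(soft_decision(rx))           # -> [7, 3, 4, 0]: confidence retained
```

Note that the hard decision treats the confident 0.9 and the marginal 0.1 identically, whereas the soft values let the decoder weight them differently.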

Viterbi decoding

Viterbi decoding is the best-known implementation of maximum-likelihood decoding. Here we narrow the options systematically at each time tick. The principles used to reduce the choices are:

1. Errors occur infrequently; the probability of error is small.

2. The probability of two errors in a row is much smaller than that of a single error; that is, the errors are distributed randomly.

The Viterbi decoder examines an entire received sequence of a given length. The decoder computes a metric for each path and makes a decision based on this metric. All paths are followed until two paths converge on one node; then the path with the better metric (e.g. the smaller accumulated Hamming distance) is kept and the other is discarded. The paths selected are called the survivors.

For an N-bit received sequence, the total number of possible sequences is 2^N. Of these, only 2^(kL) are valid code sequences. The Viterbi algorithm applies the maximum-likelihood principle to limit the comparison to the surviving paths (one per encoder state) instead of checking all paths. The most common metric used is the Hamming distance: the number of bit positions in which the received word and the allowable codeword differ.
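The survivor-selection procedure above can be sketched as a hard-decision Viterbi decoder. The rate-1/2, m = 2 encoder with (7,5)-octal generator taps is an assumed example, not one specified in the text:

```python
from itertools import product

G = [(1, 1, 1), (1, 0, 1)]   # assumed generator taps ((7,5) octal)

def branch(state, bit):
    """Next state and output bit pair for one input bit."""
    window = (bit,) + state
    out = tuple(sum(x * t for x, t in zip(window, g)) % 2 for g in G)
    return (bit, state[0]), out

def viterbi(received):
    """Hard-decision Viterbi decode; received is a flat list of channel
    bits, two per information bit. Keeps one survivor per state."""
    states = list(product([0, 1], repeat=2))
    # accumulated Hamming distance and survivor path for each state
    metric = {s: (0 if s == (0, 0) else float('inf')) for s in states}
    paths = {s: [] for s in states}
    for i in range(0, len(received), 2):
        rx = tuple(received[i:i + 2])
        new_metric = {s: float('inf') for s in states}
        new_paths = {}
        for s in states:
            for bit in (0, 1):
                ns, out = branch(s, bit)
                d = metric[s] + sum(a != b for a, b in zip(out, rx))
                if d < new_metric[ns]:        # keep the better path: survivor
                    new_metric[ns] = d
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=metric.get)]

# [1,0,1,1] encodes to 11 10 00 01; flip one channel bit and decode
print(viterbi([1, 1, 1, 1, 0, 0, 0, 1]))   # -> [1, 0, 1, 1], error corrected
```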

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits at one time interval and L is the number of time intervals. This results in L + m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity on the order of O[2^(kL)].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage in the trellis is 2^m. Therefore the complexity of the Viterbi algorithm is on the order of O[(2^k)(2^m)(L + m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor and not an exponential factor in the complexity. However, there will be an exponential increase in complexity if either k or m increases.
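A quick numerical comparison makes the saving concrete; the parameter values below are illustrative, not from the text:

```python
# Exhaustive ML search vs Viterbi work for an illustrative
# k = 1, m = 2 code over L = 100 time intervals.
k, m, L = 1, 2, 100

exhaustive = 2 ** (k * L)                  # one metric per distinct trellis path
viterbi_ops = (2 ** k) * (2 ** m) * (L + m)  # per-stage work over L+m stages

print(exhaustive)     # 2^100, roughly 1.27e30 paths
print(viterbi_ops)    # 816 operations
```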

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code; however, they generally offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. L. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. S. Martin, Analog and Digital Communication Systems, Prentice Hall, 2001.

3. P. M. Grant and D. G. Cruickshank; I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, http://www.elec.mq.edu.au/~cl/files_pdf/elec321

  • Viterbi decoding
Page 14: Analogue and Digital Communication assignment

Group codes are a special kind of block codes They comprise a set of codewords C1CN which contain the all zeros codeword (eg 00000) and exhibit a special property called closure This property means that if any two valid codewords are subject to a bit wise EX OR operation then they will produce another valid codeword in the set The closure property means that to find the minimum Hamming distance see below all that is required is to compare all the remaining codewords in the set with the all zeros codeword instead of comparing all the possible pairs of codewords The saving gets bigger the longer the codeword For example a code set with 100 codewords will require 100 comparisons for a Group code design compared with 100+99+98+ +2+1 for a non-group code

Nearest neighbour decoding

Nearest neighbour decoding assumes that the codeword nearest in Hamming distance to the received word is what was transmitted This inherently contains the assumption that the probability of a small number of terrors is greater than the probability of the larger number of t+1 errors that Pe is small

Nearest neighbour decoding can also be done on a soft decision basis with real non-binary numbers from the receiver The nearest Euclidean distance (nearest to these 5 codewords in terms of a 5-D geometry) is then used and this gives a considerable performance increase over the hard decision decoding described here

Hamming boundThis defines mathematically the error correcting performance of a block code The

upper bound on the performance of block codes is given by the Hamming bound some times called the sphere packing bound If we are trying to create a code to correct t errors with a block length of n with k information digits The upper bound on the performance of block codes as given by the Hamming Bound

2k 2 n 1 + n + nC2 + nC3

++ nCt

Cyclic codes

Cyclic codes are linear block codes with an additional cyclic shift operation For convenience polynomial representations are used for the code words for encoding and decoding since the shifting of a code word is equivalent to a modification of the exponential of a polynomial Specifically if x = (x0 x1xn-1) denotes a code word with elements in a finite field-(a field is an algebraic system formed by collection of elements F together with dyadic(2-operand) operations + and multiplication which are defined for all pairs of field elements in F and which behave in an arithmetically consistent manner A finite field is a field with finite number q of elements and it can be represented by Fq) A polynomial over Fq of degree at most n-1 is given as x(D) = x0 + x1D1 + x2D2 ++xn-1Dn-1

Cyclic codes are extremely suited for error correction because they can be designed to detect many combination of likely errors and the implementation of both encoding and

error-detecting circuits are practical A cyclic code used for error-detection is known as cyclic redundancy check (CRC) code

An error burst of length B in an n-bit received word as a contagious sequence of B bits in which the first and the last bits or any number of intermediate bits are received in error Binary (n k) CRC codes are capable of detecting the all error bursts of length n ndash k or less a fraction of error bursts of length equal to n-k+1 the fraction equals 1-2-(n-k-1) a fraction of error bursts of length greater than n-k+1 the fraction equals to 1-2-(n-k-1) all combination of dmin ndash 1 (or fewer) errors and all error patterns with an odd number of errors if the generation polynomial g(X) for the code as an even number of non-zero coefficients

Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways Also the main decoding strategy for convolutional codes based on the Viterbi Algorithm will bedescribed A firm understanding of convolutional codes is an important prerequisite tothe understanding of turbo codes

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bit

k = number of inputs bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers The information bits are input into shift registers and the output encoded bits are obtainedby modulo-2 addition of the input information bits and the contents of the shift registersThe connections to the modulo-2 adders were developed heuristically with no algebraic or combinatorial foundation The code rate r for a convolutional code is defined as r = kn Often the manufacturers of convolutional code chips specify the code by parameters (nkL) The quantity L is called the constraint length of the code and is defined by Constraint Length L = k (m-1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bit s The constraint length L is also referred to by t he capital letter K which can be confusing wit h the lower ca se k which represents the number of input bits In some books K is defined as equal to product the of k and m Often in commercial spec the codes are specified by (r K) where r = the code rate kn and K is the constraint length The constraint length K however is equal t o L œ 1 as defined in this paper I will be referring t o convolutional codes as (nkm) and not as (rK)

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code the input information sequence contains kLbits where k is the number of parallel information bits at one time interval and L is thenumber of time intervals This results in L+m stages in the trellis diagram There areexactly 2kL distinct paths in the trellis diagram and as a result an exhaustive search forthe ML sequence would have a computational complexity on the order of O[2kL]

TheViterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis At each node (state) of the trellis there are 2k calculations Thenumber of nodes per stage in the trellis is 2m Therefore the complexity of the Viterbialgorithm is on the order of O[(2k)(2m)(L+m)] This significantly reduces the number ofcalculations required to implement the ML decoding because the number of time intervalsL is now a linear factor and not an exponent factor in the complexity However therewill be an exponential increase in complexity if either k or m increases

Conclusion

Fundamentally convolutional codes do not offer more protection against noise than an equivalent block code In many cases they generally offer greater simplicity of implementation over a block code of equal power

REFERENCES

1 Herbert Taub Schilling Principle of Communication System Mcgraw-Hill 2002

2 Martin S Analog and Digital Communication System Prentice Hall 2001

3 P M Grant and D G Mcruickshank I A Glover and P M Grant Digital Communications Pearson Education 2009

4 V Pless Introduction to the Theory of Error-Correcting Codes 3rd ed New York John Wiley amp Sons 1998

5 Tomasi W Electronic Communication Systems Fundamentals Through Advanced Prentice Hall 2004

6 Lee L H C Error-Control Block Codes for Communications Engineers ArtechHouse 2000

7 httpwwwscribdcomdoc35139573Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8 httpwwwwikipediaorg

9 MacQuarie University Lecturer Notes httpwwwelecmqeduau~clfiles_pdfelec321

  • Viterbi decoding
Page 15: Analogue and Digital Communication assignment

error-detecting circuits are practical A cyclic code used for error-detection is known as cyclic redundancy check (CRC) code

An error burst of length B in an n-bit received word as a contagious sequence of B bits in which the first and the last bits or any number of intermediate bits are received in error Binary (n k) CRC codes are capable of detecting the all error bursts of length n ndash k or less a fraction of error bursts of length equal to n-k+1 the fraction equals 1-2-(n-k-1) a fraction of error bursts of length greater than n-k+1 the fraction equals to 1-2-(n-k-1) all combination of dmin ndash 1 (or fewer) errors and all error patterns with an odd number of errors if the generation polynomial g(X) for the code as an even number of non-zero coefficients

Convolutional code

Encoder structure

The encoder will be represented in many different but equivalent ways Also the main decoding strategy for convolutional codes based on the Viterbi Algorithm will bedescribed A firm understanding of convolutional codes is an important prerequisite tothe understanding of turbo codes

Convolutional codes are commonly specified by three parameters (nkm)

n = number of output bit

k = number of inputs bits

m = number of memory registers

A convolutional code introduces redundant bits into the data stream through the use of linear shift registers The information bits are input into shift registers and the output encoded bits are obtainedby modulo-2 addition of the input information bits and the contents of the shift registersThe connections to the modulo-2 adders were developed heuristically with no algebraic or combinatorial foundation The code rate r for a convolutional code is defined as r = kn Often the manufacturers of convolutional code chips specify the code by parameters (nkL) The quantity L is called the constraint length of the code and is defined by Constraint Length L = k (m-1)

The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bit s The constraint length L is also referred to by t he capital letter K which can be confusing wit h the lower ca se k which represents the number of input bits In some books K is defined as equal to product the of k and m Often in commercial spec the codes are specified by (r K) where r = the code rate kn and K is the constraint length The constraint length K however is equal t o L œ 1 as defined in this paper I will be referring t o convolutional codes as (nkm) and not as (rK)

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code the input information sequence contains kLbits where k is the number of parallel information bits at one time interval and L is thenumber of time intervals This results in L+m stages in the trellis diagram There areexactly 2kL distinct paths in the trellis diagram and as a result an exhaustive search forthe ML sequence would have a computational complexity on the order of O[2kL]

TheViterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis At each node (state) of the trellis there are 2k calculations Thenumber of nodes per stage in the trellis is 2m Therefore the complexity of the Viterbialgorithm is on the order of O[(2k)(2m)(L+m)] This significantly reduces the number ofcalculations required to implement the ML decoding because the number of time intervalsL is now a linear factor and not an exponent factor in the complexity However therewill be an exponential increase in complexity if either k or m increases

Conclusion

Fundamentally convolutional codes do not offer more protection against noise than an equivalent block code In many cases they generally offer greater simplicity of implementation over a block code of equal power

REFERENCES

1 Herbert Taub Schilling Principle of Communication System Mcgraw-Hill 2002

2 Martin S Analog and Digital Communication System Prentice Hall 2001

3 P M Grant and D G Mcruickshank I A Glover and P M Grant Digital Communications Pearson Education 2009

4 V Pless Introduction to the Theory of Error-Correcting Codes 3rd ed New York John Wiley amp Sons 1998

5 Tomasi W Electronic Communication Systems Fundamentals Through Advanced Prentice Hall 2004

6 Lee L H C Error-Control Block Codes for Communications Engineers ArtechHouse 2000

7 httpwwwscribdcomdoc35139573Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8 httpwwwwikipediaorg

9 MacQuarie University Lecturer Notes httpwwwelecmqeduau~clfiles_pdfelec321

  • Viterbi decoding
Page 16: Analogue and Digital Communication assignment

Encoder Representations

1) Generator Representation

Generator representation shows the hardware connection of the shift register taps to the modulo-2 adders A generator vector represents the position of the taps for an output A ldquo1rdquo represents a connection and a ldquo0rdquo represents no connection

2) Tree Diagram Representation

The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder In the tree diagram a solid line represents input information bit 0 and a dashed line represents input information bit 1 The corresponding output encoded bits are shown on the branches of the tree An input information sequence defines a specific path through the tree diagram from left to right

3) State Diagram Representation

The state diagram shows the state information of a convolutional encoder The state information of a convolutional encoder is stored in the shift registers In the state diagram the state information of the encoder is shown in the circles Each new input information bit causes a transition from one state to another The path information between the states denoted as xc represents input information bit x and output encoded bits c It is customary to begin convolutional encoding from the all zerostate

4) Trellis Diagram RepresentationThe trellis diagram is basically a redrawing of the state diagram It shows all possible state transitions at each time step Frequently a legend accompanies the trellis diagram to show the state transitions and the corresponding input and output bit mappings (xc)

Catastrophic Convolutional code

Catastrophic convolutional code causes a large number of bit errors when only asmall number of channel bit errors is received This type of code needs to be avoided andcan be identified by the state diagram A state diagram having a loop in which a nonzeroinformation sequence corresponds to an all-zero output sequence identifies a catastrophicconvolutional code

Hard-Decision and Soft-Decision Decoding

Hard-decision and soft-decision decoding refer to the type of quantization used onthe received bits Hard-decision decoding uses 1-bit quantization on the received channel values Soft-decision decoding uses multi-bit quantization on the received channel values For the ideal soft-decision decoding (infinite-bit quantization) the received channel values are directly used in the channel decoder

Viterbi decoding

Viterbi decoding is the best known implementation of the maximum likely-hood decoding Here we narrow the options systematically at each time tick The principal used to reduce the choices is this

1 The errors occur infrequently The probability of error is small

2 The probability of two errors in a row is much smaller than a single error that is the errors are distributed randomly

The Viterbi decoder examines an entire received sequence of a given length The decoder computes a metric for each path and makes a decision based on this metric All paths are followed until two paths converge on one node Then the path with the higher metric is kept and the one with lower metric is discarded The paths selected are called the survivors

For an N bit sequence total numbers of possible received sequences are 2N Of these only 2kL are valid The Viterbi algorithm applies the maximum-likelihood principles to limit the comparison to 2 to the power of kL surviving paths instead of checking all paths The most common metric used is the Hamming distance metric This is just the dot product between the received codeword and the allowable codeword

Decoding Complexity for Convolutional Codes

For a general convolutional code, the input information sequence contains kL bits, where k is the number of parallel information bits per time interval and L is the number of time intervals. This results in L+m stages in the trellis diagram. There are exactly 2^(kL) distinct paths in the trellis diagram, and as a result an exhaustive search for the ML sequence would have a computational complexity on the order of O[2^(kL)].

The Viterbi algorithm reduces this complexity by performing the ML search one stage at a time in the trellis. At each node (state) of the trellis there are 2^k calculations, and the number of nodes per stage is 2^m. Therefore the complexity of the Viterbi algorithm is on the order of O[(2^k)(2^m)(L+m)]. This significantly reduces the number of calculations required to implement ML decoding, because the number of time intervals L is now a linear factor rather than an exponential factor in the complexity. However, complexity still grows exponentially if either k or m increases.
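Plugging illustrative numbers into the two complexity expressions above makes the gap concrete (the values k = 1, m = 2, L = 100 are an assumed example, not from the text).

```python
# Exhaustive vs. Viterbi decoding cost, per the formulas above:
# exhaustive search ~ 2^(kL) path checks,
# Viterbi           ~ (2^k)(2^m)(L+m) branch computations.
def exhaustive_ops(k, L):
    return 2 ** (k * L)

def viterbi_ops(k, m, L):
    return (2 ** k) * (2 ** m) * (L + m)

# Example: k = 1, m = 2, L = 100 information bits
print(exhaustive_ops(1, 100))   # 2^100 paths: utterly infeasible
print(viterbi_ops(1, 2, 100))   # 816 branch computations
```

For a mere 100 information bits the exhaustive search needs about 1.3 x 10^30 path comparisons, while the Viterbi algorithm needs only 816 branch computations, because L appears linearly rather than in the exponent.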

Conclusion

Fundamentally, convolutional codes do not offer more protection against noise than an equivalent block code; however, they generally offer greater simplicity of implementation than a block code of equal power.

REFERENCES

1. H. Taub and D. L. Schilling, Principles of Communication Systems, McGraw-Hill, 2002.

2. Martin S., Analog and Digital Communication Systems, Prentice Hall, 2001.

3. I. A. Glover and P. M. Grant, Digital Communications, Pearson Education, 2009.

4. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed., New York: John Wiley & Sons, 1998.

5. W. Tomasi, Electronic Communication Systems: Fundamentals Through Advanced, Prentice Hall, 2004.

6. L. H. C. Lee, Error-Control Block Codes for Communications Engineers, Artech House, 2000.

7. http://www.scribd.com/doc/35139573/Notes-in-Phase-Shift-Keying-Bpsk-Qpsk

8. http://www.wikipedia.org

9. Macquarie University lecture notes, ELEC321.
