TRANSCRIPT
Equalization
Fig. Digital communication system using an adaptive equaliser at the receiver.
Equalization compensates for or mitigates inter-symbol interference (ISI) created by multipaths in time dispersive channels (frequency selective fading channels).
Equalizer must be “adaptive”, since channels are time varying.
Zero forcing equalizer
Design from frequency domain viewpoint.
∴ The equalizer must compensate for the channel distortion.
An inverse channel filter completely eliminates the ISI caused by the channel ⇒ the Zero Forcing (ZF) equaliser.
Fig. Pulses having a raised cosine spectrum
Example:
A two-path channel with impulse response
The transfer function is
The inverse channel filter has the transfer function
Since DSP is generally adopted for automatic equalizers, it is convenient to use a discrete-time (sampled) representation of the signal.
Received signal
For simplicity, assume say
Denote a T-second delay element by z⁻¹; then
The transfer function of the inverse channel filter is
This can be realized by a circuit known as the linear transversal filter.
The exact ZF equalizer is of infinite length, but it is usually implemented as a truncated (finite-length) approximation.
For , a 2-tap version of the ZF equalizer has coefficients
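The truncation can be illustrated with a short numerical sketch. The two-path channel coefficient `a` and the tap count `N` below are assumptions for illustration, not the values from the example above (which are not reproduced in this transcript):

```python
# Sketch: truncated zero-forcing equalizer for an assumed two-path
# channel f(z) = 1 + a*z^(-1).  The exact inverse is the geometric
# series 1/(1 + a z^(-1)) = sum_n (-a)^n z^(-n); we keep N taps.
a = 0.5          # assumed second-path gain (illustrative)
N = 8            # assumed number of equalizer taps

w = [(-a) ** n for n in range(N)]     # truncated inverse-filter taps

# Verify: convolving channel with equalizer should approximate a unit pulse.
channel = [1.0, a]
out = [0.0] * (len(channel) + N - 1)
for i, c in enumerate(channel):
    for j, wn in enumerate(w):
        out[i + j] += c * wn

print(out[:4])   # → [1.0, 0.0, 0.0, 0.0]
```

Residual ISI survives only in the final sample, with magnitude a·aᴺ⁻¹, so adding taps shrinks the error geometrically.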
Modeling of ISI channels
Complex envelope of any modulated signal can be expressed as
where ha(t) is the amplitude shaping pulse.
In general, ASK, PSK, and QAM are included, but most FSK waveforms are not.
Received complex envelope is
where is channel impulse response.
The maximum-likelihood receiver has an impulse response matched to f(t).
Output:
where nb(t) is output noise and
Least Mean Square Equalizers
Fig. A basic equaliser during training
Minimization of the mean square error (MSE) ⇒ MMSE.
Equalizer input
h(t): impulse response of tandem combination of transmit filter, channel and receiver filter.
In the absence of noise and ISI
The error due to noise and ISI at t = kT is given by
The MSE is
In order to minimize , we require
The optimum tap coefficients are obtained as W = R−1 P.
But solving this requires knowledge of the xk's, which are the transmitted pilot data.
A given sequence of xk's, called a test signal, reference signal or training signal, is transmitted (periodically) prior to the information signal.
By detecting the training sequence, the adaptive algorithm in the receiver is able to compute and update the optimum wnk's until the next training sequence is sent.
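The training loop can be sketched with the LMS update w ← w + μ·e·u. The channel, step size, noise level, and pilot sequence below are illustrative assumptions, not values from the slides:

```python
# Sketch of LMS tap adaptation during training (all numbers assumed).
import random

random.seed(0)
h = [1.0, 0.4]              # assumed channel: main path plus one echo
mu = 0.05                   # assumed LMS step size
w = [0.0, 0.0, 0.0]         # 3-tap equalizer, initialized to zero

x = [random.choice([-1.0, 1.0]) for _ in range(2000)]   # known pilot symbols
# Channel output (plus a little noise) seen by the equalizer input
y = [sum(h[j] * x[k - j] for j in range(len(h)) if k - j >= 0)
     + 0.01 * random.gauss(0, 1) for k in range(len(x))]

for k in range(len(w), len(x)):
    u = y[k - len(w) + 1:k + 1][::-1]         # tap inputs, newest first
    z = sum(wi * ui for wi, ui in zip(w, u))  # equalizer output
    e = x[k] - z                              # error against known pilot x[k]
    w = [wi + mu * e * ui for wi, ui in zip(w, u)]  # LMS update

print(w)   # converged taps approximate the inverse of the assumed channel
```

With this minimum-phase channel the taps settle near the series expansion of 1/(1 + 0.4·z⁻¹), i.e. roughly [1, −0.4, 0.16].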
Example: Determine the tap coefficients of a 2-tap MMSE for:
Now, given that
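Since the slide's numerical values for R and P are not reproduced in this transcript, here is a sketch of the W = R⁻¹P computation with placeholder values:

```python
# Sketch: solving W = R^(-1) P for a 2-tap MMSE equalizer.
# R and P below are illustrative placeholders, not the slide's values.
R = [[1.16, 0.40],
     [0.40, 1.16]]          # assumed input autocorrelation matrix
P = [1.0, 0.0]              # assumed cross-correlation vector

# 2x2 matrix inverse applied to P, done by hand to stay dependency-free.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
w0 = ( R[1][1] * P[0] - R[0][1] * P[1]) / det
w1 = (-R[1][0] * P[0] + R[0][0] * P[1]) / det
print(w0, w1)               # optimum taps satisfying R W = P
```

The result can be checked by substituting back: R·W should reproduce P.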
Mean Square Error (MSE) for optimum weights
Let:
Now, the optimum weight vector was obtained as
Substituting this into the MSE formula above, we have
Now, apply 3 matrix algebra rules:
For any square matrix
For any matrix product
For any square matrix
For the example
MSE for zero forcing equalizers
Recall that for the ZF equalizer
Assuming the same channel and noise as for the MMSE equalizer
for MMSE
The ZF equalizer is an inverse filter; it amplifies noise at frequencies where the channel transfer function has high attenuation.
The LMS algorithm tends to find optimum tap coefficients that compromise between the effects of ISI and noise enhancement, while the ZF equalizer design does not take noise into account.
Diversity Techniques
Mitigates fading effects by using multiple received signals which experienced different fading conditions.
Space diversity: with multiple antennas.
Polarization diversity: using differently polarized waves.
Frequency diversity: with multiple frequencies.
Time diversity: by transmitting the same signal at different times.
Angle diversity: using directive antennas aimed in different directions.
Signal combining methods:
Maximal ratio combining.
Equal gain combining.
Selection (switching) combining.
Space diversity is classified into micro-diversity and macro-diversity.
Micro-diversity: Antennas are spaced closely to the order of a wavelength. Effective for fast fading where signal fades in a distance of the order of a wavelength.
Macro (site) diversity: Antennas are spaced widely enough to cope with the topographical conditions (e.g. buildings, roads, terrain). Effective for shadowing, where the signal fades due to topographical obstructions.
PDF of SNR for diversity systems
Consider an M-branch space diversity system.
Signal received at each branch has Rayleigh distribution.
All branch signals are independent of one another.
Assume the same mean signal and noise power ⇒ the same mean SNR for all branches.
Instantaneous
The probability that the instantaneous SNR takes a value less than some threshold x is
Selection Diversity
Branch selection unit selects the branch that has the largest SNR.
The event in which the selector output SNR is less than some value x is exactly the set of events in which every branch SNR is simultaneously below x.
Since independent fading is assumed in each of the M branches,
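This product form gives the outage expression P[γS < x] = (1 − e^(−x/Γ))^M for independent Rayleigh branches with common mean SNR Γ. A small numerical sketch (the threshold and mean SNR values are illustrative assumptions):

```python
# Sketch: outage probability of M-branch selection diversity with
# independent, identically faded Rayleigh branches.
import math

def selection_outage(x, mean_snr, M):
    """Probability that the selected (largest) branch SNR is below x."""
    single = 1.0 - math.exp(-x / mean_snr)   # one Rayleigh branch below x
    return single ** M                        # all M independent branches below x

G = 10.0                     # assumed mean branch SNR (linear scale)
for M in (1, 2, 4):
    print(M, selection_outage(1.0, G, M))
```

The printed values shrink rapidly with M, which is the diversity gain of branch selection.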
Maximal Ratio Combining
is the complex envelope of the signal in the k-th branch.
The complex equivalent low-pass signal u(t) containing the information is common to all branches.
Assume u(t) normalized to unit mean square envelope such that
Assume time variation of gk (t) is much slower than that of u(t) .
Let nk(t) be the complex envelope of the additive Gaussian noise in the k-th receiver (branch).
⇒ Usually all the Nk are equal.
Now define SNR of k-th branch as
Now,
where the complex combining weight factors appear; these are changed from instant to instant as the branch signals change over the short-term fading.
How should the weights be chosen to achieve maximum combiner output SNR at each instant?
Assuming nk(t)’s are mutually independent (uncorrelated), we have
Instantaneous output SNR, ,
Apply the Schwarz Inequality for complex valued numbers.
The equality holds if for all k, where K is an arbitrary complex constant.
Let
with equality holding if and only if , for each k.
The optimum weight for each branch has a magnitude proportional to the signal magnitude and inversely proportional to the branch noise power level, and a phase that cancels the signal (channel) phase.
This phase alignment, "co-phasing", allows coherent addition of the branch signals.
each has a chi-square distribution.
is distributed as chi-square with 2M degrees of freedom.
The average SNR is simply the sum of the individual mean SNRs of the branches, each of which is Γ.
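A quick Monte-Carlo sketch of this result. The branch count, mean SNR, and the exponentially distributed branch-SNR model (which follows from Rayleigh fading) are illustrative assumptions:

```python
# Sketch: with MRC the instantaneous output SNR is the sum of the branch
# SNRs, so the average output SNR is M times the per-branch mean G.
import random

random.seed(1)
G, M, trials = 10.0, 4, 200_000   # assumed mean branch SNR, branches, trials

total = 0.0
for _ in range(trials):
    # each Rayleigh-faded branch has exponentially distributed SNR, mean G
    total += sum(random.expovariate(1.0 / G) for _ in range(M))
avg = total / trials
print(avg)   # close to M * G = 40
```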
Convolutional Codes
Department of Electrical Engineering
Wang Jin
Overview
Background
Definition
Speciality
An Example
State Diagram
Code Trellis
Transfer Function
Summary
Assignment
Background
A convolutional code is a kind of code used in digital communication systems
Used over additive white Gaussian noise (AWGN) channels
Improves the performance of radio and satellite communication systems
Includes two parts: encoding and decoding
Block Codes vs Convolutional Codes
Block codes take k input bits and produce n output bits, where k and n are large
There is no data dependency between blocks
Useful for data communications
Convolutional codes take a small number of input bits and produce a small number of output bits each time period
Data passes through convolutional codes in a continuous stream
Useful for low-latency communication
Definition
A type of error-correction code in which each k-bit information symbol (each k-bit string) to be encoded is transformed into an n-bit symbol, where n > k
The transformation is a function of the last M information symbols, where M is the constraint length of the code
Speciality
k bits are input, n bits are output
k and n are very small (usually k = 1~3, n = 2~6); frequently we will see that k = 1
Output depends not only on the current set of k input bits, but also on past input
The "constraint length" M is defined as the number of shifts over which a single message bit can influence the encoder output
An Example
A simple rate k/n= 1/2 convolutional code encoder (M=3)
Each box represents one element of a shift register
[Figure: encoder block diagram; binary information digits enter a shift register, two modulo-2 adders (+) produce the code digits at the output.]
An Example (cont'd)
The content of the shift register is shifted from left to right
The plus sign represents modulo-2 (XOR) addition
The encoder outputs are multiplexed into serial binary digits
For every binary digit that enters the encoder, two code digits are output
A generator sequence specifies the connections of a modulo-2 (XOR) adder to the encoder shift register
In this example there are two generator sequences, g1 = [1 1 1] and g2 = [1 0 1]
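The encoder defined by these two generator sequences can be sketched directly in code; this is a minimal implementation of the rate-1/2, M = 3 example above, with the register starting all-zero:

```python
# Sketch of the rate-1/2 encoder with g1 = [1 1 1], g2 = [1 0 1].
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]               # two memory elements (constraint length M = 3)
    out = []
    for b in bits:
        window = [b] + state     # current input plus the past two inputs
        c1 = sum(g * x for g, x in zip(g1, window)) % 2   # first XOR adder
        c2 = sum(g * x for g, x in zip(g2, window)) % 2   # second XOR adder
        out += [c1, c2]
        state = [b, state[0]]    # register shifts left to right
    return out

print(conv_encode([0, 1, 0, 0, 1]))   # → [0,0, 1,1, 1,0, 1,1, 1,1]
```

For the input 0 1 0 0 1 this reproduces the output 00 11 10 11 11 used in the trellis encoding example later in these slides.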
An Example (cont’d)
[Figure: register contents over time; t = 0: x2 x1 x0, t = 1: x3 x2 x1, t = 2: x4 x3 x2, t = 3: x5 x4 x3. By t = 3 the initial contents (x2, x1, x0) have been completely shifted out.]
To Determine the Output Codeword
There are essentially two ways:
State diagram approach
Transform-domain approach
We concentrate only on the state diagram approach
The contents of the shift registers make up the "state" of the code:
The most recent input is the most significant bit of the state
The oldest input is the least significant bit of the state (this convention is sometimes reversed)
Arcs connecting states represent allowable transitions
Arcs are labeled with the output bits transmitted during the transition
To Determine the Output Codeword: State Diagram
Rate k/n = 1/2 convolutional code encoder (M = 3)
The state is defined by the most recent (M−1) message bits moved into the encoder
[Figure: encoder with register stages D0 D1 D2 and two modulo-2 adders (+) producing the code digits; the state is the most recent M−1 digits.]
State Diagram (cont’d)
There are four states [00], [01], [10], [11] corresponding to the (M-1) bits
Generally, the encoder is assumed to start in the all-zero [00] state
State Diagram (cont’d)
Easiest way to determine the state diagram is to first determine the state table as shown below
Input (x3) | Start state (x2, x1) | Final state (x3, x2) | Output
0          | 00                   | 00                   | 00
1          | 00                   | 10                   | 11
0          | 01                   | 00                   | 11
1          | 01                   | 10                   | 00
0          | 10                   | 01                   | 10
1          | 10                   | 11                   | 01
0          | 11                   | 01                   | 01
1          | 11                   | 11                   | 10
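The state table above can also be generated programmatically; a small sketch using the example's generators g1 = [1 1 1], g2 = [1 0 1]:

```python
# Sketch: enumerating the state table for g1 = [1 1 1], g2 = [1 0 1].
def transition(state, b, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Given start state (x2, x1) and input bit b, return (final_state, output)."""
    window = (b,) + state                     # (x3, x2, x1)
    out = (sum(g * x for g, x in zip(g1, window)) % 2,
           sum(g * x for g, x in zip(g2, window)) % 2)
    return (b, state[0]), out                 # final state is (x3, x2)

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for b in (0, 1):
        nxt, out = transition(state, b)
        print(b, state, nxt, out)             # one row of the state table
```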
State Diagram (cont’d)
1/01 means (for example), that the input binary digit to the encoder was 1 and the corresponding codeword output is 01
[Figure: state diagram with states 00, 01, 10, 11; arcs labeled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
Trellis Representation of Convolutional Code
The state diagram is "unfolded" as a function of time
Time is indicated by movement towards the right
Code Trellis
It is simply another way of drawing the state diagram
The code trellis for the rate k/n = 1/2, M = 3 convolutional code is shown below
[Figure: one trellis section; start states 00, 01, 10, 11 on the left, final states on the right; branches labeled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
Encoding Example Using Trellis Diagram
The trellis diagram, like the state diagram, shows the evolution of the encoder state in time
Consider the r = 1/2, M = 3 convolutional code
Encoding Example Using Trellis Diagram
[Figure: trellis over five time steps, states 00, 01, 10, 11, with the encoding path for the input below traced through the branch labels.]
Input data 0 1 0 0 1
Output 00 11 10 11 11
Distance Structure of a Convolutional Code
The Hamming distance between any two distinct code sequences is the number of bits in which they differ.
The minimum free Hamming distance of a convolutional code is the smallest Hamming distance separating any two distinct code sequences.
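A brute-force sketch of this definition for the running example. By linearity it suffices to compare nonzero codewords with the all-zero sequence; the input-length limit and the zero-padding used to flush the register are assumptions of the sketch:

```python
# Sketch: estimating the minimum free distance of the g1 = [1 1 1],
# g2 = [1 0 1] code by enumerating short nonzero inputs.
from itertools import product

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state, out = [0, 0], []
    for b in bits:
        w = [b] + state
        out += [sum(g * x for g, x in zip(g1, w)) % 2,
                sum(g * x for g, x in zip(g2, w)) % 2]
        state = [b, state[0]]
    return out

# weight of each codeword = its Hamming distance from the all-zero sequence
best = min(sum(conv_encode(list(u) + [0, 0]))   # pad to flush the register
           for n in range(1, 6)
           for u in product([0, 1], repeat=n) if any(u))
print(best)   # → 5
```

The minimum found, 5, agrees with the transfer-function analysis later in these slides.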
The Transfer Function
This is also known as the generating function or the complete path enumerator.
Consider the r = 1/2, M = 3 convolutional code example and redraw the state diagram.
[Figure: split state diagram with states a0, b, c, d, a1; branches labeled with terms such as JND², JN, JD, JD², JND.]
The Transfer Function (Cont'd)
State "a" has been split into an initial state "a0" and a final state "a1".
We are interested in the number of paths that diverge from the all-zero path at state "a" at some point in time and later remerge with the all-zero path.
Each branch transition is labeled with a term J^l N^w D^d, where l, w, d are integers such that:
l corresponds to the length of the branch
w is the Hamming weight of the input: zero for a "0" input and one for a "1" input
d is the Hamming weight of the encoder output for that branch
The Transfer Function (Cont'd)
Assuming a unity input, we can write the set of equations
By solving these equations,
From the transfer function, there is one path at a Hamming distance of 5 from the all-zero path. This path is of length 3 branches and corresponds to a difference of one input information bit from the all zero path. Other terms can be interpreted similarly. The minimum distance is thus 5.
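This interpretation can be checked numerically by counting first-remerging paths by weight. The expected series D⁵ + 2D⁶ + 4D⁷ + … (i.e. T(D) = D⁵/(1 − 2D)) is a standard result for this example, assumed here, as is the search-depth limit:

```python
# Sketch: counting paths that diverge from and first remerge with the
# all-zero path, by Hamming weight, for the g1=[1 1 1], g2=[1 0 1] code.
from collections import Counter

def step(state, b):
    """One branch: return (next_state, branch output Hamming weight)."""
    w = (b,) + state
    d = (sum(g * x for g, x in zip((1, 1, 1), w)) % 2
         + sum(g * x for g, x in zip((1, 0, 1), w)) % 2)
    return (b, state[0]), d

weights = Counter()
stack = [((1, 0), 2, 1)]      # diverging branch 00 -> 10 outputs 11 (weight 2)
while stack:
    state, w, depth = stack.pop()
    for b in (0, 1):
        nxt, d = step(state, b)
        if nxt == (0, 0):
            weights[w + d] += 1          # path remerges with all-zero path
        elif depth < 12:                 # assumed depth limit, ample for d <= 7
            stack.append((nxt, w + d, depth + 1))

print([weights[d] for d in (5, 6, 7)])   # → [1, 2, 4]
```

One path at weight 5, two at weight 6 and four at weight 7, matching the expansion of the transfer function.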
Search for Good Codes
We would like convolutional codes with large free distance
Must avoid "catastrophic codes"
Generators for the best convolutional codes are generally found via computer search
The search is constrained to codes with regular structure
The search is simplified because any permutation of identical generators is equivalent
The search is simplified because of linearity
Best Rate 1/2 Codes

M | Generators (in octal) | dfree
3 | 5 7                   | 5
4 | 15 17                 | 6
5 | 23 35                 | 7
6 | 53 75                 | 8
7 | 133 171               | 10
8 | 247 371               | 10
9 | 561 753               | 12
Best Rate 1/3 Codes

M | Generators (in octal) | dfree
3 | 5 7 7                 | 8
4 | 13 15 17              | 10
5 | 25 33 37              | 12
6 | 47 53 75              | 13
7 | 133 145 171           | 15
8 | 225 331 367           | 16
9 | 557 663 711           | 18
Best Rate 2/3 Codes

M | Generators (in octal) | dfree
2 | 17 16 15              | 4
3 | 27 75 72              | 6
4 | 236 155 337           | 7
Summary
What a convolutional code is
The transformation performed by a convolutional code
We can represent convolutional codes as generators, block diagrams, state diagrams and trellis diagrams
Convolutional codes are useful for real-time applications because they can be continuously encoded and decoded
Assignment
Question: Construct the state table and state diagram for the encoder below.
[Figure: rate-1/3 encoder with a shift register and three modulo-2 adders (+); binary information digits in (k = 1), code digits out (n = 3).]
THANK YOU