DCT Lecture 24 2011
TRANSCRIPT
Convolutional codes
A convolutional encoder:
Encoding is done as a continuous process.
The data stream may be shifted into the encoder as fixed-size blocks, but it is not block coded: the encoder is a machine with memory.
Block codes are based on algebraic/combinatorial techniques.
Convolutional codes are based on construction techniques.
Convolutional codes
A convolutional code is specified by three parameters, (n, k, K) or (k/n, K):
R_c = k/n is the coding rate, determining the number of data bits per coded word.
K is the constraint length of the encoder, where the encoder has K − 1 memory elements, each of size k.
A Rate 1/2 Convolutional encoder (representations)
[Figure: two encoder realizations with coded outputs u1 and u2.]
k = 1 bit, n = 2 bits, Rate = k/n = 1/2
L = no. of memory elements = 2
K = L + 1 = 2 + 1 = 3, or (L + 1) x n = 6
k = 2 bits, n = 4 bits, Rate = k/n = 1/2
L = no. of memory elements = 2
K = L + 1 = 2 + 1 = 3, or (L + 1) x n = 12
A Rate 1/2 Convolutional encoder
[Figure: shift-register encoder. Input data bits m enter the register; the first coded bit is u1 and the second coded bit is u2; each input bit produces the output branch word (u1, u2).]
A Rate 1/2 Convolutional encoder
Message sequence: m = (1 0 1)

Time | Register contents | Output (branch word u1 u2)
t1   | 1 0 0             | 11
t2   | 0 1 0             | 10
t3   | 1 0 1             | 00
t4   | 0 1 0             | 10
t5   | 0 0 1             | 11
t6   | 0 0 0             | 00

Encoder: m = (1 0 1) → U = (11 10 00 10 11)
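The register stepping above is easy to verify in a few lines of Python. This is a minimal sketch, not the lecture's code: the function name is illustrative, and the taps (1 1 1) and (1 0 1) are read off the encoder diagram (they reappear below as the generator vectors g1 and g2).

    def conv_encode(msg_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
        """Rate-1/2, K = 3 encoder; two zero tail bits flush the K - 1 memory cells."""
        bits = list(msg_bits) + [0, 0]            # tail clears the registers
        s1 = s2 = 0                               # the two memory elements
        out = []
        for b in bits:
            window = (b, s1, s2)                  # current input + register contents
            u1 = sum(w & g for w, g in zip(window, g1)) % 2   # first coded bit
            u2 = sum(w & g for w, g in zip(window, g2)) % 2   # second coded bit
            out += [u1, u2]                       # branch word (u1, u2)
            s1, s2 = b, s1                        # shift the register
        return out

    print(conv_encode([1, 0, 1]))   # [1,1, 1,0, 0,0, 1,0, 1,1] = 11 10 00 10 11

The printed output reproduces the table row by row.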
Effective code rate
Initialize the memory before encoding the first bit (all-zero).
Clear out the memory after encoding the last bit (all-zero): a tail of zero bits is appended to the data bits.
[Data | Tail] → Encoder → Codeword
Effective code rate, where M is the number of data bits and k = 1 is assumed:
R_eff = M / (n(M + K − 1)) < R_c
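As a quick check of this formula with the encoder above (k = 1, n = 2, K = 3): a message of M = 3 bits gets a tail of K − 1 = 2 zeros, so n(M + K − 1) = 10 coded bits are produced and R_eff = 3/10 = 0.3, noticeably below R_c = 1/2. The loss vanishes as M grows.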
Encoder representation
Vector representation:
g1 = (1 1 1)
g2 = (1 0 1)
[Figure: the encoder with input m, coded outputs u1 and u2, and branch word u1 u2.]
Encoder representation
Impulse response representation:

Register contents | Branch word (u1 u2)
1 0 0             | 11
0 1 0             | 10
0 0 1             | 11

Input sequence:  1 0 0
Output sequence: 11 10 11

The output for any message is the modulo-2 sum of shifted impulse responses:

Input m | Output
1       | 11 10 11
0       |    00 00 00
1       |       11 10 11
Modulo-2 sum: 11 10 00 10 11
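The shift-and-add view translates directly into code. A minimal sketch (illustrative names, not the lecture's code), taking the impulse response 11 10 11 from the table above:

    def encode_by_superposition(msg_bits, impulse=(1, 1, 1, 0, 1, 1), n=2):
        """Modulo-2 superposition of shifted impulse responses, one per input bit."""
        out = [0] * (n * (len(msg_bits) - 1) + len(impulse))
        for i, b in enumerate(msg_bits):
            if b:                                  # each input 1 adds a shifted impulse response
                for j, h in enumerate(impulse):
                    out[n * i + j] ^= h            # modulo-2 sum
        return out

    print(encode_by_superposition([1, 0, 1]))      # [1,1, 1,0, 0,0, 1,0, 1,1] = 11 10 00 10 11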
Encoder representation
Polynomial representation: each generator vector has an associated generator polynomial, gi(X) = g0(i) + g1(i)·X + g2(i)·X^2:

g1(X) = 1 + X + X^2
g2(X) = 1 + X^2

The output sequence is found as follows:

U(X) = m(X) g1(X) interlaced with m(X) g2(X)
Encoder representation
In more detail:

m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X) g2(X) = (1 + X^2)(1 + X^2) = 1 + X^4

m(X) g1(X) = 1 + X + 0·X^2 + X^3 + X^4
m(X) g2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4

U(X) = (1,1) + (1,0) X + (0,0) X^2 + (1,0) X^3 + (1,1) X^4
U    = 11 10 00 10 11
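The polynomial arithmetic can be done mechanically as well. A short sketch (illustrative, not from the lecture) multiplying coefficient lists over GF(2), lowest degree first, and interlacing the two products:

    def gf2_mul(a, b):
        """Polynomial product over GF(2); XOR is addition mod 2."""
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] ^= ai & bj
        return prod

    m  = [1, 0, 1]        # m(X)  = 1 + X^2
    g1 = [1, 1, 1]        # g1(X) = 1 + X + X^2
    g2 = [1, 0, 1]        # g2(X) = 1 + X^2

    p1 = gf2_mul(m, g1)   # [1, 1, 0, 1, 1] = 1 + X + X^3 + X^4
    p2 = gf2_mul(m, g2)   # [1, 0, 0, 0, 1] = 1 + X^4

    U = [bit for pair in zip(p1, p2) for bit in pair]
    print(U)              # [1,1, 1,0, 0,0, 1,0, 1,1] = 11 10 00 10 11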
State diagram
A finite-state machine only encounters a finite number of states.
State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine.
In a convolutional encoder, the state is represented by the content of the memory.
Hence, there are 2^(K−1) states, where K is the constraint length.
State diagram
A state diagram is a way to represent the encoder.
A state diagram contains all the states and all possible transitions between them.
Only two transitions initiate from a state.
Only two transitions end up in a state.
A Rate 1/2 Convolutional encoder
[Figure: the same encoder as before: input data bits m, first coded bit u1, second coded bit u2, output branch word (u1, u2).]
State diagram
Current state | Input | Next state | Output
S0 = 00       | 0     | S0         | 00
S0 = 00       | 1     | S2         | 11
S1 = 01       | 0     | S0         | 11
S1 = 01       | 1     | S2         | 00
S2 = 10       | 0     | S1         | 10
S2 = 10       | 1     | S3         | 01
S3 = 11       | 0     | S1         | 01
S3 = 11       | 1     | S3         | 10

[Figure: state diagram over S0, S1, S2, S3 with the eight transitions above, each labeled input/output (branch word): 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
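The table can also be generated programmatically, which scales to larger constraint lengths. A sketch under the same generators (names illustrative):

    def branch(state, b, g1=(1, 1, 1), g2=(1, 0, 1)):
        """One transition: returns (next state, branch word) for input bit b."""
        window = (b,) + state
        u1 = sum(w & g for w, g in zip(window, g1)) % 2
        u2 = sum(w & g for w, g in zip(window, g2)) % 2
        return (b, state[0]), (u1, u2)             # input shifts into the register

    label = {(0, 0): "S0", (0, 1): "S1", (1, 0): "S2", (1, 1): "S3"}
    for state in sorted(label):
        for b in (0, 1):
            nxt, (u1, u2) = branch(state, b)
            print(label[state], b, label[nxt], f"{u1}{u2}")

The printed rows reproduce the table above.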
Trellis Representation
The trellis diagram is an extension of the state diagram that shows the passage of time.
[Figure: one trellis stage from time t_i to t_{i+1}, with states S0 = 00, S1 = 01, S2 = 10, S3 = 11 and the eight branches labeled input/output: 0/00, 1/11 from S0; 0/11, 1/00 from S1; 0/10, 1/01 from S2; 0/01, 1/10 from S3.]
Trellis
[Figure: the trellis expanded over times t1 ... t6, each stage repeating the full set of branch labels.]
Input bits: 1 0 1 0 0 (the last two are tail bits)
Output bits: 11 10 00 10 11
Trellis
[Figure: the same trellis showing only the branches that can actually be traversed, with the message path highlighted.]
Input bits: 1 0 1 0 0 (the last two are tail bits)
Output bits: 11 10 00 10 11
Block diagram of the DCS
Information source → Rate 1/n conv. encoder → Modulator → Channel → Demodulator → Rate 1/n conv. decoder → Information sink

Input sequence: m = (m1, m2, ..., mi, ...)
Codeword sequence: U = G(m) = (U1, U2, ..., Ui, ...)
Received sequence: Z = (Z1, Z2, ..., Zi, ...)
Soft and hard decision decoding
In hard decision:
The demodulator makes a firm or hard decision whether 1 or 0 was transmitted, and provides no other information for the decoder, such as the reliability of the decision.
In soft decision:
The demodulator provides the decoder with some side information together with the decision. The side information provides the decoder with a measure of confidence for the decision.
Soft and hard decision decoding
ML (Maximum Likelihood) hard-decision decoding rule:
Choose the path in the trellis with minimum Hamming distance from the received sequence.
ML soft-decision decoding rule:
Choose the path in the trellis with minimum Euclidean distance from the received sequence.
Decoding of Convolutional Codes
Maximum likelihood decoding of convolutional codes:
Finding the code branch in the trellis that was most likely transmitted.
Based on calculating code Hamming distances for each branch forming the encoded word.
Assume that the information symbols applied to an AWGN channel are equally likely and independent.

Non-erroneous code: x = x0 x1 x2 ... xj ...
Received code: y = y0 y1 ... yj ...
[Figure: x enters the channel; y enters the decoder (= distance calculation & comparison).]
Decoding of Convolutional Codes
The probability to decode the symbols is then

p(y, x) = Π_j p(yj | xj)

where x is the non-erroneous code and y the received code.
The most likely path through the trellis will maximize this metric. Since probabilities are often small numbers, ln() is often taken of both sides, yielding

ln p(y, x) = Σ_j ln p(yj | xj)
Example of Exhaustive Maximum Likelihood Detection
Assume a three-bit message is transmitted, encoded by the (n, k, K) = (2, 1, 3) convolutional encoder above. To clear the decoder, two zero bits are appended after the message. Thus 5 bits are encoded, resulting in 10 bits of code. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is produced (including some errors). What comes out of the decoder, i.e., what was most likely the transmitted code, and what were the respective message bits?
Example of Exhaustive Maximum Likelihood Detection
[Figure: the trellis with states a, b, c, d, annotated with the decoder outputs if each path is selected.]
Example of Exhaustive Maximum Likelihood Detection

Received sequence:  10 01 10 11 00
All-zero sequence:  00 00 00 00 00
Hamming distance:   5
Path metric:        5(−2.30) + 5(−0.11) = −12.05

ln[p(0|0)] = ln[p(1|1)] = ln(0.9) = −0.11
ln[p(1|0)] = ln[p(0|1)] = ln(0.1) = −2.30
Example of Exhaustive Maximum Likelihood Detection
[Figure: path metrics computed for all paths through the trellis.]
Example of Exhaustive Maximum Likelihood Detection
Correct bits: 1 + 1 + 2 + 2 + 2 = 8; 8(−0.11) = −0.88
False bits:   1 + 1 + 0 + 0 + 0 = 2; 2(−2.30) = −4.60
Total path metric: −5.48
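Both metrics are easy to reproduce numerically. In the sketch below, the all-zero candidate is from the slide; the second candidate, 00 11 10 11 00 (message 0 1 0 plus tail), is inferred from the correct/false bit counts above rather than stated explicitly in the lecture:

    import math

    ln_ok, ln_err = math.log(0.9), math.log(0.1)   # -0.105 and -2.303
    received = "1001101100"                        # 10 01 10 11 00

    def path_metric(candidate):
        """Sum of per-bit log-likelihoods against the received sequence."""
        return sum(ln_ok if r == c else ln_err for r, c in zip(received, candidate))

    print(round(path_metric("0000000000"), 2))     # all-zero path: -12.04
    print(round(path_metric("0011101100"), 2))     # best path:      -5.45

The small differences from the slide's −12.05 and −5.48 come from the slide rounding the logarithms to −0.11 and −2.30.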
The Viterbi Algorithm
The Viterbi algorithm performs ML decoding:
It finds the path through the trellis with the largest metric.
It processes the demodulator outputs in an iterative manner.
At each step in the trellis, it compares the metrics of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric.
It proceeds through the trellis by eliminating the least likely paths.
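These steps translate into a compact hard-decision decoder for the rate-1/2, K = 3 code used in these slides. A minimal sketch (illustrative structure and names, not the lecture's code); it minimizes cumulative Hamming distance, which for hard decisions is equivalent to maximizing the likelihood metric:

    def viterbi_decode(z_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
        """Hard-decision Viterbi decoding; expects two tail bits at the end."""
        def branch(state, b):                      # one trellis branch
            window = (b,) + state
            u1 = sum(w & g for w, g in zip(window, g1)) % 2
            u2 = sum(w & g for w, g in zip(window, g2)) % 2
            return (b, state[0]), (u1, u2)         # next state, branch word

        states = [(0, 0), (0, 1), (1, 0), (1, 1)]
        INF = float("inf")
        metric = {s: 0 if s == (0, 0) else INF for s in states}  # start in state 00
        path = {s: [] for s in states}

        for i in range(0, len(z_bits), 2):         # one received branch word per stage
            r = z_bits[i:i + 2]
            new_metric = {s: INF for s in states}
            new_path = {s: [] for s in states}
            for s in states:
                for b in (0, 1):
                    nxt, out = branch(s, b)
                    d = (out[0] != r[0]) + (out[1] != r[1])  # branch Hamming distance
                    if metric[s] + d < new_metric[nxt]:      # keep only the survivor
                        new_metric[nxt] = metric[s] + d
                        new_path[nxt] = path[s] + [b]
            metric, path = new_metric, new_path

        return path[(0, 0)][:-2]                   # tail forces state 00; drop tail bits

    # Z = 11 10 00 00 11 (one channel error) recovers m = (1 0 1):
    print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 0, 1, 1]))   # [1, 0, 1]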
Example of Hard decision Viterbi decoding
[Figure: the trellis over t1 ... t6 with all branch labels.]
m = (1 0 1)
U = (11 10 00 10 11)
Z = (11 10 00 00 11)
Example of Hard decision Viterbi decoding (cont'd)
Label all branches with the branch metric (the Hamming distance between the branch word and the received pair).
[Figure: the trellis over t1 ... t6 with every branch labeled by its Hamming distance to Z; stage i = 1.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: state metrics (partial path metrics) accumulated at stage i = 2.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: state metrics accumulated at stage i = 3.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: stage i = 4; of the two paths entering each state, the non-survivor is discarded (marked X).]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: stage i = 5; survivors kept, discarded paths marked X.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: stage i = 6; only the survivor into the final all-zero state remains.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: the surviving path traced back through the trellis.]
m̂ = (1 0 1) = m
U = (11 10 00 10 11), Z = (11 10 00 00 11)
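Running the decoder sketch from the Viterbi algorithm slide on this Z, viterbi_decode([1, 1, 1, 0, 0, 0, 0, 0, 1, 1]), returns [1, 0, 1]: the single channel error is corrected, in agreement with the trellis trace above.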
Example of Hard decision Viterbi decoding
[Figure: the trellis over t1 ... t6 with all branch labels.]
m = (1 0 1)
U = (11 10 00 10 11)
Z = (01 10 11 10 11)
Example of Hard decision Viterbi decoding (cont'd)
[Figure: branches labeled with Hamming distances to the new received sequence Z; stage i = 1.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: state metrics accumulated at stage i = 2.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: state metrics accumulated at stage i = 3.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: stage i = 4; non-surviving paths are discarded.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: stage i = 5; survivors kept, discarded paths removed.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: stage i = 6; only the survivor into the final all-zero state remains.]
Example of Hard decision Viterbi decoding (cont'd)
[Figure: the surviving path traced back through the trellis.]
m̂ = (1 0 0) ≠ m = (1 0 1)
U = (11 10 00 10 11), Z = (01 10 11 10 11)
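The failure is easy to quantify: Z differs from U in three bit positions, one more than this code can correct (see the performance bounds below). A one-line check (illustrative):

    U = "1110001011"                          # 11 10 00 10 11
    Z = "0110111011"                          # 01 10 11 10 11
    print(sum(u != z for u, z in zip(U, Z)))  # 3 channel errors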
Free distance of Convolutional Codes
Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
This is the minimum distance in the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
It is called the minimum free distance, or the free distance of the code, denoted by d_free or d_f.
Free distance
[Figure: the trellis with each branch labeled by its Hamming weight and the all-zero path marked. The minimum-weight path that diverges from and remerges with the all-zero path has branch weights 2 + 1 + 2, giving d_f = 5.]
Performance bounds
The error correction capability of convolutional codes is given by
t = ⌊(d_f − 1) / 2⌋
The coding gain is upper bounded by
coding gain ≤ 10 log10(R_c d_f) dB
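For the code used throughout these slides, d_f = 5 and R_c = 1/2, so t = ⌊(5 − 1)/2⌋ = 2 errors can be corrected (consistent with the two Viterbi examples: one error corrected, three errors not), and the coding gain is at most 10 log10(0.5 × 5) ≈ 4 dB.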
How to end up decoding?
In the previous examples it was assumed that the register was finally filled with zeros, thus finding the minimum distance path.
In practice, with long codewords, zeroing requires feeding a long sequence of zeros at the end of the message bits: this wastes channel capacity and introduces delay.
To avoid this, path memory truncation is applied:
It has been experimentally tested that a negligible error-rate increase occurs when truncating at 5L.
Note that this also introduces a delay of 5L.