Channel Coding 1
Dr.-Ing. Dirk Wübben
Institute for Telecommunications and High-Frequency Techniques
Department of Communications Engineering
Room: N2300, Phone: 0421/218-62385
www.ant.uni-bremen.de/courses/cc1/
Lecture: Monday, 08:30 – 10:00 in N3130
Exercise: Wednesday, 15:00 – 17:00 in N1250
Dates for exercises will be announced during lectures.
Tutor: Shayan Hassanpour, Room: N2390, Phone: 218-62387
Sportturm (SpT), Rooms: C3165 and C3220
Outline Channel Coding I
1. Introduction: declarations and definitions, general principle of channel coding; structure of digital communication systems
2. Introduction to Information Theory: probabilities, measure of information; SHANNON's channel capacity for different channels
3. Linear Block Codes: properties of block codes and general decoding principles; bounds on error rate performance; representation of block codes with generator and parity check matrices; cyclic block codes (CRC codes, Reed-Solomon and BCH codes)
4. Convolutional Codes: structure, algebraic and graphical representation; distance properties and error rate performance; optimal decoding with the Viterbi algorithm
Convolutional Codes
Basics: implementation of encoder and algebraic description; graphical representation in finite state diagram and Trellis diagram
Classification of convolutional encoders: non-recursive and recursive convolutional encoders; catastrophic convolutional encoders; truncated, terminated and tailbiting convolutional codes
Optimal decoding: MAP and ML criterion; Viterbi algorithm
Puncturing of convolutional codes; distance properties of convolutional codes; error rate performance
BASICS
Basics of Convolutional Codes (Faltungscodes)
Shift register (Schieberegister) structure with Lc·k memory elements ⇒ the memory leads to statistical dependence of successive code words
In each cycle k bits are shifted in ⇒ each bit affects the output word Lc times ⇒ Lc is called the constraint length (Einflusslänge), memory depth m = Lc − 1
Coded symbols are calculated by modulo-2 additions of memory contents ⇒ generators
Code word contains n bits ⇒ code rate Rc = k/n. Our further investigation is restricted to codes with rate Rc = 1/n, i.e. k = 1!
[Figure: general shift register structure of a convolutional encoder with Lc segments of k bits each and n output bits x1, …, xn]
Structure and Encoding
Example: (n, k, Lc) = (2,1,3) convolutional code with generators g1 = 7₈ and g2 = 5₈; the encoder is non-systematic and non-recursive (NSC code)
Rc = 1/2, Lc = 3 ⇒ m = 2 ⇒ 2^m = 4 states. Information word u = [1 0 0 1 1 | 0 0] with tail bits to finish in state 0
Generator taps: g1,0 = 1, g1,1 = 1, g1,2 = 1 (g1 = 7₈) and g2,0 = 1, g2,1 = 0, g2,2 = 1 (g2 = 5₈)
u(ℓ) | state | following state | output
  1  |  00   |  10             |  11
  0  |  10   |  01             |  10
  0  |  01   |  00             |  11
  1  |  00   |  10             |  11
  1  |  10   |  11             |  01
  0  |  11   |  01             |  01
  0  |  01   |  00             |  11
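The encoding rule lends itself to a few lines of code. The following is a minimal Python sketch (our own illustration, not from the lecture; names such as encode are ours) of the shift-register encoder with g1 = 7₈ and g2 = 5₈; it reproduces the state table above:

```python
# Minimal sketch of the (2,1,3)-NSC encoder with g1 = 7_8 = [1 1 1], g2 = 5_8 = [1 0 1].
G = [[1, 1, 1],   # g1 = 7 (octal), taps g1,0 g1,1 g1,2
     [1, 0, 1]]   # g2 = 5 (octal), taps g2,0 g2,1 g2,2

def encode(u, generators=G):
    m = len(generators[0]) - 1          # memory m = Lc - 1
    reg = [0] * m                       # shift register, initially all zero
    code = []
    for bit in u:
        window = [bit] + reg            # current input plus register contents
        # each output bit is a modulo-2 sum over the generator taps
        code.append([sum(g * w for g, w in zip(gen, window)) % 2
                     for gen in generators])
        reg = [bit] + reg[:-1]          # shift the register by one position
    return code

u = [1, 0, 0, 1, 1] + [0, 0]            # information word + tail bits
print(encode(u))  # [[1,1], [1,0], [1,1], [1,1], [0,1], [0,1], [1,1]]
```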
Equivalence of Block Codes and Convolutional Codes
Convolutional codes: k information bits are mapped onto a code word x of n bits; the code words x are interdependent due to the memory
Block codes: code words x are independent
⇒ Block codes are convolutional codes without memory
Only convolutional encoded sequences of finite length are utilized in practice; a finite convolutional encoded sequence can be viewed as a single code word generated by a block code
⇒ Convolutional codes are a special case of block codes
Properties of Convolutional Codes
Only a small number of simple convolutional codes are of practical interest
Convolutional codes are not constructed by algebraic methods but by computer search (while still having the advantage of a simple mathematical description)
Convolutional decoders can easily process soft-decision input and compute soft-decision output (only hard-decision decoding has been considered for block codes)
Similar to block codes, systematic and non-systematic encoders are distinguished for convolutional codes (mostly non-systematic convolutional encoders are of practical interest)
Algebraic Description (1)
Description by generators (Generatoren) g_j in octal notation; example code with Lc = 3 and Rc = 1/2:

$\mathbf{g}_1 = [\,g_{1,0}\; g_{1,1}\; g_{1,2}\,] = [\,1\; 1\; 1\,] = 7_8$
$\mathbf{g}_2 = [\,g_{2,0}\; g_{2,1}\; g_{2,2}\,] = [\,1\; 0\; 1\,] = 5_8$

Encoding by discrete convolution in GF(2), $\mathbf{x}_1 = \mathbf{u} * \mathbf{g}_1$ and $\mathbf{x}_2 = \mathbf{u} * \mathbf{g}_2$, generally for ν = 1,…,n:

$x_\nu(\ell) = \sum_{i=0}^{m} g_{\nu,i}\, u(\ell - i) \mod 2$

Octal description:
• Left-MSB (not always in literature)
• For Lc ≠ 3κ append zeros from the left
• Example: g = [1 0 0 1 1] → 010 011 → 23₈
Problem: in the literature you will sometimes find Right-MSB, and sometimes zeros are appended from the right (→ equivalent codes)
[Figure: block diagram of the encoder with input u(ℓ), register contents u(ℓ−1), u(ℓ−2), and outputs x1(ℓ), x2(ℓ); taps g1,0 = g1,1 = g1,2 = 1 and g2,0 = 1, g2,1 = 0, g2,2 = 1]
Algebraic Description (2)
z-Transform (delay: z⁻¹) and D-Transform (delay: D):

$X(z) = \sum_{i=0}^{\infty} x(i)\cdot z^{-i}$,  $X(D) = \sum_{i=0}^{\infty} x(i)\cdot D^{i}$

Generator polynomials: $G_\nu(D) = \sum_{i=0}^{m} g_{\nu,i}\cdot D^i$ for ν = 1,…,n; for the example

$G_1(D) = g_{1,0} + g_{1,1} D + g_{1,2} D^2 = 1 + D + D^2$,  $G_2(D) = g_{2,0} + g_{2,1} D + g_{2,2} D^2 = 1 + D^2$

Encoding (polynomial multiplication): $X_\nu(D) = U(D)\cdot G_\nu(D)$

Encoded sequence: $\mathbf{X}(D) = [\,X_1(D)\; X_2(D)\; \cdots\; X_n(D)\,] = U(D)\cdot \mathbf{G}(D)$
with generator matrix $\mathbf{G}(D) = [\,G_1(D)\; G_2(D)\; \cdots\; G_n(D)\,]$

Code space: $\Gamma = \{\, U(D)\cdot\mathbf{G}(D),\; u(i)\in\mathrm{GF}(2) \,\}$
Algebraic Description (3)
Example: u = [1 0 0 1 1] ⇒ $U(D) = 1 + D^3 + D^4$

Generator polynomials: $G_1(D) = 1 + D + D^2$, $G_2(D) = 1 + D^2$

Encoding:

$X_1(D) = U(D)\cdot G_1(D) = (1 + D^3 + D^4)(1 + D + D^2) = 1 + D + D^2 + D^3 + D^6$
$X_2(D) = U(D)\cdot G_2(D) = (1 + D^3 + D^4)(1 + D^2) = 1 + D^2 + D^3 + D^4 + D^5 + D^6$

$\mathbf{X}(D) = [\,X_1(D)\; X_2(D)\,]$ ⇒ x = [11 10 11 11 01 01 11]
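To illustrate, a small Python sketch (our own, not part of the lecture) performs the same encoding as polynomial multiplication over GF(2) and reproduces X1(D) and X2(D) from this slide:

```python
# Encoding as GF(2) polynomial multiplication, coefficients listed lowest power first.
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] ^= ai & bj       # XOR accumulates modulo 2
    return prod

U  = [1, 0, 0, 1, 1]        # U(D) = 1 + D^3 + D^4
G1 = [1, 1, 1]              # 1 + D + D^2
G2 = [1, 0, 1]              # 1 + D^2

X1 = poly_mul_gf2(U, G1)    # [1,1,1,1,0,0,1] -> 1 + D + D^2 + D^3 + D^6
X2 = poly_mul_gf2(U, G2)    # [1,0,1,1,1,1,1] -> 1 + D^2 + D^3 + D^4 + D^5 + D^6
print(list(zip(X1, X2)))    # interleaved: 11 10 11 11 01 01 11
```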
Interpretation as a Block Code
Encoding can (alternatively) be described by matrix multiplication, x = u·G. The input sequence is not necessarily finite, thus the generator matrix G of convolutional codes is semi-infinite (here for n = 2):

$\mathbf{G} = \begin{bmatrix} g_{1,0}\,g_{2,0} & g_{1,1}\,g_{2,1} & \cdots & g_{1,m}\,g_{2,m} & & \\ & g_{1,0}\,g_{2,0} & \cdots & g_{1,m-1}\,g_{2,m-1} & g_{1,m}\,g_{2,m} & \\ & & \ddots & & & \ddots \end{bmatrix}$

Example from the previous slide:

$\mathbf{x} = \mathbf{u}\cdot\mathbf{G} = [\,1\;0\;0\;1\;1\,] \cdot \begin{bmatrix} 11 & 10 & 11 & & & & \\ & 11 & 10 & 11 & & & \\ & & 11 & 10 & 11 & & \\ & & & 11 & 10 & 11 & \\ & & & & 11 & 10 & 11 \end{bmatrix} = [\,11\;10\;11\;11\;01\;01\;11\,]$

G is a convolution matrix with Toeplitz structure
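As a sanity check of this block-code view, a short Python sketch (assuming NumPy is available; the construction is our own illustration) builds the Toeplitz generator matrix for K = 5, m = 2, n = 2 and verifies x = u·G mod 2:

```python
import numpy as np

g = np.array([[1, 1], [1, 0], [1, 1]])    # rows G_i = [g_{1,i} g_{2,i}], i = 0,1,2

K, m, n = 5, 2, 2
G = np.zeros((K, (K + m) * n), dtype=int)
for row in range(K):
    for i in range(m + 1):                # place G_0 ... G_m, shifted by n per row
        G[row, (row + i) * n:(row + i + 1) * n] = g[i]

u = np.array([1, 0, 0, 1, 1])
x = u @ G % 2                             # modulo-2 matrix multiplication
print(x.reshape(-1, n))                   # 11 10 11 11 01 01 11
```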
Graphical Representation in Finite State Diagram
A convolutional encoder can be interpreted as a Mealy state machine: the output signal depends on the current state and the current input signal, x(ℓ) = f_x(u(ℓ), S(ℓ)); the next state S(ℓ+1) depends on the current state and the current input, S(ℓ+1) = f_S(u(ℓ), S(ℓ))
Description by state transition diagram (Zustandsdiagramm) with 2^m states. Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈
[Figure: state diagram with the 2^m = 4 states 00, 10, 01, 11; each transition is labeled u/[x1 x2], e.g. 0/00 is the self-loop at state 00 and 1/11 leads from state 00 to state 10]
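A tiny Python sketch (our own illustration) tabulates the Mealy functions f_S and f_x for this encoder and prints every edge of the state diagram:

```python
# Enumerate all transitions of the (2,1,3)-NSC code; state = (u(l-1), u(l-2)).
for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            x1 = (u + s1 + s2) % 2       # g1 = 7_8: taps 1 1 1
            x2 = (u + s2) % 2            # g2 = 5_8: taps 1 0 1
            # edge label u/[x1 x2], next state f_S = (u, u(l-1))
            print(f"state {s1}{s2} --{u}/[{x1}{x2}]--> state {u}{s1}")
```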
Graphical Representation in the Trellis Diagram
The finite state diagram does not contain any information about time ⇒ expanding the state diagram by a temporal component results in the Trellis diagram
The Trellis starts at S0 and is fully developed after Lc segments; 2 transitions leave each state (for u ∈ {0,1}) and 2 transitions arrive in each state. Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈
[Figure: Trellis diagram over ℓ = 0,…,4 for the states 00, 10, 01, 11 with transitions labeled u/[x1 x2]; from ℓ = Lc − 1 on the Trellis is fully developed and contains all state transitions]
CLASSIFICATION OF CONVOLUTIONAL ENCODERS
Classification of Convolutional Encoders
Non-recursive, non-systematic convolutional encoders (NSC encoders): non-systematic encoders; no separation between information bits and parity bits within the code word; higher performance than systematic encoders; usually applied in practice
Systematic convolutional encoders: the code word explicitly contains the information bits; not relevant in practice due to lower performance
Exception: recursive systematic convolutional encoders for Turbo codes and Trellis Coded Modulation (TCM) → Channel Coding II
⇒ Recursive Systematic Convolutional Encoders (RSC encoders)
Recursive Convolutional Encoders (RSC-Encoders) (1)
The next state depends on the current state, the encoder input and the feedback structure of the encoder
Recursive encoders of practical interest are mostly systematic and can be derived from NSC encoders
Beginning with an NSC encoder, the generator polynomials are converted to get a systematic but recursive encoder
Generator polynomials of the RSC encoder derived from the NSC generator polynomials:

$G_1(D) \;\rightarrow\; \tilde{G}_1(D) = 1$
$G_2(D) \;\rightarrow\; \tilde{G}_2(D) = \dfrac{G_2(D)}{G_1(D)}$
→ =
Recursive Convolutional Encoders (RSC-Encoders) (2)
The output of the RSC encoder is given by

$X_1(D) = U(D)$,  $X_2(D) = U(D)\cdot\dfrac{G_2(D)}{G_1(D)} = A(D)\cdot G_2(D)$

with

$A(D) = \dfrac{U(D)}{G_1(D)}$ ⇔ $\sum_{i=0}^{m} g_{1,i}\, D^i \cdot A(D) = U(D)$

Using the delay operator D and g1,0 = 1 it follows

$a(\ell) + \sum_{i=1}^{m} g_{1,i}\, a(\ell-i) = u(\ell)$ ⇔ $a(\ell) = u(\ell) + \sum_{i=1}^{m} g_{1,i}\, a(\ell-i)$

a(ℓ) can be regarded as the current content of the register and depends on the current input u(ℓ) and the old register contents a(ℓ−i)

NSC and RSC generate the same code space:
• X(D) = U(D)·G(D) is a code word of the NSC
• the same X(D) is generated by the RSC for the information word U(D)·G1(D)

IIR: infinite impulse response; for finite output weight, wH(u) ≥ 2 is required
Recursive Convolutional Encoders (RSC-Encoders) (3)
Example: (2,1,3)-RSC encoder with generators g1 = 7₈ (recursive) and g2 = 5₈
Block diagram and state diagram:

$X_1(D) = U(D)$
$X_2(D) = A(D)\cdot(1 + D^2)$ with $A(D) := \dfrac{U(D)}{1 + D + D^2}$ ⇒ $a(\ell) = u(\ell) + a(\ell-1) + a(\ell-2)$

[Figure: block diagram with register contents a(ℓ−1), a(ℓ−2) and feedback through g1 = 7₈, and the corresponding state diagram with transitions labeled u/[x1 x2]]
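For illustration, a minimal Python sketch (our own, not from the lecture) of this RSC encoder, implementing the recursion a(ℓ) = u(ℓ) + a(ℓ−1) + a(ℓ−2) and the parity x2(ℓ) = a(ℓ) + a(ℓ−2):

```python
# Systematic recursive encoder: x1(l) = u(l), feedback through g1 = 7_8,
# feed-forward through g2 = 5_8 (so X2(D) = A(D)*(1 + D^2)).
def rsc_encode(u):
    a1 = a2 = 0                      # register contents a(l-1), a(l-2)
    code = []
    for bit in u:
        a = (bit + a1 + a2) % 2      # recursion a(l) = u(l) + a(l-1) + a(l-2)
        x2 = (a + a2) % 2            # parity bit x2(l) = a(l) + a(l-2)
        code.append((bit, x2))       # systematic output (x1, x2)
        a1, a2 = a, a1               # shift the register
    return code

print(rsc_encode([1, 0, 0, 1, 1]))   # [(1, 1), (0, 1), (0, 1), (1, 1), (1, 1)]
```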
Recursive Convolutional Encoders (RSC-Encoders) (4)
Now the other polynomial is used for feedback. Example: (2,1,3)-RSC encoder with generators g1 = 7₈ and g2 = 5₈ (recursive)

$X_1(D) = U(D)$
$X_2(D) = A(D)\cdot(1 + D + D^2)$ with $A(D) := \dfrac{U(D)}{1 + D^2}$ ⇒ $a(\ell) = u(\ell) + a(\ell-2)$

[Figure: block diagram with feedback through g2 = 5₈, and the corresponding state diagram with transitions labeled u/[x1 x2]]
Catastrophic Convolutional Encoder (1)
A catastrophic convolutional encoder can produce sequences of infinite length with finite weight that do not return to the all-zero path ⇒ a finite number of transmission errors can lead to an infinite number of decoding errors
Example: (2,1,3)-NSC encoder with generators g1 = 5₈ and g2 = 6₈: u = [1 1 1 1 1 …] → x = [11 10 00 00 00 …], i.e. wH(u) = ∞ but wH(x) = 3. For y = [00 00 00 00 00 …] the ML decoder decides for the all-zero sequence → infinitely many decoding errors
[Figure: block diagram and state diagram of the catastrophic encoder with g1 = 5₈, g2 = 6₈; the state diagram contains the zero-weight self-loop 1/00 at state 11]
Catastrophic Convolutional Encoder (2)
A convolutional encoder is catastrophic if there exists an information sequence U(D) such that wH(U(D)) = ∞ and wH(X(D)) < ∞
The encoder is noncatastrophic if the generator polynomials contain no common factor except a pure delay:

$\mathrm{GCD}\big[\,g_1(D), g_2(D), \ldots, g_n(D)\,\big] = D^{a}$ with integer a ≥ 0, or $\mathrm{GCD}\big[\,g_1(D), g_2(D), \ldots, g_n(D)\,\big] = 1$

$D^{a}$ corresponds to a simple delay (no delay corresponds to 1)
Properties:
• The state diagram of a catastrophic encoder contains a circuit in which a nonzero input sequence corresponds to an all-zero output sequence; in the example above the state diagram contains a weight-zero loop at the all-one state
• Systematic encoders are always non-catastrophic
• The encoder is non-catastrophic if at least one output stream is formed by the summation of an odd number of taps
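The GCD criterion is easy to test in software. Below is a hedged Python sketch (our own; GF(2) polynomials are represented as integer bit masks, bit i = coefficient of D^i) applied to the catastrophic example g1 = 5₈, g2 = 6₈:

```python
def gf2_mod(a, b):
    """Remainder of GF(2) polynomial division a mod b (polynomials as ints)."""
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    """Euclidean algorithm over GF(2)[D]."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

g1 = 0b101   # 5_8, taps 1 0 1 -> 1 + D^2 = (1 + D)^2 over GF(2)
g2 = 0b011   # 6_8, taps 1 1 0 -> 1 + D
print(bin(gf2_gcd(g1, g2)))
# 0b11 -> common factor 1 + D, not a pure delay D^a  =>  catastrophic encoder
```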
Truncated Convolutional Codes
Only sequences of finite length are considered in practice
For an information sequence u with arbitrary tail (Ende), the trellis can end in any state ⇒ the last state is not known by the decoder ⇒ the last bits are decoded with lower reliability ⇒ worse performance
Interpretation as block code: description by the generator matrix

$\mathbf{G} = \begin{bmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & & & \ddots \\ & & & \mathbf{G}_0 & \mathbf{G}_1 & \cdots \\ & & & & \mathbf{G}_0 & \mathbf{G}_1 \\ & & & & & \mathbf{G}_0 \end{bmatrix}$ with $\mathbf{G}_i = \begin{bmatrix} g_{1,i} & g_{2,i} & \cdots & g_{n,i} \end{bmatrix}$

G is a truncated (abgeschnitten) convolution matrix of dimension K × K·n
Terminated Convolutional Codes
Appending tail bits to the information sequence u ⇒ the encoder stops in a predefined state (usually state 0) ⇒ reliable decoding of the last information bits
The number of tail bits equals the memory of the encoder; the tail bits depend on the encoder: NSC: appending m zeros; RSC: appending m tail bits whose values depend on the last state reached by the information bits
Adding tail bits reduces the code rate Rc. For a sequence u with K information bits and a 1/n convolutional code:

$R_c^{\text{terminated}} = \dfrac{K}{n\cdot(K+m)} = R_c\cdot\dfrac{K}{K+m}$

Generator matrix:

$\mathbf{G} = \begin{bmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & & & \ddots \\ & & & \mathbf{G}_0 & \cdots & \mathbf{G}_m \end{bmatrix}$

G is a convolution matrix with Toeplitz structure, dimension K × (K+m)·n
Tailbiting Convolutional Codes
For small sequence length N the addition of tail bits significantly reduces the code rate
Tailbiting convolutional codes: the last state corresponds to the first state
• No tail bits are required
• The state machine does not necessarily start in state 0 ⇒ the decoder is more complex
• NSC: initialize the encoder with the last m bits of u
Generator matrix:

$\mathbf{G} = \begin{bmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & & & \ddots \\ \mathbf{G}_m & & & & \mathbf{G}_0 & \cdots \\ \mathbf{G}_1 & \cdots & \mathbf{G}_m & & & \mathbf{G}_0 \end{bmatrix}$

G is a circular convolution matrix of dimension K × K·n
OPTIMAL DECODING
Optimal Decoding (1)
The information sequence u contains K information bits and is encoded into the code sequence x consisting of N symbols x(ℓ) (each symbol contains n bits):

$\mathbf{x} = \big[\, x_1(0) \cdots x_n(0) \;\big|\; \cdots \;\big|\; x_1(N-1) \cdots x_n(N-1) \,\big]$

The received sequence is y, x̂ is the estimated code sequence, and a ∈ Γ denotes an arbitrary code sequence
MAP criterion (Maximum A-posteriori Probability): the optimum decoder calculates the sequence x̂ which maximizes the a-posteriori probability (APP) Pr{x̂ | y}:

$\Pr\{\hat{\mathbf{x}}\mid\mathbf{y}\} \ge \Pr\{\mathbf{a}\mid\mathbf{y}\} \;\Leftrightarrow\; \dfrac{p(\mathbf{y}\mid\hat{\mathbf{x}})\cdot\Pr\{\hat{\mathbf{x}}\}}{p(\mathbf{y})} \ge \dfrac{p(\mathbf{y}\mid\mathbf{a})\cdot\Pr\{\mathbf{a}\}}{p(\mathbf{y})} \;\Leftrightarrow\; p(\mathbf{y}\mid\hat{\mathbf{x}})\cdot\Pr\{\hat{\mathbf{x}}\} \ge p(\mathbf{y}\mid\mathbf{a})\cdot\Pr\{\mathbf{a}\}$

$\Rightarrow\; \hat{\mathbf{x}} = \arg\max_{\mathbf{a}\in\Gamma}\; p(\mathbf{y}\mid\mathbf{a})\cdot\Pr\{\mathbf{a}\}$

Pr{a}: a-priori information about the source
Optimal Decoding (2)
Maximum Likelihood criterion (ML): if all sequences are equally likely, or if the receiver does not know the statistics Pr{a} of the source, no a-priori information can be used. The decision criterion becomes

$\hat{\mathbf{x}} = \arg\max_{\mathbf{a}\in\Gamma}\; p(\mathbf{y}\mid\mathbf{a})$, i.e. $p(\mathbf{y}\mid\hat{\mathbf{x}}) \ge p(\mathbf{y}\mid\mathbf{a})$

For equally likely input sequences, $\Pr\{\mathbf{a}\} = \Pr\{\hat{\mathbf{x}}\} = 2^{-K}$, the MAP criterion and the ML criterion yield the identical (optimal) result
If the input sequences are not equally likely but the input statistic Pr{a} is not known by the receiver, the ML criterion is suboptimal
Optimal Decoding (3)
ML decoding assuming a discrete memoryless channel (DMC): the joint probabilities can be factorized,

$p(\mathbf{y}\mid\mathbf{a}) = \prod_{\ell=0}^{N-1} p\big(\mathbf{y}(\ell)\mid\mathbf{a}(\ell)\big) = \prod_{\ell=0}^{N-1}\prod_{i=1}^{n} p\big(y_i(\ell)\mid a_i(\ell)\big)$

As the logarithm is a strictly monotonically increasing function we can write

$\ln p(\mathbf{y}\mid\mathbf{a}) = \sum_{\ell=0}^{N-1}\sum_{i=1}^{n} \ln p\big(y_i(\ell)\mid a_i(\ell)\big) = \sum_{\ell=0}^{N-1}\sum_{i=1}^{n} \gamma'\big(y_i(\ell)\mid a_i(\ell)\big)$

The incremental metric $\gamma'\big(y_i(\ell)\mid a_i(\ell)\big) = \ln p\big(y_i(\ell)\mid a_i(\ell)\big)$ describes the transition probabilities of the channel
AWGN channel with antipodal transmit symbols $a_i(\ell) = \pm\sqrt{E_s/T_s}$:

$p\big(y_i(\ell)\mid a_i(\ell)\big) = \dfrac{1}{\sqrt{2\pi\,(N_0/2)/T_s}}\;\exp\!\left(-\dfrac{\big(y_i(\ell)-a_i(\ell)\big)^2}{2\,(N_0/2)/T_s}\right)$
Optimal Decoding (4)
Squared Euclidean distance (to be minimized): inserting the Gaussian density,

$\gamma'\big(y_i(\ell)\mid a_i(\ell)\big) = \ln p\big(y_i(\ell)\mid a_i(\ell)\big) = -\ln\sqrt{2\pi\,(N_0/2)/T_s} - \dfrac{\big(y_i(\ell)-a_i(\ell)\big)^2}{N_0/T_s}$

so that, up to a positive scaling and a constant C that does not depend on $a_i(\ell)$, the metric to minimize is

$\gamma\big(y_i(\ell)\mid a_i(\ell)\big) = \dfrac{2\big(y_i(\ell)-a_i(\ell)\big)^2}{N_0/T_s}$

Correlation metric (to be maximized): expanding the square, $y_i(\ell)^2$ does not depend on $a_i(\ell)$ and $a_i(\ell)^2 = E_s/T_s$ is constant, leaving only the cross term

$\gamma\big(y_i(\ell)\mid a_i(\ell)\big) = \dfrac{4\,y_i(\ell)\,a_i(\ell)}{N_0/T_s} = \pm\dfrac{4\sqrt{E_s/T_s}}{N_0/T_s}\; y_i(\ell)$

Hamming metric (to be minimized): for $y_i(\ell), a_i(\ell) \in \mathrm{GF}(2)$,

$\gamma\big(\mathbf{y}(\ell)\mid\mathbf{a}(\ell)\big) = d_H\big(\mathbf{y}(\ell),\mathbf{a}(\ell)\big)$
Optimal Decoding (5)
Direct approach for ML decoding: sum up the incremental metrics γ(yi(ℓ)|ai(ℓ)) for all possible code sequences a ∈ Γ and determine the sequence a with the minimum cost function

$\hat{\mathbf{x}} = \arg\max_{\mathbf{a}\in\Gamma} p(\mathbf{y}\mid\mathbf{a}) = \arg\max_{\mathbf{a}\in\Gamma} \ln p(\mathbf{y}\mid\mathbf{a}) = \arg\max_{\mathbf{a}\in\Gamma} \sum_{\ell=0}^{N-1}\sum_{i=1}^{n} \ln p\big(y_i(\ell)\mid a_i(\ell)\big)$

The number of sequences a corresponds to 2^K ⇒ the effort increases exponentially with K ⇒ not realizable in practice
Exploiting the Markov property of convolutional codes: the current state depends only on the previous state and the current input ⇒ successive calculation of the path metric ⇒ Viterbi algorithm
ML-Decoding of Convolutional Codes (1)
Example: [7,5]₈ NSC encoder with termination, K = 3 information bits ⇒ 2³ = 8 code words:

[000 00] → [00 00 00 00 00]    [001 00] → [00 00 11 10 11]
[100 00] → [11 10 11 00 00]    [101 00] → [11 10 00 01 11]
[010 00] → [00 11 10 11 00]    [011 00] → [00 11 01 01 11]
[110 00] → [11 01 01 11 00]    [111 00] → [11 01 10 01 11]

[Figure: terminated Trellis over ℓ = 0,…,5 showing all eight paths with transitions labeled u/[x1 x2]]
ML-Decoding of Convolutional Codes (2)
Receive word: y = [11 11 00 01 11] → ML decoding with the Hamming distance instead of the Euclidean distance!

Incremental path metric: $\gamma\big(\mathbf{y}(\ell)\mid\mathbf{a}(\ell)\big) = d_H\big(\mathbf{y}(\ell),\mathbf{a}(\ell)\big)$
Cumulative state metric: $M_j(\ell) = \min_i\big\{M_i(\ell-1) + \gamma\big(\mathbf{y}(\ell)\mid\mathbf{a}(\ell)\big)\big\}$ with $M_0(0) = 0$

[Figure: Trellis over ℓ = 0,…,5 annotated with the cumulative state metrics; the surviving path ends in state 00 with metric 1]

Estimated code word: x̂ = [11 10 00 01 11]
Estimated information word: û = [1 0 1]
Viterbi Algorithm
1) Start the Trellis at state 0, $M_0(0) = 0$
2) Calculate γ(y(ℓ)|a(ℓ)) for y(ℓ) and all possible code words a(ℓ)
3) Add the incremental path metric γ(y(ℓ)|a(ℓ)) to the old cumulative state metrics M_j(ℓ−1), j = 0,…,2^m−1: $m_{i,j}(\ell) = M_i(\ell-1) + \gamma\big(\mathbf{y}(\ell)\mid\mathbf{a}(\ell)\big)$
4) Select for each state the path with the lowest Euclidean distance (largest correlation metric) and discard the other paths (w.r.t. the cumulative state metric), $M_j(\ell) = \min_i\big\{m_{i,j}(\ell)\big\}$ ⇒ the effort increases only linearly with the observation length, not exponentially
5) Return to step 2) unless all N received words y(ℓ) have been processed
6) End of the Trellis: terminated code (Trellis ends at state 0) ⇒ select the path with the best metric M_0(N); truncated code ⇒ select the path with the overall best metric M_j(N)
7) Trace back the path selected in 6) (survivor) and output the corresponding information bits
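Putting the steps together, here is a compact Python sketch of hard-decision Viterbi decoding with the Hamming metric (our own illustration, not the lecture's reference implementation); it reproduces the example above, y = [11 11 00 01 11] → û = [1 0 1]:

```python
# Viterbi decoding for the terminated [7,5]_8 NSC code, hard decisions.
def viterbi(y, n_info):
    # state = (u(l-1), u(l-2)) encoded as integer s = 2*u(l-1) + u(l-2)
    def step(s, u):
        s1, s2 = s >> 1, s & 1
        out = ((u + s1 + s2) % 2, (u + s2) % 2)    # g1 = 7_8, g2 = 5_8
        return (u << 1) | s1, out                  # next state, output pair

    INF = float("inf")
    metric = {0: (0, [])}                          # start the Trellis at state 0
    for l, r in enumerate(y):
        new = {}
        tail = l >= n_info                         # tail bits are forced to 0
        for s, (m, path) in metric.items():
            for u in ((0,) if tail else (0, 1)):
                s_next, out = step(s, u)
                m_next = m + sum(a != b for a, b in zip(r, out))  # Hamming metric
                if m_next < new.get(s_next, (INF,))[0]:
                    new[s_next] = (m_next, path + [u])            # keep the survivor
        metric = new
    return metric[0][1][:n_info]                   # terminated: trace back from state 0

y = [(1, 1), (1, 1), (0, 0), (0, 1), (1, 1)]
print(viterbi(y, 3))                               # [1, 0, 1]
```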
Decoding of Convolutional Codes with Viterbi-Algorithm
Example: (2,1,3)-NSC encoder with generators g1 = 7₈ and g2 = 5₈, BPSK mapping {0,1} → {+1,−1}. Information sequence u = [1 0 0 1 | 0 0] ⇒ x = [−1−1 −1+1 −1−1 −1−1 | −1+1 −1−1]. Receive sequence y = [+1+1 −1+1 −1−1 −1−1 | −1+1 +1−1]
[Figure: Trellis over ℓ = 0,…,6 annotated with the incremental correlation metrics and the cumulative state metrics; the survivor ends in state 00 and yields û = [1 0 0 1 0 0] despite the chip errors in y]
Decoding with Viterbi Algorithm
Rule of Thumb: for continuous data transmission (or very long data blocks) the decoding delay would become infinite or very high
It has been found experimentally that a decision depth of 5·Lc results in a negligible performance degradation
Reason: if the decision depth is large enough, the beginnings of the different surviving paths merge and the decision for this part is reliable
[Figure: Trellis illustrating the merging of survivor paths]
Shortest Distance between Bremen and Stuttgart
Finding the path in the Trellis (the Autobahn) with the shortest path cost (distance in km) from the starting point (Bremen) to the destination (Stuttgart); a code sketch of the same principle follows below

[Figure: road network Bremen – Hannover/Osnabrück – Dortmund/Kassel – Frankfurt/Würzburg – Stuttgart with edge distances (121, 161, 211, 137, 111, 212, 190, 322, 212, 198, 226 km) and cumulative distances per city; the shortest route totals 634 km]
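The same survivor principle can be coded as a shortest-path search. The sketch below (our own; the graph topology and the distances are illustrative placeholders, not the values from the figure) uses Dijkstra's algorithm:

```python
import heapq

def shortest_path(graph, start, goal):
    """Iteratively settle the node with the smallest cumulative metric (Dijkstra)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)                       # survivor for this node is fixed
        for nxt, km in graph.get(node, {}).items():
            heapq.heappush(queue, (dist + km, nxt, path + [nxt]))

# placeholder distances in km, for illustration only
graph = {
    "Bremen":    {"Hannover": 121, "Osnabrück": 121},
    "Hannover":  {"Kassel": 161},
    "Osnabrück": {"Dortmund": 111},
    "Kassel":    {"Würzburg": 190},
    "Dortmund":  {"Frankfurt": 212},
    "Würzburg":  {"Stuttgart": 137},
    "Frankfurt": {"Stuttgart": 198},
}
print(shortest_path(graph, "Bremen", "Stuttgart"))
```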
PUNCTURING
Puncturing of Convolutional Codes (1)
Variable adjustment of the code rate by puncturing (see block codes) ⇒ single binary digits of the encoded sequence are not transmitted
Advantages of puncturing: flexible code rate without additional hardware effort; possibly lower decoding complexity. Although the performance of the original code is decreased, the performance of the punctured code is in general as good as that of a not punctured code of the same rate
Puncturing matrix of period L_P, repeated application of the columns p_i:

$\mathbf{P} = \big[\,\mathbf{p}_0\; \mathbf{p}_1\; \cdots\; \mathbf{p}_{L_P-1}\,\big] = \begin{bmatrix} p_{1,0} & p_{1,1} & \cdots & p_{1,L_P-1} \\ p_{2,0} & p_{2,1} & \cdots & p_{2,L_P-1} \\ \vdots & & & \vdots \\ p_{n,0} & p_{n,1} & \cdots & p_{n,L_P-1} \end{bmatrix}$

Each column p_i of P contains the puncturing scheme of one code word and therefore consists of n elements p_{j,i} ∈ GF(2): p_{j,i} = 0 → the j-th bit is not transmitted; p_{j,i} = 1 → the j-th bit is transmitted
Puncturing of Convolutional Codes (2)
Instead of transmitting n·L_P coded bits per period, only λ + L_P bits are transmitted due to the puncturing scheme
The parameter λ with 1 ≤ λ ≤ (n−1)·L_P adjusts code rates in the range

$R_c' = \dfrac{L_P}{L_P + \lambda}$, from $\dfrac{L_P}{L_P + 1}$ down to $\dfrac{L_P}{L_P + (n-1)\,L_P} = \dfrac{L_P}{n\cdot L_P} = \dfrac{1}{n} = R_c$ for $\lambda = (n-1)\cdot L_P$

Puncturing affects the distance properties ⇒ the optimal puncturing depends on the specific convolutional code. Attention: puncturing can produce a catastrophic convolutional encoder!
Decoding of punctured codes: placeholders for the punctured bits have to be inserted into the received sequence prior to decoding (zeros for antipodal transmission). As the distance properties are degraded by puncturing, the decision depth should be extended.
Puncturing of Convolutional Codes (3)
Example: the (2,1,3)-NSC code of rate Rc = 1/2 is punctured to rate Rc′ = 3/4 with puncturing period L_P = 3 (λ = 1):

$\mathbf{P} = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$

Encoded sequence: x1(0), x2(0), x1(1), x2(1), x1(2), x2(2), x1(3), x2(3), …
Transmit sequence: x1(0), x2(0), x1(1), x2(2), x1(3), x2(3), …
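A short Python sketch (our own illustration) of puncturing with this matrix and of re-inserting placeholders before decoding:

```python
P = [[1, 1, 0],
     [1, 0, 1]]          # columns p_l: which of the n = 2 bits of code word l survive

def puncture(code, P):
    Lp = len(P[0])
    return [bit for l, word in enumerate(code)
                for j, bit in enumerate(word) if P[j][l % Lp]]

def depuncture(stream, num_words, P):
    Lp, it = len(P[0]), iter(stream)
    # punctured positions get the neutral placeholder 0 (antipodal transmission)
    return [[next(it) if P[j][l % Lp] else 0 for j in range(len(P))]
            for l in range(num_words)]

code = [(+1, -1), (-1, +1), (+1, -1), (-1, -1)]   # (x1(l), x2(l)), BPSK-mapped
tx = puncture(code, P)            # drops x2(1) and x1(2)
print(tx)                         # [1, -1, -1, -1, -1, -1]
print(depuncture(tx, 4, P))       # [[1, -1], [-1, 0], [0, -1], [-1, -1]]
```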
DISTANCE PROPERTIES OF CONVOLUTIONAL CODES
Distance Properties of Convolutional Codes (1)
As for block codes, the distance spectrum affects the performance of convolutional codes
The free distance d_f describes the smallest Hamming distance between two sequences
The free distance d_f determines the asymptotic (Eb/N0 → ∞) performance; for moderate SNR, larger distances affect the performance as well ⇒ distance spectrum
Distance spectrum: convolutional codes are linear ⇒ comparison with the all-zero sequence (instead of comparison of all possible sequence pairs); the Hamming weight w_H of all sequences has to be calculated
Modified state diagram: the self-loop in state 0 is eliminated, state 0 becomes the first state S_b and the last state S_e
Placeholders at the state transitions: L = sequence length, W = weight of the uncoded input sequence, D = weight of the coded output sequence
Distance Properties of Convolutional Codes (2)
Example: distance spectrum for the (2,1,3)-NSC code with g1 = 7₈ and g2 = 5₈. Modified state diagram (the values of interest are given in the exponents of the placeholders):

[Figure: modified state diagram with states S_b = S_e = 00 and S01, S10, S11; edge labels WD²L (S_b → S10), WL (S01 → S10), DL (S10 → S01 and S11 → S01), WDL (S10 → S11 and S11 → S11), D²L (S01 → S_e)]

Linear equation system:

$S_{10} = WD^2L\cdot S_b + WL\cdot S_{01}$
$S_{01} = DL\cdot S_{10} + DL\cdot S_{11}$
$S_{11} = WDL\cdot S_{10} + WDL\cdot S_{11}$
$S_e = D^2L\cdot S_{01}$

Solution (transfer function):

$T(W,D,L) = \dfrac{S_e}{S_b} = \dfrac{W D^5 L^3}{1 - WDL - WDL^2}$

L = sequence length, W = weight of the uncoded input sequence, D = weight of the coded output sequence
Distance Properties of Convolutional Codes (3)
Series expansion of T(W, D, L) yields

$T(W,D,L) = \dfrac{WD^5L^3}{1 - WDL - WDL^2} = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5 + 2\,W^3D^7L^6 + W^3D^7L^7 + \cdots = \sum_w\sum_d\sum_l T_{w,d,l}\, W^w D^d L^l$

The coefficient T_{w,d,l} is the number of sequences with input weight w and output weight d of length l
Interpretation:
• 1 sequence of length l = 3 with input weight w = 1 and output weight d = 5
• 1 sequence with input weight w = 2 and output weight d = 6 of length l = 4 and of length l = 5 each
• 1 sequence with input weight w = 3 and output weight d = 7 of length l = 5 and of length l = 7, and 2 sequences of length l = 6
Distance Properties of Convolutional Codes (4)
Example: (2,1,3)-NSC code with generators g1 = 7₈ and g2 = 5₈. Presentation of the code sequences up to a maximum weight of d ≤ 7 in the Trellis diagram; free distance d_f = 5

[Figure: Trellis over ℓ = 0,…,7 highlighting the sequences with (w=1, d=5, l=3), (w=2, d=6, l=4), (w=2, d=6, l=5), (w=3, d=7, l=5), two sequences with (w=3, d=7, l=6) and one with (w=3, d=7, l=7)]

$T(W,D,L) = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5 + 2\,W^3D^7L^6 + W^3D^7L^7 + \cdots$
Distance Properties of Convolutional Codes (5)
General calculation:

$T(W,D,L) = \sum_{p=0}^{\infty} \mathbf{a}\,\mathbf{S}^p\,\mathbf{b}$

a: state transitions from state 0 into all other states with parameters W, D and L
S: state transitions between the states S01 up to S11 (without state 0)
b: state transitions of all states into state 0

For the example (ordering S01, S10, S11; row = transition from, column = transition to):

$\mathbf{a} = \begin{bmatrix} 0 & WD^2L & 0 \end{bmatrix}$,  $\mathbf{S} = \begin{bmatrix} 0 & WL & 0 \\ DL & 0 & WDL \\ DL & 0 & WDL \end{bmatrix}$,  $\mathbf{b} = \begin{bmatrix} D^2L \\ 0 \\ 0 \end{bmatrix}$
Distance Properties of Convolutional Codes (6)
For a sequence of length l the exponent of S becomes p = l − 2:

$T_3(W,D,L) = \mathbf{a}\,\mathbf{S}\,\mathbf{b} = WD^5L^3$
$T_4(W,D,L) = \mathbf{a}\,\mathbf{S}^2\,\mathbf{b} = W^2D^6L^4$
$T_5(W,D,L) = \mathbf{a}\,\mathbf{S}^3\,\mathbf{b} = W^2D^6L^5 + W^3D^7L^5$

$T_{\le 5}(W,D,L) = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5$
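The matrix form invites symbolic evaluation. A hedged Python sketch (assuming SymPy is available; our own illustration) sums a·S^p·b for the first few p:

```python
import sympy as sp

W, D, L = sp.symbols("W D L")
a = sp.Matrix([[0, W * D**2 * L, 0]])          # from state 00 (ordering S01, S10, S11)
S = sp.Matrix([[0,     W * L, 0        ],      # S01 -> S10
               [D * L, 0,     W * D * L],      # S10 -> S01, S10 -> S11
               [D * L, 0,     W * D * L]])     # S11 -> S01, S11 -> S11
b = sp.Matrix([[D**2 * L], [0], [0]])          # S01 -> state 00

T = sum((a * S**p * b)[0] for p in range(6))   # truncated power series, p = 0..5
print(sp.expand(T))
# W*D**5*L**3 + W**2*D**6*L**4 + W**2*D**6*L**5 + W**3*D**7*L**5
# + 2*W**3*D**7*L**6 + W**3*D**7*L**7 + higher-order terms
```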
Distance Properties of Convolutional Codes (7)
Number of sequences with Hamming weight d:

$a_d = \sum_w\sum_l T_{w,d,l}$ ⇒ $T(W,D,L)\big|_{W=1,\,L=1} = \sum_d a_d\, D^d$

Total number of nonzero information bits (w) associated with code sequences of Hamming weight d (useful to determine the BER):

$c_d = \sum_w\sum_l w\cdot T_{w,d,l}$ ⇒ $\dfrac{\partial\,T(W,D,L)}{\partial W}\bigg|_{W=1,\,L=1} = \sum_d \Big(\sum_w\sum_l w\cdot T_{w,d,l}\Big) D^d = \sum_d c_d\cdot D^d$

Example:

$\dfrac{\partial\,T(W,D,L)}{\partial W}\bigg|_{W=1,\,L=1} = \dfrac{\partial}{\partial W}\Big[\, WD^5 + 2W^2D^6 + 4W^3D^7 + \cdots \Big]_{W=1} = D^5 + 4D^6 + 12D^7 + \cdots$
Distance Spectra for NSC and RSC Encoders (1)
Non-systematic convolutional encoder (NSC) with generators

$G_1(D) = 1 + D + D^2$,  $G_2(D) = 1 + D^2$

Each input sequence of weight w results in output sequences of weight d(w) (a special property of this NSC code)
d_min = d_f = 5 is achieved for w = 1
The number of paths increases exponentially with the distance

[Figure: distance spectrum 10·log10(a_d) over d]
Distance Spectra for NSC and RSC Encoders (2)
Recursive convolutional encoder (RSC) with generators

$G_1(D) = 1$,  $G_2(D) = (1 + D^2)/(1 + D + D^2)$

Only for w ≥ 2 do output sequences of finite length exist
Output sequences of finite weight exist only for even input weights

[Figure: distance spectra 10·log10(a_d) over d for the NSC and RSC encoders]
ERROR RATE PERFORMANCE
Error bounds
An error occurs if the conditional probability of the correct code sequence x is lower than the probability of another sequence a ≠ x
Probability of an error event:

$P_w = \Pr\big\{\ln p(\mathbf{y}\mid\mathbf{x}) < \ln p(\mathbf{y}\mid\mathbf{a})\big\} = \Pr\left\{\sum_{\ell=0}^{N-1}\sum_{i=1}^{n}\gamma'\big(y_i(\ell)\mid x_i(\ell)\big) < \sum_{\ell=0}^{N-1}\sum_{i=1}^{n}\gamma'\big(y_i(\ell)\mid a_i(\ell)\big)\right\} = \Pr\left\{\sum_{\ell=0}^{N-1}\;\sum_{i:\,a_i(\ell)\ne x_i(\ell)} y_i(\ell) > 0\right\}$

The last step uses the correlation metric: the difference of the incremental metrics equals $\dfrac{4\sqrt{E_s/T_s}}{N_0/T_s}\, y_i(\ell)$ for all i, ℓ with $a_i(\ell) \ne x_i(\ell)$ and 0 else; at the differing positions $x_i(\ell) = -\sqrt{E_s/T_s}$
Pairwise error probability P_d of sequences a and x with distance d = d_H(a, x):

$P_d = \Pr\left\{\sum_{\ell=0}^{N-1}\;\sum_{i:\,a_i(\ell)\ne x_i(\ell)} y_i(\ell) > 0\right\}$

The sum $Y = \sum_\ell\sum_i y_i(\ell)$ over the d received symbols with $a_i(\ell)\ne x_i(\ell)$ is a Gaussian distributed random variable with
mean $\overline{Y} = -d\cdot\sqrt{E_s/T_s}$ and variance $\sigma_Y^2 = d\cdot\dfrac{N_0/2}{T_s}$

The probability of mixing up two sequences with pairwise Hamming distance d becomes

$P_d = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,\dfrac{E_s}{N_0}}\right) = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\dfrac{E_b}{N_0}}\right)$
Estimation of the sequence error probability: the probability of an error event in a sequence is union-bounded by

$P_w \le \sum_d a_d\cdot P_d = \sum_d a_d\cdot\dfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\dfrac{E_b}{N_0}}\right)$

Estimation of the bit error probability:

$P_b \le \sum_d c_d\cdot\dfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\dfrac{E_b}{N_0}}\right)$

a_d: number of sequences with Hamming weight d
c_d: number of information bits equal to one over all code sequences with Hamming weight d
Example (1)
Example: half-rate code with generators g1 = 7₈ and g2 = 5₈; the transfer function can be written as

$T(W,D,L) = \sum_{d=5}^{\infty} W^{d-4}\, D^d\, L^{d-2}\, (1+L)^{d-5}$

Number of sequences with Hamming weight d:

$T(W,D,L)\big|_{W=1,\,L=1} = \sum_{d=5}^{\infty} 2^{d-5}\, D^d \;\Rightarrow\; a_d = 2^{d-5}$

Number of information bits equal to one (w) over all sequences with Hamming weight d:

$\dfrac{\partial\,T(W,D,L)}{\partial W}\bigg|_{W=1,\,L=1} = \sum_{d=5}^{\infty} (d-4)\,2^{d-5}\, D^d \;\Rightarrow\; c_d = (d-4)\cdot 2^{d-5}$

Estimation of the bit error probability:

$P_b \le \sum_{d=5}^{\infty} (d-4)\,2^{d-5}\cdot\dfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\dfrac{E_b}{N_0}}\right)$
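For a numerical feel, a small Python sketch (assuming SciPy is available; our own illustration) evaluates this union bound with the series truncated at d_max:

```python
import numpy as np
from scipy.special import erfc

def union_bound_ber(ebn0_db, rc=0.5, d_max=20):
    """Truncated union bound for the [7,5]_8 code, c_d = (d-4)*2^(d-5), d >= 5."""
    ebn0 = 10 ** (ebn0_db / 10)
    d = np.arange(5, d_max + 1)
    c_d = (d - 4) * 2.0 ** (d - 5)
    return np.sum(c_d * 0.5 * erfc(np.sqrt(d * rc * ebn0)))

for snr in (2, 4, 6):
    print(f"{snr} dB: P_b <= {union_bound_ber(snr):.3e}")
```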
Example (2)
The asymptotic bit error rate (BER) is determined by the free distance d_f
For estimating the BER at moderate SNR, the whole distance spectrum is required
For large error rates or small signal-to-noise ratios, the union bound is very loose and may diverge

[Figure: comparison of simulated BER and analytical estimation over Eb/N0 in dB (0…6 dB, BER 10⁰…10⁻⁵); union bound truncated at d = 5, 6, 7, 20]
Performance of Convolutional Codes: Quantization
By quantizing the received sequence before decoding, information is lost
Hard decision (q = 2): strong performance degradation
3-bit quantization (q = 8): only a small performance degradation compared to no quantization

[Figure: influence of quantization on the BER over Eb/N0 in dB (0…8 dB, BER 10⁰…10⁻⁵) for q = 2, q = 8 and q = ∞ (no quantization)]
Numerical Results for Convolutional Codes
[Figure: BER over Eb/N0 in dB (0…6 dB, BER 10⁰…10⁻⁵); left: influence of the constraint length (Lc = 3, 5, 7, 9) for Rc = 0.5; right: influence of the code rate (Rc = 1/4, 1/3, 1/2, 2/3, 3/4) for Lc = 9]