TRANSCRIPT
Soft Decision Decoding of RS Codes Using Adaptive Parity Check Matrices
Jing Jiang and Krishna R. Narayanan
Wireless Communication Group
Department of Electrical Engineering
Texas A&M University
Reed Solomon Codes
Consider an (n, k) RS code over GF(2^m), n = 2^m − 1
Linear block code, e.g. the (7,5) RS code over GF(8)
Let α be a primitive element in GF(8)
Cyclic shift of any codeword is also a valid codeword
RS codes are MDS (d_min = n − k + 1)
The dual code is also MDS
H =
[ 1   α     α^2   α^3   α^4   α^5   α^6 ]
[ 1   α^2   α^4   α^6   α     α^3   α^5 ]
Introduction
Advantages:
Guaranteed minimum distance
Efficient bounded distance hard decision decoder (HDD)
Decoder can handle errors and erasures
Drawbacks:
Performance loss due to bounded distance decoding
Soft input soft output (SISO) decoding is not easy!
RS Coded Turbo Equalization System
[Block diagram: source → RS Encoder → interleaving (Π) → PR Encoder → AWGN channel → BCJR Equalizer ⇄ RS Decoder, exchanging a priori and extrinsic information through the interleaver Π and de-interleaver Π^-1; hard decisions are delivered to the sink.]
Presentation Outline
Existing soft decision decoding techniques
Iterative decoding based on adaptive parity check matrices
Variations of the generic algorithm
Applications over various channels
Conclusion and future work
Existing Soft Decoding Techniques
Enhanced Algebraic Hard Decision Decoding
Generalized Minimum Distance (GMD) decoding (Forney 1966):
Basic idea: erase some of the least reliable symbols, then run algebraic hard decision decoding several times
Drawback: GMD has a limited performance gain
Chase decoding (Chase 1972):
Basic idea: exhaustively flip some of the least reliable symbols, then run algebraic hard decision decoding several times
Drawback: exponentially increasing complexity
Combined Chase & GMD (Tang et al. 2001)
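The common pattern in GMD/Chase — perturb the least reliable positions, then re-run the hard decision decoder — can be sketched as follows (the `t_flip` budget and the list-of-flip-positions interface are illustrative assumptions, not the papers' exact parameterization):

```python
from itertools import product

def chase_test_patterns(reliabilities, t_flip):
    """Chase-2 style: yield the 2**t_flip sets of positions to flip among
    the t_flip least reliable symbols, one set per hard decision decoding run."""
    # rank positions by reliability magnitude, least reliable first
    order = sorted(range(len(reliabilities)), key=lambda i: abs(reliabilities[i]))
    weak = order[:t_flip]
    for bits in product([0, 1], repeat=t_flip):
        yield [weak[i] for i, b in enumerate(bits) if b]

# 2**t_flip test patterns, hence the exponential complexity noted above
patterns = list(chase_test_patterns([0.9, 0.1, 1.2, 0.05, 0.7], t_flip=2))
assert len(patterns) == 4 and patterns[0] == []
```

Each yielded pattern would be applied to the hard decision vector before one run of the algebraic decoder; the most likely codeword among the successful runs is kept.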
Algebraic Soft Input Hard Output Decoding
Algebraic SIHO decoding: algebraic interpolation based decoding (Koetter & Vardy 2003); reduced complexity KV algorithm (Gross et al., submitted 2003)
Basic ideas:
Based on Guruswami and Sudan's algebraic list decoding
Convert the reliability information into a set of interpolation points
Generate a list of candidate codewords
Pick the most likely codeword from the list
Drawback: the complexity increases rapidly (roughly as m_max^4) with m_max, the maximum interpolation multiplicity
Reliability based Ordered Statistic Decoding
Reliability based decoding: Ordered Statistic Decoding (OSD) (Fossorier & Lin 1995); Box and Match Algorithm (BMA) (Valembois & Fossorier, to appear 2004)
Basic ideas:
Order the received bits according to their reliabilities
Make hard decisions on a set of independent, reliable bits (the most reliable basis, MRB)
Re-encode to obtain a list of candidate codewords
Drawbacks: the complexity increases exponentially with the reprocessing order; BMA must trade memory for complexity
Trellis based Decoding using the Binary Image Expansion
Maximum-likelihood decoding and variations: trellis based decoding using the binary image expansion (Vardy & Be'ery 1991); reduced complexity version (Ponnampalam & Vucetic 2002)
Basic ideas:
Binary image expansion of the RS code
Trellis construction from the binary image expansion
Drawbacks: exponentially increasing complexity; works only for very short codes or codes with very small minimum distance
Binary Image Expansion of RS Codes
Let α be a primitive element in GF(2^m); then 1, α, α^2, ..., α^(m-1) form a basis of GF(2^m) over GF(2).
Each symbol c_i in GF(2^m) can be expressed as c_i = sum_{b=0}^{m-1} c_i^(b) α^b, where c_i^(b) is in GF(2).
A codeword C = [c_0, c_1, ..., c_{N-1}] therefore has the binary image
C_b = [c_0^(0), ..., c_0^(m-1), c_1^(0), ..., c_1^(m-1), ..., c_{N-1}^(0), ..., c_{N-1}^(m-1)]
Similarly, the (N-K) x N parity check matrix H over GF(2^m) has a binary image expansion H_b of size (N-K)m x Nm, obtained by replacing each entry h_{i,j} with the m x m binary matrix representing multiplication by h_{i,j}.
C H^T = 0 if and only if C_b H_b^T = 0, and linearity is preserved: C_3 = C_1 + C_2 implies C_{b,3} = C_{b,1} + C_{b,2}
Consider the (7,5) RS code
H =
[ 1   α     α^2   α^3   α^4   α^5   α^6 ]
[ 1   α^2   α^4   α^6   α     α^3   α^5 ]
H_b =
100001010101011111110
010101011111110100001
001010101011111110100
100010011110001101111
010011110001101111100
001101111100010011110
Binary image expansion of the parity check matrix of RS(7, 5) over GF(2^3)
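The expansion above can be computed mechanically; a minimal sketch, assuming GF(2^3) is generated by the primitive polynomial x^3 + x + 1 with basis {1, α, α^2}, and field elements held as 3-bit integers (bit b is the coefficient of α^b):

```python
def gf8_mul(a, b):
    """Multiply two GF(2^3) elements, primitive polynomial x^3 + x + 1 (0b1011)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
        b >>= 1
    return r

def binary_image(h, m=3):
    """m x m binary matrix for multiplication by h: column b holds h * alpha^b."""
    cols = [gf8_mul(h, 1 << b) for b in range(m)]
    return [[(cols[b] >> i) & 1 for b in range(m)] for i in range(m)]

def expand_H(H, m=3):
    """Replace each GF(8) entry of H by its m x m binary image."""
    rows = []
    for Hrow in H:
        blocks = [binary_image(h, m) for h in Hrow]
        for i in range(m):
            rows.append([blk[i][b] for blk in blocks for b in range(m)])
    return rows

# first row of H for the (7,5) code: [1, a, a^2, a^3, a^4, a^5, a^6] as integers
alpha_powers = [1, 2, 4, 3, 6, 7, 5]
Hb = expand_H([alpha_powers])
assert "".join(str(x) for x in Hb[0]) == "100001010101011111110"
```

Each GF(2^m) row thus contributes m binary rows, giving the (N−K)m × Nm matrix H_b.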
Recent Iterative Techniques
Sub-trellis based iterative decoding (Ungerboeck 2003)
Self-concatenation structure based on sub-trellis constructed from the parity check matrix
Drawbacks:
Performance deteriorates due to the large number of short cycles
Works only for short codes with small minimum distance
Potential error floor problem in the high SNR region
Recent Iterative Techniques (cont’d)
Stochastic shifting based iterative decoding (Jiang & Narayanan, to appear 2004)
Due to the irregularity in the H matrix, iterative decoding favors some bits
Taking advantage of the cyclic structure of RS codes:
[r_0, r_1, r_2, r_3, r_4, r_5, r_6] → (cyclic shift by 2) → [r_5, r_6, r_0, r_1, r_2, r_3, r_4]
H =
1111000
1100110
1001101
Stochastic shifts prevent the iterative procedure from getting stuck
Best result: RS(63,55), about 0.5 dB gain over HDD
However, for long codes this algorithm still does not provide significant improvement
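The shifting idea can be sketched as follows (the `decode` callback, returning a codeword or None, is a placeholder standing in for the iterative decoder):

```python
import random

def stochastic_shift_decode(r, decode, n, trials=4, seed=0):
    """Try the decoder on randomly cyclically shifted copies of the received
    vector r; a cyclic shift of an RS codeword is still a codeword, and
    shifting changes which bits the irregular H matrix favors."""
    rng = random.Random(seed)
    for _ in range(trials):
        s = rng.randrange(n)
        shifted = r[-s:] + r[:-s] if s else list(r)
        cand = decode(shifted)
        if cand is not None:
            return cand[s:] + cand[:s]  # undo the shift to align with r
    return None

# sanity check with a decoder that always "succeeds" unchanged
assert stochastic_shift_decode([1, 2, 3, 4, 5], lambda v: v, n=5) == [1, 2, 3, 4, 5]
```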
Remarks on Existing Techniques
Most SIHO algorithms are either too complex to implement or provide only marginal gain
Moreover, SIHO decoders cannot generate soft output directly
Trellis-based decoders have exponentially increasing complexity
Iterative decoding algorithms do not work for long codes, since the parity check matrices of RS codes are not sparse
“Soft decoding of large RS codes as employed in many standard transmission systems, e.g., RS(255,239), with affordable complexity remains an open problem” (Ungerboeck, ISTC2003)
Questions
Q: Why doesn’t iterative decoding work for codes with non-sparse parity check matrices?
Q: Can we get some idea from the failure of iterative decoder?
How does the standard message passing algorithm work?
[Tanner graph: bit nodes (top row) connected to check nodes (bottom row); a few bit nodes are erased.]
If two or more of the incoming messages are erasures, the check is erased. Otherwise, the check-to-bit message is the value of the bit that satisfies the check.
How does the standard message passing algorithm work?
[Tanner graph: bit nodes connected to check nodes.]
Check node update (tanh rule):
u_k = 2 tanh^{-1} ( prod_{j != k} tanh(v_j / 2) ), so |u_k| <= min_{j != k} |v_j|
Small values of v_j can be thought of as erasures; hence two or more edges with small v_j's saturate the check
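The tanh-rule update above can be sketched directly (a minimal scalar implementation, not an optimized decoder):

```python
import math

def check_to_bit(v):
    """Check-to-bit messages u_k = 2 atanh( prod_{j != k} tanh(v_j / 2) )
    computed from the incoming bit-to-check LLRs v."""
    out = []
    for k in range(len(v)):
        p = 1.0
        for j, vj in enumerate(v):
            if j != k:
                p *= math.tanh(vj / 2.0)
        p = max(min(p, 1.0 - 1e-12), -1.0 + 1e-12)  # keep atanh finite
        out.append(2.0 * math.atanh(p))
    return out

# one near-zero ("erasure-like") incoming LLR drags every other outgoing
# magnitude down to roughly that value: |u_k| <= min_{j != k} |v_j|
u = check_to_bit([4.0, 3.5, 0.1])
assert abs(u[0]) < 0.11 and abs(u[1]) < 0.11   # saturated by the weak bit
assert abs(u[2]) > 1.0                         # edge excluding it stays strong
```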
r = [0.8, 0.9, 1.6, 0.7, 1.0, 1.1, 0.8, 0.4, 0.3, 0.1, 0.5, 0.9, 0.2, 0.1, 0.6, 0.9, 0.2, 0.8, 0.1, 0.5, 1.1]
A Few Unreliable Bits “Saturate” the Non-sparse Parity Check Matrix
transmitted binary codeword c_b = [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
Iterative decoding gets stuck because only a few unreliable bits "saturate" the whole non-sparse parity check matrix
Consider RS(7, 5) over GF(2^3)
Sparse Parity Check Matrices for RS Codes
Can we find an equivalent binary parity check matrix that is sparse?
For RS codes, this is not possible!
The H matrix is a generator matrix of the dual code
The dual of an RS code is also an MDS code
Hence every row has weight at least K + 1!
Iterative Decoding Based on Adaptive Parity Check Matrix
transmitted codeword c = [0 1 0 1 1 0 0]
Idea: reduce the sub-matrix corresponding to the unreliable positions to a sparse (identity) form
For example, consider (7,4) Hamming code:
parity check matrix
received vector r = [1.1, 0.6, 0.1, 1.4, 1.2, 0.1, 0.1]
H =
0101011
1010011
0110101
After adaptation:
H' =
1111000
1100110
1001101
(the columns at the three least reliable positions now have degree 1)
After the adaptive update, iterative decoding can proceed.
Adaptive Decoding Procedure
[Tanner graph: bit nodes (top) and check nodes (bottom); the unreliable bits are marked.]
More Details about the Matrix Adaptive Scheme
transmitted codeword c = [0 1 0 1 1 0 0]
parity check matrix
H =
0101011
1010011
0110101
↓
0101011
1010011
0011110
↓
0101011
1010011
1001101
↓
H' =
1100110
0011110
1001101
Consider the previous example, the (7,4) Hamming code, with received vector r = [1.1, 0.1, 0.2, 1.4, 1.2, 0.3, 0.4]
We can guarantee that some (n−k)m columns are reduced to degree 1
We attempt to choose these to be the least reliable independent bits
Least Reliable Basis
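The adaptation step can be sketched as GF(2) Gaussian elimination that greedily pivots on the least reliable independent positions (`H` is a list of 0/1 rows, `reliabilities` the per-bit magnitudes; this reproduces the (7,4) Hamming example):

```python
def adapt_parity_check(H, reliabilities):
    """Row-reduce binary H so the columns at the least reliable *independent*
    positions become unit (degree-1) columns."""
    H = [row[:] for row in H]
    n_rows = len(H)
    # candidate pivot columns, least reliable first
    order = sorted(range(len(H[0])), key=lambda i: abs(reliabilities[i]))
    pivot = 0
    for col in order:
        if pivot == n_rows:
            break
        r = next((r for r in range(pivot, n_rows) if H[r][col]), None)
        if r is None:
            continue  # column depends on the pivots chosen so far: skip it
        H[pivot], H[r] = H[r], H[pivot]
        for rr in range(n_rows):
            if rr != pivot and H[rr][col]:
                H[rr] = [a ^ b for a, b in zip(H[rr], H[pivot])]
        pivot += 1
    return H

# (7,4) Hamming example: least reliable positions are 1, 2, 5 (0-indexed)
H = [[0,1,0,1,0,1,1], [1,0,1,0,0,1,1], [0,1,1,0,1,0,1]]
r = [1.1, 0.1, 0.2, 1.4, 1.2, 0.3, 0.4]
Hp = adapt_parity_check(H, r)
# columns 1 and 2 become unit columns; column 5 is dependent (col5 = col1 + col2),
# so the elimination falls through to the next least reliable position, column 6
assert [sum(row[c] for row in Hp) for c in (1, 2, 6)] == [1, 1, 1]
```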
Interpretation as an Optimization Procedure
The standard iterative decoding procedure can be interpreted as a gradient descent optimization (Lucas et al. 1998).
The proposed algorithm is a generalization: a two-stage optimization procedure.
Parity check matrix update stage (changes the descent direction):
All bit-level reliabilities are sorted by absolute value
The sub-matrix corresponding to the LRB is reduced to systematic form
Bit reliability update stage (gradient descent):
Iterative decoding is applied to generate extrinsic information
The extrinsic information is scaled by a damping coefficient and fed back to update the bit-level reliabilities
The damping coefficient controls the convergence dynamics.
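The two stages can be put together in a toy sketch (the `H_adapt` callback stands in for the matrix adaptation step, and the min-sum rule replaces the exact tanh rule for brevity):

```python
def min_sum_extrinsic(H, L):
    """Accumulate min-sum extrinsic LLRs for every bit from all checks."""
    ext = [0.0] * len(L)
    for row in H:
        idx = [j for j, h in enumerate(row) if h]
        for k in idx:
            others = [L[j] for j in idx if j != k]
            sign = -1.0 if sum(x < 0 for x in others) % 2 else 1.0
            ext[k] += sign * min(abs(x) for x in others)
    return ext

def adp_iteration(L, H_adapt, damping=0.15, iters=5):
    """Two-stage procedure: adapt H around the current least reliable bits
    (change of direction), then a damped reliability update (gradient descent)."""
    for _ in range(iters):
        H = H_adapt(L)                                  # stage 1
        ext = min_sum_extrinsic(H, L)                   # stage 2
        L = [l + damping * e for l, e in zip(L, ext)]   # damped update
    return L

# toy check: two parity checks pull the weak middle bit to the correct sign
L = adp_iteration([2.0, -0.1, 1.5], lambda _: [[1, 1, 0], [0, 1, 1]])
assert all(x > 0 for x in L)
```

The damping coefficient (0.15 here, an illustrative value) limits how far one round of extrinsic information can move the reliabilities.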
A Hypothesis
Iterative decoding gets stuck at a pseudo-equilibrium point
Adaptation helps gradient descent converge
Complexity Analysis
Per-iteration complexity:
Operation | Floating point | Binary
Reliability ordering | o(Nm log2(Nm)) | —
Check node update | o((N−K)m · Nm) | —
Variable node update | o((N−K)m · Nm) | —
Matrix adaptation | — | o((Nm)^3)
Overall complexity | o((Nm)^2) | o((Nm)^3)
Complexity can be reduced further with a parallel implementation
The complexity is polynomial in N and d_min
Complexity Comparison
Method | Dominant complexity
GMD | GF(q) operations: o(d_min N^2 log2 N)
Chase | GF(q) operations: o(q^c N^2 log2 N), where c is the number of symbols exhaustively tested
KV | GF(q) operations: o(N^2 m_max^4 / R), where m_max is the maximum multiplicity
OSD | Floating point and binary operations: o((Nm)^(i+1)), where i is the order of reprocessing
Trellis | Floating point operations: o(min{2^(Km), 2^((N−K)m)})
ADP | Floating point operations: o((Nm)^2); binary operations: o((Nm)^3)
least reliable symbols
H =
[ 1   α     α^2   α^3   α^4   α^5   α^6 ]
[ 1   α^2   α^4   α^6   α     α^3   α^5 ]
Variation 1: Symbol-level adaptive scheme
Reducing the sub-matrix to systematic form involves undesirable Gaussian elimination.
H' =
[ 1   0   ?     ?   ?     ?     ?   ]
[ 0   1   ?     ?   ?     ?     ?   ]
⇓
[ 1   0   α^3   1   α^3   α     α   ]
[ 0   1   α^4   1   α^5   α^5   α^4 ]
We implement a symbol-level adaptive scheme.
This problem can be sidestepped by utilizing the structure of RS codes:
the update step can be realized efficiently using Forney's algorithm (Forney 1965)
binary mapping:
H_b =
100000101100101001001
010000111010111101101
001000011001011010010
000100011100111111011
000010110010100100110
000001111001110110111
Variation 2: Degree-2 sub-graph in the unreliable part
[Tanner graph: the unreliable bits are only weakly connected to the check nodes.]
Reduce the “unreliable” sub-matrix to a sparse sub-graph rather than an identity to improve the asymptotic performance.
Variation 2: Degree-2 sub-graph in the unreliable part (cont'd)
H_b =
100000101100101001001
010000111010111101101
001000011001011010010
000100011100111111011
000010110010100100110
000001111001110110111
⇓ (degree-2 sub-graph in the unreliable part)
100000101100101001001
110000010110010100100
011000100011100111111
001100000101100101001
000110101110011011101
000011001011010010001
⇓ (an alternative degree-2 arrangement)
100100110000010110010
010000111010111101101
001010101011111110100
010100100110000010110
000011001011010010001
100001010101011111110
Q: How to adapt the parity check matrix?
Variation 3: Different grouping of unreliable bits
Consider the received LLRs of an RS(7,5) code, sorted in ascending order:
L_b = [0.01, 0.04, 0.12, 0.21, 0.22, 0.32, 0.34, 0.34, 0.40, 0.45, 0.47, 0.50, 0.52, 0.58, 0.60, 0.67, 0.70, 0.75, 0.8, 1.1, 1.2]
Group 1 ……… Group 2
Some bits near the group boundary may also have the wrong sign.
Run the proposed algorithm several times, each time exchanging some "reliable" and "unreliable" bits at the boundary.
A list of candidate codewords is generated from the different groupings; pick the most likely codeword from the list.
Variation 4: Partial updating scheme
The main complexity comes from updating the bits in the high density part; however, only a few bits near the boundary will be affected.
In the variable node updating stage: update only the "unreliable" bits in the sparse sub-matrix and a few "reliable" bits near the boundary.
H_b =
100000101100101001001
010000111010111101101
001000011001011010010
000100011100111111011
000010110010100100110
000001111001110110111
(columns ordered by ascending reliability)
In the check node updating stage: approximate the check sums by taking advantage of the ordered reliabilities.
The floating point part of the complexity is reduced to o(Nm).
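One standard way to exploit ordered reliabilities in the check node update is the min-sum shortcut: the two smallest magnitudes in each check determine every outgoing message (a sketch of the magnitude part only, ignoring signs):

```python
def fast_check_magnitudes(v):
    """O(J) min-sum magnitudes for one check: each edge receives the smallest
    incoming |v_j|, except the edge carrying that minimum, which receives the
    second smallest."""
    mags = sorted(abs(x) for x in v)
    m1, m2 = mags[0], mags[1]
    if m2 == m1:                 # tie for the minimum: every edge gets m1
        return [m1] * len(v)
    return [m2 if abs(x) == m1 else m1 for x in v]

assert fast_check_magnitudes([0.5, 2.0, -1.0]) == [1.0, 0.5, 0.5]
```

With the bit reliabilities already sorted, the two smallest values per check are available essentially for free, which is what makes the partial updating cheap.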
Applications
Q: How do the proposed algorithm and its variations perform?
Simulation setup:
A "genie aided" HDD is assumed for the AWGN and fading channels
In the TE system, all coded bits are interleaved at random
A "genie aided" stopping rule is applied
Simulation results:
Proposed algorithm and variations over the AWGN channel
Performance over the symbol-level fully interleaved slow fading channel
RS coded turbo equalization (TE) over the EPR4 channel
RS coded modulation over the fast fading channel
Additive White Gaussian Noise (AWGN) Channel
AWGN Channels
AWGN Channels (cont’d)
Asymptotic performance is consistent with the ML upper-bound.
AWGN Channels (cont’d)
AWGN Channels (cont’d)
Remarks
Proposed scheme performs near ML for medium length codes
Symbol-level adaptive updating provides non-trivial gain
Partial updating incurs little penalty with a great reduction in complexity
For long codes, the proposed scheme is still away from ML decoding
Q: How does it work over other channels?
Interleaved Slow Fading Channel
Fully Interleaved Slow Fading Channels
Fully Interleaved Slow Fading Channels (cont.)
Turbo Equalization Systems
Embed the Proposed Algorithm in the Turbo Equalization System
RS Coded Turbo Equalization System
Turbo Equalization over EPR4 Channels
Turbo Equalization over EPR4 Channels
RS Coded Modulation
RS Coded Modulation over Fast Rayleigh Fading Channels
RS Coded Modulation over Fast Rayleigh Fading Channels (cont’d)
Remarks
More noticeable gain is observed for fading channels, especially for symbol-level adaptive scheme.
In the RS coded modulation scheme, utilizing bit-level soft information seems to provide more gain.
The proposed TE scheme can combat ISI and performs almost identically to its performance over AWGN channels.
The proposed algorithm has a potential "error floor" problem; however, simulating down to even lower FER is computationally prohibitive. Asymptotic performance analysis is still under investigation.
Conclusion and Future work
Iterative decoding of RS codes based on adaptive parity check matrices performs favorably for practical codes over various channels.
The proposed algorithm and its variations provide a wide range of complexity-performance tradeoffs for different applications.
More work is under investigation:
Asymptotic performance bounds
Understanding how the algorithm works from an information theoretic perspective, e.g., the entropy of ordered statistics
Improving the generic algorithm with more sophisticated optimization schemes, e.g., the conjugate gradient method
Thank you!