
Rate-Adaptive Distributed Source Coding using Low-Density Parity-Check Codes

D. Varodayan, A. Aaron and B. Girod

…ements as the most likely one given the correlated side information Y. The impressive potential of this approach has been demonstrated by various implementations of the system with turbo codes [8] [9] [10] and Low-Density Parity-Check (LDPC) codes [11]. In these schemes, achieving compression close to the Slepian-Wolf bound depends on choosing an appropriate channel code; the dependencies between X and Y play the role of the channel statistics in this context. If the statistics of this dependency channel are known at both encoder and decoder, they can agree on a good code and a rate close to the Slepian-Wolf limit can be used.

For many practical applications, the statistical dependency between X and Y may not be known at the encoder. Low complexity video coding via distributed source coding, for example, treats a frame of video as the source X and its prediction at the decoder as the side information Y [12]. Since video data are highly non-ergodic, the achievable compression ratio varies and cannot be foretold by the encoder. In this situation, a rate-adaptive scheme with feedback is an attractive solution. The encoder transmits a short syndrome based on an aggressive code and the decoder attempts decoding. In the event that decoding is successful, the decoder signals this fact to the encoder, which then continues with the next block of source data. However, if decoding fails, the encoder augments the short syndrome with additional transmitted bits, creating a longer syndrome based on a less aggressive code. The process loops until the syndrome is sufficient for successful decoding. Obviously, this approach is viable only if a feedback channel is available and the round-trip time is not too long.
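As a rough illustration only (not part of the original paper), the feedback loop can be sketched as follows; encode_increment and try_decode are hypothetical stand-ins for a real incremental syndrome former and iterative decoder.

```python
# Minimal sketch of the feedback-driven rate adaptation described above.
# encode_increment() and try_decode() are hypothetical placeholders.

def transmit_block(source_bits, side_info, encode_increment, try_decode,
                   max_increments):
    """Send syndrome increments until the decoder reports success."""
    received = []
    decoded = None
    for step in range(max_increments):
        received.extend(encode_increment(source_bits, step))  # encoder sends more bits
        decoded, success = try_decode(received, side_info)    # decoder attempts recovery
        if success:                                           # feedback: decoding succeeded
            break
    return decoded, len(received)  # achieved rate = len(received) / len(source_bits)
```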

Punctured turbo codes [13] were used to implement rate-adaptive source coding with and without decoder side information in [12] and [14], respectively. The latter addressed the design of the codes and puncturing via EXIT chart analysis [15]. Punctured LDPC codes [16] [17] [18], though they may be applied to this problem, perform poorly after even moderate puncturing. Recently, Chen et al. presented a better rate-adaptive distributed source coding architecture with decoding complexity of O(n log n) [19], where n is the blocklength of the code.

The primary contribution of this paper is the development and design of LDPC-based rate-adaptive distributed source codes that outperform alternative codes with linear encoding and decoding complexity. In Sec. 2 and 3, we introduce rate-adaptive LDPC Accumulate (LDPCA) codes and Sum LDPC Accumulate (SLDPCA) codes, respectively. Sec. 4 discusses properties of these codes and strategies for their design. In Sec. 5, we compare the performance of the two proposed classes of rate-adaptive distributed source codes with the performance of the turbo-coded system of [12] over various code lengths and source and conditional statistics.

    2. LDPC Accumulate (LDPCA) Codes

As noted in Sec. 1, LDPC codes (in syndrome code form) have been used effectively in fixed-rate distributed source coding [11]. A naïve way to use such a code as part of a rate-adaptive scheme would be to transmit the syndrome bits in stages and allow decoding after receipt of each increment of the syndrome. However, the performance of the high compression codes so derived is very poor because their graphs contain unconnected or singly-connected source nodes; these structural features impede the transfer of information via the LDPC iterative decoding algorithm [20] [21]. Instead, we now present a method for constructing LDPC-based rate-adaptive codes for distributed source coding, whose performance does not degrade at high compression ratios.

The LDPCA encoder consists of an LDPC syndrome-former concatenated with an accumulator. An example is shown in Fig. 2. The source bits (x1, . . . , x8) are first summed modulo 2 at the syndrome nodes according to the LDPC graph structure, yielding syndrome bits (s1, . . . , s8). These syndrome bits are in turn accumulated modulo 2, producing the accumulated syndrome (a1, . . . , a8). The encoder buffers the accumulated syndrome and transmits it incrementally to the decoder. This encoder structure can be recast straightforwardly as the encoder for an extended Irregular Repeat Accumulate (eIRA) channel code [22]. The eIRA interpretation, however, treats the accumulated syndrome nodes as degree 2 variable nodes, inducing a degree distribution different from that of the underlying LDPC code. Section 4 demonstrates that only this underlying degree distribution is invariant under the rate-adaptive operation of LDPCA codes. Hence, we avoid conflating the concepts of LDPCA and eIRA codes.

    Fig. 2. The LDPCA encoder
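As a rough illustration (ours, not the authors' implementation), the encoder chain can be written in a few lines; the parity-check matrix H below is an arbitrary stand-in for the LDPC graph of Fig. 2.

```python
import numpy as np

def ldpca_encode(x, H):
    """LDPCA encoder sketch: an LDPC syndrome former followed by an accumulator.
    x : source bits (length n); H : binary parity-check matrix whose rows define
    the syndrome nodes (one per source bit, as in Fig. 2)."""
    s = H.dot(x) % 2      # syndrome bits: modulo-2 sums over the LDPC graph
    a = np.cumsum(s) % 2  # accumulated syndrome: a_i = s_1 + ... + s_i (mod 2)
    return s, a           # the encoder buffers a and sends it incrementally

# Toy example with 8 source bits, mirroring (x1, ..., x8) in Fig. 2.
rng = np.random.default_rng(0)
H = (rng.random((8, 8)) < 0.4).astype(int)  # arbitrary stand-in for the LDPC graph
x = rng.integers(0, 2, 8)
s, a = ldpca_encode(x, H)
```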

The LDPCA decoder handles rate-adaptivity by modifying its decoding graph each time it receives an additional increment of the accumulated syndrome. First assume, for the sake of argument, that the entire accumulated syndrome (a1, . . . , a8) has been received. Then taking the consecutive differences modulo 2 of these values yields the syndrome (s1, . . . , s8). The syndrome-adjusted LDPC iterative decoding method of [11] can be applied on the same graph (shown in Fig. 3a) that was used for encoding (s1, . . . , s8) from (x1, . . . , x8). For decoding, the source nodes are seeded with conditional probability distributions of the source bits given the side information, namely Pr{X1|Y}, . . . , Pr{X8|Y}. Then messages are passed back and forth between the source nodes and the syndrome nodes (according to the equations in [11]) until the estimates of the source bits converge. The correctness of the recovered source values can be tested with respect to the syndrome bits, with very small chance of a false positive. When the number of received bits equals the number of source bits, as in this case, the performance achieved by transmitting (a1, . . . , a8) is no different from that of transmitting (s1, . . . , s8), since the resulting decoding graphs are identical.
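As a concrete (hypothetical) example of this seeding, if the dependency between X and Y were a memoryless BSC with crossover probability p, the source-node inputs could be computed as log-likelihood ratios; the paper itself only requires some conditional distribution Pr{Xi|Y}.

```python
import numpy as np

def bsc_source_llrs(y, p):
    """Seed the source nodes with log(Pr{X_i = 0 | Y_i} / Pr{X_i = 1 | Y_i}),
    assuming the dependency channel is a memoryless BSC with crossover
    probability p (an illustrative assumption, not a requirement)."""
    magnitude = np.log((1 - p) / p)
    return np.where(np.asarray(y) == 0, magnitude, -magnitude)
```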

Fig. 3. Decoding graphs if the encoder transmits (a) the entire accumulated syndrome, (b) the even-indexed accumulated syndrome bits, (c) the even-indexed syndrome bits.

The modification of the decoding graph structure manifests at higher compression ratios. Consider, for instance, a compression ratio of 2. In our example, this corresponds to the transmission of only the even-indexed subset of the accumulated syndrome (a2, a4, a6, a8). The consecutive difference modulo 2 operation at the decoder then produces (s1 + s2, s3 + s4, s5 + s6, s7 + s8). Fig. 3b shows the graph which would have encoded (s1 + s2, s3 + s4, s5 + s6, s7 + s8) from (x1, . . . , x8). This graph maintains the degree of all source nodes compared to Fig. 3a. Therefore, it can be used for effective iterative decoding with source bit seeding Pr{X1|Y}, . . . , Pr{X8|Y}. Upon completion of decoding, the recovered source can be tested against the syndrome to verify correctness. For comparison, if the syndrome subset (s2, s4, s6, s8) were transmitted instead of the accumulated syndrome subset, the decoding graph (shown in Fig. 3c) would be severely degraded and unsuitable for iterative decoding.
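A small sketch (ours, not from the paper) of how the decoder could derive the merged syndrome values and the corresponding decoding graph from a received subset of accumulated syndrome bits; H is assumed to hold the rows generating s1, . . . , s8 in order.

```python
import numpy as np

def merged_syndromes(a_received):
    """Consecutive differences (mod 2) of the received accumulated syndrome
    bits recover sums of syndrome bits, e.g. (s1+s2, s3+s4, s5+s6, s7+s8)
    when only (a2, a4, a6, a8) is received."""
    prev, merged = 0, []
    for a in a_received:
        merged.append((a - prev) % 2)  # a_j - a_i = s_{i+1} + ... + s_j (mod 2)
        prev = a
    return merged

def merged_check_matrix(H, indices):
    """Decoding graph of Fig. 3b: rows of H between consecutive received
    (1-based, increasing) indices are summed mod 2, so double edges cancel
    while the source-node degrees are preserved."""
    rows, start = [], 0
    for idx in indices:
        rows.append(H[start:idx].sum(axis=0) % 2)
        start = idx
    return np.array(rows)
```

With indices = [2, 4, 6, 8] this yields the merged checks of Fig. 3b; receiving all eight accumulated syndrome bits reduces to the full-syndrome graph of Fig. 3a.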

Finally, note that the encoding and decoding complexity of these LDPCA codes is linear in the number of edges, which is invariant under the proposed construction. Moreover, the number of edges is linear in the length of the code for a fixed degree distribution. Therefore, the complexity of encoding and decoding is O(n), where n is the blocklength of the code in bits.

    3. Sum LDPC Accumulate (SLDPCA) Codes

Serially Concatenated Accumulate codes have been proposed for the rate-adaptive distributed source coding problem [19]. For this class of codes, the encoder is the concatenation of an inverse accumulator with one or more rate-adaptive base codes. The base codes considered in [19] are simple product codes and extended Hamming codes, yielding overall Product Accumulate codes [23] and extended Hamming Accumulate codes [24]. Both of these systems incur decoding complexity of up to O(n log n), since that is the soft decoding complexity of the base codes. In this section, we consider using the LDPCA codes of Sec. 2 as base codes to create rate-adaptive SLDPCA codes, which are linear in encoding and decoding complexity with respect to their code lengths.

The SLDPCA encoder is the concatenation of a consecutive summer (i.e. inverse accumulator) with an LDPCA encoder; an example is depicted in Fig. 4. Here, the source bits (x1, . . . , x8) are consecutively summed into intermediate bits (i1, . . . , i8), which are then coded in the same fashion as in Fig. 2 to produce the accumulated syndrome (a1, . . . , a8). As with the previous rate-adaptive scheme, the encoder buffers the accumulated syndrome and transmits it incrementally to the decoder.

Fig. 4. The SLDPCA encoder
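As a rough, non-authoritative sketch (H again stands in for the LDPC graph of the base code), the SLDPCA encoder chain can be written as follows.

```python
import numpy as np

def sldpca_encode(x, H):
    """SLDPCA encoder sketch: a consecutive summer (inverse accumulator)
    followed by the LDPCA encoder of Sec. 2."""
    i = np.diff(np.concatenate(([0], x))) % 2  # intermediate bits: i_k = x_k + x_{k-1} (mod 2)
    s = H.dot(i) % 2                           # LDPC syndrome of the intermediate bits
    a = np.cumsum(s) % 2                       # accumulated syndrome, buffered and sent incrementally
    return i, s, a

# The source is the accumulated sum of the intermediate bits, i.e.
# np.cumsum(i) % 2 equals x, which is exactly the property the decoder exploits.
```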

Although the SLDPCA decoder employs a different decoding algorithm, its rate-adaptive functionality is the same as that of the decoder in Sec. 2. That is, when the decoder receives each increment of accumulated syndrome bits from the encoder, it modifies the LDPC portion of its decoding graph to reflect this information.

To understand the decoding algorithm, first observe the following property of the encoder in Fig. 4: the source bits (x1, . . . , x8) are the accumulated sum of the intermediate bits (i1, . . . , i8) modulo 2. In other words, (x1, . . . , x8) are the output of a very simple IIR filter with input (i1, . . . , i8). This means that the decoder can employ the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm [25] to obtain soft information about (i1, . . . , i8) when the source nodes are seeded with the conditional probability distributions Pr{X1|Y}, . . . , Pr{X8|Y}. Meanwhile, a single iteration of the syndrome-adjusted LDPC decoding algorithm of [11] provides different soft information about (i1, . . . , i8) originating from the received accumulated syndrome bits. The soft information interchange at the intermediate nodes works as follows: each incoming message to an intermediate node is answered by the net information of all other incoming messages to that node. Simultaneous BCJR and LDPC decoding iterations follow. Decoding continues in this way until the estimates of the intermediate bits converge. Finally, the source bits can be recovered from the decoded intermediate bits by accumulation modulo 2. As with LDPCA codes, the correctness of the recovered source values can be tested with respect to the syndrome bits, with very small chance of a false positive.
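The exchange rule at an intermediate node can be stated in one line if the messages are represented as log-likelihood ratios (an assumption on the message representation; the paper describes the rule generically as "net information").

```python
def intermediate_node_replies(incoming_llrs):
    """Soft-information interchange at an intermediate node: the reply sent
    back on each edge is the sum of the LLRs arriving on all other edges,
    i.e. the net information of the other incoming messages."""
    total = sum(incoming_llrs)
    return [total - llr for llr in incoming_llrs]
```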

The decoding complexities of the BCJR and LDPC iterations are linear in the code length and the number of LDPC edges, respectively. Once again, for a rate-adaptive set of codes with a fixed LDPC degree distribution on the intermediate nodes, the number of LDPC edges is linear in the code length. Hence, the overall decoding complexity is O(n); the encoding complexity of SLDPCA codes is also O(n).


    4. Code Design

We now consider some properties of the proposed LDPCA and SLDPCA codes and show how to leverage existing LDPC design techniques.

We begin with the observation that the decoding graph of Fig. 3b is obtained from the graph of Fig. 3a by merging adjacent syndrome nodes. The edges connected to each merged node are those that were connected to one of its constituent nodes, but not both (because double edges cancel out modulo 2). Ensuring that no edges are lost over several merging steps requires very careful design of the lowest compression ratio code.

A simpler strategy to guarantee a constant number of edges for all decoding graphs is to begin the design with the highest compression ratio graph. Knowing the transmission order of the accumulated syndrome allows the derivation of graphs for each of the lower compression ratio codes. Each additional accumulated syndrome bit received results in the division of a syndrome node into two adjacent ones. The key to maintaining a constant number of edges across all graphs is to partition the edge set of the old syndrome node into the edge sets of the new pair. Furthermore, this approach keeps invariant the degrees of the source and intermediate nodes for the LDPCA and SLDPCA codes, respectively. Thus, the global degree distribution can be used to tune the performance of the codes [26]. In fact, degree distributions optimized for LDPC channel codes (for example, using [27]) can be applied directly to LDPCA codes due to the similarity of their decoding algorithms.
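A minimal sketch of the node-division step (our illustration, not the authors' code); the real design freedom lies in choosing the partition mask for each division.

```python
import numpy as np

def split_syndrome_node(H, row, mask):
    """Divide syndrome node `row` into two adjacent nodes, partitioning its
    edge set: edges selected by the binary `mask` go to the first new node,
    the remaining edges to the second. The total number of edges, and hence
    the source/intermediate node degrees, are preserved."""
    edges = H[row]
    first = edges & mask          # one part of the partition
    second = edges & (1 - mask)   # the complementary part
    return np.vstack([H[:row], first, second, H[row + 1:]])
```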

When the number of received bits equals the number of source bits, there is an additional design objective: the equations that generate the accumulated syndrome bits should be independent in the source bits. So, for a coding rate of 1, this guarantees decoding of the source via straightforward linear algebra, regardless of the quality of the side information.
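A compact GF(2) elimination routine (a generic textbook procedure, not code from the paper) makes the rate-1 fallback concrete: if the binary matrix mapping source bits to the received accumulated syndrome bits has independent rows, solving the linear system recovers the source exactly, with no help from the side information.

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    With n independent accumulated-syndrome equations in n source bits
    (coding rate 1), this recovers the source exactly."""
    A = (A % 2).astype(np.int64)
    b = (b % 2).astype(np.int64)
    n_rows, n_cols = A.shape
    row, pivots = 0, []
    for col in range(n_cols):
        candidates = np.nonzero(A[row:, col])[0]
        if candidates.size == 0:
            continue                      # no pivot available in this column
        p = row + candidates[0]
        A[[row, p]] = A[[p, row]]         # bring the pivot row up
        b[[row, p]] = b[[p, row]]
        for r in range(n_rows):           # eliminate this column elsewhere
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == n_rows:
            break
    x = np.zeros(n_cols, dtype=np.int64)
    for r, col in enumerate(pivots):      # read off the solution (free bits set to 0)
        x[col] = b[r]
    return x
```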

Constructing the graphs from the highest compression ratio code to the lowest also has implications for local graph structure. Define a stopping set A to be a set of source nodes, all of whose (syndrome node) neighbors are connected to at least two nodes of A. The stopping number of a graph, the size of its smallest stopping set, affects the performance of iterative LDPC decoding on that graph [28]; the larger the stopping number is, the more likely it is that decoding will succeed. We now show that the generation of lower compression ratio codes by the division of syndrome nodes into pairs does not decrease the stopping number of the graph. Suppose that the subset of source nodes A is not a stopping set of the higher compression ratio graph. Then there exists a syndrome node sA that is singly-connected to A. The graph for the lower compression ratio code is obtained by dividing syndrome nodes into pairs, such that the edge sets of the pair form a partition of the edge set of the original syndrome node. If sA is not divided in this way, it remains singly-connected to A. If sA is divided, exactly one of the resulting pair of syndrome nodes is singly-connected to A. In both cases, A is not a stopping set of the new graph. Hence, stopping sets are not created by the syndrome division operation, so the stopping number does not decrease as lower compression ratio codes are generated from higher compression ratio ones. Moreover, various stopping set elimination heuristics [29] [30] can be applied to create the LDPC graph for the lowest compression ratio code. Therefore, this construction of LDPCA and SLDPCA codes propagates good local graph structure.
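The stopping set definition translates directly into a small check; here H is the binary syndrome-node-by-source-node adjacency matrix and A a set of source-node indices (a hypothetical helper, for illustration only).

```python
import numpy as np

def is_stopping_set(H, A):
    """Return True if the source-node index set A is a stopping set:
    every syndrome node with a neighbour in A must be connected to at
    least two nodes of A (no singly-connected syndrome node exists)."""
    counts = H[:, sorted(A)].sum(axis=1)                 # edges from each syndrome node into A
    return bool(np.all((counts == 0) | (counts >= 2)))
```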

The design of LDPCA and SLDPCA codes, thus, leverages existing work on degree distribution optimization and stopping set elimination.

    5. Simulation Results

In this section, the performance of various LDPCA and SLDPCA codes is compared with respect to different LDPC degree distributions and code lengths. Simulation results are also presented for different conditional statistics between source and side information and different source distributions. The LDPC subgraphs of all codes presented here were constructed by the method suggested in Sec. 4: starting with the highest compression ratio graph, the other graphs are obtained by successively dividing syndrome nodes into pairs. We assume that the decoder can detect lossless recovery of the source bits perfectly. The simulations also reflect the fact that decoding is always successful if the number of received accumulated syndrome bits equals the source length, as long as the received bits are generated by independent functions of the source bits. The rate points plotted are the average of 75 trials each.

Fig. 5 compares three LDPCA code systems of differing degree distributions with rate-adaptive turbo codes, all of source length 6336 bits, with i.i.d. binary symmetric channel (BSC) statistics between X and Y. The turbo codes, whose encoder is specified in [13], are those used in [12]. The regular LDPCA codes have a degree distribution given by (Λ3 = 1), where Λr denotes the proportion of nodes of degree r. One set of irregular LDPCA codes has a degree distribution of (Λ2 = 0.3, Λ3 = 0.4, Λ4 = 0.3). The other irregular LDPCA codes shown have the following degree distribution selected from [27]: (Λ2 = 0.316, Λ3 = 0.415, Λ7 = 0.128, Λ8 = 0.069, Λ19 = 0.020, Λ21 = 0.052). For comparison, we demonstrate the extremely poor performance of the underlying LDPC codes under naïve incremental transmission of syndromes. We also plot the performance of the irregular LDPC code of fixed rate 0.5 and length 10000 bits presented in [11]. The Slepian-Wolf bound indicates the performance of an ideal distributed source code; namely, a rate equal to H(X|Y).

Fig. 5. Performance of regular and irregular LDPCA codes of length 6336 bits over i.i.d. BSC statistics

Fig. 6 compares two SLDPCA code systems of differing degree distributions with rate-adaptive turbo codes, all of source length 6336 bits, with i.i.d. BSC statistics between X and Y. The turbo codes are identical to those in Fig. 5. The regular SLDPCA codes have an LDPC degree distribution over intermediate nodes given by (Λ2 = 1), while the intermediate nodes of the irregular SLDPCA codes have an LDPC degree distribution of (Λ1 = 0.3, Λ2 = 0.4, Λ3 = 0.3). Figs. 5 and 6 indicate that LDPCA and SLDPCA codes are superior to turbo codes over a large range of rates. Moreover, some irregular codes can outperform their regular counterparts for the entire range of rates. We demonstrate also that irregular LDPCA codes can perform within 10% of the Slepian-Wolf bound at moderate rates, while irregular SLDPCA codes can operate within 5% of the bound at high rates.

Fig. 6. Performance of regular and irregular SLDPCA codes of length 6336 bits over i.i.d. BSC statistics

The effect of varying the length of the codes is demonstrated in Fig. 7, which compares codes of length 396 and 6336 for BSC statistics between X and Y. The turbo codes are once again from [13]. Both sets of LDPCA codes have a degree distribution of (Λ2 = 0.3, Λ3 = 0.4, Λ4 = 0.3), and both sets of SLDPCA codes have a degree distribution of (Λ1 = 0.3, Λ2 = 0.4, Λ3 = 0.3). The plot indicates that reducing the length of the code degrades compression performance slightly, if at all. The discrepancy is not as large as for fixed-rate distributed source codes [11] because rate-adaptive codes are opportunistic. Over several trials, even though the maximum rate required by the short codes exceeds the maximum rate for the long ones, the minimum rate for the short ones is also less than that for the long codes.

Fig. 7. Performance of rate-adaptive codes of lengths 396 and 6336 bits over i.i.d. BSC statistics

Fig. 8 investigates the application of LDPCA and SLDPCA codes to different conditional statistics between X and Y. The two models considered are i.i.d. BSC and i.i.d. Z (in which ones in X may be flipped into zeros in Y, but zeros in X cannot be flipped to ones in Y). The turbo, LDPCA and SLDPCA codes are the same length-6336 codes used in Fig. 7.

Fig. 9 compares the performance of LDPCA and SLDPCA codes for BSC statistics between X and Y, when the source X is biased and unbiased. For the biased scenarios, X is modeled as 90% zeros and 10% ones. In this plot, the turbo, LDPCA and SLDPCA codes are the same length-6336 codes as those in Fig. 7. Note that, when the source is so biased, the conditional entropy of the source given the side information does not exceed 0.5 bits. Figs. 8 and 9 show that the performance of LDPCA and SLDPCA codes is not degraded by asymmetries in the conditional statistics and the source distribution.

Fig. 8. Performance of rate-adaptive codes of length 6336 bits over i.i.d. Z and i.i.d. BSC statistics

Fig. 9. Performance of rate-adaptive codes of length 6336 bits over i.i.d. BSC statistics with biased and unbiased sources

    6. Conclusion

This paper presents rate-adaptive LDPCA and SLDPCA codes for the case of asymmetric distributed coding in which the encoder is not aware of the joint statistics between source and side information. Our constructions guarantee the performance of the codes at all compression ratios by fixing the LDPC degree distribution across them.

In addition, good local structure (with respect to stopping sets) is propagated through the graphs from low compression ratios to high.

The proposed rate-adaptive codes have been demonstrated to be superior to alternatives with linear encoding and decoding complexity for asymmetric distributed source coding. LDPCA and SLDPCA codes (of length 6336 bits) are able to perform within 10% and 5% of the Slepian-Wolf bound in the moderate and high rate regimes, respectively. We have also shown that the performance of the codes diminishes only slightly when the code length is reduced, and is not degraded by asymmetries in the conditional statistics and the source distribution.

    References

[1] D. Slepian, J. K. Wolf, Noiseless coding of correlated information sources, IEEE Trans. Inform. Theory 19 (4) (1973) 471–480.

[2] A. D. Wyner, J. Ziv, The rate-distortion function for source coding with side information at the decoder, IEEE Trans. Inform. Theory 22 (1) (1976) 1–10.

[3] A. D. Wyner, The rate-distortion function for source coding with side information at the decoder-II: general sources, Inf. Control 38 (1) (1978) 60–80.

[4] R. B. Blizard, Convolutional coding for data compression, Martin Marietta Corp., Denver Div., Rep. R-69-17 (1969) (cited by [5]).

[5] M. E. Hellman, Convolutional source encoding, IEEE Trans. Inform. Theory 21 (6) (1975) 651–656.

[6] A. D. Wyner, Recent results in the Shannon theory, IEEE Trans. Inform. Theory 20 (1) (1974) 2–10.

[7] S. S. Pradhan, K. Ramchandran, Distributed source coding using syndromes (DISCUS): design and construction, IEEE Trans. Inform. Theory 49 (3) (2003) 626–643.

[8] A. Aaron, B. Girod, Compression with side information using turbo codes, in: Proc. IEEE Data Compression Conf., Snowbird, UT, 2002.

[9] J. García-Frías, Compression of correlated binary sources using turbo codes, IEEE Commun. Lett. 5 (10) (2001) 417–419.

[10] J. Bajcsy, P. Mitran, Coding for the Slepian-Wolf problem with turbo codes, in: Proc. IEEE Global Communications Conference, San Antonio, TX, 2001.

[11] A. Liveris, Z. Xiong, C. Georghiades, Compression of binary sources with side information at the decoder using LDPC codes, IEEE Commun. Lett. 6 (10) (2002) 440–442.

[12] A. Aaron, S. Rane, E. Setton, B. Girod, Transform-domain Wyner-Ziv codec for video, in: SPIE Visual Communications and Image Processing Conf., San Jose, CA, 2004.

[13] D. N. Rowitch, L. B. Milstein, On the performance of hybrid FEC/ARQ systems using rate compatible punctured turbo (RCPT) codes, IEEE Trans. Commun. 48 (6) (2000) 948–959.

[14] J. Hagenauer, J. Barros, A. Schaeffer, Lossless turbo source coding with decremental redundancy, in: Proc. 5th ITG Conf. on Source and Channel Coding, Erlangen, Germany, 2004.

[15] S. ten Brink, Designing iterative decoding schemes with the extrinsic information transfer chart, AEU Int. J. Electron. Commun. 54 (6) (2000) 389–398.

[16] J. Ha, S. W. McLaughlin, Optimal puncturing of irregular low-density parity-check codes, in: Proc. IEEE International Conf. on Communications, Anchorage, AK, 2003.

[17] H. Pishro-Nik, F. Fekri, Results on punctured LDPC codes, in: IEEE Inform. Theory Workshop, San Antonio, TX, 2004.

[18] S. Sesia, G. Caire, G. Vivier, Incremental redundancy hybrid ARQ schemes based on low-density parity-check codes, IEEE Trans. Commun. 52 (8) (2004) 1311–1321.

[19] J. Chen, A. Khisti, D. M. Malioutov, J. S. Yedidia, Distributed source coding using serially-concatenated-accumulate codes, in: IEEE Inform. Theory Workshop, San Antonio, TX, 2004.

[20] R. G. Gallager, Low-density parity-check codes, Cambridge, MA: MIT Press (1963).

[21] F. R. Kschischang, B. J. Frey, H.-A. Loeliger, Factor graphs and the sum-product algorithm, IEEE Trans. Inform. Theory 47 (2) (2001) 498–519.

[22] M. Yang, W. E. Ryan, Y. Li, Design of efficiently encodable moderate-length high-rate irregular LDPC codes, IEEE Trans. Commun. 52 (4) (2004) 564–571.

[23] J. Li, K. R. Narayanan, C. N. Georghiades, Product accumulate codes: a class of codes with near-capacity performance and low decoding complexity, IEEE Trans. Inform. Theory 50 (1) (2004) 31–46.

[24] M. Isaka, M. P. C. Fossorier, High rate serially concatenated coding with extended Hamming codes, IEEE Commun. Lett. 9 (2) (2005) 160–162.

[25] L. Bahl, J. Cocke, F. Jelinek, J. Raviv, Optimal decoding of linear codes for minimizing symbol error rate, IEEE Trans. Inform. Theory 20 (2) (1974) 284–287.

[26] S.-Y. Chung, T. J. Richardson, R. L. Urbanke, Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation, IEEE Trans. Inform. Theory 47 (2) (2001) 657–670.

[27] A. Amraoui, LTHC: LdpcOpt (2001). URL http://lthcwww.epfl.ch/research/ldpcopt

[28] C. Di, D. Proietti, I. E. Telatar, T. J. Richardson, R. L. Urbanke, Finite-length analysis of low-density parity-check codes on the binary erasure channel, IEEE Trans. Inform. Theory 48 (6) (2002) 1570–1579.

[29] T. Tian, C. Jones, J. D. Villasenor, R. D. Wesel, Construction of irregular LDPC codes with low error floors, in: Proc. IEEE International Conf. on Communications, Anchorage, AK, 2003.

[30] A. Ramamoorthy, R. D. Wesel, Construction of short block length irregular low-density parity-check codes, in: Proc. IEEE International Conf. on Communications, Paris, France, 2004.