
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 8, AUGUST 2013 4927

Lattice Codes for the Gaussian Relay Channel: Decode-and-Forward and Compress-and-Forward

Yiwei Song and Natasha Devroye

Abstract—Lattice codes are known to achieve capacity in the Gaussian point-to-point channel, achieving the same rates as i.i.d. random Gaussian codebooks. Lattice codes are also known to outperform random codes for certain channel models that are able to exploit their linearity. In this paper, we show that lattice codes may be used to achieve the same performance as known i.i.d. Gaussian random coding techniques for the Gaussian relay channel, and show several examples of how this may be combined with the linearity of lattice codes in multisource relay networks. In particular, we present a nested lattice list decoding technique in which lattice codes are shown to achieve the decode-and-forward (DF) rate of single source, single destination Gaussian relay channels with one or more relays. We next present two examples of how this DF scheme may be combined with the linearity of lattice codes to achieve new rate regions which for some channel conditions outperform analogous known Gaussian random coding techniques in multisource relay channels. That is, we derive a new achievable rate region for the two-way relay channel with direct links and compare it to existing schemes, and derive a new achievable rate region for the multiple access relay channel. We furthermore present a lattice compress-and-forward (CF) scheme for the Gaussian relay channel which exploits a lattice Wyner–Ziv binning scheme and achieves the same rate as the Cover–El Gamal CF rate evaluated for Gaussian random codes. These results suggest that structured/lattice codes may be used to mimic, and sometimes outperform, random Gaussian codes in general Gaussian networks.

Index Terms—Compress and forward, decode and forward, Gaussian relay channel, lattice codes, relay channel.

I. INTRODUCTION

THE derivation of achievable rate regions for general networks including relays has classically used codewords and codebooks consisting of independent, identically generated symbols (i.i.d. random coding). Only in recent years have codes which possess additional structural properties, which we term structured codes, been used in networks with relays [3]–[9]. The benefit of using structured codes in networks lies not only in

Manuscript received October 30, 2011; revised October 16, 2012; accepted March 11, 2013. Date of publication April 19, 2013; date of current version July 10, 2013. This work was supported in part by the NSF under awards CCF-1216825 and 1053933. This paper was presented in part at the Allerton Conference on Communication, Control and Computing, Monticello, IL, 2010 [1] and in part at the 2011 IEEE Information Theory Workshop.
The authors are with the Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607 USA (e-mail: [email protected]; [email protected]).
Communicated by M. Skoglund, Associate Editor for Communications.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIT.2013.2259139

a somewhat more constructive achievability scheme and possibly computationally more efficient decoding than i.i.d. random codes, but also in actual rate gains which exploit the structure of the codes—their linearity in Gaussian channels—to decode combinations of codewords rather than individual codewords/messages. While past work has focused mainly on specific scenarios in which structured or lattice codes are particularly beneficial, missing is the demonstration that lattice codes may be used to achieve the same rate as known i.i.d. random coding-based schemes in Gaussian relay networks, in addition to going above and beyond i.i.d. random codes in certain scenarios. In this paper, we demonstrate generic nested lattice code-based schemes for achieving the decode-and-forward (DF) and compress-and-forward (CF) rates in Gaussian relay networks which achieve at least the same rate regions as those achieved using Gaussian random codes. In the longer term, these strategies may be combined with ones which exploit the linear structure of lattice codes to obtain structured coding schemes for arbitrary Gaussian relay networks. Toward this goal, we illustrate how the DF-based lattice scheme may be combined with strategies which exploit the linearity of lattice codes in two examples: the two-way relay channel with direct links and the multiple access relay channel.

A. Goal and Motivation

In relay networks, as opposed to single-hop networks, multiple links or routes may exist between a given source and destination. Of key importance in such networks is how to best jointly utilize these links, which—in a single source scenario—all carry the same message and effectively cooperate with each other to maximize the number of messages that may be distinguished. The three-node relay channel, with one source with one message for one destination aided by one relay, is the simplest relay network in which pure cooperation between the links is manifested. Information may flow along the direct link or along the relayed link; how to manage or have these links cooperate to best transmit this message is key to approaching capacity for this channel. Despite this network's simplicity, its capacity remains unknown in general. However, the DF and CF achievability strategies, both examples of cooperative strategies described in [10]–[13], may approach capacity under certain channel conditions. In the DF scheme, the receiver does not obtain the entire message from the direct link nor the relayed link. Rather, cooperation between the direct and relayed links may be implemented by having the receiver decode a list of possible messages (or codewords) from the direct link and another independent list from the coherent combination of the direct link and the relayed link, which it then intersects to obtain the message sent.¹ In the CF scheme of [10], cooperation is implemented by a two-step decoding procedure combined with Wyner–Ziv binning.

¹There are alternative schemes for implementing DF, but the main intuition about combining information along two paths remains the same.

Generalizations of these i.i.d. random-coding-based DF and CF schemes have been proposed for general multiterminal relay networks [11], [14], [15]. However, in recent years, lattice codes have been shown to outperform random codes in several Gaussian multisource network scenarios due to their linearity property [3]–[6], [16], [17]. As such, one may hope to derive a coding scheme which combines the best of both worlds, i.e., incorporate lattice codes with their linearity property into coding schemes for general Gaussian networks. At the moment, we cannot simply replace i.i.d. random codes with lattice codes. That is, while nested lattice codes have been shown to be capacity achieving in the point-to-point Gaussian channel, in relay networks with multiple links/paths and the possibility of cooperation, technical issues need to be solved before one may replace random codes with lattice codes.

In this paper, we make progress in this direction by demonstrating lattice-based cooperative techniques for a number of relay channels. One of the key new technical ingredients in the DF schemes is the usage of a lattice list decoding scheme to decode a list of lattice points (using lattice decoding) rather than a single lattice point. We then extend this lattice-list-based cooperative technique and combine it with the linearity of lattice codes to provide gains for some channel conditions over i.i.d. random codes in scenarios with multiple cooperating links.

B. Related Work

In showing that lattice codes may be used to replace i.i.d. random codes in Gaussian relay networks, we build upon work on relay channels, on the existence of "good" nested lattice codes for Gaussian source and channel coding, and on recent advancements in using lattices in multiple-relay and multiple-node scenarios. We outline the most relevant related work.

Relay channels: Two of our main results are the demonstration that nested lattice codes may be used to achieve the DF and CF rates achieved by random Gaussian codes [10]. For the DF scheme, we mimic the regular encoding/sliding window decoding DF strategy [11], [12] in which the relay decodes the message of the source, reencodes it, and then forwards it. The destination combines the information from the source and the relay by intersecting two independent lists of messages obtained from the source and relayed links, respectively, over two transmission blocks. We will rederive the DF rate, but with lattice codes replacing the random i.i.d. Gaussian codes. Of particular importance is constructing and utilizing a lattice version of the list decoder. It is worth mentioning that the concurrent work [8] uses a different lattice coding scheme to achieve the DF rate in the three-node relay channel, which does not rely on list decoding but rather on a careful nesting structure of the lattice codes.

The DF scheme of [10] restricts the rate by requiring the relay to decode the message. The CF achievability scheme of [10] for the relay channel places no such restriction, as the relay compresses its received signal and forwards the compression index. In Cover and El Gamal's original CF scheme, the relay's compression technique utilizes a form of binning related to the Wyner–Ziv rate-distortion problem with decoder side-information [18]. In [19] and [20], the authors describe a lattice version of the noiseless quadratic Gaussian Wyner–Ziv coding scheme, where lattice codes quantize/compress the continuous signal; this will form the basis for our lattice-based CF strategy. Another simple structured approach to the relay channel is considered in [21] and [22], where 1-D structured quantizers are used in the relay channel subject to instantaneous (or symbol-by-symbol) relaying.

Our extension of the single-relay DF rate to a multiple-relay DF rate is based on the DF multilevel relay channel scheme presented in [11] and [14]. These papers essentially extend the DF rate of [10]; the central idea behind mimicking the scheme of [11], [14] is the repeated usage of the lattice list decoder, enabling the message to again be decoded from the intersection of multiple independent lists formed at the destination from the different relay–destination links.

Lattice codes for single-hop channels: Lattice codes are known to be "good" for almost everything in Gaussian point-to-point, single-hop channels [23]–[25], from both source and channel coding perspectives. In particular, nested lattice codes have been shown to achieve capacity for the AWGN channel and the AWGN broadcast channel [20], and to achieve the corner points of the AWGN multiple access channel [3] (see further details in [26] and [27]). Lattice codes may further be used in achieving the capacity of Gaussian channels with interference or state known at the transmitter [28] using a lattice equivalent [20] of dirty-paper coding [29]. The nested lattice approach of [20] for the dirty-paper channel is extended to dirty-paper networks in [30], where in some scenarios lattice codes are interestingly shown to outperform random codes. In K-user interference channels, their structure has enabled the decoding of (portions of) "sums of interference" terms [16], [17], [27], [31], allowing receivers to subtract off this sum rather than try to decode individual interference terms in order to remove them. From a source coding perspective, lattices have been useful in distributed Gaussian source coding when reconstructing a linear function [32], [33].

Lattice codes in multihop channels: The linearity property of lattice codes has been exploited in the compute-and-forward framework [3] for Gaussian multihop wireless relay networks [4]–[6]. There, intermediate relay nodes decode a linear combination, or equation, of the transmitted codewords (or, equivalently, messages) by exploiting the noisy linear combinations provided by the channel. Through the use of nested lattice codes, it was shown that decoding linear combinations may be done at higher rates than decoding the individual codewords—one of the key benefits of using structured rather than i.i.d. random codewords [34]. Recently, progress has been made in characterizing the capacity of a single source, single destination, multiple relay network to within a constant gap for arbitrary network topologies [35]. Capacity was initially shown to be approximately achieved via an i.i.d. random quantize-map-and-forward-based coding scheme [35] and, alternatively, using an extension of CF-based techniques termed "noisy network coding" [15]. Recently, relay network capacity was also shown to be achievable using nested lattice codes for quantization and transmission [7]. Alternatively, using a new "computation alignment" scheme which couples lattice codes in a compute-and-forward-like framework [3] together with a signal-alignment scheme reminiscent of ergodic interference alignment [36], the work [37] was able to show a capacity approximation for multilayer wireless relay networks with an approximation gap that is independent of the network depth. While lattices have been used in relay networks, the goals so far have mainly been to demonstrate their utility in specific networks in which decoding linear combinations of messages is beneficial, or to achieve finite-gap results.

As a first example of the use of lattices in multihop scenarios, we will consider the Gaussian two-way relay channel [4], [5]. The two-way relay channel consists of three nodes: two terminal nodes 1 and 2 that wish to exchange their two independent messages through the help of one relay node R. When the terminal nodes employ nested lattice codes, the sum of their signals is again a lattice point and may be decoded at the relay. Having the relay send this sum (possibly reencoded) allows the terminal nodes to exploit their own-message side-information to recover the other user's message [4], [5]. Gains over DF schemes where both terminals transmit simultaneously to the relay stem from the fact that, if using random Gaussian codebooks, the relay will see a multiple access channel and require the decoding of both individual messages, even though the sum is sufficient. In contrast, no multiple access (or sum-rate) constraint is imposed by the lattice decoding of the sum. An alternative non-DF (hence no rate constraints at the relay) yet still structured approach to the two-way relay channel is explored in [38] and [39], where simple 1-D structured quantizers are used for a symbol-by-symbol amplify-and-forward-based scheme. In the two-way relay channel, models with and without direct links between the transmitters have been considered. While random coding techniques have been able to exploit both the direct and relayed links, lattice codes have only been used in channels without direct links. Here, we will present a lattice coding scheme which combines the linearity properties, leading to less restrictive decoding constraints at the relay, with direct-link information, allowing for a form of lattice-enabled two-way cooperation.

A second example in which we will combine the linearity property with direct-link cooperation is the Gaussian multiple access relay channel [12], [40], [41]. In this model, two sources wish to communicate independent messages to a common destination with the help of a single relay. As in the Gaussian two-way relay channel, the relay may choose to decode the sum of the codewords using lattice codes, rather than the individual codewords (as in random coding-based DF schemes), which it would forward to the destination. The destination would combine this sum with direct-link information (cooperation). As in the two-way relay channel, decoding the sum at the relay eliminates the multiple access sum-rate constraint.

C. Contributions and Outline

Our contributions center around demonstrating that lattices may achieve the same rates as currently known Gaussian i.i.d. random coding-based achievability schemes for relay networks. While we do not prove this sweeping statement in general, we make progress toward this goal along the following lines.
1) Preliminaries and lattice list decoder: In Section II, we briefly outline lattice coding preliminaries and notation before outlining key technical lemmas that will be needed, including the central contribution of Section II—the proposed lattice list decoding technique in Theorem 3.
2) DF, single source: This lattice list decoding technique is used to show that nested lattice codes may achieve the DF rate for the Gaussian relay channel achieved by i.i.d. random Gaussian codes [10] in Section III, Theorem 7. We furthermore extend this result to the general single source, multiple relay Gaussian channel in Theorem 8.
3) DF, multiple sources, including two-way relay and multiple access relay channels: In Section IV, relays DF combinations of messages as in the compute-and-forward framework, which is combined with direct-link side-information at the destination. In particular, we present lattice-based achievable rate regions for the Gaussian two-way relay channel with direct links in Theorem 9, and for the Gaussian multiple access relay channel in Theorem 10.
4) CF, single source: In Section V, we revisit our goal of showing that lattice codes may mimic the performance of i.i.d. Gaussian codes in the relay channel by demonstrating a lattice code-based CF scheme which achieves the same rate as the CF scheme in [10] evaluated for i.i.d. Gaussian codebooks. The proposed lattice CF scheme is based on a variation of the lattice-based Wyner–Ziv scheme of [19] and [20], as outlined in Theorem 12. We note that lattices have been shown in [7] to achieve the quantize-map-and-forward rates for general relay channels using a quantize-and-map scheme (similar to the CF scheme) which simply quantizes the received signal at the relay and reencodes it without any form of binning/hashing; our contribution is to show an alternative lattice-coding-based achievability scheme which employs computationally more efficient lattice decoding.

Remark 1: We note that the motivation behind seeking lattice-based schemes for relay networks is to allow for more flexible relaying strategies whereby nodes need not be constrained to decoding individual messages as in i.i.d. DF-based coding schemes (i.e., relays can decode linear combinations of messages). This, rather than using the structure of nested lattice codes to reduce decoding complexity, is our focus. Decoding complexity (relative to i.i.d. codes) is reduced at the relay(s) but not significantly at the destination. In this respect, our DF-based schemes differ from the more computationally efficient lattice-based DF scheme of [8]. Our lattice-based CF scheme does, however, utilize the structure of the lattice codes to offer a decoding complexity reduction.

II. PRELIMINARIES, NOTATION, AND THE LATTICE LIST DECODER

We introduce our notation for lattice codes, nested lattice codes, and nested lattice chains, and present several existing lemmas. We next present the new lattice list decoder (see Theorem 3), in which the decoder, instead of outputting a single estimated codeword, outputs a list which contains the correct one with high probability. The theorem bounds the number of points in the list. For completeness, a unique-decoding equivalent of this theorem is provided in Lemma 6, which states that one may also use lattice codes to decode a unique point (rather than a list) with high probability, in mixed noise consisting of the sum of Gaussian noise(s) and independent noise(s) uniformly distributed over the Voronoi regions of Rogers-good lattices.

A. Lattice Codes

Our notation for (nested) lattice codes for transmission over AWGN channels follows that of [6], [20]; comprehensive treatments may be found in [20], [23], and [42] and in particular [25]. An $n$-dimensional lattice $\Lambda$ is a discrete subgroup of Euclidean space $\mathbb{R}^n$ (with the Euclidean norm) under vector addition and may be expressed as the set of all integral combinations of basis vectors $\mathbf{g}_1, \ldots, \mathbf{g}_n \in \mathbb{R}^n$:

  $\Lambda = \{\boldsymbol{\lambda} = G\,\mathbf{i} : \mathbf{i} \in \mathbb{Z}^n\}$

for $\mathbb{Z}$ the set of integers and $G = [\mathbf{g}_1\,\cdots\,\mathbf{g}_n]$ the generator matrix corresponding to the lattice $\Lambda$. We use bold $\mathbf{x}$ to denote column vectors and $\mathbf{x}^T$ to denote the transpose of the vector $\mathbf{x}$. All vectors are generally in $\mathbb{R}^n$ unless otherwise stated, and all logarithms are base 2. Let $\mathbf{0}$ denote the all-zeros vector of length $n$, $I_n$ denote the $n \times n$ identity matrix, and $\mathcal{N}(\mu, \sigma^2)$ denote a Gaussian random variable (or vector) of mean $\mu$ and variance $\sigma^2$. Define $C(x) := \frac{1}{2}\log(1 + x)$. Further define:
1) The nearest neighbor lattice quantizer of $\Lambda$ as $Q_{\Lambda}(\mathbf{x}) = \arg\min_{\boldsymbol{\lambda} \in \Lambda}\|\mathbf{x} - \boldsymbol{\lambda}\|$.
2) The $\bmod\ \Lambda$ operation as $\mathbf{x} \bmod \Lambda = \mathbf{x} - Q_{\Lambda}(\mathbf{x})$.
3) The fundamental Voronoi region $\mathcal{V}$ of $\Lambda$ as the points closer to the origin than to any other lattice point, $\mathcal{V} = \{\mathbf{x} : Q_{\Lambda}(\mathbf{x}) = \mathbf{0}\}$, which is of volume $V := \mathrm{Vol}(\mathcal{V})$ (also sometimes denoted by $V(\Lambda)$ or $\mathcal{V}(\Lambda)$ for lattice $\Lambda$).
4) The second moment per dimension of a uniform distribution over $\mathcal{V}$ as $\sigma^2(\Lambda) = \frac{1}{nV}\int_{\mathcal{V}}\|\mathbf{x}\|^2\,d\mathbf{x}$.
5) The normalized second moment of a lattice of dimension $n$ as $G(\Lambda) = \sigma^2(\Lambda)/V^{2/n}$.
6) A sequence of $n$-dimensional lattices $\Lambda^{(n)}$ is said to be Poltyrev good [6], [23] (in terms of channel coding over the AWGN channel) if, for $\mathbf{Z} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I_n)$ an $n$-dimensional Gaussian vector, we have

  $\Pr\{\mathbf{Z} \notin \mathcal{V}(\Lambda^{(n)})\} \le e^{-n(E_p(\mu) - o_n(1))}$

which upper bounds the error probability of nearest lattice point decoding when using lattice points as codewords in the AWGN channel. Here $E_p(\mu)$ is the Poltyrev exponent [23], [43], which is given as

  $E_p(\mu) = \begin{cases} \frac{1}{2}[(\mu - 1) - \ln\mu], & 1 < \mu \le 2 \\ \frac{1}{2}\ln\frac{e\mu}{4}, & 2 \le \mu \le 4 \\ \frac{\mu}{8}, & \mu \ge 4 \end{cases}$

and $\mu$ is the volume-to-noise ratio (VNR) defined as [24] $\mu = V^{2/n}/(2\pi e\sigma^2)$. Since $E_p(\mu) > 0$ for $\mu > 1$, a necessary condition for the reliable decoding of a single point is $\mu > 1$, thereby relating the size of the fundamental Voronoi region (and ultimately how many points one can transmit reliably) to the noise power, aligning well with our intuition about Gaussian channels.
7) A sequence of $n$-dimensional lattices $\Lambda^{(n)}$ is said to be Rogers good [44] if $\lim_{n\to\infty} r_{\mathrm{cov}}/r_{\mathrm{eff}} = 1$, where the covering radius $r_{\mathrm{cov}}$ is the radius of the smallest ball which contains the fundamental Voronoi region of $\Lambda^{(n)}$, and the effective radius $r_{\mathrm{eff}}$ is the radius of a ball of the same volume as the fundamental Voronoi region of $\Lambda^{(n)}$.
8) A sequence of $n$-dimensional lattices $\Lambda^{(n)}$ is said to be good for mean-squared error quantization if $\lim_{n\to\infty} G(\Lambda^{(n)}) = \frac{1}{2\pi e}$. It may be shown that if a sequence of lattices is Rogers good, it is also good for mean-squared error quantization [45]. Furthermore, for a Rogers-good lattice $\Lambda$, defining either $\sigma^2(\Lambda)$ or $V(\Lambda)$ also defines the other, as in [6, Appendix A]; hence for a Rogers-good lattice we may define either its second moment per dimension or its volume. This will be used in generating nested lattice chains.
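To make the role of the VNR concrete, here is a minimal sketch (our own illustration, not from the paper; the piecewise form of the exponent follows [23], [43]) that evaluates $E_p(\mu)$ and confirms it is positive exactly when $\mu > 1$:

    import math

    def poltyrev_exponent(mu: float) -> float:
        """Poltyrev exponent E_p(mu) as a function of the VNR mu (piecewise form of [23], [43])."""
        if mu <= 1.0:
            return 0.0  # no positive exponent: reliable decoding is impossible
        if mu <= 2.0:
            return 0.5 * ((mu - 1.0) - math.log(mu))
        if mu <= 4.0:
            return 0.5 * math.log(math.e * mu / 4.0)
        return mu / 8.0

    # Pr{Z not in V} <= exp(-n (E_p(mu) - o(1))) decays only when E_p(mu) > 0:
    for mu in [0.5, 1.0, 1.5, 2.0, 4.0, 8.0]:
        print(f"mu = {mu:4.1f}: E_p(mu) = {poltyrev_exponent(mu):.4f}")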

Finally, we include a statement of the useful "Crypto lemma" for completeness.
Lemma 1 (Crypto lemma [23], [46]): For any random variable $\mathbf{X}$ distributed over the fundamental region $\mathcal{V}$ and statistically independent of $\mathbf{U}$, which is uniformly distributed over $\mathcal{V}$, $\mathbf{Y} = (\mathbf{X} + \mathbf{U}) \bmod \Lambda$ is independent of $\mathbf{X}$ and uniformly distributed over $\mathcal{V}$.
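As an illustration of the quantizer, the mod-$\Lambda$ operation, and the dithering in the Crypto lemma, here is a small numerical sketch (ours, using the scaled integer lattice $\Lambda = q\mathbb{Z}^n$ for simplicity; actual constructions use high-dimensional lattices):

    import numpy as np

    q = 4.0  # coarse lattice Lambda = q * Z^n
    rng = np.random.default_rng(0)

    def quantize(x):
        """Nearest-neighbor lattice quantizer Q_Lambda(x) for Lambda = q*Z^n."""
        return q * np.round(x / q)

    def mod_lattice(x):
        """x mod Lambda = x - Q_Lambda(x); lands in the Voronoi region [-q/2, q/2)^n."""
        return x - quantize(x)

    x = np.array([1.3, -0.7])               # a point of the Voronoi region
    u = rng.uniform(-q / 2, q / 2, size=2)  # dither, uniform over the Voronoi region
    y = mod_lattice(x + u)                  # Crypto lemma: y is uniform over V, independent of x
    print(quantize(np.array([5.1, -2.2])), mod_lattice(np.array([5.1, -2.2])), y)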

B. Nested Lattice Codes

Consider two lattices $\Lambda_c$ and $\Lambda_f$ such that $\Lambda_c \subseteq \Lambda_f$, with fundamental regions $\mathcal{V}_c, \mathcal{V}_f$ of volumes $V_c, V_f$, respectively. Here $\Lambda_c$ is termed the coarse lattice, which is a sublattice of $\Lambda_f$, the fine lattice, and hence $V_c \ge V_f$. When transmitting over the AWGN channel, one may use the set $\mathcal{C} = \{\Lambda_f \cap \mathcal{V}_c\}$ as the codebook.


Fig. 1. Nested lattice chain of length 3, $\Lambda \subseteq \Lambda_1 \subseteq \Lambda_2$, with fundamental regions of volumes $V \ge V_1 \ge V_2$.

The coding rate of this nested lattice pair $(\Lambda_c, \Lambda_f)$ is defined as

  $R = \frac{1}{n}\log|\mathcal{C}| = \frac{1}{n}\log\frac{V_c}{V_f}$

where $(V_c/V_f)^{1/n}$ is the nesting ratio of the nested lattice pair. It was shown that there exist nested lattice pairs which achieve the capacity of the AWGN channel [23].
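A one-dimensional toy example (our own, with $n = 1$, $\Lambda_f = \mathbb{Z}$, and $\Lambda_c = 8\mathbb{Z}$ as stand-ins) makes the codebook $\{\Lambda_f \cap \mathcal{V}_c\}$ and the coding rate concrete:

    import math

    n = 1        # dimension (1-D toy; capacity-achieving constructions need large n)
    Vf = 1.0     # Voronoi volume of the fine lattice Z
    Vc = 8.0     # Voronoi volume of the coarse lattice 8Z

    # Codebook: fine-lattice points inside the coarse Voronoi region [-Vc/2, Vc/2)
    codebook = [k * Vf for k in range(int(-Vc // 2), int(Vc // 2))]
    R = (1 / n) * math.log2(Vc / Vf)      # coding rate (1/n) log(Vc/Vf)
    print(len(codebook), 2 ** (n * R))    # |C| = Vc/Vf = 2^{nR} -> 8 codewords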

C. Nested Lattice Chains

In the following, we will use an extension of nested lattice codes termed nested lattice chains, as in [5] and [6] and shown in Fig. 1 (a chain of length 3). We first restate a slightly modified version of [6, Th. 2] on the existence of good nested lattice chains, of use in our achievability proofs.

Theorem 2 (Existence of "good" nested lattice chains, adapted from [6, Th. 2]): For any $\epsilon > 0$, any set of desired second moments, and $n$ sufficiently large, there exists a sequence of $n$-dimensional nested lattice chains $\Lambda \subseteq \Lambda_1 \subseteq \cdots \subseteq \Lambda_K$ satisfying:
a) all but the finest lattice in the chain are simultaneously Rogers-good and Poltyrev-good, while the finest lattice is Poltyrev-good;
b) the second moment per dimension of each Rogers-good lattice in the chain may be made arbitrarily close to any desired value for $n$ sufficiently large;
c) the coding rate associated with each nested lattice pair in the chain may likewise be made arbitrarily close to any desired nonnegative value.
Proof: From [6, Th. 2], there exists a nested lattice chain which satisfies properties a) and b). Property c) then follows by noting that fixing the second moments (equivalently, the volumes of the fundamental Voronoi regions) of the Rogers-good lattices and adjusting the volume of the finest, Poltyrev-good lattice fixes the coding rates of the nested pairs, and that the nesting ratios multiply along the chain.

D. Lattice List Decoder

List decoding here refers to a decoding procedure in which, instead of outputting a single codeword corresponding to a single message, the decoder outputs a list of possible codewords which includes the correct (transmitted) one with high probability. Such a decoding scheme is useful in cooperative scenarios when a message is transmitted above the capacity of a given link (and hence the decoder would not be able to correctly distinguish the true transmitted codeword from that link alone) and is combined with additional information at the receiver to decode a single message point from within the list. We present our key theorem next, which bounds the list size for a lattice list decoder that produces a list containing the correct message with high probability.

Theorem 3 (Lattice List Decoding in Mixed Noise): Consider the channel $\mathbf{Y} = \mathbf{X} + \mathbf{Z}$, subject to input power constraint $\frac{1}{n}E[\|\mathbf{X}\|^2] \le P$, where $\mathbf{Z}$ is noise which is a mixture of Gaussian noise $\mathbf{Z}_G \sim \mathcal{N}(\mathbf{0}, \sigma_G^2 I_n)$ and independent noises $\mathbf{Z}_1, \ldots, \mathbf{Z}_K$ which are uniformly distributed over fundamental Voronoi regions of Rogers-good lattices with second moments $\sigma_1^2, \ldots, \sigma_K^2$. Thus, $\mathbf{Z}$ is of equivalent total variance $N := \sigma_G^2 + \sum_{k=1}^{K}\sigma_k^2$. For any $\epsilon > 0$, rate $R > C := \frac{1}{2}\log(1 + P/N)$, and $n$ large enough, there exists a chain of nested lattices such that the lattice list decoder can produce a list of size $2^{n(R - C + \epsilon)}$, which does not contain the correct codeword with probability smaller than $\epsilon$.

Proof:
Encoding: We consider a good nested lattice chain $\Lambda \subseteq \Lambda_L \subseteq \Lambda_f$ as in Fig. 1 and Theorem 2, in which $\Lambda$ and $\Lambda_L$ are both Rogers-good and Poltyrev-good while $\Lambda_f$ is Poltyrev-good. We define the coding rate $R = \frac{1}{n}\log\frac{V(\Lambda)}{V(\Lambda_f)}$ and the nesting rate $R_L = \frac{1}{n}\log\frac{V(\Lambda)}{V(\Lambda_L)}$. Each message $w \in \{1, \ldots, 2^{nR}\}$ is one-to-one mapped to the lattice point $\mathbf{t}(w) \in \{\Lambda_f \cap \mathcal{V}(\Lambda)\}$, and the transmitter sends $\mathbf{X} = (\mathbf{t} + \mathbf{U}) \bmod \Lambda$, where $\mathbf{U}$ is an $n$-dimensional dither signal (known to the encoder and decoder) uniformly distributed over $\mathcal{V}(\Lambda)$, with $\sigma^2(\Lambda) = P$.

Decoding: Upon receiving $\mathbf{Y}$, the receiver computes

  $\mathbf{Y}' = (\alpha\mathbf{Y} - \mathbf{U}) \bmod \Lambda$   (1)

for $\alpha \in (0, 1]$. We choose $\alpha = \frac{P}{P + N}$ to be the MMSE coefficient and note that $\mathbf{Y}' = (\mathbf{t} + \mathbf{Z}_{eq}) \bmod \Lambda$, where the equivalent noise $\mathbf{Z}_{eq} = \alpha\mathbf{Z} - (1 - \alpha)\mathbf{X}$ is independent of $\mathbf{t}$ (by the Crypto lemma). The receiver decodes the list of messages

  $\mathcal{L}^{w}_{SD}(\mathbf{Y}) = \{w : \mathbf{t}(w) \in S_{\Lambda_L}(\mathbf{Y}')\}$   (2)

where

  $S_{\Lambda_L}(\mathbf{Y}') := \Lambda_f \cap (\mathbf{Y}' + \mathcal{V}(\Lambda_L))$

is the set of lattice points inside $\mathcal{V}(\Lambda_L)$ centered at the point $\mathbf{Y}'$, as shown in Fig. 2.
Remark 2: The notation $\mathcal{L}^{w}_{SD}(\mathbf{Y})$ used for the list of messages should be understood as follows: the subscript is meant to denote the transmitter $S$ and the receiver $D$; the dependence on $\mathbf{Y}$ (rather than $\mathbf{Y}'$) is included, though in all cases we will make the analogous transformation from $\mathbf{Y}$ to $\mathbf{Y}'$ as in (1) (but for brevity we do not include this in future schemes); and the superscript $w$ is used to recall which messages are in the list, useful in multisource and block Markov schemes.
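As a sanity check on the choice of $\alpha$ (our own short worked computation, using only the definitions above), note that per dimension

  \[
    \tfrac{1}{n}\,E\|\mathbf{Z}_{eq}\|^{2}
      = \tfrac{1}{n}\,E\|\alpha\mathbf{Z} - (1-\alpha)\mathbf{X}\|^{2}
      = \alpha^{2}N + (1-\alpha)^{2}P,
  \]

which is minimized at the MMSE coefficient $\alpha^{*} = P/(P+N)$, giving

  \[
    (\alpha^{*})^{2}N + (1-\alpha^{*})^{2}P = \frac{PN}{P+N} = \alpha^{*}N < N,
  \]

so the effective noise faced by the list decoder has variance $\alpha N$ rather than $N$.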


Fig. 2. Two equivalent lists, in this example consisting of the four encircledpoints.

Probability of error for list decoding: Pick $\epsilon > 0$. In decoding a list, we require that the correct, transmitted codeword $\mathbf{t}$ lie in the list with high probability as $n \to \infty$, i.e., the probability of error $P_e^{(n)} = \Pr\{w \notin \mathcal{L}^{w}_{SD}(\mathbf{Y})\}$ (for $n$ the blocklength or dimension of the lattices) should be made less than $\epsilon$ as $n \to \infty$. This is easy to do with large list sizes; we bound the list size next. The following lemma allows us to more easily bound the probability of list decoding error.
Lemma 4 (Equivalent Decoding List): For the nested lattices $\Lambda_L \subseteq \Lambda_f$ and given $\mathbf{Y}'$, define

  $S_{\Lambda_L}(\mathbf{Y}') := \Lambda_f \cap (\mathbf{Y}' + \mathcal{V}(\Lambda_L))$   (3)

and

  $\tilde{S}_{\Lambda_L}(\mathbf{Y}') := \{\boldsymbol{\lambda} \in \Lambda_f : \mathbf{Y}' \in \boldsymbol{\lambda} + \mathcal{V}(\Lambda_L)\}$.

Then the sets $S_{\Lambda_L}(\mathbf{Y}')$ and $\tilde{S}_{\Lambda_L}(\mathbf{Y}')$ are equal.

Proof: $\tilde{S}_{\Lambda_L}(\mathbf{Y}')$ is the set of points $\boldsymbol{\lambda} \in \Lambda_f$ satisfying $\mathbf{Y}' - \boldsymbol{\lambda} \in \mathcal{V}(\Lambda_L)$. Also note that the fundamental Voronoi region of any lattice is centro-symmetric ($\forall\,\mathbf{x} \in \mathcal{V}$, we have that $-\mathbf{x} \in \mathcal{V}$) by definition of a lattice and fundamental Voronoi region (alternatively, see [47]). Hence, for any two points $\mathbf{a}$ and $\mathbf{b}$, and a centro-symmetric region $\mathcal{V}$, $\mathbf{a} - \mathbf{b} \in \mathcal{V} \Leftrightarrow \mathbf{b} - \mathbf{a} \in \mathcal{V}$. Applying this to $\mathbf{Y}'$ and $\boldsymbol{\lambda}$ yields the lemma.

We continue with the proof of Theorem 3. We first use Lemma 4 to see that the lists $S_{\Lambda_L}(\mathbf{Y}')$ and $\tilde{S}_{\Lambda_L}(\mathbf{Y}')$ are equal. Next, notice that the probability of error may be bounded as follows:

  $P_e^{(n)} = \Pr\{\mathbf{t} \notin S_{\Lambda_L}(\mathbf{Y}')\} \le \Pr\{\mathbf{Z}_{eq} \notin \mathcal{V}(\Lambda_L)\}$

where $\mathbf{Z}_{eq} = \alpha\mathbf{Z} - (1-\alpha)\mathbf{X}$ is the equivalent noise of (1). We now use Lemma 5 to show that the pdf of $\mathbf{Z}_{eq}$ can be upper bounded by the pdf of a Gaussian random vector of not much larger variance, which in turn is used to bound the above probability of error.

Lemma 5: Let $\mathbf{Z}_G \sim \mathcal{N}(\mathbf{0}, \sigma_G^2 I_n)$, let $\mathbf{Z}_1$ be uniform over the fundamental Voronoi region of the Rogers-good lattice $\Lambda_1$, of effective and covering radii $r_{\mathrm{eff},1}$ and $r_{\mathrm{cov},1}$ and second moment $\sigma_1^2$, and let $\mathbf{Z}_2$ be uniform over the fundamental Voronoi region of the Rogers-good lattice $\Lambda_2$, of effective and covering radii $r_{\mathrm{eff},2}$ and $r_{\mathrm{cov},2}$ and second moment $\sigma_2^2$, all mutually independent. Let $\mathbf{Z} = \mathbf{Z}_G + \mathbf{Z}_1 + \mathbf{Z}_2$. Then there exists an i.i.d. Gaussian vector $\mathbf{Z}^* \sim \mathcal{N}(\mathbf{0}, \sigma_*^2 I_n)$ with variance $\sigma_*^2$ satisfying $\sigma_*^2 \to \sigma_G^2 + \sigma_1^2 + \sigma_2^2$ as $n \to \infty$, such that the density of $\mathbf{Z}$ is upper bounded as

  $f_{\mathbf{Z}}(\mathbf{z}) \le e^{n\epsilon_n}\,f_{\mathbf{Z}^*}(\mathbf{z})$   (4)

where $\epsilon_n \to 0$ as $n \to \infty$; the exact expression for $\epsilon_n$ involves the ratios $r_{\mathrm{cov},i}/r_{\mathrm{eff},i}$ of the Rogers-good lattices and the normalized second moment of an $n$-dimensional ball, and parallels [3, Appendix A].

Proof: The proof follows [3, Appendix A] and [23, Lemmas 6 and 11] almost exactly, where the central difference with [3, Appendix A] is that we need to bound the pdf of a sum of random variables uniformly distributed over different Rogers-good lattices rather than identical ones. This leads to the summation in the exponent of (4), but note that we will still have $\epsilon_n \to 0$ as $n \to \infty$.
Continuing the proof of Theorem 3, according to Lemma 5:

  $P_e^{(n)} \le \Pr\{\mathbf{Z}_{eq} \notin \mathcal{V}(\Lambda_L)\} \le e^{n\epsilon_n}\Pr\{\mathbf{Z}^* \notin \mathcal{V}(\Lambda_L)\}$.   (5)

To bound the right-hand side, we first need to show that the VNR of $\Lambda_L$ relative to $\mathbf{Z}^*$,

  $\mu_L = \frac{V(\Lambda_L)^{2/n}}{2\pi e\,\sigma_*^2}$,

is greater than one; this is established in steps (6)–(10), where the key inequality follows from Lemma 5, the fact that $\Lambda$ and the lattices generating the uniform noise components are all Rogers-good (so that $\sigma_*^2$ approaches the true equivalent noise variance $\alpha N$), the definition of the VNR, and the choice of $V(\Lambda_L)$ in (14) below, with $\Lambda_L$ itself Rogers-good. Combining (5), the VNR bound (10), and the fact that $\Lambda_L$ is Poltyrev-good, by definition

  $P_e^{(n)} \le e^{n\epsilon_n}\,e^{-n(E_p(\mu_L) - o_n(1))}$   (11)–(13)

where the $o_n(1)$ terms all tend to 0 as $n \to \infty$, since the relevant lattices are Rogers-good. To ensure $P_e^{(n)} \to 0$ as $n \to \infty$, we need $\mu_L > 1$, i.e., $V(\Lambda_L)^{2/n} > 2\pi e\,\sigma_*^2$, for $n$ sufficiently large. By choosing an appropriate $\Lambda_L$ according to Theorem 2, we may set

  $V(\Lambda_L)^{2/n} \to 2\pi e\,(\alpha N)(1 + \epsilon')$   (14)

for any $\epsilon' > 0$, where $\alpha N$ is the per-dimension variance of the equivalent noise $\mathbf{Z}_{eq}$. Combining these, we obtain $P_e^{(n)} \to 0$.
The cardinality of the decoded list $S_{\Lambda_L}(\mathbf{Y}')$, in which the true codeword lies with high probability as $n \to \infty$, is thus

  $|S_{\Lambda_L}(\mathbf{Y}')| = \frac{V(\Lambda_L)}{V(\Lambda_f)} = \frac{V(\Lambda)}{V(\Lambda_f)}\cdot\frac{V(\Lambda_L)}{V(\Lambda)} = 2^{n(R - R_L)}$

since $R = \frac{1}{n}\log(V(\Lambda)/V(\Lambda_f))$ and $R_L = \frac{1}{n}\log(V(\Lambda)/V(\Lambda_L))$. As $\epsilon'$ may be set to be arbitrarily small, and since $V(\Lambda)^{2/n} \to 2\pi e P$ for the Rogers-good $\Lambda$ with $\sigma^2(\Lambda) = P$, with the choice (14) the nesting rate satisfies $R_L \to \frac{1}{2}\log\frac{P}{\alpha N} = \frac{1}{2}\log\left(1 + \frac{P}{N}\right) = C$, and the list size $2^{n(R - R_L)}$ may thus be made arbitrarily close to $2^{n(R - C)}$.

Remark 3: Note that in our theorem statement, we have assumed $R > C$; when $R \le C$, the decoder can decode a unique codeword with high probability, as stated in Lemma 6 next.

Lemma 6 (Lattice unique decoding in mixed noise): Consider the channel $\mathbf{Y} = \mathbf{X} + \mathbf{Z}$, subject to input power constraint $\frac{1}{n}E[\|\mathbf{X}\|^2] \le P$, where $\mathbf{Z}$ is noise which is a mixture of Gaussian noise and independent noises which are uniformly distributed over fundamental Voronoi regions of Rogers-good lattices, of equivalent total variance $N$. For any $\epsilon > 0$, $R < \frac{1}{2}\log(1 + P/N)$, and $n$ large enough, there exist lattice codebooks such that the decoder can decode a unique codeword with probability of error smaller than $\epsilon$.

Proof: This lemma can be derived as a special case of compute-and-forward [3, Th. 1]; in particular, it is found in [3, Example 2], where the decoder is interested in one of the messages and treats all other messages as noise. We may view $\mathbf{Z}$ in this lemma as the signals from the other (lattice-codeword based) transmitters in [3, Example 2].
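To illustrate the mechanics of Theorem 3 in miniature, the following sketch (ours; the 1-D self-similar chain $\Lambda = 16\mathbb{Z} \subseteq \Lambda_L = 4\mathbb{Z} \subseteq \Lambda_f = \mathbb{Z}$ stands in for the high-dimensional chains of Theorem 2) scales by the MMSE coefficient, reduces mod $\Lambda$, and outputs all fine-lattice points within the list region around the result:

    import numpy as np

    Lc, LL, Lf = 16.0, 4.0, 1.0     # coarse / list / fine lattice spacings (toy chain)
    P, N = Lc ** 2 / 12.0, 2.0      # input power (uniform over Voronoi), noise variance
    alpha = P / (P + N)             # MMSE coefficient
    rng = np.random.default_rng(1)
    mod = lambda x, q: x - q * np.round(x / q)

    t = 5.0                                   # transmitted fine-lattice codeword
    u = rng.uniform(-Lc / 2, Lc / 2)          # dither
    y = mod(t + u, Lc) + rng.normal(0.0, np.sqrt(N))
    y_prime = mod(alpha * y - u, Lc)          # as in (1)

    # List: fine-lattice points whose Lambda_L region contains y' (Lemma 4's equivalent form)
    candidates = np.arange(-Lc / 2, Lc / 2, Lf)
    decoded = [c for c in candidates if abs(mod(y_prime - c, Lc)) <= LL / 2]
    print(decoded)   # roughly LL/Lf = 4 points, containing t = 5.0 with high probability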

III. SINGLE SOURCE DF

We first show that nested lattice codes may be used to achieve the DF rate of [10, Th. 5] for the Gaussian relay channel, using nested lattice codes at the source and relay and a lattice list decoder at the destination. We then extend this result to show that the generalized DF rate for a Gaussian relay network with a single source, a single destination, and multiple DF relays may also be achieved using an extension of the single-relay lattice-based achievability scheme.

A. DF for the AWGN Single Relay Channel

Consider a relay channel in which the source node S, with channel input $X$, transmits a message $w \in \{1, \ldots, 2^{nR}\}$ to the destination node D, which has access to the channel output $Y_D$ and is aided by a relay node R with channel input and output $X_R$ and $Y_R$. Input and output random variables lie in $\mathbb{R}$. At each channel use, the channel inputs and outputs are related as $Y_R = X + Z_R$ and $Y_D = X + X_R + Z_D$, where $Z_R, Z_D$ are independent Gaussian random variables of zero mean and variance $N_R$ and $N_D$, respectively. Let $\mathbf{X}$ denote a sequence of $n$ channel inputs (a row vector), and similarly let $\mathbf{X}_R, \mathbf{Y}_R, \mathbf{Y}_D$ all denote the length-$n$ sequences of channel inputs and outputs. Then the channel may be described by

  $\mathbf{Y}_R = \mathbf{X} + \mathbf{Z}_R, \qquad \mathbf{Y}_D = \mathbf{X} + \mathbf{X}_R + \mathbf{Z}_D$   (15)

where $\mathbf{Z}_R \sim \mathcal{N}(\mathbf{0}, N_R I_n)$ and $\mathbf{Z}_D \sim \mathcal{N}(\mathbf{0}, N_D I_n)$, and the inputs are subject to the power constraints $\frac{1}{n}E[\|\mathbf{X}\|^2] \le P$ and $\frac{1}{n}E[\|\mathbf{X}_R\|^2] \le P_R$.
An $(n, 2^{nR})$ code for the relay channel consists of the set of messages $w$ uniformly distributed over $\{1, \ldots, 2^{nR}\}$, an encoding function $\mathbf{X}(w)$ satisfying the power constraint, a set of relay functions $\{f_i\}_{i=1}^{n}$ such that the relay channel input at time $i$ is a function of the previously received relay channel outputs from channel uses 1 to $i - 1$, $X_{R,i} = f_i(Y_{R,1}, \ldots, Y_{R,i-1})$, and finally a decoding function $g: \mathbb{R}^n \to \{1, \ldots, 2^{nR}\}$ which yields the message estimate $\hat{w} = g(\mathbf{Y}_D)$. We define the average probability of error of the code to be $P_e^{(n)} = \Pr\{\hat{w} \ne w\}$. The rate $R$ is then said to be achievable for a relay channel if, for any $\epsilon > 0$ and for sufficiently large $n$, there exists an $(n, 2^{nR})$ code such that $P_e^{(n)} < \epsilon$. The capacity of the relay channel is the supremum of the set of achievable rates.
We are first interested in showing that the DF rate achieved by the Gaussian random codebooks of [10, Th. 5] may be achieved using lattice codes. As outlined in [12], this DF rate may be achieved using irregular encoding/successive decoding as in [10], regular encoding/sliding-window decoding as first shown in [48], and regular encoding/backwards decoding as in [49]. We will mimic the regular encoding/sliding-window decoding scheme of [14], which includes: 1) random coding; 2) list decoding; 3) two joint typicality decoding steps; 4) coding for the cooperative multiple access channel; 5) superposition coding; and 6) block Markov encoding. We rederive the DF rate, following the achievability scheme of [14], but with lattice codes replacing the random Gaussian coding techniques. Of particular importance is the usage of two lattice list decoders to replace the two joint typicality decoding steps in the random coding achievability scheme.


Fig. 3. Two Gaussian relay channels under consideration in Sections III-A and IV-A. For the AWGN relay channel, we have assumed a particular relay order(2,3) for our achievability scheme and shown the equivalent channel model used in deriving the achievable rate rather than the general channel model.

Fig. 4. Lattice DF scheme for the AWGN relay channel.

Theorem 7 (Lattices achieve the DF rate achieved by random Gaussian codebooks for the relay channel): The following DF rate is achievable using nested lattice codes for the Gaussian relay channel described by (15):

  $R < \max_{0 \le \alpha \le 1}\ \min\left\{\frac{1}{2}\log\left(1 + \frac{\alpha P}{N_R}\right),\ \frac{1}{2}\log\left(1 + \frac{P + P_R + 2\sqrt{\bar{\alpha} P P_R}}{N_D}\right)\right\}$   (16)

where $\bar{\alpha} := 1 - \alpha$.

Proof:
Codebook construction: We consider two nested lattice chains of length three, $\Lambda_1 \subseteq \Lambda_{L1} \subseteq \Lambda_{f1}$ and $\Lambda_2 \subseteq \Lambda_{L2} \subseteq \Lambda_{f2}$, whose existence is guaranteed by Theorem 2 and whose parameters we still need to specify. The nested lattice pairs $(\Lambda_1, \Lambda_{f1})$ and $(\Lambda_2, \Lambda_{f2})$ are used to construct lattice codebooks of coding rate $R$, with $\sigma^2(\Lambda_1) = \alpha P$ and $\sigma^2(\Lambda_2) = \bar{\alpha} P$ for given $\alpha \in [0, 1]$. Since $\Lambda_1$ and $\Lambda_2$ will not be the finest lattices in their chains, they will be Rogers-good, and hence $\sigma^2(\Lambda_1) = \alpha P$ will define the volume of $\mathcal{V}(\Lambda_1)$, and $\sigma^2(\Lambda_2) = \bar{\alpha} P$ will define the volume of $\mathcal{V}(\Lambda_2)$. Since $(\Lambda_1, \Lambda_{f1})$ and $(\Lambda_2, \Lambda_{f2})$ are used to construct lattice codebooks of coding rate

  $R = \frac{1}{n}\log\frac{V(\Lambda_1)}{V(\Lambda_{f1})} = \frac{1}{n}\log\frac{V(\Lambda_2)}{V(\Lambda_{f2})}$

this will in turn define $V(\Lambda_{f1})$ in terms of $V(\Lambda_1)$ and the rate $R$; similarly for $V(\Lambda_{f2})$ in terms of $V(\Lambda_2)$ and the rate $R$. Since $\Lambda_{f1}$ and $\Lambda_{f2}$ are only Poltyrev-good, we may obtain the needed volumes by appropriate selection of the parameters in Theorem 2. Finally, the lattices $\Lambda_{L1}$ and $\Lambda_{L2}$ (whose second moments we may still specify arbitrarily, and which will be used for lattice list decoding at the destination node) will also be Rogers-good, and their volumes, or equivalently second moments, will be selected in the course of the proof.
Randomly map the messages $w \in \{1, \ldots, 2^{nR}\}$ to codewords $\mathbf{t}_1(w) \in \{\Lambda_{f1} \cap \mathcal{V}(\Lambda_1)\}$ and $\mathbf{t}_2(w) \in \{\Lambda_{f2} \cap \mathcal{V}(\Lambda_2)\}$. Let these two mappings be independent and known to all nodes.
We use block Markov coding and define $w_b$ as the new message index to be sent in block $b$ ($b = 1, \ldots, B$); define $w_0 = w_B = 1$. At the end of block $b - 1$, the receiver knows $(w_1, \ldots, w_{b-2})$ and the relay knows $(w_1, \ldots, w_{b-1})$. We let $\mathbf{Y}_R(b)$ and $\mathbf{Y}_D(b)$ denote the vectors of length $n$ of received signals at the relay and the destination, respectively, during the $b$th block, and $\mathbf{U}_1(b), \mathbf{U}_2(b)$ denote dithers during block $b$, known to all nodes, which are i.i.d., change from block to block, and are uniformly distributed over $\mathcal{V}(\Lambda_1)$ and $\mathcal{V}(\Lambda_2)$, respectively. The encoding and decoding steps are outlined in Fig. 4.
Encoding: During the $b$th block, the transmitter sends the superposition (sum) $\mathbf{X}(b) = \mathbf{X}_1(w_b) + \mathbf{X}_2(w_{b-1})$, and the relay sends $\mathbf{X}_R(w_{b-1}) = \sqrt{P_R/(\bar{\alpha}P)}\,\mathbf{X}_2(w_{b-1})$, where

  $\mathbf{X}_1(w_b) = (\mathbf{t}_1(w_b) + \mathbf{U}_1(b)) \bmod \Lambda_1, \qquad \mathbf{X}_2(w_{b-1}) = (\mathbf{t}_2(w_{b-1}) + \mathbf{U}_2(b)) \bmod \Lambda_2$.


By the Crypto lemma, $\mathbf{X}_1$ and $\mathbf{X}_2$ are uniformly distributed over $\mathcal{V}(\Lambda_1)$ and $\mathcal{V}(\Lambda_2)$, respectively, and independent of all else.

Decoding:
1) At the $b$th block, the relay knows $w_{b-1}$ and consequently $\mathbf{X}_2(w_{b-1})$, and so may decode the message $w_b$ from the received signal $\mathbf{Y}_R(b) - \mathbf{X}_2(w_{b-1}) = \mathbf{X}_1(w_b) + \mathbf{Z}_R(b)$ as long as $R < \frac{1}{2}\log\left(1 + \frac{\alpha P}{N_R}\right)$, since nested lattice codes may achieve the capacity of the point-to-point channel [23] (or Lemma 6).

2) The receiver first decodes a list of messages $w_{b-1}$, defined according to (2) as

  $\mathcal{L}^{w_{b-1}}_{SD}(\mathbf{Y}_D(b))$   (17)

of asymptotic size $2^{n(R - C_1)}$, from the signal

  $\mathbf{Y}_D(b) = \mathbf{X}_1(w_b) + \left(1 + \sqrt{P_R/(\bar{\alpha}P)}\right)\mathbf{X}_2(w_{b-1}) + \mathbf{Z}_D(b)$   (18)

using the lattice list decoding scheme of Theorem 3. Notice that Theorem 3 is applicable, as the "noise" in decoding a list of $w_{b-1}$ from $\mathbf{Y}_D(b)$ is composed of the sum of the Gaussian signal $\mathbf{Z}_D(b)$ and $\mathbf{X}_1(w_b)$, which is uniformly distributed over the fundamental Voronoi region of the Rogers-good lattice $\Lambda_1$ of second moment $\alpha P$. The equivalent noise variance in Theorem 3 is thus $\alpha P + N_D$, and the capacity of the channel is [23]

  $C_1 = \frac{1}{2}\log\left(1 + \frac{\bar{\alpha}P + P_R + 2\sqrt{\bar{\alpha}P P_R}}{\alpha P + N_D}\right)$.   (19)

We may thus obtain a list of size $2^{n(R - C_1)}$ as long as

  $R > C_1$.   (20)

One may directly apply Theorem 3; for additional details on this step, please see Appendix A.

3) A second list of messages $w_{b-1}$ was obtained at the end of block $b - 1$ from the direct link between the transmitter node S and the destination node D, denoted by $\mathcal{L}^{w_{b-1}}_{SD}(\mathbf{Y}_D(b-1))$, defined according to (2) and analogous to (17), using a lattice list decoder. We now describe the formation of the list $\mathcal{L}^{w_b}_{SD}(\mathbf{Y}_D(b))$ in block $b$, which will be used in block $b + 1$. Assuming that the receiver has decoded $w_{b-1}$ successfully, it subtracts its contribution from $\mathbf{Y}_D(b)$:

  $\mathbf{Y}_D(b) - \left(1 + \sqrt{P_R/(\bar{\alpha}P)}\right)\mathbf{X}_2(w_{b-1}) = \mathbf{X}_1(w_b) + \mathbf{Z}_D(b)$

and then decodes another list of possible messages $w_b$ of asymptotic size $2^{n(R - C_2)}$, with

  $C_2 = \frac{1}{2}\log\left(1 + \frac{\alpha P}{N_D}\right)$,

using Theorem 3. This is done using the nested lattice chain $\Lambda_1 \subseteq \Lambda_{L1} \subseteq \Lambda_{f1}$. Again, Theorem 3 is applicable, as we have a channel of capacity $C_2$ where the noise is purely Gaussian of second moment $N_D$. Here, choose the list decoding lattice $\Lambda_{L1}$ to have a fundamental Voronoi region of volume asymptotically approaching the value dictated by (14) for this channel (with $P$ replaced by $\alpha P$ and $N$ by $N_D$), so that the size of the decoded list approaches $2^{n(R - C_2)}$. Notice that this choice of $V(\Lambda_{L1})$, and hence $\sigma^2(\Lambda_{L1})$, is permissible by Theorem 2 (as $V(\Lambda_{f1}) \le V(\Lambda_{L1}) \le V(\Lambda_1)$). For the interesting case when $R$ approaches $C_2$ (and hence list decoding is needed/relevant), $V(\Lambda_{L1}) \le V(\Lambda_1)$ asymptotically in the sense of (14). Thus, $\Lambda_1 \subseteq \Lambda_{L1}$ as needed.

4) The receiver now decodes $w_{b-1}$ by intersecting the two independent lists $\mathcal{L}^{w_{b-1}}_{SD}(\mathbf{Y}_D(b-1))$ and $\mathcal{L}^{w_{b-1}}_{SD}(\mathbf{Y}_D(b))$, and declares a success if there is a unique $w_{b-1}$ in this intersection. An error is declared if there is no message in this intersection, or multiple messages in this intersection. We are guaranteed by Theorem 3 that the correct message will lie in each list, and hence also in their intersection, with high probability by appropriate choice of $\Lambda_{L1}$ and $\Lambda_{L2}$. To see that no more than one message will lie in the intersection, notice that the two lists are independent due to the random and independent mappings between the message and the two codeword sets. Thus, following the arguments surrounding [10, (27) and Lemma 3], or alternatively by independence of the lists and applying the Packing Lemma [50], with high probability there is no more than one correct message in this intersection if $(R - C_1) + (R - C_2) < R$, or

  $R < C_1 + C_2 = \frac{1}{2}\log\left(1 + \frac{P + P_R + 2\sqrt{\bar{\alpha} P P_R}}{N_D}\right)$.

Remark 4: While we have mimicked the regular encoding/sliding window decoding method to achieve the DF rate, lattice list decoding may equally be used in the irregular encoding and backwards decoding schemes. The intuition we want to reinforce is that one may obtain similar results to random-coding-based DF schemes using lattice codes by intersecting multiple independent lists to decode a unique message. Furthermore, as the lattice list decoder is a Euclidean lattice decoder, it does not increase the complexity at the decoder. We note that using lists is not necessary—other novel lattice-based schemes, such as that of [8], can be used instead of lattice list decoding to achieve the same DF rate region.

B. DF for the Multirelay Gaussian Relay Channel

We now show that nested lattice codes may also be used to achieve the DF rates of the single source, single destination multilevel relay channel [11], [12], [14]. Here, all definitions remain the same as in Section III-A, changing the channel model to account for an arbitrary number of full-duplex relays. For the two-relay scenario, we show the input/output relations used in deriving achievable rates in Fig. 3. In general, each node would receive a noisy combination of the signals of all other transmitting nodes, but for our achievability scheme we assume a relay order (e.g., 2 then 3), which results in the equivalent input/output equation


at node 2. This is equivalent due to the achievability scheme we will propose combined with the assumed relaying order, in which node 2 will be able to cancel out all signals transmitted by itself as well as by node 3 (more generally, a node may cancel out all relay transmissions "further" in the relay order than itself). The central idea remains the same—we cooperate via a series of lattice list decoders and replace multiple joint typicality checks with the intersection of multiple independent lists obtained via the lattice list decoder. For clarity, we focus on the two-relay case as in Fig. 3, but the results may be extended to the $K$-relay case in a straightforward manner. Let $\pi$ denote a permutation (or ordering) of the relays. In the two-relay case as shown in Fig. 3, we have two possible permutations: the first, the identity permutation, and the second, which swaps the order of the two relays.
The channel model is expressed as (a node's own signal is omitted, as it may subtract it off)

  $\mathbf{Y}_2 = \mathbf{X}_1 + \mathbf{Z}_2, \qquad \mathbf{Y}_3 = \mathbf{X}_1 + \mathbf{X}_2 + \mathbf{Z}_3, \qquad \mathbf{Y}_4 = \mathbf{X}_1 + \mathbf{X}_2 + \mathbf{X}_3 + \mathbf{Z}_4$

where node 1 is the source, nodes 2 and 3 are the relays (in the assumed order), node 4 is the destination, $\mathbf{Z}_i \sim \mathcal{N}(\mathbf{0}, N_i I_n)$, and the inputs are under power constraints $P_1$, $P_2$, and $P_3$.
Theorem 8 (Lattices achieve the DF rate achieved by Gaussian random codebooks for the multirelay channel): The rate in (21) at the bottom of the page is achievable using nested lattice codes for the Gaussian two-relay channel described above [11].
The proof of Theorem 8 may be found in Appendix B and follows along the same lines as that of Theorem 7.

IV. MULTISOURCE DF—COMBINING CF AND DF

We now illustrate how list decoding may be combined with the linearity of lattice codes in more general networks by considering two examples. In particular, we consider relay networks in which two messages are communicated, along relayed and direct links, as opposed to the single-message case previously considered. The relay channel may be viewed as strictly cooperative in the sense that all nodes aid in the transmission of the same message and the only impairment is noise; the presence of multiple messages leads to the notion of interference and the possibility of decoding combinations of messages.
We again focus on demonstrating the utility of lattices in DF-based achievability schemes. In the previous section, it was demonstrated that lattices may achieve the same rates as Gaussian random coding-based schemes. Here, the presence of multiple messages/sources gives lattices a potential rate benefit over random coding-based schemes, as encoders and decoders may exploit the linearity of the lattice codes to better decode a linear combination of messages. Often, such a linear combination is sufficient to extract the desired messages if combined with the appropriate side-information, and may enlarge the achievable rate region for certain channel conditions. In this section, we demonstrate two examples of combining compute-and-forward-based decoding of the sum of signals at relays with direct-link side-information in: 1) the two-way relay channel with direct links and 2) the multiple access relay channel. To the best of our knowledge, these are the first lattice-coding-based achievable rate regions for these channels.

A. Two-Way Gaussian Relay Channel With Direct Links

The two-way relay channel is the logical extension of the classical relay channel for one-way point-to-point communication aided by a relay to allow for two-way communication. While the capacity region is in general unknown, it is known for half-duplex channel models under the 2-phase multiple access channel and broadcast channel (MABC) protocol [51], to within 1/2 bit for the full-duplex Gaussian channel model with no direct links [4], [5], and to within 2 bits for the same model with direct links in certain cases [52].
Random coding techniques employing DF, CF, and AF relays have been the most common in deriving achievable rate regions for the two-way relay channel, but a handful of works [4], [5], [53], [54] have considered lattice-based schemes which, in a DF-like setting, effectively exploit the additive nature of the Gaussian noise channel in allowing the sum of the two transmitted lattice points to be decoded at the relay. The intuitive gains of decoding the sum of the messages rather than the individual messages stem from the absence of the classical multiple access sum constraints. This decoded sum is forwarded to the terminals, each of which utilizes its own-message side-information to subtract off its own message from the decoded sum. While random coding schemes have been used in deriving achievable rate regions in the presence of direct links, lattice codes—of interest in order to exploit the ability to decode the sum of messages at the relay—have so far not been used. We present such a lattice-based scheme next.

(21)


Fig. 5. AWGN two-way relay channel with direct links and the AWGN multiple access relay channel. We illustrate the lists of messages carried by the codewords at each node, list decoded according to Theorem 3 at the receiving nodes.

The two-way Gaussian relay channel with direct links consists of two terminal nodes with inputs $\mathbf{X}_1, \mathbf{X}_2$ with power constraints $P_1, P_2$ (without loss of generality, it is assumed that $P_1 \le P_2$) and outputs $\mathbf{Y}_1, \mathbf{Y}_2$, which wish to exchange messages $w_1 \in \{1, \ldots, 2^{nR_1}\}$ and $w_2 \in \{1, \ldots, 2^{nR_2}\}$ with the help of the relay with input $\mathbf{X}_R$ of power $P_R$ and output $\mathbf{Y}_R$. We assume, without loss of generality (WLOG), the channel

  $\mathbf{Y}_R = \mathbf{X}_1 + \mathbf{X}_2 + \mathbf{Z}_R, \quad \mathbf{Y}_1 = g_{21}\mathbf{X}_2 + g_{R1}\mathbf{X}_R + \mathbf{Z}_1, \quad \mathbf{Y}_2 = g_{12}\mathbf{X}_1 + g_{R2}\mathbf{X}_R + \mathbf{Z}_2$

subject to the input power constraints $P_1, P_2, P_R$ and real constants $g_{ij}$, where $\mathbf{Z}_R, \mathbf{Z}_1, \mathbf{Z}_2$ are independent Gaussian noises. The channel model is shown in Fig. 5, and all input and output alphabets are $\mathbb{R}$.
An $(n, 2^{nR_1}, 2^{nR_2})$ code for the two-way relay channel consists of the two sets of messages $w_1, w_2$ uniformly distributed over $\{1, \ldots, 2^{nR_1}\}$ and $\{1, \ldots, 2^{nR_2}\}$, two encoding functions $f_1, f_2$ (shortened to $\mathbf{X}_1(w_1), \mathbf{X}_2(w_2)$) satisfying the power constraints $P_1, P_2$, a set of relay functions $\{f_{R,i}\}_{i=1}^{n}$ such that the relay channel input at time $i$ is a function of the previously received relay channel outputs from channel uses 1 to $i - 1$, $X_{R,i} = f_{R,i}(Y_{R,1}, \ldots, Y_{R,i-1})$, and finally two decoding functions $g_1, g_2$ which yield the message estimates $\hat{w}_2 = g_1(\mathbf{Y}_1, w_1)$ and $\hat{w}_1 = g_2(\mathbf{Y}_2, w_2)$. We define the average probability of error of the code to be $P_e^{(n)} = \Pr\{\hat{w}_1 \ne w_1 \text{ or } \hat{w}_2 \ne w_2\}$. The rate pair $(R_1, R_2)$ is then said to be achievable for the two-way relay channel if, for any $\epsilon > 0$ and for sufficiently large $n$, there exists an $(n, 2^{nR_1}, 2^{nR_2})$ code such that $P_e^{(n)} < \epsilon$. The capacity region of the two-way relay channel is the supremum of the set of achievable rate pairs.
Theorem 9 (Lattices in Two-Way Relay Channels With Direct Links): The following rates are achievable for the two-way AWGN relay channel with direct links:

Proof: The achievability proof combines a lattice version of the regular encoding/sliding window decoding scheme (to take advantage of the direct link), decoding of the sum of the transmitted signals at the relay using nested coarse lattices to take care of the asymmetric powers, as in [5], a lattice binning technique equivalent to the random binning technique developed in [55], and lattice list decoding at the terminal nodes to combine direct and relayed information.

Codebook construction: We construct two nested lattice chains according to Theorem 2. The first consists of the coarse lattices $\Lambda_1, \Lambda_2$ (enforcing the power constraints), the fine lattices $\Lambda_{f1}, \Lambda_{f2}$ (carrying the messages), and the list decoding lattices $\Lambda_{L1}, \Lambda_{L2}$, all nested in an order such that:
1) $\sigma^2(\Lambda_1) = P_1$ and $\sigma^2(\Lambda_2) = P_2$; the coarsest lattice is $\Lambda_2$ (since $P_1 \le P_2$ WLOG) and the finest is $\Lambda_{f1}$ or $\Lambda_{f2}$.
2) $\Lambda_2 \subseteq \Lambda_1$; since $P_1 \le P_2$ WLOG, we also have $V(\Lambda_1) \le V(\Lambda_2)$.
3) The coding rate of the pair $(\Lambda_1, \Lambda_{f1})$ is $R_1 = \frac{1}{n}\log\frac{V(\Lambda_1)}{V(\Lambda_{f1})}$, and that of $(\Lambda_2, \Lambda_{f2})$ is $R_2 = \frac{1}{n}\log\frac{V(\Lambda_2)}{V(\Lambda_{f2})}$. Associate each message $w_1 \in \{1, \ldots, 2^{nR_1}\}$ with $\mathbf{t}_1(w_1) \in \{\Lambda_{f1} \cap \mathcal{V}(\Lambda_1)\}$ and each message $w_2 \in \{1, \ldots, 2^{nR_2}\}$ with $\mathbf{t}_2(w_2) \in \{\Lambda_{f2} \cap \mathcal{V}(\Lambda_2)\}$.
4) If $V(\Lambda_{f2}) \le V(\Lambda_{f1})$ (determined by the relative values of $R_1, R_2, P_1$, and $P_2$ in the above), then $\Lambda_{f1} \subseteq \Lambda_{f2}$, implying $\Lambda_{f1}$ may be Rogers-good, and hence we may guarantee the desired $V(\Lambda_{f1})$ by proper selection of the parameters in Theorem 2; otherwise $\Lambda_{f2} \subseteq \Lambda_{f1}$ and $V(\Lambda_{f2})$ may be guaranteed by proper selection in Theorem 2 (and likewise for the other fine lattice).
5) The lattices $\Lambda_{L2}$ and $\Lambda_{L1}$, which will be used for lattice list decoding at nodes 2 and 1, respectively, are both Rogers-good and hence may be specified by the volumes of their fundamental Voronoi regions $V(\Lambda_{L2})$ and $V(\Lambda_{L1})$ (under the constraints $V(\Lambda_{f1}) \le V(\Lambda_{L2}) \le V(\Lambda_1)$ and $V(\Lambda_{f2}) \le V(\Lambda_{L1}) \le V(\Lambda_2)$), or by the corresponding second moments. These will be chosen in the course of the proof.
6) The final relative ordering of the six lattices will then depend on the relative sizes of their fundamental region volumes.

We also construct a nested lattice chain ofaccording to Theorem 2 such

that:1) or

;2) ;

Page 12: IEEE TRANSACTIONS ON INFORMATION THEORY, VOL ......lattices have been useful in distributed Gaussian source coding when reconstructing a linear function [32], [33]. Lattice codes in

4938 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 59, NO. 8, AUGUST 2013

3) the relay uses the codebook con-sisting of codewords . This codebook is of rate

if and of rate

if . Thisrate in turn fixes the choice of in Theorem 2;

4) and are used to decode lists at the two destina-tions, and their relative nesting depends on and(or equivalently and as both are Rogers good)subject to andwhich will be specified in the course of the proof.Encoding: We use Block Markov encoding. Messages

and are themessages the two terminals wish to send in block . Nodes 1and 2 send and :

for dithers known to all nodes which are i.i.d.uniformly distributed over and and vary from block toblock. At the relay, we assume that it has obtained

(22)

in block . Note that lies in ifand in if , and is furthermore uniformlydistributed over this set consisting of points. We may thusassociate each with an index say , whichthe relay then uses as index for the codewordin (also of rate ). With some abuse of notation we write

instead of the indexed version .The relay then sends

(23)for a dither known to all nodes which is uniformlydistributed over .
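The dithered modulo-lattice transmission above can be mimicked in one dimension. The sketch below is a stand-in that replaces the nested lattices of Theorem 2 with the scalar lattice $c\mathbb{Z}$ (scalar lattices are not Rogers or Poltyrev good, so this only illustrates the mechanics); it exhibits the Crypto lemma property that the transmitted signal is uniform over the fundamental Voronoi region and uncorrelated with the codeword.

import numpy as np

# 1-D stand-in for X = (t + U) mod Lambda with Lambda = c*Z; the fundamental
# Voronoi region of c*Z is [-c/2, c/2). Illustrative scalar-lattice example.
rng = np.random.default_rng(1)

def mod_lattice(x, c):
    """Reduce x into the fundamental Voronoi region [-c/2, c/2) of c*Z."""
    return x - c * np.round(x / c)

c1 = 4.0                                           # coarse lattice scale
fine_points = np.arange(-2.0, 2.0, 0.5)            # fine lattice (0.5*Z) in V(c1*Z)
t1 = rng.choice(fine_points, size=100_000)         # codeword components
U1 = rng.uniform(-c1 / 2, c1 / 2, size=t1.shape)   # dither over V(Lambda_1)
X1 = mod_lattice(t1 + U1, c1)                      # transmitted signal

print(np.histogram(X1, bins=8, range=(-c1 / 2, c1 / 2))[0])  # ~flat: uniform over V
print(np.corrcoef(t1, X1)[0, 1])                             # ~0: X1 carries no trace of t1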

Decoding: During block $b$, the following messages/signals are known/decoded at each node:

1) Node 1: knows $w_{1,b}$, $w_{1,b-1}$, decodes $w_{2,b-1}$;

2) Node 2: knows $w_{2,b}$, $w_{2,b-1}$, decodes $w_{1,b-1}$;

3) Node $R$: knows $T_{b-1}$, decodes $T_b$.

Relay decoding: The relay terminal receives $Y_{R,b} = X_{1,b} + X_{2,b} + Z_{R,b}$, and, following the arguments of [3]–[5], can decode

$$T_b = (t_1(w_{1,b}) + t_2(w_{2,b})) \bmod \Lambda_1$$

if

$$R_i \leq \frac{1}{2}\log\left(\frac{P_i}{P_1 + P_2} + \frac{P_i}{N_R}\right), \quad i \in \{1, 2\}.$$
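Two properties used above can be checked in the same one-dimensional stand-in: the known dithers cancel so that the relay effectively observes the modulo-sum $T$, and the decode-the-sum conditions (stated above in the form we reconstruct from [5]) can be evaluated numerically.

import numpy as np

rng = np.random.default_rng(2)

def mod_lattice(x, c):
    return x - c * np.round(x / c)

# Noiseless sanity check: the sum of two dithered modulo-lattice signals,
# after removing the known dithers, reduces to T = (t1 + t2) mod Lambda_1.
c1 = 4.0
pts = np.arange(-2.0, 2.0, 0.5)
t1, t2 = rng.choice(pts, 5), rng.choice(pts, 5)
U1, U2 = rng.uniform(-c1 / 2, c1 / 2, 5), rng.uniform(-c1 / 2, c1 / 2, 5)
X1, X2 = mod_lattice(t1 + U1, c1), mod_lattice(t2 + U2, c1)
T_hat = mod_lattice(X1 + X2 - U1 - U2, c1)
print(np.allclose(T_hat, mod_lattice(t1 + t2, c1)))   # True

# Numerical evaluation of the decode-the-sum conditions (reconstructed form).
P1, P2, NR = 4.0, 2.0, 1.0
for i, Pi in enumerate((P1, P2), start=1):
    print(f"R{i} < {0.5 * np.log2(Pi / (P1 + P2) + Pi / NR):.3f} bits/channel use")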

Terminal 2 decoding: Terminal 2 decodes $w_{1,b-1}$ after block $b$ from the received signals

$$Y_{2,b-1} = X_{1,b-1} + X_{R,b-1} + Z_{2,b-1} \quad \text{and} \quad Y_{2,b} = X_{1,b} + X_{R,b} + Z_{2,b}.$$

This generally follows the lattice version of the regular encoding/sliding-window decoding scheme as described in Section III-A. That is, after block $b$, terminal 2 first forms $Y_{2,b-1} - X_{R,b-1}$, since it has decoded $w_{1,b-2}$ and knows its own $w_{2,b-2}$, and hence may form $T_{b-2}$ and thus $X_{R,b-1}$. Then it uses the list decoder of Theorem 3 to produce a list of messages $w_{1,b-1}$, denoted by $\mathcal{L}_{dir}(w_{1,b-1})$, of size $2^{n(R_1 - \frac{1}{2}\log(1 + P_1/N_2))}$ using the lattice $\Lambda_{L,2}$, whose fundamental Voronoi region volume is selected to asymptotically approach $(2\pi e N_2)^{n/2}$ (in the sense of (14)). For $R_1$ approaching $\frac{1}{2}\log(1 + P_1/N_2)$, where list decoding is relevant, $V(\Lambda_{L,2}) \geq V(\Lambda_{f,1})$ asymptotically, and thus the required nesting holds as needed. To resolve which codeword was actually sent, it intersects this list with another list of $w_{1,b-1}$ obtained in this block $b$. This list of messages, $\mathcal{L}_{rel}(w_{1,b-1})$, is obtained from $Y_{2,b}$ using lattice list decoding with the lattice $\Lambda_{L,R2}$, whose fundamental Voronoi region volume is taken to asymptotically approach $(2\pi e (P_1 + N_2))^{n/2}$. For $\max(R_1, R_2)$ approaching $\frac{1}{2}\log(1 + \frac{P_R}{P_1 + N_2})$, where list decoding is relevant, $V(\Lambda_{L,R2}) \geq V(\Lambda_{f,R})$ asymptotically, and thus the required nesting holds as needed. One may verify that, by construction of the nested lattice chains, all conditions of Theorem 3 are met. This list of messages is actually obtained by decoding a list of $t_R(T_{b-1})$, and using knowledge of its own $t_2(w_{2,b-1})$ to obtain a list of $T_{b-1}$ (and hence, by one-to-one mapping, of $t_1(w_{1,b-1})$) of size approximately $2^{n(\max(R_1, R_2) - \frac{1}{2}\log(1 + \frac{P_R}{P_1 + N_2}))}$. To see this, notice that each $T$ is associated with a single $t_R(T)$. Then, given $T$ and $t_2$, one may obtain a single $t_1$ as follows:

$$t_1 = (T - t_2) \bmod \Lambda_1. \qquad (24)$$

Similarly, given a $t_1$ and a $t_2$, one may obtain a single $T$ as

$$T = (t_1 + t_2) \bmod \Lambda_1 \qquad (25)$$

$$= ((t_1 \bmod \Lambda_1) + (t_2 \bmod \Lambda_1)) \bmod \Lambda_1, \qquad (26)$$

where (26) follows from $t_i \bmod \Lambda_1 = t_i$ when $t_i \in \mathcal{V}(\Lambda_1)$. Hence, the list of decoded $t_R$ codewords may be transformed into a list of $T_{b-1}$ at terminal node 2, which may in turn be associated with a list of $t_1(w_{1,b-1})$.

Fig. 6. Lattice DF scheme for the AWGN two-way relay channel with direct links.

Fig. 7. Comparison of DF achievable rate regions of various two-way relay channel schemes.

The two decoded lists of $w_{1,b-1}$ are independent due to the independent mapping relationships between $w_1$ and $t_1$ at node 1 and between $T$ and $t_R$ at the relay. List decoding ensures that at least the correct message lies in the intersection with high probability. To ensure no more than one message in the intersection, we require

$$\max(R_1, R_2) < \frac{1}{2}\log\left(1 + \frac{P_1}{N_2}\right) + \frac{1}{2}\log\left(1 + \frac{P_R}{P_1 + N_2}\right).$$

Analogous steps apply to rate $R_2$.
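The intersection step above is combinatorial and easily checked numerically: two independent lists that both contain the correct message intersect in that message alone with high probability whenever the product of the list sizes is small compared to the message set, which is exactly what the rate condition above enforces. The sizes below are toy values.

import numpy as np

rng = np.random.default_rng(3)
M = 2**20            # message set size, standing in for 2^{nR}
L1, L2 = 2**6, 2**7  # list sizes, standing in for 2^{n(R - r_dir)} and 2^{n(R - r_rel)}
true_msg = 0

unique = 0
trials = 200
for _ in range(trials):
    # Each list contains the true message plus independent random messages.
    l1 = set(rng.choice(M, L1 - 1, replace=False)) | {true_msg}
    l2 = set(rng.choice(M, L2 - 1, replace=False)) | {true_msg}
    unique += (l1 & l2 == {true_msg})
print(unique / trials)  # ~1: expected impostors ~ L1*L2/M = 2^{-7} per trial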

B. Comparison to Existing Rate Regions

We briefly compare the new achievable rate region of Theorem 9 with three other existing DF-based rate regions for the two-way relay channel with direct links, and with the cut-set outer bound. In particular, in Fig. 7, the region "Rankov-DF" [56, Proposition 2], the region "Xie" [55, Theorem 3.1 under Gaussian inputs], and our region "This work" (Theorem 9) are compared to the cut-set outer bound under three different choices of noise and power constraints. The "Rankov-DF" and "Xie" schemes use a multiple access channel model to decode the two messages at the relay, while we use lattice codes to decode their sum, which avoids the sum-rate constraint. In the broadcast phase, the "Rankov-DF" scheme broadcasts the superposition of the two codewords, while the "Xie" scheme and our scheme use a random binning technique to broadcast the bin index. The advantage of the "Rankov-DF" scheme is its ability to obtain a coherent gain at the receiver from the source and relay, at the cost of a reduced power for each message (the relay power is split between the two codewords). On the other hand, the "Xie" and Theorem 9 schemes both broadcast the bin index using all of the relay power, but are unable to obtain coherent gains. We note that our current scheme does not allow for a coherent gain between the direct and relayed links, as 1) we decode the sum of codewords and reencode that, and 2) we use the full relay power to transmit this sum. Whether simultaneous coherent gains to the two receivers are possible while using a lattice-based scheme to decode the sum of codewords is an interesting open question, which may possibly be addressed along the lines of [57].

At low SNR, the rate gain seen by decoding the sum and eliminating the sum-rate constraint is outweighed by 1) the loss seen in the rates $\frac{1}{2}\log\left(\frac{P_i}{P_1 + P_2} + \frac{P_i}{N_R}\right)$ compared to $\frac{1}{2}\log\left(1 + \frac{P_i}{N_R}\right)$, or 2) the coherent gain present in the "Rankov-DF" scheme. At high SNR, our scheme performs well and, at least in some cases, is able to guarantee an improved finite-gap result to the outer bound, as further elaborated upon in [58]. Further note that, compared with the two-way relay channel without direct links [4], [5], the direct links may provide additional information which translates to rate gains: direct comparison shows that the rate region in [5, Theorem 1] is always contained in that of Theorem 9.
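The low-/high-SNR crossover described above can be seen with a short computation. The sketch below compares, for symmetric powers, the per-user relay constraint under sum decoding (in the form reconstructed from [5] and used in Theorem 9) with an even split of the classical MAC sum-rate constraint used by the "Rankov-DF" and "Xie" schemes; it ignores the broadcast phase and coherent gains, so it isolates only the relay-decoding tradeoff.

import numpy as np

NR = 1.0
for P in (0.25, 1.0, 10.0, 100.0):                         # symmetric powers P1 = P2 = P
    r_sum = max(0.5 * np.log2(P / (2 * P) + P / NR), 0.0)  # decode-the-sum, per user
    r_mac = 0.25 * np.log2(1 + 2 * P / NR)                 # MAC sum rate, split evenly
    print(f"P = {P:6.2f}: sum-decoding {r_sum:.3f}, MAC {r_mac:.3f} bits/use")
# Output shows MAC ahead at low SNR and sum-decoding ahead at high SNR.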

C. Multiple Access Relay Channel

We now consider a second example of a relay network with two messages and cooperative relay links: the multiple access relay channel (MARC). The MARC was proposed and studied in [12], [40], and [41], and describes a multiuser communication scenario in which two users transmit different messages to the same destination with the help of a relay. As in the TWRC, the MARC can be seen as another example of an extension of the three-node relay channel. The channel model is described by

$$Y_R = X_1 + X_2 + Z_R, \qquad Y_D = X_1 + X_2 + X_R + Z_D,$$

where $X_1$, $X_2$, and $X_R$ have power constraints $P_1$, $P_2$, and $P_R$, and $Z_R$ and $Z_D$ are independent Gaussian noises of variances $N_R$ and $N_D$.

An $((2^{nR_1}, 2^{nR_2}), n)$ code for the multiple access relay channel consists of two sets of messages $w_i \in \{1, \ldots, 2^{nR_i}\}$, $i \in \{1, 2\}$, uniformly distributed over their message sets, two encoding functions $X_i^n = f_i(w_i)$ (shortened to $X_i$) satisfying the power constraints $P_i$, a set of relay functions $\{f_{R,k}\}_{k=1}^{n}$ such that the relay channel input at time $k$ is a function of the previously received relay channel outputs from channel uses 1 to $k-1$, $X_{R,k} = f_{R,k}(Y_{R,1}, \ldots, Y_{R,k-1})$, and one decoding function which yields the message estimates $(\hat{w}_1, \hat{w}_2) = g(Y_D^n)$. We define the average probability of error of the code to be $P_e^{(n)} = \Pr\{(\hat{w}_1, \hat{w}_2) \neq (w_1, w_2)\}$. The rate pair $(R_1, R_2)$ is then said to be achievable by the multiple access relay channel if, for any $\epsilon > 0$ and for sufficiently large $n$, there exists an $((2^{nR_1}, 2^{nR_2}), n)$ code such that $P_e^{(n)} < \epsilon$. The capacity region of the multiple access relay channel is the supremum of the set of achievable rate pairs.

We derive a new achievable rate region whose achievability scheme combines the previously derived lattice DF scheme and the linearity of lattice codes using lattice list decoding. In particular, we demonstrate how we may decode the sum of two lattice codewords at the relay rather than decoding the individual messages, eliminating the sum-rate constraint seen in i.i.d. random coding schemes. The relay then forwards a reencoded version of this sum, which may be combined with lattice list decoding at the destination to obtain a new rate region.

Theorem 10 (Lattices in the AWGN Multiple Access Relay Channel): For any time-sharing parameter $\theta \in [0, 1]$, the rates described in (27) are achievable for the AWGN multiple access relay channel.

Proof:

Codebook construction: We construct two nested lattice chains according to Theorem 2, $\Lambda_1 \subseteq \Lambda_2 \subseteq \Lambda_{f,1}, \Lambda_{f,2}$ and $\Lambda_R \subseteq \Lambda_{f,R}$, nested in the exact same way as in the codebook construction of Theorem 9.

Encoding: We again use block Markov encoding. In the $b$th block, terminals 1 and 2 send $X_{1,b}$ and $X_{2,b}$, where

$$X_{i,b} = (t_i(w_{i,b}) + U_{i,b}) \bmod \Lambda_i, \quad i \in \{1, 2\}.$$

At the relay, we assume that it has decoded

$$T_{b-1} = (t_1(w_{1,b-1}) + t_2(w_{2,b-1})) \bmod \Lambda_1$$

in block $b-1$. Following the exact same steps as between (22) and (23), the relay sends

$$X_{R,b} = (t_R(T_{b-1}) + U_{R,b}) \bmod \Lambda_R.$$

The dithers $U_{1,b}$, $U_{2,b}$, and $U_{R,b}$ are known to all nodes, are i.i.d. and uniformly distributed over $\mathcal{V}(\Lambda_1)$, $\mathcal{V}(\Lambda_2)$, and $\mathcal{V}(\Lambda_R)$, respectively, and vary from block to block. In the first block $b = 1$, terminals 1 and 2 send $X_{1,1}$ and $X_{2,1}$, respectively, while the relay sends a known $X_{R,1}$.

Decoding: At the end of each block $b$, the relay terminal receives $Y_{R,b} = X_{1,b} + X_{2,b} + Z_{R,b}$ and decodes

$$T_b = (t_1(w_{1,b}) + t_2(w_{2,b})) \bmod \Lambda_1$$

as long as the relay-decoding constraints in

(27)


hold, following arguments similar to those in [5].

At the end of block $b$, before decoding, the destination already has the lists $\mathcal{L}_{b-1}(w_{1,b-1})$ and $\mathcal{L}_{b-1}(w_{2,b-1})$ formed in block $b-1$. Using these, we now describe how it will obtain $\hat{w}_{1,b-1}$ and $\hat{w}_{2,b-1}$.

The destination receives $Y_{D,b} = X_{1,b} + X_{2,b} + X_{R,b} + Z_{D,b}$ and either decodes the messages in the order $w_2$ and then $w_1$, or the reverse, $w_1$ and then $w_2$. We describe the former; the latter follows analogously, and we time-share between the two decoding orders. The destination first decodes $t_R(T_{b-1})$ from $Y_{D,b}$, treating $X_{1,b} + X_{2,b}$ as noise. This equivalent noise is the sum of signals uniformly distributed over fundamental Voronoi regions of Rogers good lattices and Gaussian noise. Hence, according to Lemma 6, the probability of error in decoding the correct (unique) $t_R(T_{b-1})$ will decay exponentially as long as

$$\max(R_1, R_2) \leq \frac{1}{2}\log\left(1 + \frac{P_R}{P_1 + P_2 + N_D}\right).$$

Knowing $T_{b-1}$, the destination converts the list $\mathcal{L}_{b-1}(w_{1,b-1})$ into a second list of $w_{2,b-1}$, noting the one-to-one correspondence between $t_1$ and $t_2$ given $T_{b-1}$, using the arguments of (24) and (26). The destination then intersects the list $\mathcal{L}_{b-1}(w_{2,b-1})$ with this second list to determine the unique $w_{2,b-1}$, and hence also $w_{1,b-1}$ via (24). Once the destination has decoded $w_{1,b-1}$, $w_{2,b-1}$, and $T_{b-1}$, it is also able to reconstruct $X_{R,b}$.

It then subtracts $X_{R,b}$ from the received signal to obtain $Y_{D,b} - X_{R,b} = X_{1,b} + X_{2,b} + Z_{D,b}$, and decodes a list of $w_{2,b}$, denoted by $\mathcal{L}_b(w_{2,b})$, of size $2^{n(R_2 - \frac{1}{2}\log(1 + \frac{P_2}{P_1 + N_D}))}$, treating $X_{1,b}$ as noise. At last, the destination decodes a list of possible $w_{1,b}$, denoted by $\mathcal{L}_b(w_{1,b})$, of size $2^{n(R_1 - \frac{1}{2}\log(1 + \frac{P_1}{P_2 + N_D}))}$, from the same signal, treating $X_{2,b}$ as noise; these lists are used to determine $w_{1,b}$ and $w_{2,b}$ in the next block $b + 1$. To ensure that there is a unique codeword in each intersection of the lists of $w_{1,b-1}$ and of $w_{2,b-1}$, we need the corresponding constraints in (27).
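Time-sharing between the two successive decoding orders traces the segment between the two corner points of the region, as in the following sketch. The corner-point values are hypothetical placeholders, not rates computed from (27).

import numpy as np

corner_21 = np.array([1.2, 0.4])   # hypothetical (R1, R2) for order: w2 first, then w1
corner_12 = np.array([0.5, 1.1])   # hypothetical (R1, R2) for order: w1 first, then w2

for theta in np.linspace(0.0, 1.0, 5):   # time-sharing parameter theta
    R1, R2 = theta * corner_21 + (1 - theta) * corner_12
    print(f"theta = {theta:.2f}: R1 = {R1:.3f}, R2 = {R2:.3f}")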

We presented the decoding order $w_2$ then $w_1$. Alternatively, one may decode in the order $w_1$ then $w_2$ at the analogous rates. Time sharing with parameter $\theta \in [0, 1]$ between the two orders yields the theorem.

Remark 5: Note that the above region is derived using time-sharing between two decoding orders at the destination. This results because we employ successive decoding at the destination in order to allow for the use of lower complexity Euclidean lattice decoding, rather than a more complex form of "joint" decoding for lattices proposed, for example, in [7] and [26]. Further note that this region does not always outperform or even attain the same rates as random coding-based schemes; in fact, as in the two-way relay channel, there is a tradeoff between the rate gains from decoding the sum at the relay node, and the coherent gains and joint decoding at the destination.

V. SINGLE SOURCE CF

We have shown several lattice-based DF schemes for relay networks. Forcing the relay(s) to actually decode the message(s) imposes a rate constraint; CF is an alternative type of forwarding which alleviates this constraint. Cover and El Gamal first proposed a CF scheme for the relay channel in [10], in which the relay does not decode the message but instead compresses its received signal and forwards the compression index. The destination first recovers the compressed signal, using its direct-link side-information (the Wyner–Ziv problem of lossy source coding with correlated side-information at the receiver), and then proceeds to decode the message from the recovered compressed signal and the received signal.

It is natural to wonder whether lattice codes may be used in the original Cover and El Gamal CF scheme for the relay channel. We answer this in the positive. We note that lattices have recently been shown to achieve the quantize-map-and-forward rates for general relay channels using a quantize-and-map scheme (similar to the CF scheme), which quantizes the received signal at the relay and reencodes it without any form of binning/hashing [7]. The contribution of this section is to show an alternative achievability scheme which achieves the same rate in the three-node relay channel, demonstrating that lattices may be used to achieve CF-based rates in a number of fashions. We note that our decoder employs a lattice decoder rather than the more complex joint typicality, or "consistency check," decoding of [7].

In the CF scheme of [10], Wyner–Ziv coding, which exploits binning, is used at the relay to exploit receiver side-information obtained from the direct link between the source and destination. The usage of lattices and structured codes for binning (as opposed to their random binning counterparts) was considered in a comprehensive fashion in [20]. Of particular interest to the problem considered here is the nested lattice-coding approach of [20] to the Gaussian Wyner–Ziv coding problem.

A. Lattice Codes for the Wyner–Ziv Model in CF

We consider the lossy compression of the Gaussian source $X = Y + S$, with Gaussian side-information $Y$ available at the reconstruction node, where $Y$ and $S$ are independent vectors of length $n$, each generated in an i.i.d. fashion according to zero-mean Gaussians of variance $\sigma_Y^2$ and $\sigma_S^2$, respectively. We use the same definitions for the channel model and for achievability as in Section III-A. The rate-distortion function for the source $X$ taking on values in $\mathcal{X}$ with side-information $Y$ taking on values in $\mathcal{Y}$ is the infimum of rates $R$ such that there exist maps $f_n : \mathcal{X}^n \to \{1, \ldots, 2^{nR}\}$ and $g_n : \{1, \ldots, 2^{nR}\} \times \mathcal{Y}^n \to \hat{\mathcal{X}}^n$

Fig. 8. Lattice DF scheme for the AWGN multiple access relay channel.

Fig. 9. Lattice coding for the Wyner–Ziv problem.

such that $\mathbb{E}[d(X, g_n(f_n(X), Y))] \leq D$ for some distortion measure $d(\cdot, \cdot)$. If the distortion measure is the squared error distortion, $d(x, \hat{x}) = \frac{1}{n}\|x - \hat{x}\|^2$, then, by [59], the rate-distortion function for the source $X$ given the side-information $Y$ is given by

$$R_{X|Y}(D) = \frac{1}{2}\log\left(\frac{\sigma^2_{X|Y}}{D}\right) \quad \text{for } D \leq \sigma^2_{X|Y},$$

and 0 otherwise, where $\sigma^2_{X|Y}$ is the conditional variance of $X$ given $Y$.
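A short numerical check of this rate-distortion function under the $X = Y + S$ model of this subsection, for which $\sigma^2_{X|Y} = \sigma_S^2$:

import numpy as np

def R_wz(D, var_x_given_y):
    """Gaussian Wyner-Ziv rate-distortion function, in bits per sample."""
    return 0.5 * np.log2(var_x_given_y / D) if D < var_x_given_y else 0.0

rng = np.random.default_rng(4)
var_y, var_s = 2.0, 1.0
Y = rng.normal(0, np.sqrt(var_y), 1_000_000)
S = rng.normal(0, np.sqrt(var_s), 1_000_000)
X = Y + S
print(np.var(X - Y))              # ~ var_s = conditional variance of X given Y
for D in (0.1, 0.5, 1.0):
    print(D, R_wz(D, var_s))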

A general lattice code implementation of the Wyner–Ziv scheme is considered in [20]. In order to mimic the CF scheme achieved by Gaussian random codes of [10], we need a slightly suboptimal version of the optimal scheme described in [20]. That is, in the context of CF, and to mimic the rate achieved by independent Gaussian random codes used for compression in the CF rate of [10], the quantization noise after compression should be independent of the signal to be compressed, so as to allow for two independent views of the source, i.e., to express the compressed signal as $\hat{X} = X + E_Q$, where $E_Q$ is independent of $X$. This may be achieved by selecting the scaling coefficient $\alpha = 1$ in a modified version of the lattice-coding Wyner–Ziv scheme of [20], rather than the optimal MMSE scaling coefficient $\alpha = \sqrt{1 - D/\sigma^2_{X|Y}}$. This roughly allows one to view $\hat{X} = X + E_Q$ as an equivalent AWGN channel, and is the form generally used in Gaussian CF, as in [13]. Whether this is optimal is unknown. The second difference from a direct application of [20] is that, in our lattice CF scheme, the signal is no longer Gaussian but uniformly distributed over the fundamental Voronoi region of a Rogers good lattice. We modify the scheme of [20] to incorporate these two changes next.

Corollary 11 (Lattices for the Wyner–Ziv Problem Used in the Lattice CF Scheme, Based on [20]): Let $X$ be uniformly distributed over the fundamental Voronoi region of a Rogers good lattice with second moment $\sigma_X^2$, while $Y$ denotes the side-information at the reconstruction node and $\sigma^2_{X|Y}$ the conditional variance of $X$ given $Y$. The following rate-distortion function for the lossy compression of the source $X$, to be reconstructed as $\hat{X} = X + E_Q$ (where $E_Q$ is independent of $X$ and has variance $D$), may be achieved using lattice codes:

$$R(D) = \frac{1}{2}\log\left(\frac{\sigma^2_{X|Y} + D}{D}\right).$$

Proof: Consider a pair of nested lattice codes $\Lambda_Q \subseteq \Lambda_{f,Q}$, where $\Lambda_{f,Q}$ is Rogers-good with second moment $D$, and $\Lambda_Q$ is Poltyrev-good with second moment $\sigma^2_{X|Y} + D$. The existence of such a nested lattice pair good for quantization is guaranteed as in [20]. We consider the encoding and decoding schemes of Fig. 9, similar to those of [20]. We let $U$ be a quantization dither signal which is uniformly distributed over $\mathcal{V}(\Lambda_{f,Q})$ and introduce the following coefficients (choices justified later):

$$\alpha = 1, \qquad \beta = 1. \qquad (28)$$

Encoding: The encoder quantizes the scaled and dithered signal $\alpha X + U$ to the nearest fine lattice point, which is then modulo-ed back to the coarse lattice fundamental Voronoi region as

$$v = Q_{\Lambda_{f,Q}}(\alpha X + U) \bmod \Lambda_Q = (\alpha X + U + E_Q) \bmod \Lambda_Q,$$

where $E_Q = Q_{\Lambda_{f,Q}}(\alpha X + U) - (\alpha X + U)$ is independent of $X$ and uniformly distributed over $\mathcal{V}(\Lambda_{f,Q})$ according to the Crypto lemma [46]. The encoder sends the index corresponding to $v$ at the source coding rate

$$R = \frac{1}{n}\log\frac{V(\Lambda_Q)}{V(\Lambda_{f,Q})} \to \frac{1}{2}\log\left(\frac{\sigma^2_{X|Y} + D}{D}\right).$$

Decoding: The decoder receives the index of $v$ and reconstructs $\hat{X}$ as

$$\hat{X} = Y + \beta\left[(v - U - \alpha Y) \bmod \Lambda_Q\right] \simeq Y + \beta(\alpha(X - Y) + E_Q) = X + E_Q,$$

where the equivalence denotes asymptotic equivalence (as $n \to \infty$), since, as in [20, Proof of (4.19)],

$$\Pr\{(\alpha(X - Y) + E_Q) \bmod \Lambda_Q \neq \alpha(X - Y) + E_Q\} \to 0 \qquad (29)$$

for a sequence of good nested lattice codes, since

$$\frac{1}{n}\mathbb{E}\left[\|\alpha(X - Y) + E_Q\|^2\right] \to \sigma^2_{X|Y} + D. \qquad (30)$$

Note that there is a slight difference from [20, Proof of (4.19)], since $X$ is uniformly distributed over the fundamental Voronoi region of a Rogers good lattice rather than Gaussian distributed. However, according to Lemma 5, the pdf of $\alpha(X - Y) + E_Q$ may be upper bounded by the pdf of an i.i.d. Gaussian random vector (times a constant) with variance approaching (30), since $X$ is uniformly distributed over a Rogers good fundamental Voronoi region, $E_Q$ is uniformly distributed over the Rogers good $\mathcal{V}(\Lambda_{f,Q})$ of second moment $D$, and the remaining component is Gaussian. Then, because $\Lambda_Q$ is Poltyrev good, (29) can be made arbitrarily small as $n \to \infty$. This guarantees a distortion of $D$, as $E_Q$ is of second moment $D$.
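The encoder/decoder mechanics of Corollary 11, with $\alpha = \beta = 1$ as in (28), can be imitated in one dimension using the nested scalar pair $c\mathbb{Z} \subset q\mathbb{Z}$ (with $c$ a multiple of $q$). Scalar lattices are not Rogers or Poltyrev good, so the modulo wrap-around probability here is merely small rather than vanishing; the sketch nonetheless shows the reconstruction $\hat{X} = X + E_Q$ with $E_Q$ of second moment approximately $D$ and essentially uncorrelated with $X$.

import numpy as np

rng = np.random.default_rng(5)
n = 200_000
q, c = 0.5, 10.0                       # fine (q*Z) and coarse (c*Z) scalar lattices
D = q**2 / 12                          # second moment of V(q*Z) = [-q/2, q/2)

Y = rng.normal(0, np.sqrt(2.0), n)     # side information
X = Y + rng.normal(0, 1.0, n)          # source; conditional variance 1 given Y
U = rng.uniform(-q / 2, q / 2, n)      # quantization dither over V(q*Z)

quantize = lambda x, s: s * np.round(x / s)   # nearest point of s*Z
mod_c = lambda x: x - c * np.round(x / c)     # reduce into V(c*Z)

v = mod_c(quantize(X + U, q))          # encoder output; index rate log2(c/q) bits
Xhat = Y + mod_c(v - U - Y)            # decoder: shift by the side information

EQ = Xhat - X
print(np.mean(EQ**2), D)               # empirical distortion ~ D
print(np.corrcoef(X, EQ)[0, 1])        # ~0: quantization error ~ independent of X
print(np.log2(c / q), "bits/sample")   # source coding rate of the index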

B. Lattice Coding for CF

Armed with a lattice Wyner–Ziv scheme, we mimic every step of the CF scheme for the AWGN relay channel of Fig. 3 and Section III-A, described in [10], using lattice codes, and will show that the same rate as that achieved using random Gaussian codebooks may be achieved in a structured manner.

Theorem 12 (Lattices Achieve the CF Rate for the Relay Channel): The Cover–El Gamal CF rate [10], evaluated for Gaussian random codes, may be achieved for the AWGN relay channel using lattice codes in a lattice CF fashion.

Proof:

Lattice codebook construction: We employ three nested lattice pairs of dimension $n$ satisfying the following:

• Channel codebook for node 1: codewords $t_1(w) \in \mathcal{C}_1 = \{\Lambda_{f,1} \cap \mathcal{V}(\Lambda_1)\}$, where $\Lambda_1 \subseteq \Lambda_{f,1}$, $\Lambda_1$ is both Rogers-good and Poltyrev-good, and $\Lambda_{f,1}$ is Poltyrev-good. We set $\sigma^2(\Lambda_1) = P_1$ to satisfy the transmitter power constraint. We associate each message $w \in \{1, \ldots, 2^{nR}\}$ with a codeword $t_1(w)$ in a one-to-one fashion and send a dithered version of $t_1(w)$. Note that $V(\Lambda_{f,1})$ is chosen such that $R = \frac{1}{n}\log\frac{V(\Lambda_1)}{V(\Lambda_{f,1})}$.

• Channel codebook for the relay (node 2): codewords $t_2(i) \in \mathcal{C}_2 = \{\Lambda_{f,2} \cap \mathcal{V}(\Lambda_2)\}$, where $\Lambda_2 \subseteq \Lambda_{f,2}$, $\Lambda_2$ is both Rogers-good and Poltyrev-good, and $\Lambda_{f,2}$ is Poltyrev-good. We set $\sigma^2(\Lambda_2) = P_2$ to satisfy the relay power constraint. We associate each compression index $i \in \{1, \ldots, 2^{nR_Q}\}$ with the codeword $t_2(i)$ in a one-to-one fashion and send a dithered version of $t_2(i)$. Note that $V(\Lambda_{f,2})$ is chosen such that $R_Q = \frac{1}{n}\log\frac{V(\Lambda_2)}{V(\Lambda_{f,2})}$.

• Quantization/compression codebook: $\Lambda_Q \subseteq \Lambda_{f,Q}$, where $\Lambda_Q$ is Poltyrev-good and $\Lambda_{f,Q}$ is Rogers-good. We set $\sigma^2(\Lambda_{f,Q}) = D$ and $\sigma^2(\Lambda_Q) = \sigma^2_{Y_R|\tilde{Y}} + D$ (with $\tilde{Y}$ the direct-link side-information defined below), such that the source coding rate is $R_Q = \frac{1}{n}\log\frac{V(\Lambda_Q)}{V(\Lambda_{f,Q})} \to \frac{1}{2}\log\left(\frac{\sigma^2_{Y_R|\tilde{Y}} + D}{D}\right)$.

Encoding: We use block Markov encoding as in [10]. In block $b$, node 1 transmits

$$X_{1,b} = (t_1(w_b) + U_{1,b}) \bmod \Lambda_1,$$

where $U_{1,b}$ is the dither, uniformly distributed over $\mathcal{V}(\Lambda_1)$. The relay quantizes the received signal of the previous block, $Y_{R,b-1}$, to

$$\hat{Y}_{R,b-1} = Q_{\Lambda_{f,Q}}(Y_{R,b-1} + U_{Q,b-1}) \bmod \Lambda_Q$$

(with index $i_{b-1}$) by using the quantization lattice code pair as described in the encoding part of Corollary 11, for a quantization dither $U_{Q,b-1}$ uniformly distributed over $\mathcal{V}(\Lambda_{f,Q})$ and independent from block to block. Node 2 chooses the codeword $t_2(i_{b-1})$ associated with the index $i_{b-1}$ of $\hat{Y}_{R,b-1}$ and sends

$$X_{2,b} = (t_2(i_{b-1}) + U_{2,b}) \bmod \Lambda_2,$$

with the dither signal $U_{2,b}$ uniformly distributed over $\mathcal{V}(\Lambda_2)$ and independent across blocks.

Decoding: In block $b$, node 3 receives

$$Y_{3,b} = X_{1,b} + X_{2,b} + Z_{3,b}.$$

It first decodes $t_2(i_{b-1})$ using lattice decoding as in [23] or Lemma 6 as long as

$$R_Q \leq \frac{1}{2}\log\left(1 + \frac{P_2}{P_1 + N_3}\right).$$


We note that the source coding rate $R_Q$ of $\hat{Y}_R$ must be less than the channel coding rate of the relay codebook, i.e.,

$$\frac{1}{2}\log\left(\frac{\sigma^2_{Y_R|\tilde{Y}} + D}{D}\right) \leq \frac{1}{2}\log\left(1 + \frac{P_2}{P_1 + N_3}\right), \qquad (31)$$

which sets a lower bound on the achievable distortion $D$. Node 3 then may obtain

$$\tilde{Y}_b = Y_{3,b} - X_{2,b} = X_{1,b} + Z_{3,b},$$

which is used as direct-link side-information in the next block $b + 1$. In the previous block, node 3 had also obtained $\tilde{Y}_{b-1} = X_{1,b-1} + Z_{3,b-1}$. Combining this with the index $i_{b-1}$, node 3 uses $\tilde{Y}_{b-1}$ as side-information to reconstruct $\hat{Y}_{R,b-1}$, as in the decoder of Corollary 11. Thus, we see that the CF scheme employs the Wyner–Ziv coding scheme of Section V-A, where the source to be compressed at the relay is $Y_R$ and the side-information at the receiver (from the previous block) is $\tilde{Y}$. The compressed $\hat{Y}_{R,b-1}$ may now be expressed as

$$\hat{Y}_{R,b-1} = Y_{R,b-1} + E_{Q,b-1} = X_{1,b-1} + Z_{R,b-1} + E_{Q,b-1},$$

where $E_{Q,b-1}$ (a function of the quantization dither $U_{Q,b-1}$, which is uniformly distributed over $\mathcal{V}(\Lambda_{f,Q})$) is independent of $Y_{R,b-1}$ and uniformly distributed over $\mathcal{V}(\Lambda_{f,Q})$ with second moment $D$. The destination may decode $w_{b-1}$ from $\tilde{Y}_{b-1}$ and $\hat{Y}_{R,b-1}$ by coherently combining them.

Now we wish to decode $t_1(w_{b-1})$ from this combination, which is the sum of the desired codeword, uniformly distributed over the fundamental Voronoi region of a Rogers good lattice, and noise composed of Gaussian noise and a component uniformly distributed over a fundamental Voronoi region of a Rogers good lattice. This scenario may be handled by Lemma 6, and we may thus uniquely decode $w_{b-1}$ as long as

$$R \leq \frac{1}{2}\log\left(1 + \frac{P_1}{N_3} + \frac{P_1}{N_R + D}\right).$$

Combining this with the constraint (31), we obtain the CF rate achieved by the usual choice of Gaussian random codes, in which the relay quantizes the received signal $Y_R$ as $\hat{Y}_R = Y_R + E_Q$, in which $E_Q$ is independent of $Y_R$ [13, pp. 17-48].
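The last decoding step combines two independent views of the codeword: the direct observation in noise of variance $N_3$ and the recovered compression $\hat{Y}_R$ with effective noise of variance $N_R + D$. The sketch below verifies numerically that maximum-ratio combining of two such views adds their SNRs, which is where the two SNR terms inside the logarithm of the final rate come from; all values are illustrative.

import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
P1, N3, NR, D = 4.0, 1.0, 1.0, 0.5     # illustrative powers/variances

x = rng.normal(0, np.sqrt(P1), n)                    # stand-in for the codeword
view_direct = x + rng.normal(0, np.sqrt(N3), n)      # direct-link view
view_comp = x + rng.normal(0, np.sqrt(NR + D), n)    # compressed view: noise + quant.

w1, w2 = 1 / N3, 1 / (NR + D)                        # MRC weights (inverse variances)
combined = (w1 * view_direct + w2 * view_comp) / (w1 + w2)

snr_combined = P1 / np.var(combined - x)
print(snr_combined, P1 / N3 + P1 / (NR + D))          # SNRs add
print(0.5 * np.log2(1 + snr_combined), "bits/use")    # matches the rate expression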

VI. CONCLUSION

We have demonstrated that lattice codes may mimic random Gaussian codes in the context of the Gaussian relay channel, achieving the same DF and CF rates as those achieved using random Gaussian codes. One of the central technical tools needed was a new lattice list decoder, which proved useful in networks with cooperation, where various links to a destination carry different encodings of a given message. We have further demonstrated a technique for combining the linearity of lattice codes with classical block Markov cooperation techniques in a DF fashion in two multisource networks. Such achievability schemes outperform known i.i.d. random coding for certain channel conditions. The question of whether lattice codes can replace random codes in all Gaussian relay networks, and thereby achieve the same rates as their random coding counterparts, remains open. Another remaining open question is whether the DF and CF schemes may be unified into a single scheme; from the lattice DF and CF schemes presented here, we notice that the relay performs a form of lattice quantization in both scenarios. Finally, the extension of these results, which roughly imply that structured codes may be used to replace random Gaussian codes in Gaussian networks, to discrete memoryless channels is of interest. In particular, structured codes such as "abelian group codes" [60] may prove useful in this direction.

APPENDIX

A. Details in Decoding Step 2 of Theorem 7

In applying the lattice list decoder of Theorem 3 to the steps between (17) and (20), we form the list from the effective observation of the desired codeword in the presence of Gaussian noise and self-noise. As in Section II-B, choose the scaling coefficient to be the MMSE coefficient $\alpha = \frac{S}{S + N}$, where $S$ denotes the power of the desired (possibly coherently combined) signal component and $N$ the variance of the additive noise, resulting in self-noise

$$-(1 - \alpha)X + \alpha Z$$

of variance

$$(1 - \alpha)^2 S + \alpha^2 N = \frac{SN}{S + N}.$$

Select the list-decoding lattice $\Lambda_L$ in the lattice chain to have a fundamental Voronoi region of volume

$$V(\Lambda_L) = \left(2\pi e\,\frac{SN}{S + N}\right)^{n/2}$$

asymptotically (notice $V(\Lambda_L) \geq V(\Lambda_f)$ as needed). This will ensure a list of the desired size as long as the nesting conditions of Theorem 3 are met. For rates approaching $\frac{1}{2}\log\left(1 + \frac{S}{N}\right)$ (where list decoding is needed/relevant), $V(\Lambda_L) \geq V(\Lambda_f)$ asymptotically. Thus, the required nesting holds as needed.
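A quick numerical confirmation of the MMSE scaling step above, with generic signal power $S$ and noise variance $N$ (the proof instantiates these with the block's combined signal powers and noise variances):

import numpy as np

S, N = 4.0, 1.0
alpha = np.linspace(0.0, 1.0, 100_001)
self_noise = (1 - alpha)**2 * S + alpha**2 * N   # variance of -(1-a)X + aZ
i = np.argmin(self_noise)
print(alpha[i], S / (S + N))                     # argmin matches the MMSE coefficient
print(self_noise[i], S * N / (S + N))            # minimum matches SN/(S+N)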

B. Proof of Theorem 8

Proof: Here we demonstrate achievability for one permutation of the relay ordering, and thus drop the permutation index to simplify notation. The other permutation may be analogously achieved. Source node 1 transmits a message to the destination node 4 with the help of two relays: nodes 2 and 3. The achievability scheme follows a generalization of the lattice regular encoding/sliding window decoding DF scheme of Theorem 7. The only difference is the addition of one relay and thus one coding level.

Codebook construction: We construct three nested lattice chains according to Theorem 2:

• $\Lambda_1 \subseteq \Lambda_{L,3}^{(1)} \subseteq \Lambda_{L,4}^{(1)} \subseteq \Lambda_{f,1}$, or $\Lambda_1 \subseteq \Lambda_{L,4}^{(1)} \subseteq \Lambda_{L,3}^{(1)} \subseteq \Lambda_{f,1}$ (the relative nesting order depends on the system parameters and will be discussed in the following paragraph).

• $\Lambda_2 \subseteq \Lambda_{L,3}^{(2)} \subseteq \Lambda_{L,4}^{(2)} \subseteq \Lambda_{f,2}$, or $\Lambda_2 \subseteq \Lambda_{L,4}^{(2)} \subseteq \Lambda_{L,3}^{(2)} \subseteq \Lambda_{f,2}$.

• $\Lambda_3 \subseteq \Lambda_{L,4}^{(3)} \subseteq \Lambda_{f,3}$.

How these are ordered depends on the relative values of the power split parameters $\alpha_1, \alpha_2, \alpha_3, \beta_2, \beta_3$, the power constraints $P_1, P_2, P_3$, and the noise variances $N_2, N_3, N_4$. In particular, the second moments of the coarse lattices are selected as $\sigma^2(\Lambda_1) = \sigma^2(\Lambda_2) = \sigma^2(\Lambda_3) = 1$, with each transmitting node scaling the signals to meet its power constraint. The message set $\{1, \ldots, 2^{nR}\}$ is mapped in a one-to-one fashion to three codebooks $\mathcal{C}_1 = \{\Lambda_{f,1} \cap \mathcal{V}(\Lambda_1)\}$, $\mathcal{C}_2 = \{\Lambda_{f,2} \cap \mathcal{V}(\Lambda_2)\}$, and $\mathcal{C}_3 = \{\Lambda_{f,3} \cap \mathcal{V}(\Lambda_3)\}$. These mappings are independent. The fine lattices $\Lambda_{f,1}, \Lambda_{f,2}, \Lambda_{f,3}$ may be chosen to satisfy the needed rate constraint by proper selection of the corresponding parameters in Theorem 2. The lattices $\Lambda_{L,3}^{(1)}, \Lambda_{L,3}^{(2)}$ will be used for lattice list decoding at relay 3, while $\Lambda_{L,4}^{(1)}$, $\Lambda_{L,4}^{(2)}$, and $\Lambda_{L,4}^{(3)}$ will be used for lattice list decoding at the destination node 4. They will all be Rogers good, with fundamental Voronoi region volumes specified by the desired lattice list decoding constraints; we are able to select these volumes (or, equivalently, second moments) arbitrarily, as long as they are smaller than those of their corresponding nested coarse lattices, by Theorem 2. In which order they are nested will depend on the relative volumes, which in turn depend on the system parameters $\alpha_i, \beta_i$, the power constraints $P_1, P_2, P_3$, and the noise variances $N_2, N_3, N_4$. Define the following signals (which will be superposed as described in the encoding):

$$S_{1,b} = (t_1(w_b) + U_1) \bmod \Lambda_1, \quad S_{2,b} = (t_2(w_{b-1}) + U_2) \bmod \Lambda_2, \quad S_{3,b} = (t_3(w_{b-2}) + U_3) \bmod \Lambda_3,$$

where $U_1$, $U_2$, and $U_3$ are the dithers, which are uniformly distributed over $\mathcal{V}(\Lambda_1)$, $\mathcal{V}(\Lambda_2)$, and $\mathcal{V}(\Lambda_3)$, respectively, independent from block to block and independent of each other. The encoding and decoding steps are outlined in Fig. 10. We make a small remark on our notation: $S_{i,b}$ should not be thought of as the signal being transmitted by node $i$ (which would be $X_{i,b}$, but we do not use this, opting instead to write out the transmit signals in terms of the $S_{i,b}$). Rather, node $i$ will send a superposition of the signals $S_{j,b}$. Thus, multiple nodes may transmit the same (scaled) codeword, which will coherently combine.

Fig. 10. Lattice DF scheme for the AWGN multirelay channel.

Encoding: We again use block Markov encoding: the message $w$ is divided into $B$ blocks of $nR$ bits each. In block $b$, suppose node 2 knows $w_{b-1}$ (and hence $w_{b-2}$) and node 3 knows $w_{b-2}$. Node 1 sends the superposition/sum of $S_{1,b}$, $S_{2,b}$, and $S_{3,b}$ with powers $\alpha_1 P_1$, $\alpha_2 P_1$, and $\alpha_3 P_1$ ($\alpha_1 + \alpha_2 + \alpha_3 = 1$), respectively. Node 2 sends the superposition/sum of $S_{2,b}$ and $S_{3,b}$ with powers $\beta_2 P_2$ and $\beta_3 P_2$ ($\beta_2 + \beta_3 = 1$), respectively. Node 3 sends $S_{3,b}$ with power $P_3$.


Decoding:

Node 2 decodes $w_b$: In block $b$, since node 2 knows $w_{b-1}$ and $w_{b-2}$, and thus $S_{2,b}$ and $S_{3,b}$, it can subtract these terms from its received signal $Y_{2,b}$ and obtain a noisy observation of $S_{1,b}$ only. Node 2 is then able to uniquely decode $w_b$ as long as (see [23] or Lemma 6)

$$R \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1 P_1}{N_2}\right).$$

Node 3 decodes $w_{b-1}$: Since node 3 knows $w_{b-2}$ and thus $S_{3,b}$, it subtracts the corresponding terms from $Y_{3,b}$ and obtains a noisy observation of $S_{1,b}$ and $S_{2,b}$:

$$\tilde{Y}_{3,b} = \sqrt{\alpha_1 P_1}\, S_{1,b} + \left(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2}\right) S_{2,b} + Z_{3,b}.$$

It then uses $\Lambda_{L,3}^{(2)}$ to decode a list of possible $w_{b-1}$ of size $2^{n\left(R - \frac{1}{2}\log\left(1 + \frac{(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2})^2}{\alpha_1 P_1 + N_3}\right)\right)}$ in the presence of the interference $S_{1,b}$ (uniformly distributed over the fundamental Voronoi region of a Rogers good lattice code) and Gaussian noise (hence, we may apply Theorem 3). It then intersects this list with the list of asymptotic size $2^{n\left(R - \frac{1}{2}\log\left(1 + \frac{\alpha_1 P_1}{N_3}\right)\right)}$ obtained in block $b-1$ by subtracting off the known signals dependent on $w_{b-2}$ and $w_{b-3}$ to obtain a clean observation of $S_{1,b-1}$. To ensure a unique $w_{b-1}$ in the intersection, by independence of the lists (based on the independent mappings of the messages to the codebooks $\mathcal{C}_1$ and $\mathcal{C}_2$), we need

$$R \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1 P_1}{N_3}\right) + \frac{1}{2}\log\left(1 + \frac{(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2})^2}{\alpha_1 P_1 + N_3}\right).$$

After node 3 decodes $w_{b-1}$, it further subtracts the $S_{2,b}$ terms from its received signal and obtains a noisy observation of $S_{1,b}$. It again uses the lattice list decoder, now with $\Lambda_{L,3}^{(1)}$, to output a list of $w_b$ of size $2^{n\left(R - \frac{1}{2}\log\left(1 + \frac{\alpha_1 P_1}{N_3}\right)\right)}$, which is used in block $b+1$ to determine $w_b$.

Node 4 decodes $w_{b-2}$: Finally, node 4 intersects three lists to determine $w_{b-2}$. These three lists are again independent by the independent mapping of the messages to the codebooks $\mathcal{C}_1$, $\mathcal{C}_2$, $\mathcal{C}_3$, where each corresponds to one of the three links (between nodes 1-4, 2-4, and 3-4). The first list, $\mathcal{L}^{(3)}(w_{b-2})$, is obtained by list decoding using $\Lambda_{L,4}^{(3)}$ on its received signal

$$Y_{4,b} = \sqrt{\alpha_1 P_1}\, S_{1,b} + \left(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2}\right) S_{2,b} + \left(\sqrt{\alpha_3 P_1} + \sqrt{\beta_3 P_2} + \sqrt{P_3}\right) S_{3,b} + Z_{4,b},$$

which is a combination of scaled signals $S_{1,b}$, $S_{2,b}$, and $S_{3,b}$, which are uniform over the fundamental Voronoi regions of Rogers good lattices, and additive Gaussian noise $Z_{4,b}$, and is of size

$$2^{n\left(R - \frac{1}{2}\log\left(1 + \frac{(\sqrt{\alpha_3 P_1} + \sqrt{\beta_3 P_2} + \sqrt{P_3})^2}{\alpha_1 P_1 + (\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2})^2 + N_4}\right)\right)}.$$

The second list, $\mathcal{L}^{(2)}(w_{b-2})$, is obtained in block $b-1$ and is of size $2^{n\left(R - \frac{1}{2}\log\left(1 + \frac{(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2})^2}{\alpha_1 P_1 + N_4}\right)\right)}$, while the third list, $\mathcal{L}^{(1)}(w_{b-2})$, is obtained in block $b-2$ and is of size $2^{n\left(R - \frac{1}{2}\log\left(1 + \frac{\alpha_1 P_1}{N_4}\right)\right)}$. The formation of these lists is described next (they are formed analogously in blocks $b-1$ and $b-2$).

After the successful decoding of $w_{b-2}$ in block $b$, node 4 decodes two more lists, which are used in blocks $b+1$ and $b+2$ to determine $w_{b-1}$ and $w_b$, respectively. Node 4 first subtracts the $S_{3,b}$ terms from its received signal to obtain $\tilde{Y}_{4,b}$ and decodes a list of possible $w_{b-1}$ from the terms $\left(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2}\right) S_{2,b}$ using $\Lambda_{L,4}^{(2)}$, in the presence of interference terms which are uniformly distributed over Rogers good lattices and Gaussian noise (hence Theorem 3 applies). This list is denoted by $\mathcal{L}^{(2)}(w_{b-1})$ and is used in block $b+1$ to determine $w_{b-1}$.

After node 4 decodes $w_{b-1}$ in block $b+1$, it further subtracts the $S_{2,b}$ terms from $\tilde{Y}_{4,b}$ to obtain a noisy observation of $S_{1,b}$. It then uses $\Lambda_{L,4}^{(1)}$ to decode a list of $w_b$, denoted by $\mathcal{L}^{(1)}(w_b)$, which is used in block $b+2$ to determine $w_b$.

In block $b$, to ensure a unique message in the intersection of the three independent lists, we need to constrain the rate as in

$$R \leq \frac{1}{2}\log\left(1 + \frac{\alpha_1 P_1}{N_4}\right) + \frac{1}{2}\log\left(1 + \frac{(\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2})^2}{\alpha_1 P_1 + N_4}\right) + \frac{1}{2}\log\left(1 + \frac{(\sqrt{\alpha_3 P_1} + \sqrt{\beta_3 P_2} + \sqrt{P_3})^2}{\alpha_1 P_1 + (\sqrt{\alpha_2 P_1} + \sqrt{\beta_2 P_2})^2 + N_4}\right). \qquad (32)$$
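As in the two-way relay case, the triple intersection is combinatorial: each list contains the true message plus roughly $2^{n(R - r_k)}$ random impostors, where $r_k$ is the rate supported by link $k$, so the expected number of impostors surviving all three lists is about $2^{n(R - r_1 - r_2 - r_3)}$; constraint (32) is exactly the requirement that this exponent be negative. A toy Monte Carlo with hypothetical sizes:

import numpy as np

rng = np.random.default_rng(7)
M = 2**18                       # message set size, standing in for 2^{nR}
sizes = (2**7, 2**6, 2**5)      # list sizes, standing in for 2^{n(R - r_k)}
true_msg = 0

unique = 0
trials = 200
for _ in range(trials):
    lists = [set(rng.choice(M, s - 1, replace=False)) | {true_msg} for s in sizes]
    unique += (lists[0] & lists[1] & lists[2] == {true_msg})
print(unique / trials)          # ~1 when R < r1 + r2 + r3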

ACKNOWLEDGMENT

The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of the NSF.

REFERENCES

[1] Y. Song and N. Devroye, "List decoding for nested lattices and applications to relay channels," presented at the Allerton Conf. Commun., Control and Comp., Monticello, IL, USA, 2010.
[2] Y. Song and N. Devroye, "A lattice compress-and-forward scheme," presented at the IEEE Inf. Theory Workshop, Paraty, Brazil, Oct. 2011.
[3] B. Nazer and M. Gastpar, "Compute-and-forward: Harnessing interference through structured codes," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6463-6486, Oct. 2011.
[4] M. P. Wilson, K. Narayanan, H. Pfister, and A. Sprintson, "Joint physical layer coding and network coding for bi-directional relaying," IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5641-5654, Nov. 2010.
[5] W. Nam, S.-Y. Chung, and Y. Lee, "Capacity of the Gaussian two-way relay channel to within 1/2 bit," IEEE Trans. Inf. Theory, vol. 56, no. 11, pp. 5488-5494, Nov. 2010.
[6] W. Nam, S.-Y. Chung, and Y. Lee, "Nested lattice codes for Gaussian relay networks with interference," IEEE Trans. Inf. Theory, vol. 57, no. 12, pp. 7733-7745, Dec. 2011.
[7] A. Özgür and S. Diggavi, "Approximately achieving Gaussian relay network capacity with lattice codes," in Proc. IEEE Int. Symp. Inf. Theory, 2010, pp. 669-673 [Online]. Available: arXiv:1005.1284v1
[8] M. Nokleby and B. Aazhang, "Lattice coding over the relay channel," presented at the IEEE Int. Conf. Commun., Kyoto, Japan, Jun. 2011.
[9] M. Nokleby and B. Aazhang, "Unchaining from the channel: Cooperative computation over multiple-access channels," presented at the IEEE Inf. Theory Workshop, Paraty, Brazil, Oct. 2011.
[10] T. M. Cover and A. El Gamal, "Capacity theorems for the relay channel," IEEE Trans. Inf. Theory, vol. IT-25, no. 5, pp. 572-584, Sep. 1979.
[11] L.-L. Xie and P. R. Kumar, "A network information theory for wireless communication: Scaling laws and optimal operation," IEEE Trans. Inf. Theory, vol. 50, no. 5, pp. 748-767, May 2004.
[12] G. Kramer, M. Gastpar, and P. Gupta, "Cooperative strategies and capacity theorems for relay networks," IEEE Trans. Inf. Theory, vol. 51, no. 9, pp. 3037-3063, Sep. 2005.
[13] A. El Gamal and Y.-H. Kim, Lecture Notes on Network Information Theory, 2010 [Online]. Available: http://arxiv.org/abs/1001.3404
[14] L.-L. Xie and P. R. Kumar, "An achievable rate for the multiple-level relay channel," IEEE Trans. Inf. Theory, vol. 51, no. 4, pp. 1348-1358, Apr. 2005.
[15] S.-H. Lim, Y.-H. Kim, A. El Gamal, and S.-Y. Chung, "Noisy network coding," IEEE Trans. Inf. Theory, vol. 57, no. 5, pp. 3132-3152, May 2011.
[16] G. Bresler, A. Parekh, and D. Tse, "The approximate capacity of the many-to-one and one-to-many Gaussian interference channels," IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4566-4592, Sep. 2010.
[17] S. Sridharan, A. Jafarian, S. Vishwanath, and S. A. Jafar, "Capacity of symmetric K-user Gaussian very strong interference channels," 2008 [Online]. Available: http://arxiv.org/abs/0808.2314
[18] A. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Trans. Inf. Theory, vol. IT-22, no. 1, pp. 1-10, Jan. 1976.
[19] R. Zamir and S. Shamai, "Nested linear/lattice codes for Wyner-Ziv encoding," in Proc. Inf. Theory Workshop, 1998, pp. 92-93.
[20] R. Zamir, S. Shamai, and U. Erez, "Nested linear/lattice codes for structured multiterminal binning," IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1250-1276, Jun. 2002.
[21] M. Khormuji and M. Skoglund, "On instantaneous relaying," IEEE Trans. Inf. Theory, vol. 56, no. 7, pp. 3378-3394, Jul. 2010.
[22] S. Yao, M. Khormuji, and M. Skoglund, "Sawtooth relaying," IEEE Commun. Lett., vol. 12, no. 9, pp. 612-615, Sep. 2008.
[23] U. Erez and R. Zamir, "Achieving 1/2 log(1 + SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inf. Theory, vol. 50, no. 10, pp. 2293-2314, Oct. 2004.
[24] U. Erez, S. Litsyn, and R. Zamir, "Lattices which are good for (almost) everything," IEEE Trans. Inf. Theory, vol. 51, no. 10, pp. 3401-3416, Oct. 2005.
[25] R. Zamir, "Lattices are everywhere," in Proc. 4th Annu. Workshop Inform. Theory Appl., 2009, pp. 392-421.
[26] O. Ordentlich and U. Erez, "Interference alignment at finite SNR," 2011 [Online]. Available: arXiv:1104.5456
[27] O. Ordentlich, U. Erez, and B. Nazer, "The approximate sum capacity of the symmetric Gaussian K-user interference channel," in Proc. IEEE Int. Symp. Inf. Theory, Jul. 2012, pp. 2072-2076.
[28] S. Gel'fand and M. Pinsker, "Coding for channels with random parameters," Probl. Contr. Inf. Theory, vol. 9, no. 1, pp. 19-31, 1980.
[29] M. Costa, "Writing on dirty paper," IEEE Trans. Inf. Theory, vol. IT-29, no. 3, pp. 439-441, May 1983.
[30] T. Philosof, R. Zamir, and A. Khisti, "Lattice strategies for the dirty multiple access channel," IEEE Trans. Inf. Theory, submitted for publication.
[31] A. Jafarian and S. Vishwanath, "Gaussian interference networks: Lattice alignment," in Proc. IEEE Inf. Theory Workshop, 2010, pp. 1-5.
[32] D. Krithivasan and S. Pradhan, "Lattices for distributed source coding: Jointly Gaussian sources and reconstruction of a linear function," IEEE Trans. Inf. Theory, vol. 55, no. 12, pp. 5628-5651, Dec. 2009.
[33] A. Wagner, "On distributed compression of linear functions," IEEE Trans. Inf. Theory, vol. 57, no. 1, pp. 79-94, Jan. 2011.
[34] J. Korner and K. Marton, "How to encode the modulo-two sum of binary sources," IEEE Trans. Inf. Theory, vol. IT-25, no. 2, pp. 219-221, Mar. 1979.
[35] A. Avestimehr, S. Diggavi, and D. Tse, "Wireless network information flow: A deterministic approach," IEEE Trans. Inf. Theory, vol. 57, no. 4, pp. 1872-1905, Apr. 2011.
[36] B. Nazer, M. Gastpar, S. A. Jafar, and S. Vishwanath, "Ergodic interference alignment," IEEE Trans. Inf. Theory, vol. 58, no. 10, pp. 6355-6371, Oct. 2012.
[37] U. Niesen, B. Nazer, and P. Whiting, "Computation alignment: Capacity approximation without noise accumulation," [Online]. Available: http://arxiv.org/pdf/1108.6312
[38] M. Khormuji and M. Skoglund, "Noisy analog network coding for the two-way relay channel," in Proc. IEEE Int. Symp. Inf. Theory, 2011, pp. 2065-2069.
[39] S. L. Fong, L. Ping, and C. W. Sung, "Amplify-and-modulo for Gaussian two-way relay channel," in Proc. IEEE 23rd Int. Symp. Personal Indoor Mobile Radio Commun., 2012, pp. 83-88.
[40] L. Sankar, G. Kramer, and N. Mandayam, "Offset encoding for multiple-access relay channels," IEEE Trans. Inf. Theory, vol. 53, no. 10, pp. 3814-3821, Oct. 2007.
[41] D. Woldegebreal and H. Karl, "Multiple-access relay channel with network coding and non-ideal source-relay channels," in Proc. 4th Int. Symp. Wireless Commun. Syst., 2007, pp. 732-736.
[42] H. Loeliger, "Averaging bounds for lattices and linear codes," IEEE Trans. Inf. Theory, vol. 43, no. 6, pp. 1767-1773, Nov. 1997.
[43] G. Poltyrev, "On coding without restrictions for the AWGN channel," IEEE Trans. Inf. Theory, vol. 40, no. 2, pp. 409-417, Mar. 1994.
[44] C. A. Rogers, "Lattice coverings of space," Mathematika, vol. 6, pp. 33-39, 1959.
[45] R. Zamir and M. Feder, "On lattice quantization noise," IEEE Trans. Inf. Theory, vol. 42, no. 4, pp. 1152-1159, Jul. 1996.
[46] G. D. Forney Jr., "On the role of MMSE estimation in approaching the information theoretic limits of linear Gaussian channels: Shannon meets Wiener," presented at the Allerton Conf. Commun., Control and Comp., 2003.
[47] W. Coppel, Number Theory: An Introduction to Mathematics, 2nd ed. New York, NY, USA: Springer-Verlag, 2009.
[48] A. Carleial, "Multiple-access channels with different generalized feedback signals," IEEE Trans. Inf. Theory, vol. IT-28, no. 6, pp. 841-850, Nov. 1982.
[49] F. Willems, "Information theoretical results for multiple access channels," Ph.D. dissertation, K.U. Leuven, Leuven, Belgium, 1982.
[50] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge, U.K.: Cambridge Univ. Press, 2011.
[51] S. Kim, N. Devroye, P. Mitran, and V. Tarokh, "Achievable rate regions and performance comparison of half duplex bi-directional relaying protocols," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6405-6418, Oct. 2011.
[52] A. Avestimehr, A. Sezgin, and D. Tse, "Capacity of the two-way relay channel within a constant gap," Eur. Trans. Telecommun., vol. 21, no. 4, pp. 363-374, 2010.
[53] I. Baik and S.-Y. Chung, "Network coding for two-way relay channels using lattices," presented at the IEEE Int. Conf. Commun., Beijing, China, May 2008.
[54] L. Ong, C. Kellett, and S. Johnson, "Capacity theorems for the AWGN multi-way relay channel," [Online]. Available: http://arxiv4.library.cornell.edu/abs/1004.2300
[55] L.-L. Xie, "Network coding and random binning for multi-user channels," in Proc. 10th Canadian Workshop Inf. Theory, 2007, pp. 85-88.
[56] B. Rankov and A. Wittneben, "Achievable rate regions for the two-way relay channel," in Proc. IEEE Int. Symp. Inf. Theory, Seattle, WA, USA, Jul. 2006, pp. 1668-1672.
[57] M. Nokleby, B. Nazer, B. Aazhang, and N. Devroye, "Relays that cooperate to compute," in Proc. Int. Symp. Wireless Commun. Syst., Aug. 2012, pp. 266-270.
[58] Y. Song and N. Devroye, "List decoding for nested lattices and applications to relay channels," presented at the Allerton Conf. Commun., Control and Comp., Monticello, IL, USA, Sep. 2010.
[59] A. Wyner, "The rate-distortion function for source coding with side information at the decoder-II: General sources," Inf. Control, vol. 38, no. 1, pp. 60-80, 1978.
[60] D. Krithivasan and S. S. Pradhan, "Distributed source coding using abelian group codes: A new achievable rate-distortion region," IEEE Trans. Inf. Theory, vol. 57, no. 3, pp. 1459-1591, Mar. 2011.

Yiwei Song obtained his B.Eng. from Nanjing University of Posts and Telecommunications in 2009. He is currently a Ph.D. candidate at the University of Illinois at Chicago. He was a research intern at Samsung Information System America during the summer of 2012 and an engineering intern at Broadcom during the summer of 2013. His research interests are in network information theory, especially lattice codes and relay channels.

Natasha Devroye has been an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Illinois at Chicago since January 2009. From July 2007 until July 2008 she was a Lecturer at Harvard University. Dr. Devroye obtained her Ph.D. in Engineering Sciences from the School of Engineering and Applied Sciences at Harvard University in 2007, an M.Sc. from Harvard University in 2003, and an Honors B.Eng. in Electrical Engineering from McGill University in 2001. Dr. Devroye was a recipient of an NSF CAREER award in 2011. Her research focuses on multi-user information theory and applications to cognitive and software-defined radio, radar, and two-way and wireless communications in general.