Digital Communication, Third Edition
John R. Barry, Edward A. Lee, David G. Messerschmitt
Solutions Manual
Kluwer Academic Publishers



Chapter 1

Problem 1-1.

(a) The overload point of the A/D converter (the largest signal that can be accommodated) will be chosen on the basis of the signal statistics and the signal power so as to keep the probability of overload low. Assuming the signal doesn't change, we would want to keep the overload point fixed. Hence, the step size ∆ would be halved.

(b) Generally the error signal would be halved in amplitude. This would increase the SNR by 20 log10 2 ≈ 6 dB.

(c) The bit rate would increase by fs, the sampling rate.

(d) We get, for some constant K,

SNRdB = 6n + K , fb = nfs , (S.1)

and thus

SNRdB = 6fb ⁄ fs + K . (S.2)

In particular, the SNR in dB increases linearly with the bit rate.
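The linear SNR-versus-bit-rate relationship in (S.2) is easy to illustrate numerically. This is only a sketch: the offset K and the sampling rate fs below are made-up values, not values from the text.

```python
# SNR of a uniform quantizer grows 6 dB per bit, i.e. linearly in fb = n*fs.
# K and fs are illustrative assumptions, not values from the problem.
def snr_db(fb, fs, K=1.8):
    n = fb / fs            # bits per sample
    return 6.0 * n + K     # (S.2): SNR_dB = 6*fb/fs + K

fs = 8000.0                    # assumed sampling rate, samples/s
base = snr_db(8 * fs, fs)      # 8 bits per sample
doubled = snr_db(16 * fs, fs)  # doubling the bit rate adds 6*8 = 48 dB
```

Doubling the bit rate here adds 48 dB, confirming that each extra bit per sample is worth 6 dB.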

Problem 1-2. Each bit error will cause one recovered sample to be the wrong amplitude, which is similar to an added impulse to the signal. This will be perceived as a "pop" or a "click." The size of this impulse will depend on which of the n bits of a particular sample is in error. The error will range from the smallest quantization interval (the least-significant bit in error) to the entire range of signal levels (the sign bit in error).

Problem 1-3. The most significant sources will be the anti-aliasing and reconstruction lowpass filters, which will have some group delay, and the propagation delay on the communication medium. Any multiplexers (Chapter 17) will introduce a small amount of delay, as will digital switches (Chapter 17).

Problem 1-4. Assume the constant bit rate is larger than the peak bit rate of the source. Then we might artificially increase the bit rate of the source until it precisely equals the bit rate of the link by adding extra bits. We must have some way of identifying these extra bits at the receiver so that they can be removed. A number of schemes are possible, so here is but one: Divide the source bits into groups called packets with arbitrary length. Append a unique sequence of eight bits, called a flag, to the beginning and end of each packet, and transmit these packets on the link interspersed with an idle code (say all zeros). The only problem now is to ensure that the flag does not occur in the input bit stream. This can be accomplished using line coding techniques. For example, the flag might be a sequence of 8 consecutive ones, and the transmitter might use the Manchester code, sending 01 to convey a 0 bit, and 10 to convey a 1 bit.
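The flag-plus-Manchester scheme just described can be sketched in a few lines. This is a toy illustration with invented helper names; the key property is that a Manchester-coded payload can never contain eight consecutive ones.

```python
FLAG = '1' * 8   # eight consecutive ones delimit each packet

def manchester(bits):
    # line coding: 0 -> 01, 1 -> 10; runs of ones in the coded stream are
    # at most two long, so the flag can never appear inside the payload
    return ''.join('10' if b == '1' else '01' for b in bits)

def frame(packet):
    # wrap a Manchester-coded packet between two flags
    return FLAG + manchester(packet) + FLAG

coded = manchester('1' * 64)       # worst case: all-ones payload
stream = frame('110100111')
```

Even the all-ones payload, once coded, cannot imitate the flag, so the receiver can locate packet boundaries unambiguously within the data.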



Chapter 2

Problem 2-1. We start out with an easy problem! Looking at Fig. 2-2, when the imaginary part of the impulse response is zero, we see that the system consists of two independent filters, one for the real part and one for the imaginary part of the input, with no crosstalk. The imaginary part of the impulse response results in crosstalk between the real and imaginary parts.

Problem 2-2. Doing the discrete-time part only, write the convolution sum when the input is e^j2πfkT as:

yk = hk ∗ e^j2πfkT = ∑n=–∞..∞ hn e^j2πf(k – n)T = e^j2πfkT ∑n=–∞..∞ hn e^–j2πfnT. (S.3)

The output is the same complex exponential multiplied by a sum that is a function of the impulse response of the system hn and the frequency f of the input, but is not a function of the time index k. This frequency response

H(e^j2πfT) = ∑n=–∞..∞ hn e^–j2πfnT (S.4)

is recognized as the Fourier transform of the discrete-time signal hn.

Problem 2-3.

(a) Let z(t) = x(t) ∗ g(t), which has Fourier transform

Z(f) = X(f)G(f). (S.5)

Since wk = hk ∗ z(kT), it follows from (2.17) that

W(e^j2πfT) = H(e^j2πfT) (1 ⁄ T) ∑m=–∞..∞ X(f – m ⁄ T)G(f – m ⁄ T). (S.6)

The output signal Fourier transform is Y(f) = F(f)W(f), where W(f) is the Fourier transform of the output of the impulse generator, defined as:

w(t) = ∑k=–∞..∞ wk δ(t – kT). (S.7)

Taking the Fourier transform of the above equation yields:

W(f) = ∑k=–∞..∞ wk e^–j2πfkT. (S.8)


We recognize this as the discrete-time Fourier transform W(e^j2πfT) of the sequence wk. Therefore, we conclude that:

Y(f) = F(f)W(f) = F(f)W(e^j2πfT) = (1 ⁄ T) F(f)H(e^j2πfT) ∑m=–∞..∞ G(f – m ⁄ T)X(f – m ⁄ T). (S.9)

(b) Yes, you can see from part (a) that if we add two input signals, the output will be a similar superposition.

(c) If F(f) = 0 for | f | > 1 ⁄ (2T), then for | f | ≤ 1 ⁄ (2T) we have

Y(f) = (1 ⁄ T) F(f)H(e^j2πfT)G(f)X(f) (S.10)

and the system is time-invariant with frequency response (1 ⁄ T) F(f)H(e^j2πfT)G(f).

Problem 2-4. In continuous-time:

∫–∞..∞ |x(t)|² dt = F.T.{x(t)x*(t)} | f = 0 = [X(f) ∗ X*(–f)] | f = 0 = ∫–∞..∞ |X(f)|² df. (S.11)

Discrete-time follows similarly.

Problem 2-5. In continuous-time:

∫–∞..∞ x(t)y*(t) dt = F.T.{x(t)y*(t)} | f = 0 = [X(f) ∗ Y*(–f)] | f = 0 = ∫–∞..∞ X(f)Y*(f) df. (S.12)

Discrete-time follows similarly.

Problem 2-6. Let x(t) be a continuous-time signal, and let xk = x(kT) denote the discrete-time signal obtained by sampling at the rate 1 ⁄ T. Let Ec and Ed denote the energies of the continuous-time and discrete-time signals, respectively:

Ec = ∫–∞..∞ x²(t) dt, Ed = ∑k |xk|². (S.13)

In the special case when x(t) is properly bandlimited, so that its Fourier transform satisfies X(f) = 0 for | f | > 1 ⁄ (2T), there is a simple relationship between the two energies:

Ed = ∑k |xk|² = (1 ⁄ 2π) ∫–π..π |X(e^jθ)|² dθ (from Parseval's)


= T ∫–1 ⁄ (2T)..1 ⁄ (2T) |X(e^j2πfT)|² df (where θ = 2πfT)

= T ∫–1 ⁄ (2T)..1 ⁄ (2T) |(1 ⁄ T) ∑m X(f – m ⁄ T)|² df (from (2.17))

= (1 ⁄ T) ∫–1 ⁄ (2T)..1 ⁄ (2T) ∑m |X(f – m ⁄ T)|² df (from the bandlimited assumption)

= (1 ⁄ T) ∑m ∫(m – 0.5) ⁄ T..(m + 0.5) ⁄ T |X(ν)|² dν (change of variables ν = f – m ⁄ T)

= (1 ⁄ T) ∫–∞..∞ |X(ν)|² dν

= Ec ⁄ T. (S.14)

Thus, whenever the signal is properly bandlimited, the continuous-time energy and the

discrete-time energy are related by the simple relationship Ec = EdT.

In the general case when the signal is not bandlimited, however, there is no such simple

relationship. All three of the following are possible:

(1) Ec = EdT ; (2) Ec < EdT ; (3) Ec > EdT.

For example, consider a continuous-time signal of the form:

x(t) = ∑k xk g(t – kT) + ∑k yk g(t – kT – T ⁄ 2), (S.15)

where g(t) is a symmetric unit-height rectangular pulse of width ∆ ≤ T, so that g(t) = 1 for | t | < ∆ ⁄ 2 and g(t) = 0 for | t | > ∆ ⁄ 2. Note that the energy of g(t) is ∆. Because g(t) is time limited, it follows that x(t) is not bandlimited. Sampling x(t) at rate 1 ⁄ T yields x(kT) = xk.

For any particular sequence xk we can make the energy of x(t) big or small by carefully choosing the pulse width ∆ and the secondary sequence yk. In particular, the energy of x(t) is easily expressed in terms of Ed = ∑k |xk|² and Ey = ∑k |yk|², namely:

Ec = ∫–∞..∞ x²(t) dt = ∫–∞..∞ (∑k xk g(t – kT) + ∑k yk g(t – kT – T ⁄ 2))² dt = ∆(∑k |xk|² + ∑k |yk|²) = ∆(Ed + Ey). (S.16)

(1) We can get Ec = EdT by choosing ∆ = T and yk = 0.

(2) We can get Ec < EdT by choosing ∆ = T ⁄ 2 and yk = 0.

(3) We can get Ec > EdT by choosing ∆ = T ⁄ 2 and yk = 2xk.
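These three cases are easy to confirm numerically. The following sketch uses an arbitrary sample sequence xk and a crude Riemann sum; the grid spacing and sequence are assumptions, not values from the problem.

```python
T = 1.0
dt = 1e-4
xk = [1.0, -2.0, 0.5]                  # arbitrary sample sequence
Ed = sum(x * x for x in xk)            # discrete-time energy

def Ec(delta, yk):
    # Riemann sum of x(t)^2 for x(t) = sum xk g(t-kT) + sum yk g(t-kT-T/2),
    # with g a unit-height rectangular pulse of width delta
    total = 0.0
    t = -0.5 * T
    while t < len(xk) * T:
        x = 0.0
        for k in range(len(xk)):
            if abs(t - k * T) < delta / 2:
                x += xk[k]
            if abs(t - k * T - T / 2) < delta / 2:
                x += yk[k]
        total += x * x * dt
        t += dt
    return total

case1 = Ec(T, [0, 0, 0])                # should equal Ed*T
case2 = Ec(T / 2, [0, 0, 0])            # should be less than Ed*T
case3 = Ec(T / 2, [2 * x for x in xk])  # should exceed Ed*T
```

All three orderings come out as claimed, since each case reduces to Ec = ∆(Ed + Ey).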


Problem 2-7. The transfer function is H(z) = 1 + z^–1. Therefore, the frequency response is:

H(e^jθ) = 1 + e^–jθ = e^–jθ ⁄ 2 (e^jθ ⁄ 2 + e^–jθ ⁄ 2) = 2e^–jθ ⁄ 2 cos(θ ⁄ 2). (S.17)

(a) When θ is restricted to | θ | ≤ π, the magnitude response and phase response are:

|H(e^jθ)| = 2cos(θ ⁄ 2), ∠H(e^jθ) = –θ ⁄ 2. (S.18)

These repeat with period 2π for other θ. Clearly, the phase response is a linear function of θ over this range, and it is thus a piecewise linear function for all θ.

(b) Assume | θ0 | ≤ π, without loss of generality. (If not, simply add or subtract multiples of 2π until it is; this process won't change the input sequence xk.) When the input is xk = cos(θ0k) = 0.5e^jθ0k + 0.5e^–jθ0k, a sum of eigenfunctions, the output is:

yk = 0.5H(e^jθ0)e^jθ0k + 0.5H(e^–jθ0)e^–jθ0k = cos(θ0 ⁄ 2)e^j(θ0k – θ0 ⁄ 2) + cos(θ0 ⁄ 2)e^–j(θ0k – θ0 ⁄ 2) = 2cos(θ0 ⁄ 2)cos(θ0k – θ0 ⁄ 2). (S.19)

The output is thus a sinusoid whose magnitude and phase are given by the magnitude response and phase response, respectively, evaluated at the sinusoid's frequency θ0.
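The steady-state claim in (S.19) can be checked by running the filter directly. A minimal sketch, with θ0 an arbitrary choice:

```python
import math

theta0 = math.pi / 3                     # assumed input frequency, |theta0| <= pi
N = 64
x = [math.cos(theta0 * k) for k in range(N)]

# run the filter H(z) = 1 + z^-1:  y_k = x_k + x_{k-1}
y = [x[k] + (x[k - 1] if k > 0 else 0.0) for k in range(N)]

# (S.19) prediction: amplitude 2*cos(theta0/2), phase shift -theta0/2
pred = [2 * math.cos(theta0 / 2) * math.cos(theta0 * k - theta0 / 2)
        for k in range(N)]
err = max(abs(y[k] - pred[k]) for k in range(1, N))  # k = 0 is the start-up transient
```

After the one-sample transient, filter output and prediction agree to machine precision, because for k ≥ 1 the sum cos(θ0k) + cos(θ0(k – 1)) is exactly 2cos(θ0 ⁄ 2)cos(θ0k – θ0 ⁄ 2).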

Problem 2-8. The Fourier transform of a real system is conjugate symmetric, so

H(f) = A(f)e^jΘ(f) = H*(–f) = A(–f)e^–jΘ(–f), (S.20)

since A(f) = |H(f)| is both non-negative and symmetric. Hence, Θ(f) = –Θ(–f).

Problem 2-9. From Problem 2-8, the phase response of a real system is anti-symmetric, so the frequency response of the phase shifter should be

H(f) = e^jφ sgn(f) = cos(φ) + j sin(φ)sgn(f), (S.21)

where we have used the symmetry and anti-symmetry of the cos and sin, respectively. Thus, in the time domain this becomes

h(t) = δ(t)cos(φ) – sin(φ) ⁄ (πt). (S.22)

Problem 2-10. If Re{a} > 0, then we can use the Fourier transform pair

y(t) = 1 ⁄ (jt + a) ↔ Y(f) = 2π e^2πaf u(–f), (S.23)

where u(f) is the unit step function. Then we observe that x(t) is y(t) convolved with an impulse stream ∑m=–∞..∞ δ(t – mT), so its transform is


X(f) = Y(f) ⋅ (1 ⁄ T) ∑m=–∞..∞ δ(f – m ⁄ T) = (2π ⁄ T) ∑m=–∞..0 e^2πam ⁄ T δ(f – m ⁄ T). (S.24)

If Re{a} = 0, then we can use the transform of 1 ⁄ (jt), namely –π sign(f), multiplying it again by the transform of the impulse stream to get

X(f) = –(π ⁄ T) ∑m=–∞..∞ sign(m) δ(f – m ⁄ T) (when Re{a} = 0). (S.25)

Problem 2-11. Given Y(f) = H(f)X(f), then X(f0) = 0 necessarily implies that Y(f0) = 0.

Problem 2-12. It is easy to see that the two outputs of Fig. 2-6(a) are the real and imaginary parts of a complex signal z(t), where z(t) is the output of the system sketched below:

[Figure: s(t) is multiplied by 2exp(–j2πfct) to form y(t), which is passed through a LPF to produce z(t).]

Schematically let us represent the input Fourier transform S(f) as follows:

[Figure: S(f) versus f, with triangles inside | f | ≤ 2f0 and shaded lumps beyond ±2f0.]

The triangles represent the portion of S(f) bandlimited to | f | ≤ 2f0, and the shaded lumps represent any energy beyond 2f0. According to the above system diagram, the input to the LPF has Fourier transform:

[Figure: 2Y(f) versus f, the spectrum of the mixer output.]


And thus the LPF output will have Fourier transform:

[Figure: 2Z(f) versus f, the low-pass portion of the mixer output.]

Contrast this with the Fourier transform of the complex envelope of s(t), namely:

[Figure: 2Ŝ(f) versus f, including the shaded lump.]

Comparing Z(f) to Ŝ(f), we see they will be equal if and only if the shaded lump is zero, i.e., if and only if there is no energy beyond 2f0.

Problem 2-18.

(a) From (2.7),

yk = ∑m=–∞..∞ z^(k – m) hm = z^k ∑m=–∞..∞ z^–m hm = z^k H(z). (S.26)

Because the system is time invariant, H(z) does not depend on k.

(b) If we constrain z to lie on the unit circle, | z | = 1, the two results are identical.

Problem 2-19. Let the response to z^k be yk. By linearity, if the input to the system is z^(k + m) = z^k z^m, then the output is z^m yk. By time invariance the response to that same input is yk + m. Setting these two responses equal,

z^m yk = yk + m (S.27)

and setting k = 0 we get the desired result

ym = y0 z^m. (S.28)

The transfer function is the complex number y0, which is evidently a function of z, so we define the notation y0 = H(z) to reflect this property.

Problem 2-20. The Z transform is

X(z) = ∑k=–∞..∞ z^–k a^k uk = ∑k=0..∞ (az^–1)^k. (S.29)

For any b satisfying | b | < 1 we have the identity

1 ⁄ (1 – b) = ∑k=0..∞ b^k (S.30)

which is easily verified by using long division on the left-hand side. Therefore, in the region of the complex plane where |z^–1| < 1 ⁄ |a|, the Z transform is


X(z) = 1 ⁄ (1 – az^–1). (S.31)

Outside of this region, the Z transform does not exist. If | a | > 1, we easily see that the signal goes to infinity as k increases. Not coincidentally, if | a | > 1 the region in which the Z transform exists does not include the unit circle, implying that the Fourier transform does not exist either.

Problem 2-21. If the response of the system to z^t is y(t), then by linearity the response to z^(t + u) = z^t z^u is z^u y(t). By time invariance, the response to z^(t + u) is y(t + u). Setting these two equal, y(t + u) = z^u y(t), and setting t = 0, y(u) = y(0) z^u.

Clearly e^st = z^t if z = e^s; i.e., if s = j2πf, then z = e^j2πf, a point on the unit circle. Substituting into the convolution,

y(t) = ∫–∞..∞ h(τ)x(t – τ) dτ = ∫–∞..∞ h(τ)e^s(t – τ) dτ = e^st ∫–∞..∞ h(τ)e^–sτ dτ = e^st HL(s) (S.32)

which implies that HL(s) is an eigenvalue of the system, with eigenfunction e^st.

Problem 2-22.

(a) In both cases, we have:

X(z) = z ⁄ (z – a) = 1 ⁄ (1 – az^–1). (S.33)

(b) | z | > | a | and | z | < | a |, respectively.

(c) | a | < 1 and | a | > 1, respectively.

Problem 2-23. When the ROC is | z | > | a |, which implies |az^–1| < 1, we get:

1 ⁄ (1 – az^–1) = 1 + az^–1 + a²z^–2 + ⋯ (S.34)

so that the time-domain signal can be read off from the polynomial coefficients as a^k uk. On the other hand, when the ROC is | z | < | a |, which implies |za^–1| < 1, we get:

1 ⁄ (1 – az^–1) = –(z ⁄ a) ⁄ (1 – z ⁄ a) = –(z ⁄ a)(1 + z ⁄ a + (z ⁄ a)² + ⋯). (S.35)

In this case, the time-domain coefficients are –a^k u–k–1.


Problem 2-24. First we perform a partial fraction expansion,

X(z) = A ⁄ (z – a) + B ⁄ (z – b) (S.36)

where

A = a² ⁄ (a – b), B = b² ⁄ (b – a). (S.37)

(a) Causality implies that the ROC is | z | > | b |. Taking the inverse transform of each term in the partial fraction expansion and then adding the results yields

xk = A a^(k – 1) + B b^(k – 1) (S.38)

for k ≥ 1, and zero otherwise.

(b) If the signal is two-sided, the ROC must be | a | < | z | < | b | and hence:

xk = A a^(k – 1) for k ≥ 1, and xk = –B b^(k – 1) for k ≤ 0. (S.39)

(c) For (a) the signal is not stable because b^k grows without bound. This is because the ROC does not include the unit circle. For (b) the ROC does include the unit circle, so the signal is stable (this is the only ROC for which the signal is stable).

Problem 2-25. Consider the zeros; the poles follow similarly. Assume that z0 is a zero, and hence

∑k=0..M bk z0^–k = 0. (S.40)

Taking the conjugate of this equation:

∑k=0..M bk*(z0*)^–k = ∑k=0..M bk (z0*)^–k = 0 (S.41)

(since the coefficients bk are real), and we conclude that z0* is also a zero.

Problem 2-26. The easy terms are

Hzero(z) = (1 – jz^–1)(1 + jz^–1) = (1 + z^–2), (S.42)

Hmin(z) = 1 ⁄ [(1 – 0.5e^jπ ⁄ 8 z^–1)(1 – 0.5e^–jπ ⁄ 8 z^–1)] = 1 ⁄ [1 – cos(π ⁄ 8)z^–1 + 0.25z^–2], (S.43)

but the maximum-phase term requires some more work. Writing one maximum-phase zero in monic form,

(1 – (3 ⁄ 2)e^jπ ⁄ 8 z^–1) = z^–1 (–(3 ⁄ 2)e^jπ ⁄ 8)(1 – (2 ⁄ 3)e^–jπ ⁄ 8 z) (S.44)


and considering both zeros we get L = –2, B = (–(3 ⁄ 2)e^jπ ⁄ 8)(–(3 ⁄ 2)e^–jπ ⁄ 8) = 9 ⁄ 4, and

Hmax(z) = 1 – (4 ⁄ 3)cos(π ⁄ 8)z + (4 ⁄ 9)z². (S.45)

Problem 2-27. Hypothesize a zero-based spectrum S(z) = (1 – bz^–1)(1 – bz), where b is real and –1 < b < 1; this is already in the form of the spectral factorization (2.67). Therefore, its geometric mean is unity by inspection. From Parseval's, its arithmetic mean is:

(1 ⁄ 2π) ∫–π..π S(e^jθ) dθ = (1 ⁄ 2π) ∫–π..π |1 – be^–jθ|² dθ = ∑k |δk – bδk–1|² = 1 + b². (S.46)

There is no way this will be three, given our constraint that | b | < 1. So this was a dead end. Try instead a pole-based spectrum:

S(z) = 1 ⁄ [(1 – az^–1)(1 – az)], (S.47)

where a is real and | a | < 1; this too is in the form of the spectral factorization (2.67), and its geometric mean is clearly unity. Its arithmetic mean is:

(1 ⁄ 2π) ∫–π..π S(e^jθ) dθ = (1 ⁄ 2π) ∫–π..π |1 ⁄ (1 – ae^–jθ)|² dθ = ∑k |a^k uk|² = 1 ⁄ (1 – a²). (S.48)

Thus, we can make the arithmetic mean of (S.47) equal to 3 by choosing a = √(2 ⁄ 3).
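A quick numerical check of (S.48): sampling S(e^jθ) = 1 ⁄ |1 – ae^–jθ|² on a uniform grid (the grid size is an arbitrary choice) should give arithmetic mean 3 and geometric mean 1 when a = √(2 ⁄ 3).

```python
import cmath
import math

a = math.sqrt(2.0 / 3.0)     # pole location found above
N = 4096                     # grid size on the unit circle (arbitrary)
vals = [1.0 / abs(1.0 - a * cmath.exp(-1j * 2 * math.pi * n / N)) ** 2
        for n in range(N)]

arith = sum(vals) / N                               # should be 1/(1 - a^2) = 3
geo = math.exp(sum(math.log(v) for v in vals) / N)  # should be 1
```

Both means match the spectral-factorization predictions essentially to machine precision, since the trapezoid rule converges extremely fast for smooth periodic integrands.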

Problem 2-28.

(a) Let

A(z) = (z^–1 – c*) ⁄ (1 – cz^–1) (S.49)

and note that G(z) = H(z)A(z). Then A(z) is allpass (Example 2-9), so

|G(e^jθ)| = |H(e^jθ)| |A(e^jθ)| = |H(e^jθ)|. (S.50)

It has the effect of moving a zero | c | < 1 outside the unit circle without changing the magnitude response.

(b) The hint tells us that hk can be obtained by passing fk through a filter with transfer function (1 – cz^–1), so that

hk = fk – c fk–1. (S.51)

Similarly gk can be obtained by putting fk through transfer function (z^–1 – c*),

gk = fk–1 – c* fk. (S.52)


Calculating the difference between the energy in each sample in hk and gk,

|hk|² – |gk|² = |fk – c fk–1|² – |fk–1 – c* fk|² = (1 – | c |²)(|fk|² – |fk–1|²) (S.53)

and calculating the difference in energy for the first N samples,

∑k=0..N (|hk|² – |gk|²) = (1 – |c|²)|fN|² ≥ 0, (S.54)

which exploits the fact that fk is causal (why?), and | c | < 1, and hence f–1 = 0.

Problem 2-29. Using the allpass transfer function in Example 2-9 with | c | < 1, define a signal wk with Z transform X(z) ⁄ (1 – cz^–1). Note that wk is causal also (why?). Then xk is obtained by putting wk through a system with transfer function (1 – cz^–1), and yk is obtained by system (z^–1 – c*). From here, the derivation is identical to Problem 2-28, with the result that:

∑k=0..N (|xk|² – |yk|²) = (1 – |c|²)|wN|² ≥ 0. (S.55)

Problem 2-30. Using (2.52) to factor H(z) into the product of minimum-phase and maximum-phase transfer functions,

H(z) = B z^L Hmin(z) Hmax(z) Hzero(z) = B z^L [Hmax(z) ⁄ Hmax*(1 ⁄ z*)] Hmin(z) Hmax*(1 ⁄ z*) Hzero(z). (S.56)

Now note that Hmax*(1 ⁄ z*) is minimum phase. Furthermore, from Example 2-10, the ratio Hmax(z) ⁄ Hmax*(1 ⁄ z*) is allpass, as is z^L. Hence,

A(z) = z^L Hmax(z) ⁄ Hmax*(1 ⁄ z*) (S.57)

and

H̃min(z) = B Hmin(z) Hmax*(1 ⁄ z*) Hzero(z). (S.58)

Problem 2-31. From the Fourier transform tables, g(t) = h*(–t) ↔ G(f) = H*(f).

Problem 2-32.

(a) The squared norms (i.e., energies) of both signals are unity, E1 = E2 = 1. The inner product is

∫ s1(t)s2(t) dt = 3 ⁄ 4 – 1 ⁄ 4 = 1 ⁄ 2. (S.59)

From the discussion on page 43, the angle θ between s1(t) and s2(t) satisfies:


cos θ = (1 ⁄ √(E1E2)) ∫ s1(t)s2(t) dt. (S.60)

Since the energies are both unity, (S.59) implies that cos(θ) = 1 ⁄ 2, or θ = π ⁄ 3 = 60°.

(b) The squared norm of the sum is:

∫ (s1(t) + s2(t))² dt = E1 + E2 + 2 ∫ s1(t)s2(t) dt = 1 + 1 + 2 ⋅ (1 ⁄ 2) = 3. (S.61)

Hence, the norm is √3.

(c) There are many possibilities, but here is one:

[Figure: a sketch of one valid choice of s3(t).]

(d) Since s1(t) is unit energy, it is a basis function for the subspace it spans. Consider ŝ2(t), the projection of s2(t) onto the subspace spanned by s1(t); it is given by ⟨s2, s1⟩ s1(t), where from (S.59) the coefficient is:

⟨s2, s1⟩ = ∫ s1(t)s2(t) dt = 1 ⁄ 2. (S.62)

Therefore, the projection error is given by:

e(t) = s2(t) – ŝ2(t) = s2(t) – (1 ⁄ 2) s1(t), (S.63)

as sketched below:

[Figure: a sketch of e(t).]

This signal is 0.5 for t ∈ [0, 3 ⁄ 4) and –1.5 for t ∈ [3 ⁄ 4, 1). Being a linear combination of s1(t) and s2(t), e(t) is clearly in their span. And from the projection theorem (Theorem 2-2), the signal e(t) is orthogonal to s1(t). So the answer is s4(t) = e(t), as sketched above.

(e) The answer is s5(t), the projection of s3(t) onto the subspace spanned by s1(t) and s2(t). It remains only to find this projection. Normalizing e(t) yields the pair of basis functions φ1(t) = s1(t), φ2(t) = (2 ⁄ √3) e(t) for the subspace spanned by s1(t) and s2(t). The projection of s3(t) onto the two basis signals is

∫ s3(t)φ1(t) dt = –1 ⁄ 2, ∫ s3(t)φ2(t) dt = 1 ⁄ (2√3), (S.64)



and hence the projection on the subspace is

s5(t) = –(1 ⁄ 2) φ1(t) + (1 ⁄ (2√3)) φ2(t) = –(1 ⁄ 2) s1(t) + (1 ⁄ 3) e(t). (S.65)

Problem 2-33.

(a) Clearly if two signals are bandlimited, then their weighted sum is also bandlimited. Being closed and a subset, it is itself a subspace.

(b) Let x(t) be orthogonal to the subspace. By Parseval's theorem (Problem 2-4), for any y(t) ∈ B,

∫–∞..∞ x(t)y*(t) dt = ∫–W..W X(f)Y*(f) df = 0. (S.66)

Clearly, this is satisfied if and only if X(f) = 0 for | f | ≤ W.

(c) Let this projection be s1(t); then ⟨s(t) – s1(t), y(t)⟩ = 0 for all y(t) ∈ B. From (b) this implies that S1(f) = S(f) for | f | ≤ W, and of course since s1(t) ∈ B, we must have that S1(f) = 0 for | f | > W. Hence,

S1(f) = ∫0..1 e^–j2πft dt = (1 – e^–j2πf) ⁄ (j2πf) for | f | ≤ W, and S1(f) = 0 otherwise, (S.67)

and

s1(t) = ∫–W..W [(1 – e^–j2πf) ⁄ (j2πf)] e^j2πft df. (S.68)

Unfortunately this integral cannot be evaluated in closed form.

Problem 2-34. Let X1 ∈ S1 and X2 ∈ S2. An element of S1 ⊕ S2 can be written in the form X1 + X2. Hence it suffices to show that

⟨X – (PS1(X) + PS2(X)), X1 + X2⟩ = 0. (S.69)

Expanding the left side, it equals

⟨X – PS1(X), X1⟩ – ⟨PS2(X), X1⟩ + ⟨X – PS2(X), X2⟩ – ⟨PS1(X), X2⟩. (S.70)

But by the definition of projection

⟨X – PS1(X), X1⟩ = ⟨X – PS2(X), X2⟩ = 0, (S.71)

and because S1 and S2 are orthogonal subspaces

⟨PS2(X), X1⟩ = ⟨PS1(X), X2⟩ = 0. (S.72)

The result is thus established.


Problem 2-35. The Schwarz inequality states that

|∫–∞..∞ h(t)h*(t – kT) dt|² ≤ ∫–∞..∞ |h(t)|² dt ⋅ ∫–∞..∞ |h(t – kT)|² dt, (S.73)

and since it is easy to verify that the signal and its time-translate have identical norms, this becomes the desired result.


Chapter 3

Problem 3-1. Taking the first derivative,

∂ ⁄ ∂s ΦX(s) = ΦX(s)(µ + σ²s) (S.71)

and setting s = 0 we get E[X] = µ. Similarly, the second derivative is

∂² ⁄ ∂s² ΦX(s) = ΦX(s)(µ + σ²s)² + ΦX(s)σ², (S.72)

and again setting s = 0 we get E[X²] = µ² + σ². The variance is therefore σ².

Problem 3-2. We can use (3.17) and carry out the integral.

Problem 3-3.

Q(x) = (1 ⁄ √2π) ∫x..∞ (1 ⁄ t) ⋅ t e^–t² ⁄ 2 dt = (1 ⁄ √2π) [(1 ⁄ x) e^–x² ⁄ 2 – ∫x..∞ (1 ⁄ t²) e^–t² ⁄ 2 dt]. (S.73)

The bounds then follow from the fact that

0 < ∫x..∞ (1 ⁄ t²) e^–t² ⁄ 2 dt ≤ (1 ⁄ x³) ∫x..∞ t e^–t² ⁄ 2 dt = (1 ⁄ x³) e^–x² ⁄ 2. (S.74)
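The resulting bounds can be sanity-checked against the exact Q function, computed via the complementary error function (the test points below are arbitrary):

```python
import math

def Q(x):
    # exact Gaussian tail: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_bounds(x):
    # (lower, upper) from (S.73)-(S.74)
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return (1.0 / x - 1.0 / x ** 3) * phi, phi / x

ok = all(lo < Q(x) < hi
         for x, (lo, hi) in ((x, q_bounds(x)) for x in (1.5, 2.0, 3.0, 4.0)))
```

The exact tail sits strictly between the two bounds at every test point, and the bounds visibly tighten as x grows.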

Problem 3-4.

(a) Define the MSE as ε2, so that:

ε2 = E[|X – aY|2] = E[|X|2] + |a|2E[|Y|2] – 2Re{a*E[XY*]}. (S.75)

Setting the derivatives with respect to the real and imaginary parts of a to zero yields:

Re{a}E[|Y|2] = Re{E[XY*]}, (S.76)

Im{a}E[|Y|2] = Im{E[XY*]}. (S.77)

Combining these two equations yields:

a = E[XY*] ⁄ E[|Y|2] . (S.78)

(b) The restated problem: Find the RV in the subspace spanned by Y that is closest to X.

(c) By the orthogonality principle,

⟨X – aY, Y⟩ = 0 (S.79)

or a = ⟨X,Y⟩ ⁄ ⟨Y,Y⟩ = E[XY*] ⁄ E[|Y|2] . (S.80)
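A Monte Carlo sketch of (S.78) for real random variables (the signal and noise variances are arbitrary choices, not from the problem): with Y = X + noise, the coefficient E[XY] ⁄ E[Y²] should come out near its theoretical value, and no other coefficient should achieve a lower empirical MSE.

```python
import random

random.seed(1)
n = 100000
xs = [random.gauss(0.0, 2.0) for _ in range(n)]   # signal, variance 4
ys = [x + random.gauss(0.0, 1.0) for x in xs]     # observation, variance 5

Exy = sum(x * y for x, y in zip(xs, ys)) / n      # estimate of E[X Y]
Eyy = sum(y * y for y in ys) / n                  # estimate of E[Y^2]
a = Exy / Eyy                                     # theory: 4/(4+1) = 0.8

def mse(c):
    # empirical mean-square error for estimate c*Y
    return sum((x - c * y) ** 2 for x, y in zip(xs, ys)) / n
```

Since the empirical MSE is a quadratic in c minimized exactly at Exy ⁄ Eyy, the computed a beats any other coefficient on this sample.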


Problem 3-5.

(a) First, we can write:

E[|E′k|2] = E[|E′k – Ek + Ek|2] = E[|E′k – Ek|2] + E[|Ek|2] + 2Re{E[(E′k – Ek)Ek*]}. (S.81)

Since the filters generating both Ek and E′k have unity coefficients at delay zero, the filter generating (E′k – Ek) has a zero coefficient at delay zero, and this signal is a function of only past inputs Xk – 1, Xk – 2, …; in view of property (3.204), the third term in (S.81) must in fact be zero.

(b) Since Ek is a linear combination of Xk, Xk – 1, …, it follows from (3.204) that

E[Ek+mEk*] = RE(m) = 0 , m > 0, (S.82)

and since the autocorrelation function has conjugate symmetry, it follows that RE(m) = 0 for all m ≠ 0.

Problem 3-6.

(a) The problem can be restated as: Given a sequence of vectors Xk, –∞ < k < ∞, with inner products

RX(m) = ⟨Xk + m, Xk⟩ (S.83)

that are independent of k, and given a subspace M spanned by Xk – m, m > 0, find the vector X̂k in M that is closest to the vector Xk, with error vector Ek = Xk – X̂k.

(b) By the projection theorem, for every vector Y∈M, we must have that

⟨Ek,Y ⟩ = 0 (S.84)

and in particular

⟨Ek, Xk – m⟩ = 0, m> 0 . (S.85)

Thus the prediction error vector is orthogonal to the past (the vectors used in the prediction estimate). Since Ek – m ∈ M, m > 0, it follows that

⟨Ek, Ek – m⟩ = 0, m> 0 , (S.86)

or equivalently RE(m) = 0, m ≠ 0.

Problem 3-7.

RY(0) = E[|Yk|2] = E[|X(kT)|2] . (S.87)

Observe that since X( t ) is WSS,

RX(0) = E[|X( t )|2] = E[|X(t + τ)|2] (S.88)


for any τ. Define

τ = kT – t , (S.89)

and the result follows.

Problem 3-8. Mechanically,

E[ApAqArAs] = δpqδrs + δprδqs + δpsδqr – 2δpqδprδps . (S.90)

Problem 3-9. Observe from (3.67) that

∫–∞..∞ SX(f) df = ∫–∞..∞ SY(f) df (S.91)

and from the definition of the power spectrum,

RY(τ) = ∫–∞..∞ SY(f)e^j2πfτ df, (S.92)

from which the first result follows. To show that SX(f) is non-negative everywhere, assume it is negative over some region from fa to fb. Then note that

∫fa..fb SX(f) df (S.93)

must also be negative, which implies that RY(0) is negative. But

RY(0) = E[|Y( t )|2] , (S.94)

which must be non-negative, a contradiction.

Problem 3-10. The power spectrum is

N0 ⋅ |F(e jθ)|2 = N0 ⋅ F(e jθ)F*(e jθ) (S.95)

where F(ejθ) is the discrete-time Fourier transform of f(kT). Since the inverse Fourier transform of F*(e jθ) is f*(–kT), the result follows immediately.

Problem 3-11.

RXY(τ) = E[X(t + τ)Y*( t )] = E[Y*(p – τ)X( p )] = RYX*(–τ). (S.96)

No, it may be complex-valued.

Problem 3-12. If the steady-state probabilities exist they must satisfy (3.117), and also must satisfy:

∑j ∈ Ω p(j) = 1. (S.97)

Define the row vector of state probabilities

pk = [pk(0),…, pk(M)] . (S.98)


Then the system of equations in (3.117) can be written more concisely as

pk + 1 = pkP . (S.99)

The condition that the state probabilities don't change through a state transition gives the desired steady-state probabilities.

Problem 3-13.

(a) The signal flow graph is shown below:

[Figure: signal flow graph with nodes P0(z) through PN(z), branches pz–1 returning to P0(z), branches qz–1 advancing to the next state, and a z–1 self-loop at PN(z).]

(b) The corresponding equations are

P0(z) = 1 + pz–1 ∑i=0..N–1 Pi(z) (S.100)

Pi(z) = qz–1Pi – 1(z), 1 ≤ i ≤ N – 1 (S.101)

PN(z) = qz–1PN – 1(z) + z–1PN(z). (S.102)

Solving these equations and using the identity

∑i=0..N–1 r^i = (1 – r^N) ⁄ (1 – r), (S.103)

we get the result.

(c) This follows directly by differentiation.

(d) For this case,

fN ≈ q^–N (S.104)

which is what we would expect. The probability of a head is q, the probability of N heads in a row is q^N, and on a relative frequency basis N heads in a row will occur once out of q^–N trials.

Problem 3-14.

p(ψ0,ψ1,…,ψn) = p(ψn|ψn –1,…,ψ0)p(ψn –1,…,ψ0) (S.105)

= p(ψn|ψn –1)p(ψn –1,…,ψ0) (S.106)

= p(ψn|ψn –1)p(ψn –1|ψn –2…,ψ0) p(ψn –2,…,ψ0) (S.107)

= p(ψn|ψn –1)p(ψn –1|ψn –2)p(ψn –2,…,ψ0) (S.108)

= … (S.109)


Problem 3-15.

p(ψn|ψn+1,…,ψn+m) = p(ψn,ψn+1,…,ψn+m) ⁄ p(ψn+1,…,ψn+m). (S.110)

Using the result in Problem 3-14 on both the numerator and denominator and canceling the terms that are equal, this becomes

p(ψn|ψn+1,…,ψn+m) = p(ψn+1|ψn)p(ψn) ⁄ p(ψn+1) = p(ψn|ψn+1). (S.111)

Problem 3-16.

p(ψn,ψs|ψr) = p(ψn,ψr,ψs) ⁄ p(ψr) = p(ψn|ψr) ⋅ p(ψr|ψs)p(ψs) ⁄ p(ψr),

where the last equality follows from Problem 3-14. The result now follows using Bayes' rule.

Problem 3-17. The signal flow graph is shown in Fig. S-1(a). The set of equations governing this signal flow graph is

P0(z) = 0.5z –1P0(z) + 0.5z –2P1(z) + 1 (S.112)

P1(z) = 0.5z –1P1(z) + 0.5P0(z) . (S.113)

Solving these linear equations for P0(z),


Fig. S-1. (a) The signal flow graph describing the state probabilities of the parity check example when the initial state is zero. (b) The state probabilities as a function of k.


P0(z) = (1 – 0.5z–1) ⁄ (1 – z–1) = 1 ⁄ (1 – z–1) – 0.5z–1 ⁄ (1 – z–1).

Inverting the Z transform,

pk(0) = uk – 0.5uk – 1,

where uk is the unit step. This is sketched in Fig. S-1(b). Computing P1(z) similarly,

P1(z) = 0.5z–1 ⁄ (1 – z–1).

Inverting the Z transform:

pk(1) = 0.5uk – 1.

The Markov chain is not stationary.
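Iterating pk + 1 = pkP from Problem 3-12 confirms the expressions above for this two-state chain (assuming equiprobable input bits): p0 = (1, 0), and pk = (0.5, 0.5) for every k ≥ 1, so the chain indeed is not stationary.

```python
# two-state parity chain with equiprobable bits
P = [[0.5, 0.5],
     [0.5, 0.5]]
p = [1.0, 0.0]        # initial state is parity 0
history = [tuple(p)]
for _ in range(6):
    # one step of p_{k+1} = p_k P
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]
    history.append(tuple(p))
```

The distribution jumps from (1, 0) to the steady-state (0.5, 0.5) after a single transition, matching pk(0) = uk – 0.5uk – 1 and pk(1) = 0.5uk – 1.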

Problem 3-18. The Markov state diagram is shown in Fig. S-2(a).

(a) The diagram shows the independence required for the random process to be Markov, assuming that the coin tosses are independent of one another.

(b) The signal flow graph is shown in Fig. S-2(b).

(c) Writing the set of equations and solving them,

P2(z) = z ⁄ (4z³ – 6z² + z + 1). (S.114)

Problem 3-19. A Poisson random variable N with parameter a has the pmf given in (3.140). It is easy to show (see Exercise 3-19-(b)) that the moment generating function for N is:

ΦN(s) = exp{a(e^s – 1)}. (S.115)

Therefore, (3.26) indicates that the optimum value of s satisfies:

x ⁄ a = e^s. (S.116)

Plugging this into the bound gives the result.
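Plugging e^s = x ⁄ a into the Chernoff bound e^–sx ΦN(s) gives exp{x – a – x ln(x ⁄ a)}, which should dominate the exact Poisson tail. A quick check (the values a = 5, x = 12 are arbitrary choices):

```python
import math

a, x = 5.0, 12                 # assumed Poisson parameter and threshold, x > a
s = math.log(x / a)            # optimizer from (S.116): e^s = x/a
bound = math.exp(a * (math.exp(s) - 1.0) - s * x)   # e^{-s x} Phi_N(s)

# exact tail P(N >= x), truncated where the terms are negligible
tail = sum(math.exp(-a) * a ** k / math.factorial(k) for k in range(x, 80))
```

As expected, the exact tail sits below the Chernoff bound, and the bound itself is well below 1.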


Fig. S-2. (a) Markov chain description of the coin toss experiment. (b) Signal flow graph.


Problem 3-20. The solution follows immediately from the fact that N(t0) is Poisson distributed with parameter a = λt0. Hence from (3.141) we get

E[N(t0)] = λt0 , (S.117)

Var[N(t0)] = λt0 . (S.118)

Problem 3-21. Writing the joint probability as a conditional probability, the desired probability is

pN(t2)|N(t1)(k + n|k) pN(t1)(k). (S.119)

The left term is the probability of n arrivals in time interval (t2 – t1), or

[λ(t2 – t1)]^n ⁄ n! ⋅ e^–λ(t2 – t1) (S.120)

and the second probability is

(λt1)^k ⁄ k! ⋅ e^–λt1. (S.121)

Taking the product, the result follows immediately.

Problem 3-22. The process is governed by the differential equation

dqj(t) ⁄ dt = (j – 1)λqj – 1(t) – jλqj(t) (S.122)

which has Laplace transform

Qj(s) = [qj(0) + (j – 1)λQj – 1(s)] ⁄ (s + jλ) (S.123)

and iterating we get

Qj(s) = λ^(j – 1) (j – 1)! ⁄ [(s + λ)(s + 2λ)⋯(s + jλ)]. (S.124)

It is nontrivial to derive, but the inverse Laplace transform is

qj(t) = e^–λt (1 – e^–λt)^(j – 1). (S.125)

Problem 3-23.

(a) Since the mean value of N(t) is given by the integral of (3.154), differentiating gives the desired expression.

(b) This also follows easily from the fact that the expectation of the convolution is the convolution of the expectation.

(c) This again follows directly from the interchangeability of expectation and differentiation.


(d) We will just derive the first equation; the second is similar. First note that

RWX(t1,t2) = E[W(t1)X(t2)] = E[W(t1) ∫–∞..∞ h(τ)W(t2 – τ) dτ] = ∫–∞..∞ h(τ)RWW(t1,t2 – τ) dτ

which we recognize as a convolution in the second argument of the autocorrelation function.

Problem 3-24. Note that

E[N(t1)(N(t2) – N(t1))] = E[N(t1)N(t2)] – E[N2(t1)]. (S.126)

Independence implies that the expectation of the product is the product of the expectations:

E[N(t1)(N(t2) – N(t1))] = E[N(t1)]E[N(t2) – N(t1)] = Λ(t1)(Λ(t2) – Λ(t1)) . (S.127)

The second moment of a random variable is equal to the square of its mean plus its variance. Since N(t) is a Poisson random variable, we have:

E[N 2(t1)] = Λ2(t1) + Λ(t1) , (S.128)

from which the result follows immediately with some minor algebraic manipulation.

Problem 3-25. Noting that RNN(t1,t2) is continuous at t1 = t2, take the derivative first with respect to t2,

∂RNN(t1,t2) ⁄ ∂t2 = Λ(t1)λ(t2) for t1 < t2 , and (1 + Λ(t1))λ(t2) for t1 ≥ t2 . (S.129)

Now take the derivative with respect to t1, first noting that there is a discontinuity of size λ(t2) at t1 = t2,

∂²RNN(t1,t2) ⁄ ∂t1∂t2 = λ(t1)λ(t2) + λ(t2)δ(t1 – t2) . (S.130)

Finally, we convolve this result with first h(t1) and then h(t2) to obtain the autocorrelation of shot noise. First convolving with h(t1),

[∂²RNN(t1,t2) ⁄ ∂t1∂t2] ∗ h(t1) = [λ(t1) ∗ h(t1)]λ(t2) + λ(t2)h(t1 – t2) , (S.131)

and then convolving with h(t2), we obtain the desired result.



Problem 3-26. Substituting a constant rate into the autocorrelation of (3.218), the autocorrelation is

RXX(t1,t2) = λ²H²(0) + λ ∫_{–∞}^{∞} h(–u)h(t2 – t1 – u)du , (S.132)

which is a function of τ = t2 – t1, and hence the autocorrelation is

RX(τ) = λ²H²(0) + λh(τ) ∗ h(–τ) . (S.133)

Taking the Fourier transform of this expression we get the power spectrum,

SX( f ) = λ2H2(0)δ( f ) + λ|H( f )|2. (S.134)

Note that the power spectrum has a d.c. term, corresponding to the expected value of the shot noise, and a term proportional to the magnitude-squared of the filter transfer function (as expected).

Problem 3-27. The expected value of the shot noise is

E[X(t)] = λ0 ∫_{–∞}^{t} h(τ)dτ , (S.135)

which is proportional to the step function.

Problem 3-28. If the filter has impulse response h( t ) and transfer function

H( f ) = A( f )e jφ( f ) (S.136)

then the response of the filter to λ0 is λ0A(0) and the response to the sinusoid is λ1A(f1)cos(2πf1t + φ(f1)). The mean value is the sum of these two signals.

Problem 3-29. The outputs are still obviously equally probable, p(0) = p(1) = 1 ⁄ 2. From Fig. 3-19 we can calculate P1|1(z),

P1|1(z) = (1 – qz⁻¹) ⁄ [1 – 2qz⁻¹ + (q² – p²)z⁻²] , (S.137)

SX+(z) = 0.5 ⋅ (1 – qz⁻¹) ⁄ [(1 – z⁻¹)(1 – (1 – 2p)z⁻¹)] . (S.138)

Recognizing that µX = 1 ⁄ 2 and subtracting from the above we get

SX+(z) – 0.25 ⁄ (1 – z⁻¹) = 0.25 ⁄ (1 – (1 – 2p)z⁻¹) . (S.139)

Calculating the two-sided spectrum, we get

SX(z) = 0.25 ⁄ (1 – (1 – 2p)z⁻¹) + 0.25 ⁄ (1 – (1 – 2p)z) – 0.25 , (S.140)

which simplifies to the desired result.





Chapter 4

Problem 4-1. The entropy is 0.81 bits. The rate is R = 0.81 bits/second. There are many possibilities for the coder. Here is one. Pair the coin flips and represent them as follows:

FLIPS: TT TH HT HH
BITS: 0 10 110 111

The first occurs with probability 9/16, the second and third with probability 3/16, and the last with probability 1/16. The expected number of bits per flip-pair is therefore 1.69, or 0.844 bits per flip, which is close to the rate of the source, but not equal.
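The numbers above are easy to reproduce. In this Python sketch, P(heads) = 1 ⁄ 4 is an assumption on my part, chosen to be consistent with the stated entropy of 0.81 bits:

```python
import math

p_heads = 0.25                          # assumed bias, consistent with H = 0.81 bits
H = -p_heads*math.log2(p_heads) - (1-p_heads)*math.log2(1-p_heads)
assert abs(H - 0.81) < 2e-3

pair_probs = [9/16, 3/16, 3/16, 1/16]   # TT, TH, HT, HH with P(T) = 3/4
code_lens  = [1, 2, 3, 3]               # codewords 0, 10, 110, 111
bits_per_flip = sum(pr*n for pr, n in zip(pair_probs, code_lens)) / 2
assert abs(bits_per_flip - 0.84375) < 1e-12
assert H < bits_per_flip                # close to, but above, the entropy
```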

Problem 4-2.

H(X) = (1 ⁄ 2)log2(2) + (1 ⁄ 4)log2(4) + (1 ⁄ 8)log2(8) + (1 ⁄ 8)log2(8) = 1.75 bits . (S.141)

R = 175 bits ⁄ second . (S.142)

A coder that will work is given by the following table:

outcome: a1 a2 a3 a4
bits: 0 10 110 111

The average number of bits per trial is

(1 ⁄ 2) ⋅ 1 + (1 ⁄ 4) ⋅ 2 + (1 ⁄ 8) ⋅ 3 + (1 ⁄ 8) ⋅ 3 = 1.75 . (S.143)

Problem 4-3. Suppose that pi = pY(yi) where the set of yi for i ∈ {1, 2, …, M} is the alphabet for the random variable Y. Define a new random variable X in terms of Y such that an outcome Y = yi results in the outcome X = qi ⁄ pi. By Jensen's inequality,

E[log2X] ≤ log2E[X] , (S.144)

but

E[log2X] = Σ_{i=1}^{M} pi log2(qi ⁄ pi) = Σ_{i=1}^{M} pi log2qi – Σ_{i=1}^{M} pi log2pi (S.145)

and



E[X] = Σ_{i=1}^{M} qi = α . (S.146)

The p-q inequality follows.

Problem 4-4. Let qi = 1 ⁄ K in the p-q inequality and the result follows easily.

Problem 4-5. The size of the set over which X has positive probability is less than or equal to 2^n, so the result follows immediately from Exercise 4-1.

Problem 4-6.

(a) It is easy to show that the input and output of this channel are statistically independent, and from this that

H(X|Y) = H(X) , H(Y|X ) = H(Y ) , (S.147)

I(X,Y ) = 0 . (S.148)

(b) Since the mutual information is zero independent of the input probability distribution, the capacity is also zero.

Problem 4-7.

(a) For this noiseless channel, the output is equal to the input, so

H(X|Y ) = 0 , H(Y|X ) = 0 , (S.149)

I(X,Y ) = H(X ) = H(Y ) . (S.150)

(b) The capacity is the maximum of H(X) over the input distribution, which is one bit because the input has alphabet size of two.

Problem 4-8.

(a) We get

I(X,Y ) = H(Y ) + plog2p + (1 – p)log2(1 – p) . (S.151)

(b) Capacity is achieved when H(Y ) is maximized. By direct calculation,

H(Y) = –plog2p – (1 – p)log2(1 – p) – q(1 – p)log2q – (1 – q)(1 – p)log2(1 – q) , (S.152)

where q is the probability of the first input and (1 – q) is the probability of the second. This quantity is maximized when the inputs are equally likely (q = 1 ⁄ 2), and the capacity is

Cs = 1 – p . (S.153)

The center output is called an erasure, and tells us nothing about what the channel input is, so it is not surprising that the capacity approaches zero as p → 1.

Problem 4-9.

(a) This follows from

Σ_{i=1}^{M} pi log(qi ⁄ pi) ≤ Σ_{i=1}^{M} pi (qi ⁄ pi – 1) = 0 , (S.154)



where we have used the inequality log(x) ≤ x – 1.

(b) Substitute a uniform distribution qi = 1 ⁄ K, and the bound follows immediately.

Problem 4-10.

(a) The channel has binary input and output, so all we need to show is symmetry. To do this, we solve part (b).

(b) The number of transitions from 0 to 1 or vice versa is binomially distributed. As long as there are an even number of such crossovers, the output of the channel will be the same as the input. The probability of this occurring is

pY|X(0|0) = pY|X(1|1) = Σ_{m even} Km(1 – p)^{L – m}p^m , (S.155)

where

Km = (L choose m) = L! ⁄ [m!(L – m)!] . (S.156)

Similarly, a channel error occurs if there are an odd number of crossovers, which occurs with probability

pY|X(1|0) = pY|X(0|1) = Σ_{m odd} Km(1 – p)^{L – m}p^m . (S.157)

(c)

pX|Y (0|0) = pX|Y (1|0) = pX|Y (0|1) = pX|Y (1|1) → 1 ⁄ 2 as L → ∞ . (S.158)
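A quick numerical check of (S.155)–(S.157) (a Python sketch; the values of L and p are arbitrary choices of mine): summing the binomial terms over even m matches the closed form (1 + (1 – 2p)^L) ⁄ 2 obtained from the binomial theorem on (q + p)^L and (q – p)^L, and this tends to 1 ⁄ 2 as L grows, consistent with part (c):

```python
import math

L, p = 9, 0.1     # arbitrary cascade length and crossover probability
terms = [math.comb(L, m) * (1-p)**(L-m) * p**m for m in range(L + 1)]
even, odd = sum(terms[0::2]), sum(terms[1::2])
assert abs(even + odd - 1.0) < 1e-12
# closed form from the binomial theorem:
assert abs(even - (1 + (1-2*p)**L) / 2) < 1e-12
# part (c): as L grows, both probabilities approach 1/2
assert abs((1 + (1-2*p)**500) / 2 - 0.5) < 1e-12
```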

Problem 4-11. Suppose X has distribution pi and Y has distribution qi. Then

H(X) = – Σ_{i=1}^{K} pi log2pi ≤ – Σ_{i=1}^{K} pi log2qi , (S.159)

where the inequality follows from Problem 4-9. Meanwhile,

H(Y) = – Σ_{i=1}^{K} qi log2qi

= –(p1 – δ)log2q1 – (p2 + δ)log2q2 – Σ_{i=3}^{K} pi log2qi

= – Σ_{i=1}^{K} pi log2qi + [δlog2q1 – δlog2q2] . (S.160)

Since p1 > p2, we have that q1 > q2, and the term in brackets is nonnegative, so by comparing with (S.159) we get that H(Y) ≥ H(X).

Problem 4-12.

(a)



H(X) = – ∫_{–a}^{a} (1 ⁄ 2a)log2(1 ⁄ 2a) dx = –log2(1 ⁄ 2a) = log2(2a) . (S.161)

(b) The variance of X is

σ² = E[X²] = ∫_{–a}^{a} (1 ⁄ 2a)x² dx = a² ⁄ 3 . (S.162)

Hence we can write

a = √3 σ (S.163)

and

H(X) = log2(2√3 σ) . (S.164)

The entropy of a Gaussian is given by (4.21),

H(Y) = (1 ⁄ 2)log2(2πeσ²) = log2(σ√(2πe)) . (S.165)

Since 2√3 < √(2πe), then H(X) < H(Y).
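The comparison 2√3 < √(2πe) is easy to confirm numerically (a short Python check; σ is an arbitrary choice of mine):

```python
import math

# uniform on [-a, a]:  H(X) = log2(2a) = log2(2*sqrt(3)*sigma)   (S.164)
# Gaussian:            H(Y) = 0.5*log2(2*pi*e*sigma^2)           (S.165)
assert 2*math.sqrt(3) < math.sqrt(2*math.pi*math.e)   # 3.464... < 4.132...

sigma = 1.7   # arbitrary standard deviation
H_unif  = math.log2(2*math.sqrt(3)*sigma)
H_gauss = 0.5*math.log2(2*math.pi*math.e*sigma**2)
assert H_unif < H_gauss   # Gaussian maximizes entropy for fixed variance
```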

Problem 4-13. Let the random vector Z denote the ordered pair (X, Y). The set of all possible pairs of outcomes of Z is in ΩX × ΩY. Number the possible outcomes (in any order) from 1 through M, where M is the size of ΩX multiplied by the size of ΩY. Then let zi denote a particular pair of outcomes (x, y), where 1 ≤ i ≤ M. Then define

pi = pX,Y(x, y) . (S.166)

Further, define

qi = pX( x )pY( y ) . (S.167)

These pi and qi satisfy the constraints of the p-q inequality. The p-q inequality then yields

– I(X,Y) = Σ_{i=1}^{M} pi log2(qi ⁄ pi) ≤ log2 Σ_{i=1}^{M} pi(qi ⁄ pi) = log2α . (S.168)

Hence,

I(X,Y) ≥ –log2α = –log2 Σ_{x∈ΩX} Σ_{y∈ΩY} pX(x)pY(y)

= –log2[ Σ_{x∈ΩX} pX(x) ⋅ Σ_{y∈ΩY} pY(y) ]

= –log2 1 = 0 . (S.169)

The second and third inequalities follow easily. The inequalities are equalities when X and Y are independent.



Problem 4-14. This follows easily with repeated application of the definition of conditional probability.

Problem 4-15. Using the inequality loge(x) ≤ (x – 1),

C ≤ (N ⁄ 2) ⋅ log2e ⋅ σx² ⁄ (Nσ²) = (σx² ⁄ 2σ²)log2e . (S.170)

Thus, as the number of degrees of freedom increases, the capacity approaches a constant. As we increase N we have in effect more parallel channels to transmit over, and hence the factor of N. However, since the total input power is constrained to σx², the transmitter is forced to reduce the power in each parallel channel, and hence the SNR on each channel decreases. In the limit, these two effects precisely balance one another.

Problem 4-16. The implication of (4.47) is that for any set of outcomes x1, …, xn,

(1 ⁄ n)(x1 + … + xn) ≈ E[X] , (S.171)

with high probability. Let Xi = log2Yi. Then

E[log2Y] ≈ (1 ⁄ n)log2(y1…yn) (S.172)

with high probability. The result follows from this.

Problem 4-17. Define the random variable Y to have value 5 with probability 1 ⁄ 6 and value 1 ⁄ 5 with probability 5 ⁄ 6. Then the money left after playing the game n times is

Mn = 100 Π_{i=1}^{n} Yi , (S.173)

where Yi is a sequence of independent trials of the random variable Y. Now, because of independence,

E[Mn] = 100 Π_{i=1}^{n} E[Yi] = 100 . (S.174)

Surprisingly, this does not imply that the game is fair. From (4.39), with high probability, a set of outcomes y1, …, yn satisfies

Mn = 100 Π_{i=1}^{n} Yi ≈ 100[2^{E[log2Y]}]^n ≈ 100[0.34]^n . (S.175)

This certainly goes to zero as n gets large. I wouldn't play the game, at least not repeatedly.
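The claim can be checked by simulation. In this hedged Python sketch (the round count and trial count are arbitrary choices of mine), the expected bankroll stays at 100 while the typical bankroll collapses at the rate 2^{E[log2Y]} ≈ 0.34 per round:

```python
import math, random

random.seed(1)
E_Y = (1/6)*5 + (5/6)*(1/5)           # = 1, so E[Mn] = 100 for every n  (S.174)
assert abs(E_Y - 1.0) < 1e-12

# typical per-round growth factor 2^E[log2 Y]  (S.175)
growth = 2 ** ((1/6)*math.log2(5) + (5/6)*math.log2(1/5))
assert abs(growth - 0.342) < 1e-3

# Monte Carlo: after 50 rounds the bankroll is almost surely wiped out
n, trials, ruined = 50, 2000, 0
for _ in range(trials):
    m = 100.0
    for _ in range(n):
        m *= 5 if random.random() < 1/6 else 1/5
    if m < 1.0:
        ruined += 1
assert ruined / trials > 0.95
```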

Problem 4-18. The better estimate is (b). The argument is similar to that in the solution to Problem 4-17.





Chapter 5

Problem 5-1. From Section 5.1.2, the channel output is a PAM signal whose pulse shape h(t) is related to the transmit pulse shape g(t) and the channel impulse response b(t) by:

H(f) = G(f)B(f) . (S.176)

Using the result of Appendix 3-A, we can write the power spectrum of the received signal as

SR(f) = (1 ⁄ T)|H(f)|²Sa(e^{j2πfT}) = (Ea ⁄ T)|G(f)B(f)|² = (Ea ⁄ T)|G(f)|²|B(f)|² . (S.177)

Using Appendix 2-A,

G(f) = Te^{–jπfT} sin(πfT) ⁄ (πfT) , (S.178)

so

|G(f)|² = sin²(πfT) ⁄ (πf)² . (S.179)

The power spectrum SR(f) is sketched below:

[Sketch: SR(f), the sinc-squared spectrum (Ea ⁄ T)|G(f)|² truncated by the band of B(f); the sidelobes outside the band of B(f) are lost.]

The distortion is due to the loss of the sidelobes of G(f) when it is multiplied by B(f).

Problem 5-2. From Appendix 3-A, the transmit power spectrum for baseband PAM is

SS(f) = (1 ⁄ T)|G(f)|²Sa(e^{jθ}) = (1 ⁄ T)|G(f)|² . (S.180)

Integrating the power spectrum yields the power:

P = (1 ⁄ T) ∫_{–∞}^{∞} |G(f)|² df = Eg ⁄ T , (S.181)

where Eg is the energy of the pulse shape g(t). (The relationship P = Eg ⁄ T makes intuitive sense, since power is energy per unit time.) But:



Eg = 2 ∫_{0}^{∞} |G(f)|² df

= 2 ∫_{0}^{(1–α) ⁄ 2T} T² df + 2 ∫_{(1–α) ⁄ 2T}^{(1+α) ⁄ 2T} (T² ⁄ 4)[1 + cos(π(f – (1–α) ⁄ 2T)T ⁄ α)]² df . (S.182)

Let ν = f – (1–α) ⁄ 2T:

Eg = (1 – α)T + (T² ⁄ 2) ∫_{0}^{α ⁄ T} [1 + cos(πνT ⁄ α)]² dν

= (1 – α)T + (T² ⁄ 2) ∫_{0}^{α ⁄ T} [1 + 2cos(πνT ⁄ α) + cos²(πνT ⁄ α)] dν

= (1 – α)T + (T² ⁄ 2)[α ⁄ T + 0 + α ⁄ 2T]

= (1 – α ⁄ 4)T . (S.183)

Therefore:

P = Eg ⁄ T = 1 – α ⁄ 4 , (S.184)

which is independent of T.
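The result (S.183) can be checked by direct numerical integration of the raised-cosine spectrum (a Python sketch; T, α, and the integration grid are arbitrary choices of mine):

```python
import math

T, alpha = 1.0, 0.5                       # arbitrary symbol period and rolloff
f1 = (1 - alpha) / (2*T)
f2 = (1 + alpha) / (2*T)

def G(f):
    # raised-cosine spectrum: flat in-band, cosine rolloff, zero out of band
    f = abs(f)
    if f <= f1:
        return T
    if f <= f2:
        return (T/2) * (1 + math.cos(math.pi * (f - f1) * T / alpha))
    return 0.0

# numerically integrate |G(f)|^2 over its support (-f2, f2)
N = 200_000
df = 2*f2 / N
Eg = sum(G(-f2 + i*df)**2 for i in range(N + 1)) * df
assert abs(Eg - (1 - alpha/4)*T) < 1e-4   # matches (S.183)
```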

Problem 5-3. The minimum-bandwidth pulse has bandwidth 1 ⁄ (2T) Hz, where 1 ⁄ T is the symbol rate. The bandwidth of a pulse with 50% excess bandwidth is 1.5 ⁄ (2T) Hz. We require that

1.5 ⁄ (2T) ≤ 1500 , (S.185)

which implies that the maximum symbol rate is 1 ⁄ T = 2000 symbols per second.

Problem 5-4.

(a) The input bit sequence is white, so that Rx(m) = δm and Sx(e jθ ) = 1. Further,

ak = xk – xk –1 . (S.186)

Taking Z-transforms,

A( z ) = X( z )(1 – z–1 ) . (S.187)

Define

H(z) = A(z) ⁄ X(z) = 1 – z⁻¹ (S.188)

so that



H(e j2πfT ) = 1 – e–j2πfT . (S.189)

Hence

Sa(e j2πfT ) = Sx(ej2πfT )|H(e j2πfT )|2

= |1 – e–j2πfT|2

= 2 – 2cos(2πfT) . (S.190)

The sketch is shown in the following figure.

[Sketch: Sa(e^{j2πfT}) = 2 – 2cos(2πfT), rising from 0 at f = 0 to a peak value of 4 at f = 1 ⁄ (2T).]

(b) Here is a well-labeled, careful sketch:

[Sketch: SS(f) = (1 ⁄ T)|G(f)|²Sa(e^{j2πfT}), zero at f = 0, with peak on the order of 4 ⁄ T near f = 1 ⁄ (2T).]

(c) Starting with:

s(t) = Σ_{m=–∞}^{∞} (xm – xm–1)g(t – mT)

= Σ_{m=–∞}^{∞} xm g(t – mT) – Σ_{m=–∞}^{∞} xm g(t – mT – T)

= Σ_{m=–∞}^{∞} xm[g(t – mT) – g(t – mT – T)] . (S.191)

So

h( t ) = g( t ) – g(t – T ) (S.192)

and

H( f ) = (1 – e–j2πfT )G( f ) . (S.193)

Note that H(f) = 0 for all f = m ⁄ T, so the Nyquist criterion can never be satisfied, regardless of G(f). Furthermore,

P( f ) = (1 – e–j2πfT )G( f )F( f ) (S.194)



does not satisfy the Nyquist criterion for any G(f) and F(f) that are finite for all f. (It turns out that ISI can be eliminated using a receive filter that has infinite gain at DC, but this filter is not stable.)

Problem 5-5. The only zero-excess-bandwidth pulse satisfying the Nyquist criterion is (5.6). But since the pulse falls off as 1 ⁄ t, the worst-case transmitted symbol sequence will result in infinite intersymbol interference for any sampling phase other than the ideal phase (where the intersymbol interference is zero).

Consider binary PAM signaling with a zero-excess-bandwidth raised-cosine pulse shape, which is an ordinary sinc pulse. We can show that if the timing phase is off from the ideal by any amount, the data cannot be recovered. The pulse shape is given by

g(t) = sin(πt ⁄ T) ⁄ (πt ⁄ T) (S.195)

and the PAM signal by

r(t) = Σ_{m=–∞}^{∞} am g(t – mT)

= Σ_{m=–∞}^{∞} am sin(π(t – mT) ⁄ T) ⁄ (π(t – mT) ⁄ T) . (S.196)

Sampling with exactly the correct timing phase, we get for any integer k

r(kT) = ak . (S.197)

To see what happens if the timing phase is slightly off, we rewrite the signal using trigonometric properties of the sine function,

r(t) = (T ⁄ π)sin(πt ⁄ T) Σ_{m=–∞}^{∞} am(–1)^m ⁄ (t – mT) . (S.198)

Evaluating at time t = ε ≈ 0, and using the approximation sin(x) ≈ x for small x, gives:

r(ε) ≈ ε Σ_{m=–∞}^{∞} am(–1)^m ⁄ (ε – mT) ≈ a0 + ε Σ_{m≠0} am(–1)^m ⁄ (–mT) . (S.199)

This can be far from the desired value a0. In particular, with the unfortunate symbol sequence am = (–1)^m + um, where um is the unit step, the above reduces to:

r(ε) ≈ a0 + (2ε ⁄ T) Σ_{m>0} 1 ⁄ m , (S.200)

which is not even finite. Since the interference from neighboring symbols may not even be finite for any arbitrarily small error in the timing phase, the width of the eye is zero. Consequently, the excess bandwidth for binary antipodal signaling must be larger than zero. (A different conclusion might arise from a probabilistic exploration of this problem, since the probability of sequences such as (–1)^m + um is asymptotically zero.)

Problem 5-6.

(a)



E[|ak|²] = Ra(0) = (1 ⁄ 2π) ∫_{–π}^{π} Sa(e^{jθ})dθ = 1 . (S.201)

(b) From Appendix 3-A, the power spectrum for a baseband PAM signal is

SS(f) = (1 ⁄ T)|G(f)|²Sa(e^{jθ}) = (1 ⁄ T)|G(f)|² = 1 , (S.202)

which is independent of T.

(c) We need to find F(f) so that P(f) = G(f)F(f) is as sketched below:

[Sketch: P(f) = T for |f| ≤ 1 ⁄ (2T), zero elsewhere.]

Since G(f) = √T, we can write F(f) = P(f) ⁄ √T, so that the desired receive filter has the following frequency response:

[Sketch: F(f) = √T for |f| ≤ 1 ⁄ (2T), zero elsewhere.]

Yes, the pulse satisfies the Nyquist criterion.

Let us find a general solution that applies to both parts (d) and (e). From (5.52) and (5.53), the signal-to-noise ratio is defined by:

SNR = [(∫ g(t)f(–t) dt)² ⁄ Ef] ⋅ Ea ⁄ (N0 ⁄ 2) , (S.203)

where we changed the noise power spectral density to reflect the fact that this problem concerns a real, baseband system. In the above expression, Ea = E[ak²] is the energy in the symbols, and Ef is the energy in the receive filter f(t). The integral in the numerator is easily recognized as the convolution of g(t) and f(t), which is p(t), evaluated at time zero:

∫_{–∞}^{∞} g(t)f(–t)dt = p(0) . (S.204)

Hence, the SNR reduces to:

SNR = 2p²(0)Ea ⁄ (Ef N0) . (S.205)

(d) When F(f) is as given in part (c), its energy is clearly unity: Ef = 1. Furthermore, since p(0) = 1 and Ea = 1, the SNR from (S.205) reduces to:



SNR = 2 ⁄ N0 . (S.206)

(e) We still have Ea = 1, and it is easy to see that p(0) = 1. It remains only to find the energy Ef. Since G(f) = √T, it follows that F(f) = P(f) ⁄ √T and thus:

Ef = ∫_{–∞}^{∞} |F(f)|²df

= (1 ⁄ T) ∫_{–∞}^{∞} |P(f)|²df

= (2 ⁄ T) ∫_{0}^{1 ⁄ T} T²(1 – fT)²df

= (2 ⁄ T) ∫_{0}^{1 ⁄ T} T²(fT)²df

= 2T³ ∫_{0}^{1 ⁄ T} f²df

= 2T³ ⁄ 3T³ = 2 ⁄ 3 . (S.207)

Plugging into (S.205), the SNR reduces to:

SNR = 3 ⁄ N0 . (S.208)

This is better than in part (d), even though the receive filter bandwidth is greater.

Problem 5-7.

(a) Yes, this pulse satisfies the Nyquist criterion; we recognize it as a shifted version of the Nyquist pulse from Fig. 5-3(b). In the time domain, this pulse g(t) is related to that of Fig. 5-3(b), call it y(t), by:

g(t) = y(t)e^{j2πt ⁄ T} . (S.209)

Multiplying a Nyquist pulse by anything that is nonzero at time zero will necessarily result in another Nyquist pulse.

Yes, Re{g(t)} also satisfies the Nyquist criterion, because from (S.209):

Re{g(t)} = y(t)cos(2πt ⁄ T) . (S.210)

Again, multiplying a Nyquist pulse by anything that is nonzero at time zero will necessarily result in another Nyquist pulse.

(b) From Appendix 3-A we get

Ss(f) = (Ea ⁄ T)|G(f)|² , (S.211)



which is sketched below.

[Sketch: Ss(f), rising from zero at f = 0 to a peak of EaT at f = 1 ⁄ T, and back to zero at f = 2 ⁄ T.]

(c) Let x(t) = sin(πt ⁄ T) ⁄ (πt ⁄ T) be the ideal zero-excess-bandwidth pulse, with Fourier transform as shown below:

[Sketch: X(f) = T for |f| ≤ 1 ⁄ (2T), zero elsewhere.]

Convolving two rectangles gives a triangle:

Y( f ) = X( f ) ∗ X( f ) , (S.212)

as sketched below:

[Sketch: Y(f), a triangle of height T centered at f = 0 with support |f| ≤ 1 ⁄ T.]

Observe that

G( f ) = Y( f – 1 ⁄ T ) . (S.213)

In the time-domain, from Appendix 2-A, these previous two equations imply that:

y( t ) = [x( t )]2 (S.214)

and

g( t ) = y( t )e j2πt ⁄ T . (S.215)

Hence

g(t) = [sin(πt ⁄ T) ⁄ (πt ⁄ T)]² e^{j2πt ⁄ T} . (S.216)



Problem 5-8.

From (5.52) and (5.53), the signal-to-noise ratio is defined by:

SNR = [(∫ g(t)f(–t) dt)² ⁄ Ef] ⋅ Ea ⁄ (N0 ⁄ 2) , (S.217)

where we changed the noise power spectral density to reflect the fact that this problem concerns a real, baseband system. In the above expression, Ea = E[ak²] is the energy in the symbols, and Ef is the energy in the receive filter f(t). The integral in the numerator is easily recognized as the convolution of g(t) and f(t), which is p(t), evaluated at time zero:

∫_{–∞}^{∞} g(t)f(–t)dt = p(0) . (S.218)

Hence, the SNR can also be written as:

SNR = p²(0)Ea ⁄ (Ef N0 ⁄ 2) . (S.219)

In this problem the overall (equalized) pulse p(t) is the raised-cosine pulse of (5.8), which satisfies p(0) = 1 by design, so that:

SNR = Ea ⁄ (Ef N0 ⁄ 2) . (S.220)

This equation makes intuitive sense: When the overall pulse is Nyquist with p(0) = 1, sampling the output yields the transmitted symbol plus sampled noise. The numerator above is the power of the signal term, and the denominator is the power of the sampled noise.

(a) When the transmitted pulse is an impulse, G(f) = 1, and under the assumption that the channel is distortionless, the receive filter and the overall pulse shape are one and the same, F(f) = P(f). As we saw in (S.183) for Problem 5-2, the energy of the raised-cosine pulse is Ep = Ef = (1 – α ⁄ 4)T. Therefore, the SNR is:

SNR = 2Ea ⁄ [(1 – α ⁄ 4)TN0] . (S.221)

(b) The receive filter required in this case is nothing but an ideal low-pass filter, as sketched below:

[Sketch: F(f) = 1 for |f| ≤ (1 + α) ⁄ (2T), zero elsewhere.]

The energy of this filter is clearly Ef = (1 + α) ⁄ T. Therefore, the SNR reduces to:

SNR = 2EaT ⁄ [(1 + α)N0] . (S.222)



(c) (Contrary to the comment, this is not tedious.) The receive filter in this case is also a square-root raised-cosine pulse, F(f) = P(f)^{1 ⁄ 2}. Therefore, the energy in the receive filter is easy to calculate:

Ef = ∫_{–∞}^{∞} |F(f)|²df

= ∫_{–∞}^{∞} P(f)df

= p(0) = 1 . (S.223)

So the SNR in this case is:

SNR = Ea ⁄ (Ef N0 ⁄ 2) = Ea ⁄ (N0 ⁄ 2) . (S.224)

Problem 5-9.

(a) 64 kb/s are required, so with binary antipodal signaling this is 64,000 symbols per second, so the minimum bandwidth is 32 kHz.

(b) 64 kHz.

(c) 32 kHz.

(d) 16 kHz.

Problem 5-10.

(a) The answer depends on two things: the passband gain and the nominal carrier frequency. Assume the passband gain is unity, and assume the nominal carrier frequency is the midpoint of the pass band. Then the Fourier transform of the complex envelope of the channel can be written down by inspection:

[Sketch: B̃(f) = √2 for |f| ≤ 5 kHz, zero elsewhere.]

Therefore, the complex envelope is the corresponding sinc function:

b̃(t) = √2 sin(10000πt) ⁄ (πt) . (S.225)

(b) The answers are respectively 20 kb ⁄ s, 6667 b ⁄ s, 20 kb ⁄ s, and 40 kb ⁄ s. In the latter case, the receive filter should be properly designed to meet the Nyquist criterion.

(c) A reasonable transmit filter is g(t) = b̃(t). This makes full use of the band, but doesn't transmit any energy outside of the pass band (which would get rejected by the channel anyway). In this case, the Fourier transform of the received pulse shape is H(f) = B̃(f)². A matched receive filter will yield an overall pulse shape of P(f) = |H(f)|² = B̃(f)⁴ that is itself a Nyquist pulse for the 10 kHz symbol rate.



Problem 5-11. Use ROMs with N-bit addresses and two outputs, each with N ⁄ 2 bits.

Problem 5-12.

(a) Here is a Nyquist pulse of duration 2T:

(b) Here is a Nyquist pulse of duration 3T:

Problem 5-13.

(a)

ρh*(–k) = ∫_{–∞}^{∞} h*(t)h(t + kT) dt = ∫_{–∞}^{∞} h(u)h*(u – kT) du = ρh(k) , (S.226)

where the change of variables u = t + kT has been made.

(b) The sampled autocorrelation function is defined by ρh(k) = ∫_{–∞}^{∞} h(t)h*(t – kT)dt. Recognizing this integral as the convolution of h(t) with h*(–t), evaluated at time kT, we can view ρh(k) as the output of the sampled matched filter when the received pulse h(t) is the input, as shown below:

[Diagram: h(t) → matched filter h*(–t) → sampler at t = kT → ρh(k).]

The Fourier transform of the signal before the sampler is |H(f)|². Therefore, the Fourier transform of the signal after the sampler, which is the folded spectrum, is given by:

Sh(e^{j2πfT}) = (1 ⁄ T) Σ_{m=–∞}^{∞} |H(f – m ⁄ T)|² . (S.227)

It is nonnegative and real because it is the sum of things that are nonnegative and real.

Problem 5-14. The folded spectrum Z-transform can be broken down into two summations, one for positive time and one for negative time:

Sh(z) = Σ_{k=–∞}^{∞} ρh(k)z^{–k}



= Σ_{k=0}^{∞} ρh(k)z^{–k} + Σ_{k=–∞}^{0} ρh(k)z^{–k} – ρh(0) . (S.228)

The first summation is Sh,+(z). The second summation can be rewritten as:

Σ_{k=0}^{∞} ρh(–k)z^k = Σ_{k=0}^{∞} ρh*(k)z^k = Sh,+*(1 ⁄ z*) . (S.229)

Problem 5-15.

(a)

ρh(k) = ∫_{–∞}^{∞} Σ_{m=0}^{K} fm h0(t – mT) ⋅ Σ_{n=0}^{K} fn*h0*(t – nT – kT) dt

= Σ_{m=0}^{K} Σ_{n=0}^{K} fm fn* ∫_{–∞}^{∞} h0(t – mT)h0*(t – nT – kT) dt

= Σ_{m=0}^{K} Σ_{n=0}^{K} fm fn* E0 δ_{m – k – n}

= E0 Σ_{m=0}^{K} fm fm–k* . (S.230)

Taking the Z transform of this results directly in (5.138).

(b) The pulse energy is Eh = ∫_{–∞}^{∞} |h(t)|²dt = ρh(0), which from part (a) gives:

Eh = E0 Σ_{m=0}^{K} |fm|² . (S.231)

Problem 5-16.

(a) Q(2) = 0.0228, which is about 44 times larger than Q2(2) = 5.176 × 10–4.

(b) Q(4) = 3.167 × 10–5, which is about 31574 times larger than Q2(4) = 10–9.

Neglecting Q2 relative to Q is clearly valid in these cases.
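These Q-function values are easy to reproduce via the complementary error function, using the identity Q(x) = ½ erfc(x ⁄ √2) (a short Python check):

```python
import math

def Q(x):
    # Gaussian tail probability, Q(x) = 0.5*erfc(x/sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

assert abs(Q(2) - 0.0228) < 1e-4
assert abs(Q(2)**2 - 5.176e-4) < 1e-6
assert abs(Q(4) - 3.167e-5) < 1e-8
assert abs(Q(4)**2 - 1e-9) < 1e-11
assert 43 < Q(2) / Q(2)**2 < 45      # "about 44 times larger"
```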

Problem 5-17.

The first probability is Q(√2 ⁄ 0.5) = 0.0023, which is about 74 times bigger than the second probability Q(2 ⁄ 0.5) = 3.17 × 10⁻⁵.

Problem 5-18.

As discussed in Example 5-37, the probability of error for the four innermost points is

Pr[error|a on the inside] = 1 – [1 – 2Q(d ⁄ 2σ)]²

= 4Q(d ⁄ 2σ) – 4Q²(d ⁄ 2σ) . (S.232)

Similarly, the probability of error for the four corner points is

Pr[error|a in the corner] = 1 – [1 – Q(d ⁄ 2σ)]²

= 2Q(d ⁄ 2σ) – Q²(d ⁄ 2σ) . (S.233)



The probability of error for the eight edge points is:

Pr[error|a on edge] = 1 – [1 – 2Q(d ⁄ 2σ)][1 – Q(d ⁄ 2σ)] = 3Q(d ⁄ 2σ) – 2Q²(d ⁄ 2σ) . (S.234)

The result follows by adding these terms, weighted by their probabilities:

Pr[error] = Pr[a on inside]Pr[error|a on the inside] + Pr[a on corner]Pr[error|a on a corner] + Pr[a on edge]Pr[error|a on an edge] (S.235)

= (4 ⁄ 16)[4Q(d ⁄ 2σ) – 4Q²(d ⁄ 2σ)]

+ (4 ⁄ 16)[2Q(d ⁄ 2σ) – Q²(d ⁄ 2σ)]

+ (8 ⁄ 16)[3Q(d ⁄ 2σ) – 2Q²(d ⁄ 2σ)] (S.236)

= 3Q(d ⁄ 2σ) – 2.25Q²(d ⁄ 2σ) . (S.237)
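The result 3Q(d ⁄ 2σ) − 2.25Q²(d ⁄ 2σ) can be verified by Monte Carlo simulation of 16-QAM as two independent 4-PAM axes (a sketch; the spacing, noise level, seed, and trial count are arbitrary choices of mine):

```python
import math, random

random.seed(2)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

d, sigma = 2.0, 1.0          # arbitrary spacing and noise level
levels = [-3, -1, 1, 3]      # 16-QAM = 4-PAM on each axis, spacing d = 2

def slice4(z):
    # nearest 4-PAM level (minimum-distance decision)
    return min(levels, key=lambda s: abs(z - s))

trials, errors = 200_000, 0
for _ in range(trials):
    aI, aQ = random.choice(levels), random.choice(levels)
    rI = aI + random.gauss(0, sigma)
    rQ = aQ + random.gauss(0, sigma)
    if slice4(rI) != aI or slice4(rQ) != aQ:
        errors += 1

pe_sim = errors / trials
pe_formula = 3*Q(d/(2*sigma)) - 2.25*Q(d/(2*sigma))**2
assert abs(pe_sim - pe_formula) < 0.01
```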

Problem 5-19. We could proceed as in Problem 5-18, which is straightforward, but instead let us take a different route. The constraint that M = 2^b for b even is equivalent to a constraint that √M be a power of 2. In this case, M-ary QAM can be viewed as a pair of real √M-ary PAM signals in parallel, one for the in-phase component and another for the quadrature.

To proceed we need to know the error probability of L-ary PAM. We've already done this in (5.104) for the special case of 4-ary PAM, but we need to generalize to arbitrary L that is a power of two. Let z = a + n, where a is uniformly distributed over the real PAM alphabet {±c, ±3c, … ±(L – 1)c}, where c = d ⁄ 2 is related to the alphabet energy Ea by c = √(3Ea ⁄ (L² – 1)), and let n ~ N(0, σ² = N0 ⁄ (2Eh)) be independent real noise. Then:

PePAM(L, E ⁄ N0) = Pr[error for L-ary PAM]

= Pr[a on end]Pr[error|a on end] + Pr[a not on end]Pr[error|a not on end] (S.238)

= (2 ⁄ L)Q(d ⁄ 2σ) + [(L – 2) ⁄ L] ⋅ 2Q(d ⁄ 2σ)

= 2(1 – 1 ⁄ L)Q(c ⁄ σ)

= 2(1 – 1 ⁄ L)Q(√(6(E ⁄ N0) ⁄ (L² – 1))) , (S.239)

where E = EaEh is the average received energy per pulse. Observe that this reduces to (5.104) when L = 4.

Now back to the view that QAM is a pair of real √M-ary PAM signals in parallel. When the QAM alphabet has energy Ea, the energy of each of the √M-ary PAM alphabets is Ea ⁄ 2. The complex QAM symbol is decoded correctly if and only if both the real and imaginary symbols are decoded correctly. Therefore:



PeQAM(M, E ⁄ N0) = Pr[error for M-ary QAM]

= 1 – Pr[correct for √M-ary PAM]²

= 1 – [1 – PePAM(√M, (1 ⁄ 2)E ⁄ N0)]² (S.240)

= 2PePAM(√M, (1 ⁄ 2)E ⁄ N0) – PePAM(√M, (1 ⁄ 2)E ⁄ N0)² (S.241)

= 4(1 – 1 ⁄ √M)Q(√(3(E ⁄ N0) ⁄ (M – 1))) – [2(1 – 1 ⁄ √M)Q(√(3(E ⁄ N0) ⁄ (M – 1)))]² (S.242)

= 4(1 – 1 ⁄ √M)Q(√(3(E ⁄ N0) ⁄ (M – 1))) – 4(1 – 1 ⁄ √M)²Q²(√(3(E ⁄ N0) ⁄ (M – 1))) . (S.243)

Note the typo in the text: The second occurrence of (1 – 1 ⁄ √M) is supposed to be squared.
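As a consistency check of (S.240)–(S.243) (a Python sketch; M and E ⁄ N0 are arbitrary choices of mine), the closed form agrees with composing two √M-ary PAM error probabilities, each at half the energy:

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_pam(L, EN0):
    # (S.239): L-ary PAM error probability at energy-to-noise ratio EN0
    return 2 * (1 - 1/L) * Q(math.sqrt(6 * EN0 / (L**2 - 1)))

def pe_qam(M, EN0):
    # (S.243): M-ary QAM error probability
    f = 1 - 1/math.sqrt(M)
    q = Q(math.sqrt(3 * EN0 / (M - 1)))
    return 4*f*q - 4*(f**2)*(q**2)

M, EN0 = 16, 40.0                      # arbitrary test point
p = pe_pam(math.sqrt(M), EN0 / 2)      # each rail gets half the energy
assert abs(pe_qam(M, EN0) - (1 - (1 - p)**2)) < 1e-12
```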

Problem 5-20.

(a) There are 4 symbols at distance d = 2c, 4 symbols at distance d = 2√2c, 2 symbols at distance d = 4c, 4 symbols at distance d = 2√5c, and 1 symbol at distance d = 4√2c, so the union bound is

Pr[symbol error|ak = c + jc] ≤ 4Q(c ⁄ σ) + 4Q(√2c ⁄ σ) + 2Q(2c ⁄ σ) + 4Q(√5c ⁄ σ) + Q(2√2c ⁄ σ)

≈ 4Q(c ⁄ σ) . (S.244)

(b) There are 2 symbols at distance d = 2, 2 symbols at distance d = √5, 2 symbols at distance d = 2√2, 4 symbols at distance d = √17, 2 symbols at distance d = 2√5, 1 symbol at distance d = 4√2, and 2 symbols at distance d = √37, so the union bound is

Pr[symbol error|ak = 1 + j] ≤ 2Q(1 ⁄ σ) + 2Q(√5 ⁄ 2σ) + 2Q(√2 ⁄ σ)

+ 4Q(√17 ⁄ 2σ) + 2Q(√5 ⁄ σ)

+ Q(2√2 ⁄ σ) + 2Q(√37 ⁄ 2σ)

≈ 2Q(1 ⁄ σ) . (S.245)

(c) The noise components in the directions of the nearest neighbors are not orthogonal, and hence not independent.

(d) The average power in the 16-QAM constellation is 10c² and in the V.29 constellation it is 13.5, so c = √1.35 ≈ 1.16. The approximate probabilities of error are 4Q(1.16 ⁄ σ) for 16-QAM and 2Q(1 ⁄ σ) for V.29, so assuming the SNR is high enough that the constant multipliers are not important, 16-QAM is about 1.3 dB better. There are good reasons, nonetheless, for using the V.29 constellation. In particular, it is less sensitive to phase jitter.
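The numbers in part (d) check out numerically (a short Python sketch):

```python
import math

# equal average power: 10*c^2 = 13.5, so c = sqrt(1.35)
c = math.sqrt(13.5 / 10)
assert abs(c - 1.16) < 5e-3

# 16-QAM's nearest-neighbor Q argument is c/sigma vs. 1/sigma for V.29,
# an advantage of 20*log10(c) ~ 1.3 dB
gain_dB = 20 * math.log10(c)
assert abs(gain_dB - 1.3) < 0.05
```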

Problem 5-21. This implies that the folded spectrum is a constant, Sh(e^{j2πfT}) = Eh, which admits a trivial spectral factorization (5.67) with γ² = Eh and M(z) = 1. The precursor filter of (5.69) reduces to a constant gain of Eh⁻¹. In this case, the minimum-distance sequence detector of (5.72) reduces to:



â = arg min_{a∈A^L} Σ_{k=0}^{L–1} |yk ⁄ Eh – ak|² . (245.1)

This makes intuitive sense, since the output of the sampled matched filter is yk = Ehak + nk, where nk is unknown noise. When the data symbols are chosen independently of one another, the minimization can be performed symbol-by-symbol: since all the terms in the sum are non-negative, the sum is minimized by minimizing each term, where each term is precisely the criterion for a minimum-distance slicer design. Thus, in this case, the simplified structure of Fig. 5-25(b) can be used. This is exactly the same structure of Fig. 5-19 that was derived for the special case of an isolated pulse, except that here the matched filter output is sampled every symbol period instead of only once at time zero. Thus, with independent symbols and no ISI, the minimum-distance sequence detector reduces to a sequence of independent one-shot minimum-distance detectors.

Problem 5-22. The allpass filter has transfer function

Hallpass(z) = , |d|< 1 . (S.246)

(a) The precursor equalizer filter is

(S.247)

and the channel model filter is

. (S.248)

(b) The zero location of the upper-path filter is outside the unit circle, since it is maximum-phase. In order to cancel this zero, the pole of the allpass filter would have to be outsidethe unit circle, which would make it anti-causal.

(c) The filter transfer function can be expanded in partial fractions as

a ⁄ (1 − cz⁻¹) + b ⁄ (1 − dz⁻¹) (S.249)

which has impulse response

a cᵏuk + b dᵏuk , (S.250)

where

a = (cd* − 1) ⁄ (c − d) , b = (|d|² − 1) ⁄ (d − c) . (S.251)
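The partial-fraction expansion can be verified numerically. The sketch below (real-valued c and d chosen for illustration) generates the impulse response of (d − z⁻¹) ⁄ ((1 − cz⁻¹)(1 − dz⁻¹)) by its difference equation and compares it with a·cᵏ + b·dᵏ.

```python
# Illustrative real pole locations (so conjugates can be ignored).
c, d = 0.5, 0.3
a = (c * d - 1) / (c - d)      # residue at the pole z = c
b = (d * d - 1) / (d - c)      # residue at the pole z = d

# Impulse response of (d - z^-1) / (1 - (c+d)z^-1 + cd z^-2) via recursion:
# h[k] = (c+d) h[k-1] - cd h[k-2] + d*delta[k] - delta[k-1]
n = 20
h = []
for k in range(n):
    hk = d * (k == 0) - 1.0 * (k == 1)
    if k >= 1:
        hk += (c + d) * h[k - 1]
    if k >= 2:
        hk -= c * d * h[k - 2]
    h.append(hk)

# Closed form from the partial-fraction expansion: a c^k + b d^k for k >= 0.
closed = [a * c ** k + b * d ** k for k in range(n)]
assert all(abs(x - y) < 1e-12 for x, y in zip(h, closed))
```

The first two samples (h[0] = d and h[1] = (c + d)d − 1) agree with a + b and ac + bd, confirming the residues above.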

Problem 5-23. The channel is so noisy that, even with binary PAM, a bit-error probability of 10⁻⁶ cannot be maintained. Indeed, with the alphabet A = {±1} on this channel, the bit-error probability can be found exactly:


Pb = Q(1 ⁄ σ) = Q(1 ⁄ √0.1) ≈ 7.8 × 10⁻⁴ . (S.252)

Increasing the size of the alphabet while maintaining unit energy will only make the bit-errorprobability grow larger.

Note to instructor: Changing the noise standard deviation from σ = √0.1 to σ = 0.006 will make the problem much more interesting. A general solution for arbitrary SNR is outlined below.

To proceed we need to know the error probability of L-ary PAM. We have already done this in (5.104) for the special case of 4-ary PAM, but we need to generalize to arbitrary L. The probability of error for L-ary PAM is easily derived:

Pe = Pr[a on end]·Pr[error | a on end] + Pr[a not on end]·Pr[error | a not on end] (S.253)

= (2 ⁄ L)·Q(d ⁄ 2σ) + ((L − 2) ⁄ L)·2Q(d ⁄ 2σ)

= 2(1 − 1 ⁄ L)·Q(c ⁄ σ)

= 2(1 − 1 ⁄ L)·Q(√(3SNR ⁄ (L² − 1))) , (S.254)

where we substituted c = √(3Ea ⁄ (L² − 1)) and SNR = E[X²] ⁄ E[N²] = Ea ⁄ σ². (Observe that this reduces to (5.104) when L = 4, as expected.) Assuming a Gray mapping, the bit-error probability will be:

Pb = (1 ⁄ log₂L)·Pe . (S.255)

Solving this equation for the L² term in the denominator of (S.254), as a function of Pb, yields:

L² = 1 + SNR ⁄ Γ , (S.256)

where Γ = (1 ⁄ 3)·[Q⁻¹(Pb log₂L ⁄ (2(1 − 1 ⁄ L)))]² . (S.257)

With Pb = 10⁻⁶, the factor Γ ranges from 7.5 to 6.7 as L ranges from 2 to 128. If we approximate Γ by Γ ≈ 7, independent of L, then (S.256) gives:

L ≈ √(1 + SNR ⁄ 7) . (S.258)

This is how big the alphabet can be while maintaining Pb = 10⁻⁶. If the noise standard deviation were σ = 0.006, then substituting SNR = 1 ⁄ σ² gives L ≈ 63. But at this value of L, our approximation Γ = 7 can be improved upon, since Γ is known to be 6.82 when L = 64. With Γ = 6.82, (S.256) gives L = 63.8, or essentially L = 64.
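The instructor's variation can be reproduced with the standard library alone; the sketch below builds Q from erfc and its inverse by bisection (all helper names are mine), and recovers the quoted 7.8 × 10⁻⁴, Γ ≈ 6.82, and L ≈ 64.

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) from the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Qinv(p):
    """Inverse of Q by bisection on [0, 20] (Q is strictly decreasing)."""
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Binary PAM on the original channel: Pb = Q(1/sigma) with sigma = sqrt(0.1).
Pb_binary = Q(1 / math.sqrt(0.1))          # about 7.8e-4

# Gamma factor of (S.257) and alphabet size of (S.256) for sigma = 0.006.
Pb, L = 1e-6, 64
Gamma = (Qinv(Pb * math.log2(L) / (2 * (1 - 1 / L))) ** 2) / 3
SNR = 1 / 0.006 ** 2
L_max = math.sqrt(1 + SNR / Gamma)         # close to 64
```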


Problem 5-24.

(a) The precursor equalizer filter is

(d* − z⁻¹) ⁄ ((1 − c*z)(1 − dz⁻¹)) (S.259)

and the channel-model filter is

(1 − cz⁻¹)(d* − z⁻¹) ⁄ (1 − dz⁻¹) . (S.260)

(b) The pole location of the precursor equalizer filter is outside the unit circle, since it is maximum-phase. In order to cancel this pole, the zero of the allpass filter would have to be outside the unit circle, which is acceptable because the allpass filter is then causal. The precursor equalizer filter then has transfer function

−z⁻¹ ⁄ (1 − cz⁻¹) , (S.261)

which is causal, and the channel-model filter transfer function is c* − z⁻¹.

(c) The filter transfer function can be expanded in partial fractions as

γ²·(d* − (cd* + 1)z⁻¹ + cz⁻²) ⁄ (1 − dz⁻¹) (S.262)

which has impulse response

γ²·[d*·dᵏuk − (cd* + 1)dᵏ⁻¹uk−1 + c·dᵏ⁻²uk−2] . (S.263)

(d) The impulse response is FIR,

c*δk − δk−1 . (S.264)

Problem 5-25. Observe that Z( t ) has zero mean, since E[cos(2πfct + Θ)] = 0. To find its autocorrelation function, use the trigonometric identity

cos(a)cos(b) = 0.5cos(a – b) + 0.5cos(a + b) (S.265)

to get

RZ(τ) = 2E[cos(2πfc(t + τ) + Θ)cos(2πfct + Θ)S(t + τ)S*( t )]

= E[cos(2πfcτ) + cos(2πfc(2t + τ) + 2Θ)]·RS( τ )

= cos(2πfcτ)·RS( τ ) . (S.266)

We have used the easily shown fact that

E[cos(2πfc(2t + τ) + 2Θ)] = 0. (S.267)

The power spectrum is then given by


SZ( f ) = 0.5[SS( f − fc ) + SS( f + fc )] . (S.268)

For example, given the same complex-valued baseband power spectrum shown below,

the power spectrum of Z( t ) is shown below:

Without the random phase, Z( t ) is not WSS. This is because E[Z( t )Z( t + τ)] depends on tas well as τ when the phase is Θ = 0. For example, when t = 0, we have E[Z( 0 )Z( 0 + τ)] =0 for all τ. But other values of t will give a nonzero autocorrelation.

Problem 5-26. We can write

X( t ) = Re{Z( t )} = (1 ⁄ 2)[Z( t ) + Z*( t )] (S.269)

where

Z( t ) = S( t )e j(2πfct + Θ) . (S.270)

Showing that the expected value is independent of time is easy using

E[X( t )] = Re{E[S( t )]·E[e j(2πfct + Θ)]} . (S.271)

The second expectation is easily shown to be zero by directly computing it (the expectation isover Θ, which has a uniform p.d.f., so we simply need to integrate over [0, 2π]). Theautocorrelation can be written

E[X(t1)X*(t2)] = (1 ⁄ 4)·E[Z(t1)Z(t2) + Z(t1)Z*(t2) + Z*(t1)Z(t2) + Z*(t1)Z*(t2)] , (S.272)

where the first term,

E[Z(t1)Z(t2)] = E[S(t1)S(t2)]·E[e j(2πfc(t1 + t2) + 2Θ)] , (S.273)

can also easily be shown to be zero by directly computing the expectation over Θ of thecomplex exponential. The fourth term E[Z*(t1)Z*(t2)] will similarly turn out to be zero. Thesecond term is

E[Z(t1)Z*(t2)] = E[S(t1)S*(t2)]·e j2πfc(t1 − t2) . (S.274)


The exponential is now a deterministic function of t1 – t2, as is E[S(t1)S*(t2)], theautocorrelation of S( t ), because S( t ) is WSS. The third term will similarly prove to be afunction of t1 – t2, so X( t ) is WSS.

Problem 5-27. Before we begin, note from (2.23) that (Y( t ) + jŶ( t )) ⁄ 2 is the output of a phase splitter whose input is Y( t ), and therefore X( t ) is obtained from Y( t ) by a phase splitter followed by an upconverter and a real-part operation. This system is thus an upper-single-sideband (U-SSB) modulator; for example, if the input Y( t ) is deterministic with Fourier transform Y( f ), the output Fourier transform X( f ) consists of the upper sideband of Y( f ) translated to ±fc.

We want to show that the WSS property is preserved by the U-SSB modulator. The outputsignal can be written as:

X( t ) = √2·Y( t )cos(2πfc t ) − √2·Ŷ( t )sin(2πfc t ) . (S.275)

We want to show that the following autocorrelation function is independent of time t:

RXX( t + τ, t ) = E[X( t + τ )X( t )]

= 2E{[Y( t + τ )cos(2πfc(t + τ)) − Ŷ( t + τ )sin(2πfc(t + τ))] × [Y( t )cos(2πfct) − Ŷ( t )sin(2πfct)]}

= 2E[Y( t + τ )Y( t )]·cos(2πfc(t + τ))cos(2πfct)

− 2E[Ŷ( t + τ )Y( t )]·sin(2πfc(t + τ))cos(2πfct)

− 2E[Y( t + τ )Ŷ( t )]·cos(2πfc(t + τ))sin(2πfct)

+ 2E[Ŷ( t + τ )Ŷ( t )]·sin(2πfc(t + τ))sin(2πfct) (S.276)

= 2RY( τ )cos(2πfc(t + τ))cos(2πfct) − 2RŶY( τ )sin(2πfc(t + τ))cos(2πfct)


− 2RYŶ( τ )cos(2πfc(t + τ))sin(2πfct)

+ 2RŶ( τ )sin(2πfc(t + τ))sin(2πfct) . (S.277)

Since the Hilbert transformer is an all-pass filter, SŶ( f ) = SY( f ), and hence RŶ( τ ) = RY( τ ). Furthermore, if h( t ) is the impulse response of the Hilbert transformer, then:

E[Ŷ( t + τ )Y( t )] = E[∫_{−∞}^{∞} h(u)Y(t + τ − u)du · Y( t )]

= ∫_{−∞}^{∞} h(u)E[Y(t + τ − u)Y( t )]du

= ∫_{−∞}^{∞} h(u)RY( τ − u )du = RY( τ ) ∗ h( τ ) , (S.278)

which implies that:

RŶY( τ ) = R̂Y( τ ) , the Hilbert transform of RY( τ ) , (S.279)

and

SŶY( f ) = −j·sign( f )·SY( f ) . (S.280)

In particular, since SŶY( f ) is purely imaginary, RŶY( τ ) must be odd, satisfying RŶY( −τ ) = −RŶY( τ ). And RYŶ( τ ) is related to RŶY( τ ) by:

RYŶ( τ ) = E[Y( t + τ )Ŷ( t )] = E[Y( t′ )Ŷ( t′ − τ )] (when t = t′ − τ)

= RŶY( −τ ) . (S.281)

Therefore, (S.277) reduces to:

RXX( t + τ, t ) =

2RY( τ )·[cos(2πfc(t + τ))cos(2πfct) + sin(2πfc(t + τ))sin(2πfct)]

− 2RŶY( τ )·[sin(2πfc(t + τ))cos(2πfct) − cos(2πfc(t + τ))sin(2πfct)] (S.282)

= 2RY( τ )cos(2πfcτ) − 2RŶY( τ )sin(2πfcτ) , (S.283)

where we used the trigonometric identities:

cos( x )cos( y ) + sin( x )sin( y ) = cos(x − y) ,
sin( x )cos( y ) − cos( x )sin( y ) = sin(x − y) . (S.284)

Clearly (S.283) is independent of time t, so that X( t ) is indeed WSS. Further, since RŶY( τ ) = R̂Y( τ ), we recognize (S.283) as being the output of an upconverter:

RX( τ ) = 2Re{[RY( τ ) + jR̂Y( τ )]·e j2πfcτ} . (S.285)


In particular, this is the output of an upconverter whose input is RY( τ ) + jR̂Y( τ ), which from (2.23) is a scaled version of the output of a phase splitter whose input is RY( τ ). Overall, RX( τ ) is obtained from RY( τ ) by the same phase-splitter-plus-upconverter structure. Except for the factor of √2, this is identical to the U-SSB modulator that we began with. Therefore, the power spectrum is:

SX( f ) = 2SY( f − fc )u( f − fc ) + 2SY( f + fc )u( −f − fc ) . (S.286)

An example is sketched below:


Chapter 6

Problem 6-1.

∫_{−∞}^{∞} h0( t )h1( t )dt = (2E ⁄ T)·∫_{0}^{T} sin(2πf0t)sin(2πf1t)dt

= (E ⁄ T)·∫_{0}^{T} [cos(2π(f0 − f1)t) − cos(2π(f0 + f1)t)] dt . (S.287)

Under both sets of assumptions, the integral evaluates to zero.

Problem 6-2. Begin by rewriting the pulses gi( t ) = ± sin(2πfit)w( t ) from (6.114) as:

g0( t ) = ± sin(2π(fc + fd)t)w( t ) , g1( t ) = ± sin(2π(fc − fd)t)w( t ) (S.288)

where

fc = (f0 + f1) ⁄ 2 , fd = (f0 − f1) ⁄ 2 (S.289)

are the nominal carrier frequency and the peak deviation frequency, respectively. Then

∫_{0}^{T} g0( t )g1( t )dt = ±(1 ⁄ 2)·∫_{0}^{T} cos(4πfdt)dt ∓ (1 ⁄ 2)·∫_{0}^{T} cos(4πfct)dt

≈ ± sin(4πfdT ) ⁄ (8πfd) . (S.290)

The signals are orthogonal if and only if this is zero, which occurs if and only if

4πfdT = Kπ (S.291)

for some integer K. The minimum (non-zero) frequency spacing therefore occurs when

4πfdT = π (S.292)

or

fd = 1 ⁄ (4T) . (S.293)

This is the frequency spacing (6.113) of MSK signals.

Problem 6-3. For MSK, the frequency separation should be fi – fi – 1 = 1 ⁄ (2T) = 0.5 MHz.So the desired frequencies are f1 = 10.5 MHz, f2 = 11.0 MHz, etc.
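The minimum spacing of 1 ⁄ (2T) between tones can be checked by numerical integration using the Problem 6-3 frequencies; the midpoint-rule helper below is an illustrative sketch, not from the text.

```python
import math

def inner_product(f0, f1, T, n=200000):
    """Midpoint-rule estimate of the integral of sin(2*pi*f0*t)*sin(2*pi*f1*t)
    over [0, T]."""
    dt = T / n
    return sum(math.sin(2 * math.pi * f0 * (i + 0.5) * dt) *
               math.sin(2 * math.pi * f1 * (i + 0.5) * dt)
               for i in range(n)) * dt

T = 1e-6   # 1 MHz symbol rate, as in Problem 6-3
# MSK spacing 1/(2T) = 0.5 MHz: tones at 10.5 MHz and 11.0 MHz are orthogonal.
ip = inner_product(10.5e6, 11.0e6, T)
# Half that spacing (0.25 MHz) is not orthogonal:
ip_bad = inner_product(10.5e6, 10.75e6, T)
```

The first inner product is essentially zero while the second is a substantial fraction of the pulse energy T ⁄ 2.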


Problem 6-4.

(a) From (6.117), using trigonometric identities,

x( t ) = ∑_{k} [sin(2πfct + bkπt ⁄ 2T)cos(φk) + cos(2πfct + bkπt ⁄ 2T)sin(φk)]·w(t − kT ) . (S.294)

Since sin(φk) = 0 this becomes, using more trigonometric identities,

x( t ) = ∑_{k} [sin(2πfct)cos(bkπt ⁄ 2T) + cos(2πfct)sin(bkπt ⁄ 2T)]·cos(φk)w(t − kT ) . (S.295)

From the symmetry of the cosine we get

cos(bkπt ⁄ 2T) = cos(πt ⁄ 2T) (S.296)

and from the anti-symmetry of the sine we get

sin(bkπt ⁄ 2T) = bk·sin(πt ⁄ 2T) , (S.297)

from which the result follows.

(b) Notice that bk – 1 – bk is always either zero or ±2, so if k is even then from (6.118)

φk = φk – 1 + K2π , (S.298)

where K is an integer. Hence

Qk = cos(φk) = Qk – 1 = cos(φk – 1) . (S.299)

If k is odd then examining (6.118) we see that if bk = bk – 1 then Ik = Ik – 1.Furthermore, if bk = – bk – 1 then

φk = φk – 1 ± πk (S.300)

and since k is odd, cos(φk) = –cos(φk – 1), and again Ik = Ik – 1.

(c) Write the first summation of (6.179)

cos(2πfct)·∑_{k even} Ik·sin(πt ⁄ 2T)·[w(t − kT ) − w(t − kT − T )] (S.301)

using the fact that for k even Ik = Ik + 1. Then notice that for k even

sin(π(t − kT) ⁄ 2T) = (−1)^(k ⁄ 2)·sin(πt ⁄ 2T) (S.302)


so

sin(πt ⁄ 2T) = (−1)^(k ⁄ 2)·sin(π(t − kT) ⁄ 2T) . (S.303)

Hence the first summation of (6.179) is

cos(2πfct)∑k even Ik( –1)k ⁄ 2p(t – kT ) . (S.304)

The second summation follows similarly.

Problem 6-5. From (2.23), the output of the phase splitter when the input is Acos(2πfct) is:

s( t ) + jŝ( t ) = (A ⁄ 2)cos(2πfct) + j(A ⁄ 2)sin(2πfct) = (A ⁄ 2)e j2πfct . (S.305)

Clearly, the magnitude of this complex signal is A ⁄ 2.

Problem 6-6. Each of 128 subchannels (one per pulse) must carry a bit rate of 19,200 ⁄ 128= 150 bits per second. Using 4-PSK, we can transmit 2 bits per symbol, so the symbol rateshould be 75 symbols/second. Hence, T is 1 ⁄ 75 seconds or about 13 msec.

Problem 6-7. The sketches are shown in Fig. S-3. Define

sn( t ) = cos((n + 1 ⁄ 2)πt ⁄ T ) . (S.306)

Then

h0(t) = s1( t ) + s3( t ) , (S.307)

h1( t ) = s1( t ) – s3( t ) , (S.308)

h2(t) = s2( t ) + s0( t ) , (S.309)

h3(t) = s2( t ) – s0( t ) . (S.310)

The bandwidth efficiency is the same as that of the pulses in Fig. 6-8.

Problem 6-8.

(a) This is just a matter of multiplying it out.

(b) Let H( f ) = √(T ⁄ 3)·D( f ).

(c) If we choose M1 = 0 and M2 = 2 in (6.166), we get the pulses with Fourier transformsshown in Fig. S-4. From this, we see that each pulse consists of three parts. Each part isa sinc pulse modulated by e –j2πmt ⁄ T for m = 0, 1, and 2, and scaled by a complexvalue that depends on n. For n = 0, 1, and 2, we can write


hn( t ) = (1 ⁄ √(3T))·(sin(πt ⁄ T) ⁄ (πt ⁄ T))·[1 + e −j2π(t ⁄ T + n ⁄ 3) + e −j4π(t ⁄ T + n ⁄ 3)] . (S.311)

(d) The time domain pulses are not real. To use these pulses for orthogonal multipulse over a real channel, we need to upconvert them, forming the real-valued passband equivalent pulses

h̃n( t ) = √2·Re{e j2πfct hn( t )} (S.312)

for some carrier fc ≥ 1 ⁄ (2T). The upconversion doubles the bandwidth to N ⁄ T Hz, making the spectral efficiency of orthogonal multipulse

ν = log₂N ⁄ N = log₂3 ⁄ 3 ≈ 0.528 . (S.313)

With combined PAM and multipulse, instead of transmitting log2(3) bits per symbol wecan transmit log2(M) on each of N simultaneously transmitted pulses, so

ν = N·log₂M ⁄ N = log₂M . (S.314)

Fig. S-3. The Fourier transform for the orthonormal pulses in Problem 6-7. The value ofHn( f ) in the dashed regions of the f axis is determined by the requirement for conjugatesymmetry, while the value in the solid regions is determined by H( f ).


This is equivalent to passband PAM with the same alphabet size, and since symbols canbe complex, is equivalent to baseband PAM and to PAM plus multipulse. The pulses arenot practical because of the gradual rolloff of the sinc function, or equivalently, becauseof the abrupt transitions in the frequency domain.

Problem 6-9. The conditions of the problem are satisfied if

|| Y – Sj ||2 ≤ || Y – Si ||2 (S.315)

and substituting for Y, this becomes

||E + Si – Sj ||2 ≤ ||E||2 . (S.316)

Multiplying out the left side, we get

||E||2 + di,j2 – 2Re⟨E, Sj – Si ⟩ ≤ ||E||2 , (S.317)

and dividing by di,j,

Re⟨E, (Sj − Si) ⁄ di,j⟩ ≥ di,j ⁄ 2 . (S.318)

This is the result promised, since (Sj – Si) ⁄ di,j is a unit vector in the direction of (Sj – Si).

Fig. S-4. The Fourier transforms of three orthogonal multipulse pulses. Complex values areplotted as magnitudes but labeled with the actual complex value.


Problem 6-10.

(a) If the data symbols have magnitude unity, then the minimum distance between them is dmin = √2.

(b) dmin is σh times the minimum distance between data symbols, and hence √2·σh:

dmin² = σh² + σh² ⇒ dmin = √2·σh . (S.319)

(c) The distance is

d² = |ε1|² + |ε2|² + 2αRe{ε1ε2*} . (S.320)

Clearly we want to make the third term as negative as possible. Since the ε can have a phase that is any multiple of π ⁄ 4, the third term is minimized if ε1 and ε2 are antipodal; that is, they have opposite phase. Also, the ε can have magnitude either √2 or 2. Thus, there are three cases to consider:

dmin² = 4 + 4 − 2α·2·2 (S.321)

dmin² = 4 + 2 − 2α·√2·2 (S.322)

dmin² = 2 + 2 − 2α·√2·√2 . (S.323)

It is easy to verify that the third case is always the smallest for all 0 ≤ α < 1, and hence

dmin = 2√(1 − α) . (S.324)

Note that the minimum distance goes to zero as α → 1.
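The claim that the third case dominates for all 0 ≤ α < 1 is easy to confirm by tabulating (S.321)–(S.323) on a grid of α values:

```python
import math

# Squared distances for the three cases of (S.321)-(S.323) as functions of alpha.
def cases(alpha):
    return (4 + 4 - 2 * alpha * 2 * 2,                        # both magnitudes 2
            4 + 2 - 2 * alpha * math.sqrt(2) * 2,             # magnitudes 2 and sqrt(2)
            2 + 2 - 2 * alpha * math.sqrt(2) * math.sqrt(2))  # both magnitudes sqrt(2)

for a in [i / 100 for i in range(100)]:          # alpha in [0, 1)
    c1, c2, c3 = cases(a)
    assert c3 <= c1 and c3 <= c2                 # third case is always smallest
    # and the resulting minimum distance is 2*sqrt(1 - alpha):
    assert abs(math.sqrt(c3) - 2 * math.sqrt(1 - a)) < 1e-12
```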

Problem 6-11. Let us bound the energy of s( t ) in the interval [KT, ∞):

∫_{KT}^{∞} s²( t )dt = ∫_{KT}^{∞} [∑_{k=0}^{K−1} ∑_{n=1}^{N} ak,n hn(t − kT )]² dt

≤ ∑_{k=0}^{K−1} ∑_{n=1}^{N} ak,n² ∫_{KT}^{∞} hn²(t − kT )dt

≤ ∑_{k=0}^{K−1} ∑_{n=1}^{N} ak,n² · α ⁄ ((K − k)T)² . (S.325)

If the data symbols are drawn from a finite constellation, we can assume that ∑_{n=1}^{N} ak,n² is bounded, say by C1. The remaining sum

∑_{k=0}^{K−1} 1 ⁄ ((K − k)T)² (S.326)

is also bounded, since the series is convergent, say by C2. Hence, the energy in [KT, ∞) is bounded by a constant αC1C2, independent of K, and the fraction of the energy lying in [KT, ∞) goes to zero, since the total energy of s( t ) increases with K.


Problem 6-12. From (5.52) and (5.53), the signal-to-noise ratio is defined by:

SNR = p(0)²Ea ⁄ (Ef N0) (S.327)

where Ea = E[|ak|²] is the energy in the symbols, Ef is the energy in the receive filter f( t ), and p(0) is the overall pulse shape (after the receive filter) evaluated at time zero. The transmit pulse shape g( t ) is specified as the raised-cosine pulse shape given in (5.8) with α = 1 (100% excess bandwidth). We have already established (see the solution to Problem 5-2) that the energy of this pulse is Eg = (1 − α ⁄ 4)T = 3T ⁄ 4. Therefore, the transmit signal power (average energy per unit time) will be:

P = EaEg ⁄ T = (3 ⁄ 4)Ea . (S.328)

Therefore, to ensure that the transmit power is limited to P = 1, we must have:

Ea = 4 ⁄ 3 . (S.329)

The receive filter is specified in the frequency domain as F( f ) = 1 for |f|<1 ⁄ T and F( f ) = 0for |f|> 1 ⁄ T , which has two consequences. First, it is easy to see that its energy is:

Ef = ∫_{−∞}^{∞} |F( f )|²df = 2 ⁄ T . (S.330)

Second, the receive filter will not affect the pulse shape, so that the overall pulse shape is justthe transmit pulse shape:

p( t ) = g( t ) ⇒ p( 0 ) = 1. (S.331)

The SNR is thus:

SNR = p²(0)Ea ⁄ (Ef N0) = 2T ⁄ (3N0) . (S.332)

Therefore, from (5.113) with M = 16:

Pe = 4(1 − 1 ⁄ √M)·Q(√(3SNR ⁄ (M − 1))) − 4(1 − 1 ⁄ √M)²·Q²(√(3SNR ⁄ (M − 1)))

= 3Q(√(SNR ⁄ 5)) − 2.25Q²(√(SNR ⁄ 5))

= 3Q(√(2T ⁄ (15N0))) − 2.25Q²(√(2T ⁄ (15N0))) . (S.333)


Problem 6-13.

(a) A suitable coder works as follows. Take one bit; if it is a one, transmit one of the inner points depending on the next bit. If it is a zero, transmit one of the outer points depending on the next two bits. An example bit mapping (which is a Gray code) is given in the figure to the right.

(b) Let C denote a correct decision and E a signal error. Then

Pr[C|inner] = (1 – 2Q(b ⁄ 2σ))(1 – Q(b ⁄ 2σ)) (S.334)

which implies that

Pr[symbol error|inner] = 3Q(b ⁄ 2σ) – 2Q2(b ⁄ 2σ) . (S.335)

Also

Pr[C|outer] = (1 – Q(b ⁄ 2σ))2 (S.336)

which implies that

Pr[symbol error|outer] = 2Q(b ⁄ 2σ) – Q2(b ⁄ 2σ) . (S.337)

Combining,

Pr[symbol error] = (1 ⁄ 2)·[3Q(b ⁄ 2σ) − 2Q²(b ⁄ 2σ)] + (1 ⁄ 2)·[2Q(b ⁄ 2σ) − Q²(b ⁄ 2σ)]

= (5 ⁄ 2)·Q(b ⁄ 2σ) − (3 ⁄ 2)·Q²(b ⁄ 2σ) . (S.338)

(c) The power is

E[a²] = (1 ⁄ 2)·(b² ⁄ 4) + (1 ⁄ 2)·(b² + b² ⁄ 4) = (3 ⁄ 4)b² . (S.339)

(d) The SNR is:

SNR = E[a²] ⁄ (2σ²) = 3b² ⁄ (8σ²) = (3 ⁄ 2)·(b ⁄ 2σ)² , (S.340)

which implies that:

b ⁄ 2σ = √((2 ⁄ 3)SNR) . (S.341)


Therefore, the answer from part (b) can be rewritten as:

Pr[symbol error] = (5 ⁄ 2)·Q(√(2SNR ⁄ 3)) − (3 ⁄ 2)·Q²(√(2SNR ⁄ 3)) . (S.342)

If SNR = 10 dB then SNR = 10 on a linear scale, so that

Pr[symbol error] = (5 ⁄ 2)·Q(√(20 ⁄ 3)) − (3 ⁄ 2)·Q²(√(20 ⁄ 3)) = 0.0122 . (S.343)

(e) The approximations are, respectively,

Pr[symbol error] ≈ (5 ⁄ 2)·Q(√(2SNR ⁄ 3)) , Pr[symbol error] ≈ Q(√(2SNR ⁄ 3)) . (S.344)

At SNR = 10 dB, these approximations are 0.0123 and 0.0049, respectively.
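The numbers in (S.343) and (S.344) can be reproduced with the standard library, building Q from erfc:

```python
import math

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

SNR = 10.0                                   # 10 dB on a linear scale
arg = math.sqrt(2 * SNR / 3)
exact = 2.5 * Q(arg) - 1.5 * Q(arg) ** 2     # (S.342): about 0.0122
approx1 = 2.5 * Q(arg)                       # first approximation, about 0.0123
approx2 = Q(arg)                             # single-Q approximation, about 0.0049
```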

Problem 6-14.

(a) The decision regions are bounded by the planes formed by pairs of axes.

(b) The probability of error is exactly:

Pr[symbol error] = 1 − [1 − Q(d ⁄ 2σ)]^M . (S.345)

Problem 6-15.

(a) The trellis is shown in Fig. S-5.

(b) From (6.115), note that the MSK signal within any pulse interval is

gi( t ) = c·sin(2πfct + bπt ⁄ 2T) , (S.346)


Fig. S-5. A finite planar trellis for the MSK signal.


where b = ±1 and c = ±1. Upward-tending branches in Fig. 6-26 correspond to b = +1 and downward-tending branches correspond to b = −1. The value of c depends on the starting position of the branch in Fig. 6-26. The squared distance between any two branches going in opposite directions in Fig. 6-26 is

d² = ∫_{0}^{T} [√(2E ⁄ T)·sin(2πfct + πt ⁄ 2T) − √(2E ⁄ T)·sin(2πfct − πt ⁄ 2T)]²dt = 2E (S.347)

where E is the energy in one pulse. By contrast, any two distinct (modulo 2π) parallel branches have squared distance 4E. By inspection, therefore, being careful to associate branches in Fig. 6-26 with branches in Fig. S-5, we see that the minimum-distance error event is the one of length two, with squared distance 4E.

(c) The squared distance of 4E is 3 dB better than the squared distance 2E in Fig. 6-25.

Problem 6-16. Taking a second pulse of the form of (6.100),

g( t ) = ∑_{m=0}^{N−1} ym·hc(t − mTc ) , (S.348)

then the inner product of h( t ) and g( t ) is

∫_{−∞}^{∞} h( t )g*( t )dt = Ec·∑_{m=0}^{N−1} xm ym* . (S.349)

Considering {xm, 0 ≤ m ≤ N − 1} and {ym, 0 ≤ m ≤ N − 1} as vectors x and y in N-dimensional Euclidean space, the pulses h( t ) and g( t ) will be orthogonal when the Euclidean vectors x and y are orthogonal. The number of pulses specified in this fashion that can be mutually orthogonal is N = 2BT, the dimensionality of the Euclidean space.
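The orthogonality condition can be illustrated with concrete chip sequences; the ±1 vectors below are Walsh–Hadamard rows chosen for illustration (x, y, z, and the chip energy Ec are assumptions, not from the problem).

```python
# Two length-4 chip sequences (rows of a Walsh-Hadamard matrix) are orthogonal
# as Euclidean vectors, so the corresponding pulses h(t) and g(t) are orthogonal.
x = [1, 1, -1, -1]
y = [1, -1, 1, -1]
Ec = 2.0                       # chip energy (illustrative)

inner = Ec * sum(xm * ym for xm, ym in zip(x, y))
assert inner == 0              # orthogonal pulses

# A non-orthogonal pair gives a nonzero inner product:
z = [1, 1, 1, -1]
assert Ec * sum(xm * zm for xm, zm in zip(x, z)) != 0
```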

Problem 6-17. For every zero in H( z ) at location z = c, if H( z ) is to be allpass, we musthave a pole at location z = 1 ⁄ c*, as shown in section 2.5.4. Since FIR filters can only havepoles at z = 0 or z = ∞, an allpass FIR can only have zeros at z = ∞ or z = 0. A filter withL ≥ 0 poles at z = 0 and L zeros at z = ∞ has impulse response δk – L. If L < 0, then the polesare at z = ∞ and the zeros are at z = 0.

Problem 6-18. The received signal is r( t ) = ±h( t ) + x( t ), and the matched filter output is arandom variable

U = ∫_{0}^{T} r( t )h( t )dt = ∑_{i=1}^{N} (si² + sixi) . (S.350)

(a) The signal component of U is ∑_{i=1}^{N} si² and the noise component is ∑_{i=1}^{N} sixi.

(b) The mean value of the signal is Eh. The variance of the noise conditioned on knowledge of the signal is, since the xi are independent,

∑_{i=1}^{N} si²·Var[ xi ] . (S.351)

The variance of the noise is the expected value of this conditional variance, which is


∑_{i=1}^{N} E[si²]·Var[ xi ] = (Eh ⁄ N)·∑_{i=1}^{N} Var[ xi ] = EhEJ ⁄ N . (S.352)

The SNR is therefore

SNR = Eh² ⁄ (EhEJ ⁄ N) = N·Eh ⁄ EJ . (S.353)

The processing gain is therefore N, independent of how the jammer distributes its energy.

Problem 6-19.

(a) The group velocity is approximately 3 × 106 meters/sec, so τ = 6.67 × 10–7 secondsfor a one meter link. At 10 Gb ⁄ s, the number of bits in transit is thus the product of thebit rate and τ, or 6670 bits.

(b) The transmit time is:

(8 × 106 bits) ⁄ (1010 bits per second) = 8 × 10–4 seconds. (S.354)

An L-meter link has transit time L × 6.67 × 10⁻⁷ seconds. Setting these equal, we get L = 1.2 km.

Problem 6-20. For the passband channel of Fig. 6-28-b the complex-baseband channel hasbandwidth W ⁄ 2, and hence the dimensionality of the signal subspace in time T is WT. Thereceived vector C is now complex-valued, and the noise vector is complex-valued where thecomponents are independent, have independent real and imaginary parts, each with varianceσ2 = N0 ⁄ 2 (by circular symmetry of the noise). We can think of C as an N = 2WTdimensional real-valued vector with independent noise components. The signal powerconstraint now applies directly to the resulting 2WT-dimensional real-valued signal vector

∑_{n} E[|sn|²] = ∑_{n} E[(Re{sn})² + (Im{sn})²] = PST . (S.355)

This establishes the equivalence of the baseband and passband cases.

Problem 6-21. From (6.144),

W1log2(1 + SNR1) = W2log2(1+SNR2) , (S.356)

or at high SNR,

SNR1^W1 ≈ SNR2^W2 . (S.357)

Taking the logarithm, we can express the SNRs in dB as

10·log10 SNR1 ≈ (W2 ⁄ W1)·10·log10 SNR2 . (S.358)


Thus, for high SNR, to get the same channel capacity in half the bandwidth, the SNR (in dB)must be doubled, meaning that the SNR (not in dB) must be squared.
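The SNR-squaring rule can be checked against (6.144) directly; the bandwidth and SNR values below are illustrative choices, not from the problem.

```python
import math

def capacity(W, SNR):
    # Shannon capacity of an AWGN channel, in bits per second
    return W * math.log2(1 + SNR)

W1, SNR1 = 1.0, 100.0          # full bandwidth, 20 dB
W2, SNR2 = 0.5, SNR1 ** 2      # half bandwidth, SNR squared (40 dB)

C1 = capacity(W1, SNR1)
C2 = capacity(W2, SNR2)
# At high SNR the two capacities nearly coincide.
assert abs(C1 - C2) / C1 < 0.01
```

The small residual gap comes from the "+1" inside the logarithm, which is negligible at high SNR.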

Problem 6-22. We have M = 2, and assuming the signal constellation is ±1, Ea = 1 and amin = 2, and γA = 1. The spectral efficiency is ν = (log₂2) ⁄ WT = 1 ⁄ WT, and hence

γSS = (2 ⁄ 3)·WT·(2^(1 ⁄ WT) − 1) . (S.359)

Asymptotically, as WT → ∞, γSS → (2 ⁄ 3)loge2 = 0.46. Thus, the SNR gap to capacity is asymptotically increased by −10·log10(0.46) = 3.35 dB.


Chapter 7

Problem 7-1.

(a) The ML detector ignores the a priori distribution of the input. Since p < 1 ⁄ 2, the ML detector selects â = 0 if y = 0 and â = 1 if y = 1. That is, it chooses â = y.

(b) An error occurs every time the channel flips a bit, which occurs with probability p.

(c) The posterior probabilities are proportional to

pY|A(0|a)pA(a) = (1 − p)q for a = 0, and p(1 − q) for a = 1;

pY|A(1|a)pA(a) = pq for a = 0, and (1 − p)(1 − q) for a = 1. (S.360)

Using the numbers supplied we see that the MAP detector always selects â = 0. An error occurs whenever a = 1 is transmitted, which occurs with probability 1 − q = 0.1. This is lower than the probability of error in part (b), which is p = 0.2.

(d) The MAP detector will maximize the probability pY|A(y|a)pA(a), which is given in the following table:

pY|A(y|a)pA(a)    a = 0       a = 1
y = 0             (1 − p)q    p(1 − q)
y = 1             pq          (1 − p)(1 − q)

If we observe y = 0, then we will choose â = 0 if (1 − p)q > p(1 − q), or q > p. If we observe y = 1 we will choose â = 1 if (1 − p)(1 − q) > pq, or q < 1 − p. Hence we must divide the q axis into three regions: for 0 ≤ q < p the MAP detector always chooses â = 1; for p ≤ q ≤ 1 − p it chooses â = y; and for 1 − p < q ≤ 1 it always chooses â = 0.

For very small q, the prior probability of a = 0 is small, so the MAP detector always chooses â = 1. Similarly, for large q it always chooses â = 0. In the mid-range of q, the MAP detector makes the same decision as the ML detector.

(e) The MAP detector will always select â = 0 if q > 1 − p.
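The decision regions above can be reproduced by brute force; the sketch below evaluates the two posterior metrics for each observation, using the numbers p = 0.2 and q = 0.9 from parts (b)–(c).

```python
# BSC with crossover p and prior q = Pr[a = 0]; MAP decision for observation y.
def map_decision(y, p, q):
    post0 = (1 - p) * q if y == 0 else p * q
    post1 = p * (1 - q) if y == 0 else (1 - p) * (1 - q)
    return 0 if post0 > post1 else 1

p, q = 0.2, 0.9
decisions = {y: map_decision(y, p, q) for y in (0, 1)}
# With these numbers the MAP detector always decides 0,
# so it errs exactly when a = 1 was sent.
Pe_map = 1 - q
Pe_ml = p          # the ML detector errs whenever the channel flips a bit
```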


Problem 7-2. The MAP detector maximizes

pY|s(y|s)ps(s) = (1 − p)^(M − W(s, y))·p^(W(s, y))·ps(s) . (S.361)

Taking the logarithm of this expression and discarding the constant term, the MAP detectorequivalently minimizes

W(s, y)·log((1 − p) ⁄ p) − log ps(s) . (S.362)

Problem 7-3. The values of fY|X(y|x)pX(x) are shown below for each possible x as a function of the observation y.

The decision regions are determined by three thresholds, the middle of which is zero. The othertwo can be found by finding the observation at which the receiver is indifferent between adecision +3 and a decision +1. That is the point y satisfying

|y –3|2 – 2σ2ln(0.1) = |y – 1|2 – 2σ2ln(0.4) . (S.363)

Solving this yields

y = 2 + (1 ⁄ 2)σ²ln(4) . (S.364)

The thresholds are therefore at ±(2 + (1 ⁄ 2)σ²ln(4)). If σ² = 0.25 the thresholds are at ±2.17, so an observation of 2.1 yields a decision x̂ = 1.
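The threshold value can be confirmed by locating the indifference point of (S.363) numerically; the bisection below uses σ² = 0.25 and the priors 0.1 and 0.4 from the problem.

```python
import math

sigma2 = 0.25
# Difference of the two (negative-log) posterior metrics for decisions +3 and +1.
def g(y):
    return ((y - 3) ** 2 - 2 * sigma2 * math.log(0.1)) - \
           ((y - 1) ** 2 - 2 * sigma2 * math.log(0.4))

# Bisection for the indifference point between deciding +1 and +3.
lo, hi = 1.0, 3.0          # g(1) > 0 and g(3) < 0, and g is decreasing
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
threshold = 0.5 * (lo + hi)    # about 2.17 for sigma^2 = 0.25
```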

Problem 7-4. Using techniques similar to those in Problem 7-3, the final answer is a threshold at

(x1 + x2) ⁄ 2 + (σ² ⁄ (x1 − x2))·ln(pX(x2) ⁄ pX(x1)) . (S.365)

Problem 7-5.

(a)

s1 = [1,0,0], s2 = [0,1,0], s3 = [0,0,1] . (S.366)

(b)

s1 = [1,0,0], s2 = [0,1,0], s3 = [0,0,1] . (S.367)

(c)

Pr[error|si transmitted] ≤ 3Q(√2 ⁄ 2σ) (S.368)


Pr[error|si transmitted] ≤ 3Q(2, p) . (S.369)

Note that in both cases Pr[error|si transmitted] is independent of i, so

Pr[error] ≤ 3Q(√2 ⁄ 2σ) (S.370)

Pr[error] ≤ 3Q(2, p) . (S.371)

Problem 7-6. The likelihood to be maximized is

L(σ²) = fX1,…,XN|σ²(x1,…, xN|σ²)

= (1 ⁄ √(2πσ²))e^(−x1² ⁄ 2σ²) · (1 ⁄ √(2πσ²))e^(−x2² ⁄ 2σ²) ⋯ (1 ⁄ √(2πσ²))e^(−xN² ⁄ 2σ²) , (S.372)

from independence. This can be rewritten as:

L(σ²) = (1 ⁄ (2πσ²))^(N ⁄ 2)·exp(−(1 ⁄ 2σ²)∑_{i=1}^{N} xi²) .

Taking the derivative with respect to σ² we get

(∂ ⁄ ∂σ²)L(σ²) = L(σ²)·(1 ⁄ 2σ⁴)∑_{i=1}^{N} xi² − L(σ²)·N ⁄ (2σ²)

= L(σ²)·[(1 ⁄ 2σ⁴)∑_{i=1}^{N} xi² − N ⁄ (2σ²)] . (S.373)

This is zero only when the term in square brackets is zero, or when:

σ² = (1 ⁄ N)∑_{i=1}^{N} xi² . (S.374)

Thus, the ML estimator for the variance is the average of the squares of the observations.
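The conclusion can be checked with a grid search over candidate variances; the sample values below are illustrative, not from the problem.

```python
import math

# Hand-picked zero-mean sample values (illustrative).
x = [0.3, -1.2, 0.8, 2.1, -0.5, 1.4, -0.9, 0.2]
N = len(x)
mean_sq = sum(v * v for v in x) / N

def loglik(s2):
    # Log-likelihood of the variance s2 for independent zero-mean Gaussian samples.
    return -0.5 * N * math.log(2 * math.pi * s2) - sum(v * v for v in x) / (2 * s2)

# A fine grid search over candidate variances peaks at the average of the squares.
grid = [0.01 * k for k in range(1, 501)]
best = max(grid, key=loglik)
assert abs(best - mean_sq) <= 0.01    # within one grid step of (S.374)
```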

Problem 7-7. By symmetry, we can condition the probability of error on any of the signals transmitted, and the result will be the same as the unconditioned error probability. Hence assume that (−1, −1) is transmitted, in which case the error probability is

Pe = Pr[N1 > 1] + Pr[N2 > 1] − Pr[N1 > 1, N2 > 1] = 2Q(1 ⁄ σ) − Q²(1 ⁄ σ) . (S.375)

Problem 7-8.

(a) By symmetry, it is clear that the error probability is the same whether (000000) or (111111) is transmitted, and similarly for (111000) or (000111). If (000000) is transmitted, a detection error occurs whenever two out of the first three bits are in error, or two out of the last three bits, or both. If there are no channel errors, no detection error is made, and this occurs with probability (1 − p)⁶. The same is true if there is exactly one channel error, which occurs with probability 6p(1 − p)⁵. If there are two channel errors, and one is in the first three bits and the other is in the second three bits, no error is


made, and this occurs with probability (3p(1 – p)2)2 = 9p2(1 – p)4. However, if botherrors are in the first three or second three bits, there is an error. If there are threechannel errors, there must always be two errors in either the first three or the secondthree bits, and there is therefore always a detection error. Similarly, four or morechannel errors will always result in a detection error. Thus,

Pe = 1 – [(1 – p)6 + 6p(1 – p)5 + 9p2(1 – p)4 ] . (S.376)

It is easy to verify that the error probability is the same when (111000) is transmitted.

(b) Since the minimum Hamming distance is three, the approximate error probability is

Pe ≈ 2[ 3p2(1 – p) + p3 ] . (S.377)

When p = 0.1, the exact error probability evaluates to Pe = 0.05522 and the approximation gives 0.056.
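A quick numerical check of the exact formula (S.376) against the approximation (the function names are ours):

```python
def exact_pe(p):
    """Exact block-detection error probability from (S.376)."""
    return 1.0 - ((1 - p)**6 + 6 * p * (1 - p)**5 + 9 * p**2 * (1 - p)**4)

def approx_pe(p):
    """Minimum-distance (d_min = 3) approximation from (S.377)."""
    return 2 * (3 * p**2 * (1 - p) + p**3)

print(round(exact_pe(0.1), 5), round(approx_pe(0.1), 3))   # 0.05522 0.056
```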

Problem 7-9. In AWGN, the ML detector is the minimum-distance detector. We have already solved the one-shot minimum-distance detection problem for continuous-time PAM; this problem (and the next one too) requires a generalization to discrete time. A general solution, for any discrete-time impulse response and any alphabet, is easily found by following the derivation of Section 5.3.1. In particular, the one-shot minimum-distance detector minimizes the following cost function:

J = ∑k (rk – ahk)2 = Er – 2ay + a2Eh , (S.378)

where we have introduced:

Er = ∑k rk2 , Eh = ∑k hk2 , y = ∑k rkhk . (S.379)

Completing the square yields:

J = Eh(y ⁄ Eh – a)2 – y2 ⁄ Eh + Er , (S.380)

which is minimized if and only if the following equivalent cost function is minimized:

J´ = (y ⁄ Eh – a)2 . (S.381)

The solution is thus to compute the correlation y = ∑k rkhk , which can be done efficiently using a discrete-time matched filter, normalize by the energy in the impulse response, and then quantize the result to the nearest symbol in the alphabet.
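The three steps fit in a few lines. A sketch (the function name and the sample values below are ours and hypothetical):

```python
def one_shot_detect(r, h, alphabet):
    """One-shot ML detection in AWGN: correlate the observation with the
    impulse response (a discrete-time matched filter), normalize by the
    pulse energy, and quantize to the nearest symbol in the alphabet."""
    y = sum(rk * hk for rk, hk in zip(r, h))    # correlation y = sum_k r_k h_k
    eh = sum(hk * hk for hk in h)               # pulse energy E_h
    return min(alphabet, key=lambda a: (y / eh - a)**2)

# Pulse of part (a): h_k = 0.5*delta_k + delta_{k-1}, so E_h = 1.25.
h = [0.5, 1.0]
print(one_shot_detect([0.45, 1.10], h, alphabet=(0, 1)))   # 1
print(one_shot_detect([0.10, -0.05], h, alphabet=(0, 1)))  # 0
```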


(a) Here, the energy in the impulse response hk = 0.5δk + δk – 1 is Eh = 1.25. Therefore, the ML detector with A = {0, 1} is as follows: [block diagram: rk → matched filter 0.5δk + δk – 1 → sample at k = 1 → gain Eh–1 = 1 ⁄ 1.25 → slicer (select â = 1 if input > 1 ⁄ 2).]

(b) The distance between points in the alphabet A = {0, 1} is d = 1. Also, after the normalization, the noise variance is σ2 ⁄ Eh. Therefore, the probability of error is:

Pe = Q(d ⁄ (2√(σ2 ⁄ Eh))) = Q(√(5 ⁄ (16σ2))) . (S.382)

If the two symbols in A = {0, 1} are equally likely, the ML receiver is the MAP receiver, and this is the minimum probability of error.

Problem 7-10.

(a) The only difference from Problem 7-9 is that the matched filter has causal impulse response

h2 – k = 0.1δk – 0.5δk – 1 + δk – 2 (S.383)

and the output is sampled at k = 2. The energy is thus Eh = 1.26. The block diagram is the same as in Problem 7-9, with the matched filter 0.1δk – 0.5δk – 1 + δk – 2 sampled at k = 2 and gain Eh–1 = 1 ⁄ 1.26.

(b) The probability of error is

Pe = Q(√1.26 ⁄ (2σ)) . (S.384)

(c) For the 3-ary alphabet, the resulting receiver uses the same matched filter and gain Eh–1 = 1 ⁄ 1.26, followed by a slicer with thresholds at ±1 ⁄ 2 that selects the nearest symbol in {–1, 0, 1}.


Problem 7-11.

(a) We have

s̃1( t ) = 0 , s̃2( t ) = h( t ) (S.385)

|| s̃1 ||2 = 0 , || s̃2 ||2 = ρh(0) . (S.386)

Defining

K1 = 0 , K2 = | ∫ Y( t )e–j2πfcth*( t )dt | , (S.387)

the ML detector chooses signal 1 if

I0(K1 ⁄ σ2) = I0(0) = 1 > e–ρh(0) ⁄ (2σ2)I0(K2 ⁄ σ2) . (S.388)

Taking the logarithm of both sides will not change the inequality, so we get equivalently

logI0(K2 ⁄ σ2) < ρh(0) ⁄ (2σ2) . (S.389)

In view of the monotonicity of logI0( ⋅ ), the ML detector compares K2 to some threshold v, where that threshold depends on the SNR. The receiver is the same as a passband PAM coherent receiver, consisting of a demodulator, matched filter, and sampler. The difference is that instead of comparing the complex-valued slicer input to the transmitted data symbols, we only evaluate the magnitude of this input (distance from the origin) and compare it in effect to the magnitudes of the data symbols.

(b) The data symbols must all have distinct radii in the signal constellation. Thus, for example, PSK would not work, but the ASK of this problem will work.

(c) When signal one is transmitted,

K2 = | ⟨ Z, s̃2 ⟩ | , (S.390)

where of course s̃2 = h, the PAM pulse shape. The error probability in this case is

Pr[error | signal 1 transmitted] = Pr[K2 > v] . (S.391)

Similarly when signal two is transmitted,

K2 = | e jθ|| s̃2 ||2 + ⟨ Z, s̃2 ⟩ | (S.392)

and the probability of error is

Pr[error | signal 2 transmitted] = Pr[K2 < v] . (S.393)


To get the overall probability of error we sum these two probabilities weighted by the prior probabilities of the two signals.

Problem 7-12.

(a) With respect to the signal, an isolated pulse before sampling has Fourier transform H( f )F*( f ), and hence after sampling the discrete-time Fourier transform is

(1 ⁄ T)∑m H(f + m ⁄ T )F*(f + m ⁄ T ) . (S.394)

Note that the impulse response of the equivalent discrete-time channel is equivalent to the sampled isolated pulse response. Similarly, for the noise, after downconversion the noise has power spectrum 2SN( f + fc ), and at the output of the matched filter, before sampling, 2SN( f + fc )|F( f )|2. After sampling, the spectrum is

(2 ⁄ T)∑m SN(f + fc + m ⁄ T )|F(f + m ⁄ T )|2 . (S.395)

(b) The discrete-time isolated pulse has Fourier transform

(1 ⁄ T)∑m |H(f + m ⁄ T )|2 ⁄ (2SN(f + fc + m ⁄ T )) , (S.396)

and the discrete-time noise has a power spectrum which is the same formula multiplied by two.

(c) When SN( f ) = N0 ⁄ 2, the isolated pulse response is Sh(e jθ ) ⁄ N0 and the noise power spectrum is 2Sh(e jθ ) ⁄ N0. If we scale the signal size by N0, the noise spectrum is scaled by N02, and we get the same answer as in the text.

Problem 7-13. Let us begin by assuming that the received pulse is of the form

h( t ) = ∑m hmg(t – mT) (S.397)

for some pulse g( t ) for which g( t ) and g(t – mT ) are mutually orthogonal for m ≠ 0. In other words, assume that g( t ) convolved with its match is a Nyquist pulse. (For example, the zero-excess-bandwidth sinc function works.) Then, the sampled autocorrelation becomes:

ρh( k ) = ∫ ∑m ∑l hmg(t – mT )hl*g*(t – lT – kT ) dt (S.398)

= Eg ∑m hmhm – k* , (S.399)

where Eg = ∫ |g( t )|2dt . Taking the Z-transform, we get

Sh( z ) = EgH( z )H*(1 ⁄ z*) . (S.400)


Now this looks a lot like the factorization of the folded spectrum,

Sh( z ) = γ2M( z )M*(1 ⁄ z*) , (S.401)

where γ2 is the geometric mean of Sh( z ), and where M( z ) is monic and minimum phase. Comparing (S.400) and (S.401), we conclude that it suffices to choose:

H( z ) = M( z ) (S.402)

and furthermore to scale the fundamental pulse so that its energy is equal to the geometric mean of the folded spectrum:

Eg = γ2 . (S.403)

Problem 7-14. The conditional probability density is

p(x1…xN|s1…sN) = ∏k=1..N ((αsk)xk ⁄ xk!) e–αsk (S.404)

and hence the log-likelihood function is

– logp(x1, … xN|s1, …sN ) = ∑k=1..N [ αsk – xkloge(αsk) + loge(xk!) ] . (S.405)

The last term is independent of the signal, as is the xklogeα term, so the simplified branch metric is:

branch metric = αsk – xklogesk . (S.406)
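As a quick sanity check (the function names and numeric values are ours), the simplified metric ranks candidate signal levels exactly as the full negative log-likelihood does, since they differ only by signal-independent terms:

```python
import math

def full_metric(x_k, s_k, alpha):
    """Full negative log-likelihood term from (S.405)."""
    return alpha * s_k - x_k * math.log(alpha * s_k) + math.lgamma(x_k + 1)

def branch_metric(x_k, s_k, alpha):
    """Simplified branch metric from (S.406); drops signal-independent terms."""
    return alpha * s_k - x_k * math.log(s_k)

# Both metrics must pick the same most-likely signal level:
x_k, alpha, levels = 2, 1.5, (1.0, 2.0, 3.0)
best_full = min(levels, key=lambda s: full_metric(x_k, s, alpha))
best_simple = min(levels, key=lambda s: branch_metric(x_k, s, alpha))
print(best_full == best_simple, best_simple)   # True 1.0
```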

Problem 7-15. The log-likelihood function is given by the following table, where the last column specifies the branch metric:

x    y    Pr[y|x]    –logPr[y|x]
0    0    1 – p      –log(1 – p)
0    1    p          –log(p)
0    2    0          ∞
1    0    0          ∞
1    1    p          –log(p)
1    2    1 – p      –log(1 – p)


Problem 7-16.

(a) The ML detector can operate independently on each observation. It will perform a threshold test with the threshold set at 0.5. The decision is x̂k = 1, 1, 1, 0.

(b) The outputs associated with each transition are shown in Fig. 5-30, and repeated here:

We can find the transition weights by just subtracting those outputs from the observations and squaring. The result is shown in the following figure.

Performing the Viterbi algorithm, the surviving paths and their path metrics after each observation are shown in the following figure.

The decision is x̂k = 0, 1, 1, 0, which is different from the decision in part (a). The knowledge of the ISI is useful.

[Figures: the trellis transitions, labeled (input, output), are (0, 0.0), (0, 0.5), (1, 1.0), and (1, 1.5); the transition weights are the squared differences between each observation and these outputs; the surviving paths after each observation are shown with their accumulated path metrics, the final metrics being 0.37 and 0.32.]
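The procedure can be sketched in a few lines. In the code below (our own), the channel yk = ak + 0.5ak – 1 matches the transition outputs above, but the observations are hypothetical, so the numbers do not reproduce the figures; the point is that the Viterbi decision can differ from the symbol-by-symbol threshold test:

```python
def viterbi(r, taps=(1.0, 0.5), start=0):
    """Viterbi detection for y_k = a_k + 0.5*a_{k-1} + noise, a_k in {0, 1},
    using squared-error branch metrics.  State = previous symbol."""
    states = (0, 1)
    metric = {s: (0.0 if s == start else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for rk in r:
        new_metric, new_paths = {}, {}
        for q in states:                 # q = current symbol a_k
            best = min(states, key=lambda p: metric[p] +
                       (rk - taps[0] * q - taps[1] * p) ** 2)
            new_metric[q] = metric[best] + (rk - taps[0] * q - taps[1] * best) ** 2
            new_paths[q] = paths[best] + [q]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=metric.get)]

r = [0.7, 0.6, 0.9, 0.1]                       # hypothetical observations
print([1 if rk > 0.5 else 0 for rk in r])      # symbol-by-symbol: [1, 1, 1, 0]
print(viterbi(r))                              # Viterbi:          [1, 0, 1, 0]
```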


Problem 7-17.

(a) The shift register model and state transition diagram are shown in the following figure:

(b) One stage of the trellis is shown below:

[Figures: (a) shift-register model with input Ak, two delays z–1, tap weights 1, –0.5, 0.1 forming sk, plus noise nk giving zk; the state is Ψk = [Ak – 1, Ak – 2 ]. (b) State-transition diagram and one trellis stage over the four states [0,0], [0,1], [1,0], [1,1], with branches labeled (input, output): (0, 0.0), (0, 0.1), (0, –0.4), (0, –0.5), (1, 1.0), (1, 1.1), (1, 0.6), (1, 0.5).]


(c) The complete trellis is the one-stage trellis of (b) repeated for each observation, with each branch labeled by its squared-error metric. [Figure not reproduced.]

(d) [Figure not reproduced: the Viterbi algorithm applied to the trellis of (c), showing the surviving paths and accumulated path metrics after each stage.]

(e) The minimum-distance error event is a single symbol error, whose squared distance is the energy of the channel impulse response:

d2 = d12 + d22 + d32 = 1.02 + 0.52 + 0.12 = 1.26 ,

where d1 = 1.0, d2 = 0.5, d3 = 0.1 are the channel coefficients.


Problem 7-18.

(a) The state is (εk –1,εk – 2), and the branch metric is

|εk + m1εk – 1 + m2εk – 2|2 . (S.407)

One stage of the trellis, not labeled with the branch metrics, is shown below:

[Figure: one trellis stage connecting the nine states (εk – 1, εk – 2) with εi ∈ {–1, 0, 1}.]


(b) A finite set of error events which do not pass through the same state or its negative twice is shown below. [Figure not reproduced.]

(c) Shown below are the single-error error event, and another error event corresponding to two symbol errors. [Figure not reproduced.]

The shorter path has path metric 1 + m12 + m22, while the longer path has metric

1 + (1 + m1)2 + (m1 + m2)2 + m22 , (S.408)

and hence will have a smaller metric if m1 ≈ –1 and m2 ≈ –m1. For example, if m1 = –1 and m2 = 1, the minimum distance is √2.

Remark: But m1 = –1 and m2 = 1 implies that M( z ) = 1 – z–1 + z–2 is not minimum phase. Does the answer change if we constrain M( z ) to be minimum phase?


Problem 7-19. The metric in this case is (εk + dεk – 1 + dεk – 2)2, and the trellis diagram is pictured in Fig. S-6.

A set of five paths guaranteed to include the minimum distance path is pictured in Fig. S-7.

Fig. S-6. Trellis diagram for a four-state ISI channel.


Fig. S-7. A set of candidate minimum-distance paths for the trellis of Fig. S-6.



The path metrics for these paths are:

Path 1: 1 + (1 + d)2 + 4d2 + d2
Path 2: 1 + (1 + d)2 + (1 – 2d)2 + d2
Path 3: 1 + d2 + d2
Path 4: 1 + (1 – d)2 + d2
Path 5: 1 + (1 – d)2 + 1 + 4d2 + d2 (S.409)

The metrics for paths 1 and 2 are always bigger than that of path 3, and similarly the metric for path 5 is always bigger than that of path 4. When 0 ≤ d ≤ 1 ⁄ 2, the path 3 metric is smaller than that of path 4. Thus, the answer is

dmin2 = 1 + 2d2 for 0 ≤ d ≤ 1 ⁄ 2 , and dmin2 = 1 + d2 + (1 – d)2 for 1 ⁄ 2 ≤ d ≤ 1 . (S.410)

Problem 7-20. The trellis diagram and the two error events at a minimum distance are shown in Fig. S-8.

As in Example 7-31, there is an infinite set of error events at this minimum distance, where the only difference is that each error event corresponds to a sequence of alternating data symbols. The error probability estimates are the same. The intuitive explanation is that the output of the channel is zero during sequences of alternating data symbols. It is therefore difficult to distinguish two sequences of alternating symbols which are the complement of each other.


Fig. S-8. A trellis diagram and two corresponding error events at the minimum distance of √2.


Problem 7-21. Since the channel transfer function has memory µ = 1, the state at time k is the previous input symbol, ψk = ak – 1. Since the alphabet is binary, there are only two states, ψk ∈ {±1}. One stage of the trellis is shown to the right. The transitions corresponding to inputs of +1 are shown as dotted lines. (These are the transitions that terminate at the state ψ = +1.) The branches are labeled by the unique signal output s(p, q) associated with the corresponding state transition from state p to state q; since H( z ) = 1 – z–1, this output is simply q – p.

The branch metric for the transition from state p to q at time k is proportional to (7.65):

γk(p, q) = pAk(a(p, q)) e–|yk – s(p, q)|2 ⁄ (2σ2) , (S.411)

where we have dropped a factor of 1 ⁄ (4πσ2) to simplify calculations (see Section 7.5.2). At time k = 0 and k = 2, when the symbols –1 and +1 are equally likely, this simplifies to:

γk(p, q) = e–|yk – s(p, q)|2 , (S.412)

whereas at time k = 1 (where –1 is three times as likely as +1) it simplifies to:

γ1(p, q) = 3e–|y1 – s(p, q)|2 when q = –1, and γ1(p, q) = e–|y1 – s(p, q)|2 when q = +1 . (S.413)

(We dropped another factor of 1 ⁄ 2 here, as allowed by Section 7.5.2, to simplify as much as possible.) Therefore, the entire trellis diagram with labeled branch metrics looks like this:

[Trellis figures not reproduced. For the observations y = (1, 0, –1, 0), the branch labels are e–|yk – s|2, scaled by 3 on the q = –1 branches at k = 1, where s ∈ {0, +2, –2} is the branch output; these simplify to values in {1, 3, e–1, e–4, e–9, 3e–4}.]


This completes step 1 of the BCJR algorithm. We can now proceed exactly as in Example 7-25. The next step is step 2, the forward recursion that calculates the forward metrics αk(p). We write these metrics above the corresponding nodes of the trellis, where to avoid clutter, we did not write the last forward metric:

αend = α4(–1) = 4e–2 + 5e–6 + e–10 + 3(e–14 + e–18) . (S.414)

Next comes step 3, the backward recursion that calculates the backward metrics βk(p), which we write below the corresponding nodes of the trellis,

where again the last metric is not shown, but can be calculated as:

βend = β0(–1) = 4e–2 + 5e–6 + e–10 + 3(e–14 + e–18) = αend . (S.415)

(This is a good sanity check: If the last backward metric does not match the last forward metric, you must have made an arithmetic mistake somewhere.)

[Annotated trellises not reproduced. The forward metrics are α1(–1) = α1(+1) = e–1; α2(–1) = 3(e–1 + e–5), α2(+1) = e–1 + e–5; α3(–1) = 4(e–2 + e–6), α3(+1) = e–2 + e–6 + 3(e–10 + e–14). The backward metrics are β3(–1) = 1, β3(+1) = e–4; β2(–1) = e–1 + e–13, β2(+1) = e–1 + e–5; β1(–1) = 3e–1 + e–5 + e–9 + 3e–13, β1(+1) = e–1 + 4e–5 + 3e–17.]


Therefore, from (7.77):

λ0 = log[ (1 × e–1 × (e–1 + 4e–5 + 3e–17)) ⁄ (1 × e–1 × (3e–1 + e–5 + e–9 + 3e–13)) ]
= log[ (1 + 4e–4 + 3e–16) ⁄ (3 + e–4 + e–8 + 3e–12) ] = –1.034 ,

λ1 = log[ (e–1 × e–4 × (e–1 + e–5) + e–1 × 1 × (e–1 + e–5)) ⁄ (e–1 × 3 × (e–1 + e–13) + e–1 × 3e–4 × (e–1 + e–13)) ]
= log[ (1 ⁄ 3)(1 + e–4) ⁄ (1 + e–12) ] = –1.080 , (S.416)

λ2 = log[ (3(e–1 + e–5) × e–9 × e–4 + (e–1 + e–5) × e–1 × e–4) ⁄ (3(e–1 + e–5) × e–1 × 1 + (e–1 + e–5) × e–1 × 1) ]
= log[ (e–4 + 3e–12) ⁄ 4 ] = –5.385 . (S.417)

Note that a reasonable approximation can be found by keeping only the dominant terms:

λ0 ≈ λ1 ≈ log(1 ⁄ 3) = –1.099 , λ2 ≈ log(e–4 ⁄ 4) = –5.386 . (S.418)

The fact that all three λk values are negative indicates that all three MAP decisions will be –1.
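The whole computation is compact enough to script. The sketch below is our own; the observations y = (1, 0, –1, 0) and the start/termination of the trellis in state –1 are inferred from the metrics above, and the result reproduces the three ratios:

```python
import math

STATES = (-1, +1)   # state = previous symbol; H(z) = 1 - z^-1, so s(p, q) = q - p

def gamma(k, p, q, y, prior):
    """Branch metric (S.411) with the constant factors dropped."""
    return prior[k][q] * math.exp(-abs(y[k] - (q - p)) ** 2)

def bcjr_llrs(y, prior):
    n = len(y)
    alpha = [{-1: 1.0, +1: 0.0}]                 # forward recursion, start in -1
    for k in range(n):
        alpha.append({q: sum(alpha[k][p] * gamma(k, p, q, y, prior)
                             for p in STATES) for q in STATES})
    beta = [None] * (n + 1)                      # backward recursion, end in -1
    beta[n] = {-1: 1.0, +1: 0.0}
    for k in range(n - 1, -1, -1):
        beta[k] = {p: sum(gamma(k, p, q, y, prior) * beta[k + 1][q]
                          for q in STATES) for p in STATES}
    llrs = []
    for k in range(n - 1):                       # last transition is termination
        num = sum(alpha[k][p] * gamma(k, p, +1, y, prior) * beta[k + 1][+1]
                  for p in STATES)
        den = sum(alpha[k][p] * gamma(k, p, -1, y, prior) * beta[k + 1][-1]
                  for p in STATES)
        llrs.append(math.log(num / den))
    return llrs

y = [1.0, 0.0, -1.0, 0.0]
prior = [{-1: 1, +1: 1}, {-1: 3, +1: 1}, {-1: 1, +1: 1}, {-1: 1, +1: 1}]
print([round(l, 3) for l in bcjr_llrs(y, prior)])   # [-1.034, -1.08, -5.385]
```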

Problem 7-22.

(a) The logarithm of the moment generating function is, from (3.158),

Ψi(v) = ∫ (λi( t ) + λdark )[ evf(–t) – 1 ] dt (S.419)

and hence the Chernov bound is

Pe ≤ exp{ (1 ⁄ 2)∫ [ (λ1( t ) + λdark )(evf(–t) – 1) + (λ2( t ) + λdark )(e–vf(–t) – 1) ] dt } . (S.420)

(b) By the method of variations, we substitute f + ε∆f for f, differentiate with respect to ε, set ε = 0, and set the result to zero. The result is

f( –t) = log(λ2( t ) + λdark ) – log(λ1( t ) + λdark ) , (S.421)

which says that we correlate the shot noise against the logarithm of the known intensity.

(c) Substituting into the Chernov bound from (b), we get

Pe ≤ exp{ ∫ (λ1( t ) + λdark )1 ⁄ 2(λ2( t ) + λdark )1 ⁄ 2 dt – (E1 + E2) ⁄ 2 } , (S.422)

where Ei is the energy corresponding to the intensity,


Ei = ∫ (λi( t ) + λdark ) dt . (S.423)

Problem 7-23. (Typo: The left-hand side of (7.169) should be Vm, not Vl.) First equate the two representations of fl( t ), (7.128) and (7.46),

fl( t ) = ∑k=1..N Fk(l)ψk( t ) = ∑i (sl,i ⁄ σi)φi( t ) . (S.424)

Now form the inner product of both sides with φm( t ),

∑k=1..N Fk(l) ∫0..T ψk( t )φm*( t ) dt = ∑i (sl,i ⁄ σi) ∫0..T φi( t )φm*( t ) dt (S.425)

and applying (7.130) we get

∑k Fk(l)ψk,m = sl,m ⁄ σm . (S.426)

Finally, substituting into (7.44) yields:

Vl = ∑i (sl,i* ⁄ σi)Yi = ∑i (Yi ⁄ σi) ∑k=1..N Fk(l)*ψk,i* = ∑k=1..N Fk(l)* ∑i (Yi ⁄ σi)ψk,i* = ∑k=1..N Fk(l)*Uk . (S.427)


Chapter 8

Problem 8-1. The trellis and three error events are sketched below:

The error event with the dark lines is the one requested in (b), and the two events in light lines are those requested in (c). (The answer to (c) is not unique.)

Problem 8-2. Let µ be the order of the FIR filter M( z ). We will show that the inequality (8.31) is strict for every error event. To do this, we need to show that at least one of the terms thrown away (m ≥ 2) is non-zero, for any error event. Assume L is the length of a given error event (εL ≠ 0 and εk = 0 for k > L), and consider the m = L + µ term,

| ∑k=1..L εkmL + µ – k |2 = | εLmµ |2 > 0 . (S.428)

Thus, the inequality is strict for each and every error event, which implies that it is strict for the minimum-distance error event.

Problem 8-3. Since

|H|–2 = (1 – cz –1)(1 – c*z) , (S.429)

the coefficient of z0 is 1 + |c|2, independent of whether the channel is minimum-phase or maximum-phase. For |c| < 1, the geometric mean is clearly unity, since (S.429) is in the form of a minimum-phase spectral factorization. When |c| > 1, we can rewrite (S.429) in the form

|H|– 2 = |c|2(1 – (c*) –1z –1)(1 – c –1z) , (S.430)

and thus the geometric mean is |c|2.
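Both cases can be confirmed numerically, since the geometric mean is the exponential of the average of log|1 – ce–jθ|2 around the unit circle (the code below is our own sketch, for real c):

```python
import math

def geo_mean_H_inv2(c, n=4096):
    """Geometric mean of |1 - c e^{-j theta}|^2 over theta in [0, 2*pi),
    approximated by a midpoint Riemann sum of the log-integral."""
    total = 0.0
    for i in range(n):
        theta = 2.0 * math.pi * (i + 0.5) / n
        total += math.log((1.0 - c * math.cos(theta)) ** 2
                          + (c * math.sin(theta)) ** 2)
    return math.exp(total / n)

print(round(geo_mean_H_inv2(0.5), 6))   # 1.0  (|c| < 1: unity)
print(round(geo_mean_H_inv2(2.0), 6))   # 4.0  (|c| > 1: |c|^2)
```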

Problem 8-4.

(a) The channel is H( z ) = 1 ⁄ (1 – cz –1), so that |H|–2 = (1 – cz –1)(1 – c*z), and from Problem 8-3,

εZF-LE2 = N0⟨|H|–2⟩A = N0(1 + |c|2) . (S.431)

(b) When c = 0 there is no noise enhancement. As |c| → 1 the noise enhancement approaches 3 dB. For |c| > 1 the noise enhancement gets even larger because the channel transfer function gets smaller.

(c) The maximum-phase case will not arise in practice, because the channel impulse response would be both IIR and anticausal.

Problem 8-5. For the minimum-phase case, multiply the transfer function by √(1 – |c|2) to normalize it, and thus

εZF-LE2 = N0(1 + |c|2) ⁄ (1 – |c|2) . (S.432)

For the maximum-phase case, the normalization constant can be determined by

H( z ) = z ⁄ (c(1 – c –1z)) = (1 ⁄ c)∑k=0..∞ c –kzk + 1 , (S.433)

and the energy is 1 ⁄ (|c|2 – 1) so that the normalization constant becomes √(|c|2 – 1), and

εZF-LE2 = N0(|c|2 + 1) ⁄ (|c|2 – 1) . (S.434)

The solution is quite different from Problem 8-4, since the noise enhancement approaches infinity as |c| → 1 and goes away as |c| gets large. This is to be expected, since as |c| → 1 the channel transfer function on the unit circle goes to zero at all frequencies except the pole location, and as |c| gets large the channel approaches a negative unit delay, which is easily equalized without noise enhancement by a unit delay.

Problem 8-6.

(a) From Problem 8-3, the geometric mean is unity and hence εZF-DFE2 = N0. No precursor equalizer is required, and the postcursor equalizer is

H( z ) – 1 = cz –1 ⁄ (1 – cz –1) . (S.435)

(b) From Problem 8-3, the geometric mean is |c|2, and hence εZF-DFE2 = N0|c|2. Writing H in monic form,

H( z ) = –c –1z ⁄ (1 – c –1z) (S.436)

we get r = 1, H0 = –c –1, and Hmax = 1 ⁄ (1 – c –1z). The precursor equalizer is

C( z )E( z ) = –c(1 – (c*) –1z –1) ⁄ (1 – c –1z) (S.437)

and the postcursor equalizer is E( z ) – 1 = –(c*) –1z –1 .


(c) In the maximum-phase case the precursor equalizer is IIR and anticausal, and hence not practical to implement. However, the MSE gets smaller because the equalizer utilizes the large delayed sample for decision making. The MSE of the LE-ZF is always larger than that of the DFE-ZF, by a factor of 1 + |c|2 for |c| < 1 and by a factor of (1 + |c|2) ⁄ |c|2 for |c| > 1. This difference is largest (about 3 dB) in the region of |c| ≈ 1.

Problem 8-7. In the minimum-phase case the normalization constant is √(1 – |c|2), and

εZF-DFE2 = N0 ⁄ (1 – |c|2) . (S.438)

In the maximum-phase case the normalization constant is √(|c|2 – 1) and

εZF-DFE2 = N0|c|2 ⁄ (|c|2 – 1) . (S.439)

As for the LE-ZF, the MSE blows up as the pole approaches the unit circle. The DFE-ZF can tolerate zeros on the unit circle, but not poles. Likewise, the noise enhancement goes away as |c| → ∞ because the equalizer bases its decision on the larger delayed sample, which is asymptotically unity in the normalized case.

Problem 8-8. The formulas for the MSE follow readily. The spectral factorization of (8.84) approaches, as SZ → 0,

SY → SAHH* (S.440)

SY → |H0|2σA2GaGa*HminHmin*HmaxHmax* (S.441)

Ay2 → |H0|2σA2 (S.442)

Gy → GaHminHmax* (S.443)

and finally

C( z )E( z ) → Hmax* ⁄ (H0Hmax) (S.444)

which is the DFE-ZF precursor equalizer.

Problem 8-9.

(a) When the excess bandwidth is less than 200%, we get the following picture: [r( t ) → × e–j2πfct → LPF with cutoff 1.5 ⁄ T → sampler at nT ⁄ 3 → C3(e j2πfT ⁄ 3 ) = H*( f )C1(e j2πfT ) → sampler at kT → yk]


(b) When the excess bandwidth is less than 50%, we get the following picture: [r( t ) → × e–j2πfct → LPF with cutoff 0.75 ⁄ T → sampler at n2T ⁄ 3 → C4(e j4πfT ⁄ 3 ) = H*( f )C1(e j2πfT ) → sampler at kT → yk]

Problem 8-10. The picture is as follows: [r( t ) → × e–j2πfct → LPF with cutoff 1 ⁄ T → sampler at nT ⁄ 2 → C2( z ) → sampler at kT → yk]

The output is

yk = ∑m cmx2k – m . (S.445)

Problem 8-11. Given a baseband transmit spectrum SX, the transmit power constraint of (8.117) is

P ⁄ 2 = |F| ⋅ ⟨ SX ⟩A, F , (S.446)

where SX = L – SN ⁄ |H|2 for f ∈ F, and hence we get a relation for L,

P ⁄ 2 = |F| ⋅ (L – ⟨SN ⁄ |H|2⟩A, F) . (S.447)

Substituting this into (8.115), if the integral is restricted to f ∈ F,

2C ⁄ |F| = ⟨1 + SX|H|2 ⁄ SN⟩G, F = L ⁄ ⟨SN ⁄ |H|2⟩G, F = (P ⁄ (2|F|) + ⟨SN ⁄ |H|2⟩A, F) ⁄ ⟨SN ⁄ |H|2⟩G, F . (S.448)

Finally,

C = |F| ⋅ log2[ (P ⁄ (2|F|) + ⟨SN ⁄ |H|2⟩A, F) ⁄ ⟨SN ⁄ |H|2⟩G, F ] . (S.449)

Problem 8-12. The pulse at the output of the receive filter has Fourier transform:

TG( f )H( f )F( f ) (S.450)

and the noise has power spectrum TSN( f )|F( f )|2. After sampling, the isolated pulse has Fourier transform

and the noise has power spectrum TSN( f )|F( f )|2. After sampling, the isolated pulse hasFourier transform

C4(e j4πfT ⁄ 3 ) = H*( f )C1(e j2πfT )SAMPLER

sn

SAMPLER

e –j2πfct

r( t ) LPF

0.75 ⁄ T n2T ⁄ 3 kT yk

C4(e j4πfT ⁄ 3 )

SAMPLER

xn

SAMPLER

e –j2πfct

r( t ) LPF

1 ⁄ T nT ⁄ 2 kT ykC2( z )

m ∞–=

∞∑

LSN H 2⁄⟨ ⟩G F,

---------------------------------------

P2---- F SN H 2⁄⟨ ⟩ A F,+

SN H 2⁄⟨ ⟩G F,---------------------------------------------------------

P2---- F SN H 2⁄⟨ ⟩ A F,+

SN H 2⁄⟨ ⟩G F,---------------------------------------------------------

Page 92: digital communication, barry lee solutions.pdf

s-90

H(e j2πfT ) = ∑m G(f – m ⁄ T )H(f – m ⁄ T )F(f – m ⁄ T ) , (S.451)

and the noise has power spectrum

SN(e j2πfT ) = ∑m SN(f – m ⁄ T )|F(f – m ⁄ T )|2 . (S.452)

(a) The capacity is given by (8.122) with these values of H(e j2πfT ) and SN(e j2πfT ).

(b) For this case, F = G*H*, and thus

H(e j2πfT ) = ∑m |G(f – m ⁄ T )|2 ⋅ |H(f – m ⁄ T )|2 , (S.453)

and

SN(e j2πfT ) = ∑m SN(f – m ⁄ T )|G(f – m ⁄ T )|2 ⋅ |H(f – m ⁄ T )|2 . (S.454)

Note that when the noise is white, H(e j2πfT ) and SN(e j2πfT ) have the same shape; that is, they are equal within a multiplicative constant N0. Thus,

SN(e j2πfT ) ⁄ H(e j2πfT ) = N0 . (S.455)

(c) First, the discrete-time system has to be able to generate the water-pouring spectrum. A sufficient condition for this is the following: If F is the water-pouring band, which must be symmetric about f = 0, then the sampling rate is twice |F| ⁄ 2, or 1 ⁄ T > |F|, and the transmit filter is ideally bandlimited to half the sampling rate and non-zero over this bandwidth. (This is not a necessary condition, because if F is a "generalized Nyquist interval" with respect to sampling rate 1 ⁄ T, then the water-pouring spectrum can be generated by an appropriate transmit filter. A generalized Nyquist interval has the property that for each |f| ≤ 1 ⁄ (2T), f + m ⁄ T ∈ F for precisely one value of m.) Second, the receive filter must prevent aliasing and allow all frequencies within the water-pouring band to pass. A sufficient condition for this is that it be an ideal LPF bandlimited to 1 ⁄ (2T) Hz. (Again this is not a necessary condition. If the water-pouring band is a generalized Nyquist interval with respect to sampling rate 1 ⁄ T then a receive filter that is ideally bandlimited to this water-pouring band will do.)

(d) For this case, if the transmit filter and sampling rate meet the criteria of (c), then the receive filter will automatically be OK. The fact that the receive filter is not flat within the generalized Nyquist interval will not be a problem, since this transfer function can always be reversed with an equalizer filter in a reversible fashion.

(e) The precursor equalizer is a reversible operation, and thus will not affect the capacity.

Problem 8-13. For the baseband case, we get SN = N0 ⁄ 2 and H = 1, and thus from (8.123), SX = L – N0 ⁄ 2. From the power constraint, P = (L – N0 ⁄ 2) ⁄ T, or SX = PT. Substituting into (8.122),


C = (1 ⁄ (2T))log2(1 + 2PT ⁄ N0 ) = B log2(1 + P ⁄ (N0B) ) , (S.456)

since in this case the sampling rate is 1 ⁄ T = 2B.

In the passband case, we have SZ = N0 and H = 1, and thus from (8.127) SX = 2(L – N0 ⁄ 2), and the power constraint of (8.125) becomes P = (2L – N0) ⁄ T. Substituting into (8.126),

C = (1 ⁄ T)log2(1 + PT ⁄ N0) = B log2(1 + P ⁄ (N0B) ) , (S.457)

since the sampling rate is 1 ⁄ T = B in this case.

Problem 8-14.

(a) At the output of the channel, the noise spectrum is SZ and hence the total noise power is |F|⟨ SZ ⟩A, F . If the channel input signal has power spectrum SX, confined to the water-pouring band F, then the total signal power at the channel output is:

|F| ⋅⟨SX|H|2⟩A, F . (S.458)

This is easily related back to the transmit signal power P through the water-pouring spectrum, since for f ∈ F,

SX|H|2 = (P ⁄ |F| + ⟨SZ ⁄ |H|2⟩A, F)|H|2 – SZ , (S.459)

and from this the channel-output SNR is

SNRout = |F|⟨SX|H|2⟩A, F ⁄ (|F|⟨SZ⟩A, F) = (P ⁄ |F| + ⟨SZ ⁄ |H|2⟩A, F)⟨|H|2⟩A, F ⁄ ⟨SZ⟩A, F – 1 . (S.460)

(b)

SNRnorm = [ (SNRout + 1)⟨SZ⟩A, F ⁄ ⟨|H|2⟩A, F – ⟨SZ ⁄ |H|2⟩A, F ] ⁄ [ 22νB0 ⁄ |F|⟨SZ ⁄ |H|2⟩G, F – ⟨SZ ⁄ |H|2⟩A, F ] . (S.461)

Problem 8-15. We require that ⟨|G|2⟩A = PT ⁄ Ea. The effect of the transmit filter is to change the channel from H to GH, and hence the MSE at the output of the DFE-ZF to

εZF-DFE2 = ⟨SZ ⁄ |GH|2⟩G = ⟨SZ ⁄ |H|2⟩G ⁄ ⟨|G|2⟩G . (S.462)

Using the geometric mean inequality ⟨|G|2⟩G ≤ ⟨|G|2⟩A = PT ⁄ Ea, we get:

εZF-DFE2 ≥ ⟨SZ ⁄ |H|2⟩G Ea ⁄ (PT) , (S.463)


with equality if and only if |G| = √(PT ⁄ Ea) .

Problem 8-16. We have that ⟨SA⟩A = Ea and constraint ⟨SA|G|2⟩A = PT.

(a) Explicitly calculating the MSE,

⟨SZ ⁄ |GH|2⟩G = ⟨SASZ ⁄ (SA|G|2|H|2)⟩G = ⟨SZ ⁄ |H|2⟩G⟨SA⟩G ⁄ ⟨SA|G|2⟩G ≥ ⟨SZ ⁄ |H|2⟩G⟨SA⟩G ⁄ (PT) . (S.464)

Thus, the MSE is bounded below by a quantity that can be achieved when SA|G|2 is a constant, namely εZF-DFE2.

(b) εZF-DFE2 = ⟨SZ ⁄ |H|2⟩G⟨SA⟩G ⁄ (PT) .

(c) Since ⟨SA⟩G ≤ ⟨SA⟩A = Ea, we get that, when the transmit filter is optimized,

εZF-DFE2 ≤ ⟨SZ ⁄ |H|2⟩G Ea ⁄ (PT) . (S.465)

The right side of (S.465) is the MSE for the white symbol case, established in Problem 8-15.

Problem 8-17. The derivation for the error probability assumed Gaussian noise at the slicer. For the DFE-MSE, the slicer error includes residual ISI, and hence is not Gaussian. It would be surprising to find the SNR gap to capacity shrunk by the presence of ISI, so it is unlikely that taking the effect of ISI at the slicer into account would show ISI to be beneficial.



Chapter 9

Problem 9-1. For a predictor with coefficient vector f, the error is given by

E[|Ek|2] = E[|Rk – f´rk|2] = E[|Rk|2] – 2Re{f*E[Rkrk]} + f*E[rkrkT ]f = φ0 – 2Re{f*α} + f*Φf (S.466)

where

α = E[Rkrk] = [φ1, φ2, … φn]T , (S.467)

Φ = E[rkrkT ] , (S.468)

the same as before, by wide-sense stationarity. The solution is the same as for the equalizer, with the new definition of α.

Problem 9-2. The orthogonality principle of (9.26) implies that

0 = E[Ekrk] = E[(Ak – cTrk)rk] = α – Φc (S.469)

where the last equality follows since cTrk is a scalar and therefore

E[(cTrk)rk] = E[rkrkT]c . (S.470)

Problem 9-3.

E[Ekrk] = E[(Rk – foptTrk)rk]

= E[Rkrk] – E[rkrkT]fopt

= α – Φfopt = 0 . (S.471)

Problem 9-4.

(a) This follows from the definition of matrix multiplication since the element of Φ in row i and column j is φi – j = φj – i.

(b) As L→ ∞,

∑i=–∞..∞ φj – ivi = λvj , – ∞ < j < ∞ . (S.472)

Since this is a convolution sum, the Fourier Transform gives:

S(e jθ)V(e jθ) = λV(e jθ) . (S.473)


(c) Either V(e jθ) = 0 or λ = S(e jθ). Since S(e jθ) is single-valued and λ is a constant, λ = S(e jθ) can occur at only one θ0. Hence V(e jθ) will be zero at all other θ. Thus V(e jθ) = δ(θ – θ0) will be an eigenvector, or vi = e jθ0i (a complex exponential).

(d) As L → ∞ the eigenvalues are by this argument the values of the function S(e jθ).

Problem 9-5.

(a) Taking the Fourier transform of the autocorrelation function φk, the power spectrum is

S(e jθ) = ∑k α|k|e jkθ = (1 – α2) ⁄ ((1 – αe jθ)(1 – αe –jθ)) = (1 – α2) ⁄ (1 + α2 – 2αcos(θ)) . (S.474)

(b) Assume 0 < α < 1. Then the minimum of the power spectrum is at cos(θ) = –1, and

λmin → (1 – α) ⁄ (1 + α) . (S.475)

Similarly the maximum is at cos(θ) = +1, and

λmax → (1 + α) ⁄ (1 – α) . (S.476)

(c) For N = 2, the autocorrelation matrix is

Φ = [ 1 α ; α 1 ] , (S.477)

and setting the determinant of (λI – Φ) equal to zero, we get the pair of eigenvalues λ1 = 1 – α and λ2 = 1 + α.

(d) As N → ∞ we get that

= 2 (S.478)

and for N = 2,

= . (S.479)

For α ≈ 1 these values are the same.

(e) As N → ∞,

βopt = , (S.480)

1 α2–1 αe jθ–( ) 1 αe jθ––( )

------------------------------------------------------

1 α2–1 α2 2α θ( )cos–+-----------------------------------------------

1 α–1 α+-------------

1 α+1 α–-------------

1

α

α

1

λmaxλmin------------

1 α+1 α–-------------

λmaxλmin------------

1 α+1 α–-------------

1 α2–1 α2+----------------

Page 98: digital communication, barry lee solutions.pdf

s-96

and the dominant mode of convergence is proportional to

k. (S.481)

As α→1, the input samples become perfectly correlated and the convergence of theMSEG algorithm slows.

Problem 9-6. The error vector is given by (9.32),

qj = I – βΦ jq0

= 1 – βλijvivi*q0

= γi,jvi (S.482)

where

γi,j = 1 – βλijvi*q0 . (S.483)

The component of the error in the direction of each eigenvector vi is γi,j, and decreasesexponentially as 1 – βλi

j. The component of the initial error in the direction of vi is vi*q0,the component of the initial error in the direction of vi.

Problem 9-7. From Problem 9-6,

qj*vl = γi,jvi*vl

= γl,j (S.484)

or

E[Ek2] – E Ek

2min= λi qj*vi

2

= λiγi,j2

= λi 1 – βλi2j vi*q0

2 (S.485)

which decreases exponentially with n modes as 1 – βλi2j .

Problem 9-8.

(a) Assume the dominant mode is i, λi, vi. From Problem 9-7 , the excess MSE for thismode is

xsmse = λi(1 – βλi)2j(vi*q0)2 , (S.486)

so that

1 α2+----------------

i 1=

n∑

i 1=

n∑

i 0=

n∑

i 1=

n∑

i 1=

n∑

i 1=

n∑

Page 99: digital communication, barry lee solutions.pdf

s-97

10log10(xsmse) = γ1 – jγ2 (S.487)

where

γ1 = 10log10(λi(vi*q0)2) (S.488)

γ2= –20log10(1 – βλi) (S.489)

Note that the excess MSE measured in dB decreases linearly with time. Hence the speedof convergence is often measured in dB/sec. or some equivalent units.

(b) From Fig. 9-4, when β is small the dominant mode corresponds to λmin, so that

γ1 = 10log10(vi*q0)2 + 10log10λminγ2 = –20log10(1 – βλmin)

=

≈ . (S.490)

Where we used log(1 + x) ≈ x for small x. As β increases, the speed of convergence increasesin direct proportion.

Problem 9-9. The MSE is

E[Yk – k]2 = E[Yk –aYk –1 – b]2 (S.491)

and setting the derivative w.r.t. a and b to zero,

0 = φ1 –aφ0 – bµ = µ – aµ – b . (S.492)

Solving for a and b we get the stated results.

Problem 9-10.

(a) Doing a partial fraction expansion, we get

Φ( z ) = + (S.493)

and expanding each term, the first corresponding to positive time and the second tonegative, we get

φk = α|k| . (S.494)

(b) Putting Yk through a filter (1 –α z –1) results in a white signal. This filter is in the formof a predictor, and hence is the optimal predictor of infinite order. Hence the optimalpredictor of any order one or higher has all-zero tap coefficients except for f1 = α.

20loge 1 βλmin–( )–

loge10--------------------------------------------------

20βλminloge10

----------------------

Y

A1 α2–---------------- 1

1 αz–---------------- α

z α–------------

A1 α2–----------------

Page 100: digital communication, barry lee solutions.pdf

s-98

Problem 9-11.

(a) From (9.38), we know that

qj+1 = (1 – βj+1λi)(vi*qj)vi. (S.495)

Using this result and the assumption that the eigenvectors are orthonormal, we canprove the stated result by induction.

(b) We force the error to zero after N iterations by choosing

βj = , 1 ≤ j ≤ N. (S.496)

The product term then always contains a term 1 – βlλl = 0 for every i.

Problem 9-12. Note: The problem should state that Rk is a real-valued and zero-meanprocess.

E[σk2] = (1 – α) αjσ2 = σ2 . (S.497)

(a) A key fact is that when X is a Gaussian zero-mean random variable, E[X3] = 0 andE[X4] = 3σ4, which can be derived from the moment-generating function (Section 3.1).Then calculating the variance,

Var[σk2] = E[(σk

2)2] – (σ2)2 . (S.498)

The first term can be calculated directly,

(1 –α)2 αm+nE[Rk –2

mRk –2

n]

= (1 –α)2 α2mE[Rk –4

m] + αm+n(σ2)2 – α2m(σ2)2 (S.499)

where the third term subtracts off terms that were included twice in the first and second terms.Using the result for the fourth moment, we get

Var[σk2] = 2σ4 , (S.500)

where this variance approaches zero as α → 1 and 2σ4 as α → 0. It is of course desirable tohave an α near unity because of the long time constant, since this results in a lot of averaging.The price we pay is a long convergence time, or poor tracking capability.

Problem 9-13. In Fig. 8-18(c), let the samples at the first LPF output be Rk, so that thisimplies that

Yk = cmR2k – m and Ek = Ak – Yk (S.501)

For simplicity, doing the real-valued case,

i 1=

n∑

1λ j-----

j 0=

∞∑

m 0=

∞∑ n 0=

∞∑

m 0=

∞∑ m 0=

∞∑ n 0=

∞∑ m 0=

∞∑

1 α–1 α+-------------

m N–=

N∑

Page 101: digital communication, barry lee solutions.pdf

s-99

Ek2 = 2EkR2k – j . (S.502)

Then analogous to (9.54),

[ck+1]j = [ck]j – βEkR2k – j . (S.503)

For each increment of k, the delay line storing Rk shifts two positions.

Problem 9-14.

(a) Φ is replaced by (Φ + σ2I), and it is simple to verify that the eigenvalues of this matrixare (λi + σ2), with the same eigenvectors.

(b) The new eigenvalue spread is (λmax + σ2) ⁄ (λmin + σ2), which is smaller than before.

(c) From Exercise 9-7,

(Φ + σ2I) –1 = vivi* (S.504)

and hence

(c – Φ –1α) = (vi*α)vi (S.505)

and the excess MSE is

(c – Φ –1α)*Φ(c – Φ –1α) = σ4 (S.506)

which is the same result as Problem 9-15. For small σ2 the increase in MSE is approximatelyproportional to (σ2 ⁄ λi)

2 for the i-th mode.

Problem 9-15. Adding µ ||c||2 to (9.8),

E Ek2] = E[|Ak|2] – 2Rec*α + c*(Φ + µI)c . (S.507)

Hence the solution is the same as before with Φ replaced by (Φ + µI), or

cµ = (Φ + µI)–1α . (S.508)

The eigenvalues of (Φ + µI) are vi and the eigenvalues (λi + µ), where vi and λi are theeigenvectors and eigenvalues of Φ. The eigenvectors and eigenvalues of (Φ + µI)–1 are vi and1 ⁄ (λi + µ) and hence

(Φ + µI)–1 = vivi* (S.509)

and

cµ – copt = vivi*α . (S.510)

c j∂∂

i 1=

N∑ 1λk σ2+-------------------

i 1=

N∑ σ2

λi λi σ2+( )----------------------------

i 1=

N∑vi*α 2

λi λl σ2+( )----------------------------

i 1=

n∑ 1λi µ+--------------

i 1=

n∑ 1λi µ+-------------- 1

λi-----–

Page 102: digital communication, barry lee solutions.pdf

s-100

The excess mse is given by (9.42),

E[Ek2(T)] – E[Ek

2(T)]min = λi[(cµ – copt)*vi]2

= λi (S.511)

The excess mse of the ith mode increases as

(S.512)

which has derivative

= (S.513)

and since this derivative is zero at µ = 0, the excess mse increases very slowly with µ.

Problem 9-16.

(a)

∇c µ||c||2 = 2µc (S.514)

so that the gradient algorithm is

ci + 1 = (1 – βµ)ci + β(α – Φci). (S.515)

(b) The eigenvalues of (Φ – µI) are (λi + µ), so replace λmin by (λmin + µ) and λmax by(λmax + µ) . The algorithm is stable if

0 < β < . (S.516)

(c)

βopt = (S.517)

(d) The eigenvalue spread is now

≈ , (S.518)

which can be reduced dramatically by µ.

(e) They apply to the speed of convergence of the average trajectory and the asymptoticexcess mse caused by the algorithms converging to cµ rather than copt. The second termin (S.518) biases the solution in the direction of keeping the coefficients small. The SGalgorithm corresponding to minimizing (S.518) is

ck+1 = (1 – βµ)ck + βEkpk , (S.519)

i 1=

n∑

i 1=

n∑µvi*α

λi λi µ+( )-------------------------

2

µλi µ+--------------

2

µ∂∂ µ

λi µ+--------------

2 2µλi

λi µ+( )3----------------------

2λmax µ+----------------------

2λmin λmax 2µ+ +--------------------------------------------

λmax µ+

λmin µ+----------------------

λmaxλmax µ+----------------------

Page 103: digital communication, barry lee solutions.pdf

s-101

where µ = 0 corresponds to the previous algorithm without leakage (9.53). The operation ofthis algorithm is evident, since the coefficient vector is multiplied by a constant slightly lessthan unity at each step before adding in the correction term. When the corrections are small, aswhen the coefficient vector is wandering in the direction of a eigenvector corresponding to asmall eigenvalue, this leakage decreases the size of the vector over time.

Problem 9-17. The lowest frequency in the passband signal is

1800 –1200 ⋅ 1.1 = 480 Hz (S.520)

and the highest frequency is

1800+1200 ⋅ 1.1 = 3120 Hz . (S.521)

(a) The baseband case is shown below:

The BPF rejects all frequencies other than the signal bandwidth. The sampling rate atthe front end of 7200 Hz is greater than twice the highest frequency of the passbandsignal. Following demodulation, a sampling rate of 4800 Hz would be adequate sincethe highest signal frequency is 1320 Hz; however, it is not convenient to decimate by afactor of two-thirds (perhaps a 9600-4800 decimation would be more appropriate, butthis does not meet the specifications of the problem statement). The fractionally spacedequalizer can generate a signal at the slicer input at the symbol rate, a decimation factorof three.

(b) The passband case is shown below:

You might expect that the equalizer output had to be sampled at 7200 Hz also, but afterdemodulation a rate of 2400 is adequate. Since demodulation followed by decimation isequivalent to decimation followed by demodulation, in fact the equalizer output can bedecimated to the symbol rate.

IN

2400 Hz

–+

2400 Hz

7200 Hz

RECOVERY

CARRIER

EQUALIZERSPLITTER

PHASE7200 Hz

480-3120 Hz

BPF ak

2400 Hz

7200 Hz

IN

–+

2400 Hz

EQUALIZERSPLITTER

PHASE7200 Hz

480-3120 Hz

BPF

RECOVERY

CARRIER

ak

Page 104: digital communication, barry lee solutions.pdf

s-102

(c) This case is shown below:

This case is similar, except for the reasoning. The equalizer input could again have asampling rate of 4800, although this is not convenient. The 2400 Hz sampling rate atequalizer output is adequate for the same reason as in the baseband case (no funnybusiness as in the passband case).

The passband equalizer case seems superior since the sampling rates are the same, but only onecomplex multiply for demodulation is required rather than two.

Problem 9-18. The block diagram is shown below:

The postcursor equalizer and carrier recovery circuits work in the non-rotated domain alongwith the slicer. The slicer error is used directly in the postcursor equalizer and is rotated beforebeing input to the precursor equalizer.

2400 Hz

IN

–+

2400 Hz

EQUALIZERSPLITTER

PHASE7200 Hz

480-3120 Hz

BPF

RECOVERY

CARRIER

7200 Hz

OSCILLATOR

ak

7200 Hz

IN

–+

EQUALIZERSPLITTER

PHASE7200 Hz

480-3120 Hz

BPF

RECOVERY

CARRIER

EQUALIZER

2400 Hz

2400 HzPRECURSOR

POSTCURSOR

ak

Page 105: digital communication, barry lee solutions.pdf

s-103

+

+

– –

Digital

COMMUNICATION

John R. BarryEdward A. Lee

David G. Messerschmitt

third edition

Solutions Manualfor

Kluwer Academic Publishers

Page 106: digital communication, barry lee solutions.pdf

s-104

Chapter 10

Problem 10-1. The m × m power spectrum to be factored is:

S(z) =

= z + + z–1

= S–1z + S0 + S1z–1 , (S.523)

where

S–1 = , S0 = , S1 = . (S.524)

(a) With k = 1 and m = 2, the m(k + 1) × m(k + 1) matrix Rk reduces to:

R1 = = . (S.525)

A Cholesky factorization of this 4 × 4 matrix yields:

R1 = L1L1T (S.526)

where (for example, using L1 = chol(R1) ’ in MATLAB):

L1 = . (S.527)

Equating the last block row (the last two rows) of this matrix with = [G1, G0] yields:

G0 = , G1 = . (S.528)

Thus, the Bauer method with k = 1 leads to the following approximation:

G(z) ≈ . (S.529)

64.01

0.2z + 8 + 1.6z–1

1.6z + 8 + 0.2z–1

0.2z + 5.04 + 0.2z–1

0 1.6

0.2 0.2

64.01 8

8 5.04

0 0.2

1.6 0.2

0 1.6

0.2 0.2

64.01 8

8 5.04

0 0.2

1.6 0.2

S0 S–1

S1 S0

64.01 8

8 5.04

64.01 8

8 5.04

0 1.2

1.6 0.2

0 1.6

0.2 0.2

8.0006 0

8 2.0100

8.0000 0

1.0000 2.0000

0 0.0995

0.2000 0.0000

0 0

0 0

8.0000 0

1.0000 2.0000

0 0.0995

0.2000 0.0000

8.0000

1.0000 + 0.20000z–1

0.0995z–1

2.0000 + 0.0000z–1

Page 107: digital communication, barry lee solutions.pdf

s-105

(b) The approximation from part (a) is not exact, because it gives:

G( z )G*(1 ⁄ z*) = (G0 + G1z–1)(G0T + G1

Tz)

= G0G1Tz + (G0G0

T + G1G1T ) + G1G0

T z–1

= z + + z–1. (S.530)

However, comparing this to the original S( z ) of (S.523), the error is small:

G( z )G*(1 ⁄ z*) – S( z ) = , (S.531)

which suggests that the G( z ) of (S.528) is nearly correct, and that by changing the0.0995 in G1 to 0.1 might lead to the exact answer. In fact this is true, the actualfactor (as is easily verified) is:

G( z ) = + z–1. (S.532)

Problem 10-2. The overall filter can be decomposed into a cascade of five filters:

The transfer functions of each are easily found by inspection, so that:

H( z ) = H5( z )H4( z )H3( z )H2( z )H1( z )

=

=

=

0 1.5999

0.1990 0.2000

64.010 8.0000

8.0000 5.0400

0 0.1990

1.5999 0.2000

0

–0.00009972z – 0.00001237z–1

–0.00001237z – 0.00009962z–1

0.00000153(z + z–1)

8 0

1 2

0 0.1

0.2 0

yk(1)

yk(2)

yk(3)

uk(1)

uk(2)

z–1

z–1

z–1

z–1

z–1

H1( z ) H2( z ) H5( z ) H3( z ) H4( z )

1 01 10 1

z–1 0

1 1

1 1

z–10

z–1 0

1 1

z–1 0

z–10

1 01 10 1

z–1 0

1 1

1 1

z–10

z–2 0

z–1 z–1

1 01 10 1

z–1 0

1 1

z–1 + z–2 z–1

z–2 z–2

Page 108: digital communication, barry lee solutions.pdf

s-106

=

= . (S.533)

Problem 10-3. The relationship between xk = [yk(1), yk

(3)] and yk = [yk(1), yk

(2), yk(3)] is:

xk = Qyk, where Q = , (S.534)

and therefore, the transfer function from uk = [uk(1), uk

(2)] to xk is:

( z ) = QH( z )

=

=

= z–1 (S.535)

Therefore, exploiting the fact that Su( z ) = I yields:

Sx( z ) = ( z )Su( z ) *(1 ⁄ z*)

= ( z ) *(1 ⁄ z*)

= z–1z

=

=

Sx( z ) = z2 + z + + z–1 + z–2. (S.536)

1 01 10 1

z–2 + z–3 z–2

z–1 + z–2 + z–3 z–1 + z–2

z–2 + z–3 z–2

z–1 + 2z–2 + 2z–3

z–1 + z–2z–1 + z–2 + z–3

z–1 + 2z–2

1

0

0

0

0

1

H

1

0

0

0

0

1

z–2 + z–3 z–2

z–1 + 2z–2 + 2z–3

z–1 + z–2z–1 + z–2 + z–3

z–1 + 2z–2

z–2 + z–3 z–2

z–1 + z–2z–1 + z–2 + z–3

z–1(1 + z–1) z–1

(1 + z–1)1 + z–1 + z–2

H H

H H

z–1(1 + z–1) z–1

(1 + z–1)1 + z–1 + z–2

z(1 + z)

z (1 + z)

1 + z + z2

(z–1(1 + z–1))(z(1 + z)) + (z–1)(z)

(1 + z–1 + z–2)(z(1 + z)) + (1 + z–1)(z)

(z–1(1 + z–1))(1 + z + z2) + (z–1)(1 + z)

(1 + z–1 + z–2)(1 + z + z2) + (1 + z–1)(1 + z)

z + 3 + z–1

z2 + 3z + 3 + z–1

z + 3 + 3z–1 + z–2

z–2 + 3z–1 + 5 + 3z + z2

0 0

1 1

1 1

3 3

3 3

3 5

1 3

1 3

0 1

0 1

Rx(–2)

Rx(–1) Rx(2)Rx(0) Rx(1)

Page 109: digital communication, barry lee solutions.pdf

s-107

(a) The autocorrelation function Rx( m ) at lag m is the coefficient of z–m in (S.536).

(b) The power spectrum Sx( z ) is given by (S.536).

Problem 10-4. The filter input power spectrum is a constant, and it can be factored as:

SX( z ) = = . (S.537)

(a) We want the output power spectrum to be S(z) = G(z)G*(1 ⁄ z*), where from(10.18):

G(z) = + z–1. (S.538)

This can be achieved by first creating an intermediate signal Uk = AXk, with A chosenso that Uk is both temporally and spatially white, satisfying SU( z ) = I. In other words,we choose A so that:

ASX( z )A* = I . (S.539)

In light of (S.537), one obvious choice for A is:

A = (MΓ)–1 = = . (S.540)

In light of (10.11), passing Uk through the following 3-input 2-output filter:

F( z ) = (S.541)

will result in the desired power spectrum at the output. Overall, the desired filter is thecascade of A and F( z ):

H( z ) = F( z )A

=

= . (S.542)

(b) The answer to part (a) is not unique. The filter of (S.541) drops the third component ofUk, but this is arbitrary; we could also have dropped the first or second component.Also, the whitening filter A is not unique; any A of the form (MΓ)–1Θ where Θ is aunitary matrix will give the same answer.

2016 88

20 10

41021

11 0

1 ⁄ 21 ⁄ 4 1 ⁄ 2

001

1616 000 0

0016

11 1 ⁄ 200 0

1 ⁄ 41 ⁄ 2

1

M Γ2 M*

10 0

1 20

1 2

3 4

44 021 2

004

–118--- 2

2 0–10 –1

002

00G(z)

18---

0

010 + z–1

1 + 3z–1

2z–1

20 + 4z–1 22 0

–10 –1

002

14---

0

010

–9 + z–1

2z–1

20 + 4z–1

Page 110: digital communication, barry lee solutions.pdf

s-108

Problem 10-5. Let us solve the problem for the general set of pulses h1( t ) = e–t ⁄ τu( t – T )and h2( t ) = e–t ⁄ τu( t – 2T ). That way we can handle both cases (τ finite and τ → ∞)simultaneously.

It is not hard to see that, for any SIMO channel, the folded spectrum collapses to the sum of thefolded spectra for each of the subchannels. In particular, whenever H( t ) is of the form:

H( t ) = ↔ H( f ) = (S.543)

then the folded spectrum is, by definition (10.40):

S(e j2πfT) = ∑kH*(f – )H(f – )

= ∑k

= ∑k + … + ∑k

= Sh1( f ) + Sh2

( f ) + … + Shn( f ) . (S.544)

This implies that, in the Z domain, the SIMO folded spectrum is S( z ) = ∑iShi( z ), where

Shi( z ) is the folded spectrum of the i-th pulse.

In this problem we have h1( t ) = e–t ⁄ τu( t – T ), so that the first sampled autocorrelationfunction evaluated at k ≥ 0 is:

ρh1( k ) = h1( t )h1( t – kT )dt

= h1( t + T )h1( t + T – kT )dt

= e–2T⁄ τ e–t ⁄ τe–(t + kT) ⁄ τdt (assuming k ≥ 0)

= e–(k + 2)T⁄ τ e–2t ⁄ τdt (assuming k ≥ 0)

= E1ak , (assuming k ≥ 0) (S.545)

where a= e–T⁄ τ , and where E1 is the energy in the first pulse:

E1 = |h1( t )|2dt = a2 . (S.546)

Exploiting the Hermitian symmetry ρh(–k) = ρh*( k ), we immediately conclude that:

h1( t )

h2( t )…

hn( t )

H1( f )

H2( f )…Hn( f )

1T----

kT---- k

T----

1T---- H1*( f – k ⁄ T ) …H2*( f – k ⁄ T ) Hn*( f – k ⁄ T ) H1( f – k ⁄ T )

H2( f – k ⁄ T )…

Hn( f – k ⁄ T )

1T---- H1 f k

T----–

2 1T---- H1 f k

T----–

2

∞–

∞–

kT

0

∞–

∫ τ2--

Page 111: digital communication, barry lee solutions.pdf

s-109

ρh1( k ) = E1a|k| . (S.547)

Here are two sketches of ρh1( k ), one for small τ ⁄ T and one for large:

We can decompose ρh1( k ) into the sum ρh1

( k ) = xk + yk, where xk = E1akuk is causal andyk is strictly anticausal:

yk = E1a–ku–k–1

= aE1a–k–1u–k–1= ax–k–1 . (S.548)

The Z transform of xk = E1akuk is E1 ⁄ (1 – az–1). From the properties of Z transforms, weknow that the Z transform of xk–1 is z–1X( z ), and that the Z transform of h–k* = H*(1 ⁄ z*).Setting hk = xk – 1 implies that the Z transform of yk = ah–k is Y( z ) = aH*(1 ⁄ z*) =azX*(1 ⁄ z*). Combining these results, we see that the Z transform of ρh1

( k ) is:

Sh1( z ) = X( z ) + Y( z )

= X( z ) + azX*(1 ⁄ z*)

= +

= . (S.549)

Since h2( t ) = ah1( t – T), where again a = e–T⁄ τ, it immediately follows that:

ρh2( k ) = a2ρh1

( k ) , (S.550)

so that Sh2( z ) = a2Sh1

( z ). The overall folded spectrum for the SIMO channel is thus:

S( z ) = Sh1( z ) + Sh2

( z )

= Sh1( z )(1 + a2)

= . (S.551)

a = 0.95(τ = 19.5T)

a = 0.5(τ = 1.4T)

……

……00

E1 E1

E1

1 az 1––---------------------

E1az1 az–---------------

1 a2–( )E1

1 az 1––( ) 1 az–( )-----------------------------------------------

1 a4–( )E1

1 az 1––( ) 1 az–( )-----------------------------------------------

Page 112: digital communication, barry lee solutions.pdf

s-110

Setting τ = 1 leads to a = e–T and E1 = e–2T, so that:

S( z ) = (τ = 1) . (S.552)

Letting τ → ∞ leads to a → 1, so that:

S( z ) = → , (S.553)

where, by introducing x = T ⁄ τ, so that a = e–x and E1 = e–2x, the numerator is:

A = (1 – a4)E1

= T

= T → 2T as τ → ∞ (x → 0) , (S.554)

where the last equality follows from L’Hopital’s rule. This leads to the following limit:

S( z ) → as τ → ∞. (S.555)

Problem 10-6. There is a typo in the problem statement, the problem cannot be solved asstated. The folded spectrum of a SIMO channel is always a scalar, there is no way we can forceit to have dimension 2 × 2.

We will solve the intended problem, where “single-input double output” should be “double-input single-output”. From (10.38), the folded spectrum is then S( z ) = ∑kSkz–k, where

Sk = H*(t – kT)H(t )dt . (S.556)

For a double-input single-output channel with H( t ) = [h1( t ), h2(t )], this reduces to:

Sk = dt (S.557)

= (S.558)

Setting k = 0 leads to S0, and equating it to the coefficient of z0 in the target spectrum yields:

12---

1 2⁄( ) 1 e 4T––( )e 2T–

1 e T– z 1––( ) 1 e T– z–( )----------------------------------------------------------

1 a4–( )E1

1 az 1––( ) 1 az–( )----------------------------------------------- A

1 z 1––( ) 1 z–( )---------------------------------------

T2x------

1 e 4x––2xe2x

--------------------

4e 4x–

4xe2x 2e2x+---------------------------------

2T1 z 1––( ) 1 z–( )

---------------------------------------

∞–

∞–

∫h1(t* – kT)

h2(t* – kT)

h1( t ), h2(t )

h1(t* – kT)h1( t )dt∞–

∫h2(t* – kT)h1( t )dt

∞–

h1(t* – kT)h2( t )dt∞–

∫h2(t* – kT)h2( t )dt

∞–

Page 113: digital communication, barry lee solutions.pdf

s-111

S0 = = . (S.559)

This implies that the first pulse has energy 9, the second has energy 3, and the two pulses areorthogonal.

Setting k = 1 leads to S1, and equating it to the coefficient of z–1 in the target spectrum yields:

S1 = = . (S.560)

The diagonal elements imply that both pulses are orthogonal to delayed-by-T versions ofthemselves. The following pair of impulse responses gives the desired power spectrum:

Clearly the first pulse has energy 9, the second has energy 3, and the two pulses areorthogonal, so (S.559) is satisfied. Further, the sampled crosscorrelation function:

[Sk ]2,1 = h2*(t – kT)h1( t )dt (S.561)

is nonzero only when k = 1, in which case is reduces to [S1]2,1 = 3. So (S.560) is satisfied.

Problem 10-7. The proposition is true, it is a generalized version of Parseval’s relationshipapplied to MIMO filters. The proof is based on the inverse Fourier transform equation:

Bk = B(e jθ)e jkθ dθ . (S.562)

Making this substitution, the right-hand-side of the proposition reduces to:

|h1( t )|2dt∞–

∫h2( t )h1( t )dt*

∞–

h1( t )h2( t )dt*

∞–

∫|h2(t)|2dt

∞–

∫9 0

0 3

h1(t* – T)h1( t )dt∞–

∫h2(t* – T)h1( t )dt

∞–

h1(t* – T)h2( t )dt∞–

∫h2(t* – T)h2( t )dt

∞–

∫0 0

3 0

0 T 2Tt

0 Tt

T3---- 2T

3--------

3T

--------

3T

--------

2T

h1( t )

h2( t )

∞–

12π------

π–

π

Page 114: digital communication, barry lee solutions.pdf

s-112

Ak Bk* = Ak B*(e jθ)e–jkθ dθ

= Ake–jkθ B*(e jθ)dθ

= A(e jθ) B*(e jθ)dθ , (S.563)

which is the left-hand side of the proposition.

Problem 10-8. In terms of xk = [xk(1), xk

(2)]T, the given linear predictor can be written as:

k(1) = p1xk – 1

(1) + p2xk – 1(2)

= pTxk – 1 , (S.564)

where p = [p1, p2]T. The quantity to be predicted is xk(1), the first component of xk:

xk(1) = eTxk , (S.565)

where e = [1, 0]T. The mean-squared prediction error is thus:

MSE = E[( k(1) – xk

(1))2]

= E[(pTxk – 1 – eTxk)2]

= pTE[xk– 1xkT– 1]p + eTE[xkxk

T]e – 2pTE[xk–1xkT]e

= pTRx( 0 )p + eTRx( 0 )e – 2pTRx( – 1 )e . (S.566)

(We used Rx(d ) = E[xk + dxk*] from (10.6) or (10.9).)Taking the gradient with respect to p yields (see (9.21) and (9.22)):

∇pMSE = 2Rx(0)p – 2Rx( – 1 )e . (S.567)

The optimal (MMSE) prediction coefficients can be found by setting the gradient to zero:

p = Rx( 0 )–1Rx( – 1 )e . (S.568)

Since R( d ) is the coefficient of z–d in S( z ), the PSD given in (10.17) implies that:

Rx( 0 ) = , Rx( – 1 ) = . (S.569)

With these values, (S.568) reduces to:

p = = . (S.570)

It is instructive to compare the above “finite-tap” solution (where the number of predictioncoefficients is severely constrained) to the “infinite-tap” unrestricted linear predictor of(10.26). The unconstrained solution is given by (10.29), namely:

k ∞–=

∞∑ k ∞–=

∞∑ 12π------

π–

π

∫1

2π------

π–

π

∫ k ∞–=

∞∑

k ∞–=

∞∑

12π------

π–

π

∫ k ∞–=

∞∑

k ∞–=

∞∑

x

x

105 21

21 426

10 30

41 83

p1p2

0.0767

0.0925

Page 115: digital communication, barry lee solutions.pdf

s-113

P( z ) = I – M( z )–1, (S.571)

where from (10.18) M( z ) satisfies:

M( z )Γ = + z–1 ⇒ Γ = , (S.572)

so that:

M( z ) = . (S.573)

Therefore, in terms of the determinant D( z ) = detM( z ), the infinite-tap predictor is:

P(z) = I – M( z )–1

= –

=

= , (S.574)

where we exploited the fact that:

D( z ) = detM( z ) = 1 + 0.29z–1 – 0.01z–2 . (S.575)

Since 1 ⁄ D( z ) is a causal and monic filter, the first two prediction-filter coefficients are:

P0 = , P1 = . (S.576)

(The symbol × represents “don’t care”; it cannot be found by inspection, which would requireknowledge of the inverse transform of 1 ⁄ D( z ), but for our purposes its value doesn’t matter.)

We are now ready to compare: the finite-tap predictor in this problem forces all predictorcoefficients to be zero, except for the first row of P1. Comparing the first row of P1 to thefinite-tap solution, we see that the finite-tap solution is very nearly (but not exactly) equal tothe infinite-tap solution with the nonexistant taps are set to zero:

p = = ≈ first row of P1T = . (S.577)

Problem 10-9. The JML detector chooses (a1, a2)T to maximize the conditional pdff(r|a1, a2). It thus decides a1 = 1 when

maxf(r|1, 1), f(r|1, –1) > maxf(r|–1, 1), f(r|–1, –1) (S.578)

10 0

1 20

1 2

3 4

10 0

0 20

1 + 0.1z–1

1 + 0.2z–1

0.1z–1

0.1 + 0.3z–1

D( z )

D( z )

0

0

1D z( )------------

1 + 0.2z–1

1 + 0.1z–1

–0.1z–1

–0.1 – 0.3z–1

1D z( )------------

D( z ) – 1 – 0.2z–1

D( z ) –1 – 0.1z–1

0.1z–1

0.1 + 0.3z–1

1D z( )------------

0.09z–1 – 0.01z–2

0.19z–1 – 0.01z–2

0.1z–1

0.1 + 0.3z–1

1D z( )------------

0

0

0

0.1

0.09

0.19

0.1

×

p1p2

0.0767

0.0925

0.09

0.1

Page 116: digital communication, barry lee solutions.pdf

s-114

or equivalently, since the logarithm is monotonic:

= sign . (S.579)

In contrast, the IML detector chooses its decision for a1 to maximize

f(r|a1) = . (S.580)

In other words, the IML detector will choose according to the sign of log :

= sign . (S.581)

Comparing, we see that the IML and JML detectors can be put into the same form:

= sign , (S.582)

where hJML(x, y) = max(x, y) for the JML detector, and hIML(x, y) = x + y for the IMLdetector. But

f(r|a1, a2)=

=

= K , (S.583)

where K is a constant, independent of a1 and a2. Plugging this into (S.582) yields:

= sign

= sign , (S.584)

a1JML max f r 1 1,( ) f r 1 1–,( ),

max f r 1 1,–( ) f r 1 1–,–( ), -----------------------------------------------------------------------------------log

f r a1 1,( ) f r a1 1–,( )+

2-------------------------------------------------------------

a1f r a1= 1( )

f r a1 1–=( )---------------------------------

a1IML f r 1 1,( ) f r 1 1–,( )+

f r 1 1,–( ) f r 1 1–,–( )+------------------------------------------------------------------log

a1h f r 1 1,( ) f r 1 1–,( ),

h f r 1 1,–( ) f r 1 1–,–( ), -------------------------------------------------------------------------log

12πσ2-------------e

1–2σ2---------- r a1h1– a2h2– 2

12πσ2-------------e

1–2σ2---------- r 2 h1

2 h22+ +( )

e1σ2------ a1y1 a2y2 a1a2ρ–+( )

e1–

2σ2---------- 2a1y1– 2a2y2– 2a1a2ρ+( )

a1

h K e1σ2------ y1 y2 ρ–+( )

K e1σ2------ y1 y2– ρ+( )

,

h K e1σ2------ y1– y2 ρ+ +( )

K e1σ2------ y1– y2– ρ–( )

,

--------------------------------------------------------------------------------------------------log

K ey1

σ2------

h e1σ2------ y2 ρ–( )

e1–

σ2------ y2 ρ–( )

,

K ey– 1

σ2---------

h e1σ2------ y2 ρ+( )

e1–

σ2------ y2 ρ+( )

,

------------------------------------------------------------------------------log

Page 117: digital communication, barry lee solutions.pdf

s-115

where we used the fact that h(Ax, Ay) = Ah(x, y) for both definitions of h( ⋅ , ⋅ ) and for anyconstaint A. The above equation simplifies to:

= sign . (S.585)

(a) For the special case of JML, where hJML(x, y) = max(x, y), this reduces to:

= sign

= sign

= sign y1 + |y2 – ρ| – |y2 + ρ| ,

= sign y1 + g0(y2 – ρ) – g0(y2 + ρ) , (S.586)

where we have introduced the nonlinearity g0(x) = |x|.

For the special case of IML, where hIML(x, y) = x + y, (S.585) reduces to:

= sign

= sign

= sign y1 + logcosh – logcosh ,

= sign y1 + gσ(y2 – ρ) – gσ(y2 + ρ) , (S.587)

where gσ(x) = σ2logcosh(x ⁄ σ2).

a1 y1σ2

2------

h e

1σ2------ y2 ρ–( )

e1–

σ2------ y2 ρ–( )

,

h e1σ2------ y2 ρ+( )

e1–

σ2------ y2 ρ+( )

,

--------------------------------------------------------------log+

a1JML y1

σ2

2------

max e

1σ2------ y2 ρ–( )

e1–

σ2------ y2 ρ–( )

,

max e1σ2------ y2 ρ+( )

e1–

σ2------ y2 ρ+( )

,

----------------------------------------------------------------------log+

y1σ2

2------

e

y2 ρ–

σ2---------------

ey2 ρ+

σ2---------------

-----------------log+

1

2--- 1

2---

1

2--- 1

2---

a1IML y1

σ2

2------

e1σ2------ y2 ρ–( )

e1–

σ2------ y2 ρ–( )

+

e1σ2------ y2 ρ+( )

e1–

σ2------ y2 ρ+( )

+

-------------------------------------------------------log+

y1σ2

2------

2

y2 ρ–

σ2---------------

cosh

2y2 ρ+

σ2---------------

cosh

--------------------------------------log+

σ2

2-------

y2 ρ–

σ2---------------

σ2

2-------

y2 ρ+

σ2---------------

1

2--- 1

2---

Page 118: digital communication, barry lee solutions.pdf

s-116

(b) The function gσ(x) is defined by:

gσ(x) = σ2log

= σ2log . (S.588)

But goes to zero as σ → 0, so that:

gσ(x) = σ2log – σ2log(2)

→ σ 2log = log

= log = |x|. (S.589)

It follows that and usually agree at high SNR.

(c) When y2 = ρ = σ = 1, the JML and IML detectors reduce to:

= sign y1 – 1 , (S.590)

= sign y1 + g1(0) – g1(2)

= sign y1 – 0.6625 . (S.591)

So any y1 satisfying g1(2) ≈ 0.6625 < y1 < 1 will result in = 1 but = –1.

Problem 10-10. Let P be the identity matrix with the first and i-th columns swapped, so that

H = , HP = . (S.592)

Then applying the Gram-Schmidt orthonormalization procedure to the columns of HP, inreverse order, yields:

HP = QG, (S.593)

where the columns of Q are an orthonormal basis for the span of hj, and where G is a lowertriangular matrix of expansion coefficients. In particular, at the very last step of the GSprocedure, the first column of Q is found by normalizing the projection error when the firstcolumn of HP is projected onto the subspace spanned by the other columns:

q1 = . (S.594)

Solving this equation for hi gives:

hi = q1 + . (S.595)

ex σ2⁄ e x σ2⁄–+2

-----------------------------------

e x σ2⁄ e x σ2⁄–+2

----------------------------------------

e x– σ2⁄

ex σ2⁄ e x σ2⁄–+( )

e x σ2⁄( ) e x σ2⁄( )σ2

e x( )

a1IML a1

JML

a1JML

a1IML

1

2--- 1

2---

12--- a1

IML a1JML

h1 h2 hN…hi… hi h2 hN…h1…

hi hi–

hi hi–----------------------

hi hi– hi

Page 119: digital communication, barry lee solutions.pdf

s-117

This implies that G1,1 = . (The matrix G contains expansion coefficients, so that G1,1is the coefficient of q1 when hi represented as a linear combination of the columns of Q.) Now

R–1= (H*H)–1

= ((QGP*)*(QGP*))–1

= (PG*Q*QGP*)–1

= (PG–1)(PG–1)* . (S.596)

We are interested in the (i, i)th entry of this matrix. Since this matrix is in the form FF*, the(i, i)th entry will be the squared norm of the cc-th row of F, namely:

(R–1)i,i = || fi ||2 (S.597)

where fi denotes the i-th row of F = PG–1. But this is just the first row of G–1, a lowertriangular matrix (because G is lower triangular), so its first row will be:

fi = [1 ⁄ G1,1, 0, 0 … 0]. (S.598)

(The nonzero entry ensures that GG–1 = I.) Thus,

(R–1)i,i = || fi ||2 = = , (S.599)

the desired result.

Problem 10-11. We start with the trivial equality:

H*HH* + N0H* = H*HH* + N0H* (S.600)

The fact that matrix multiplication is associative allows us to put parenthesis in different placeson the left-hand side and right-hand side, yielding:

H*(HH* + N0I) = (H*H + N0I)H* (S.601)

Right-multiplying both sides of the equation by (HH* + N0I)–1, and left-multiplying bothsides by (H*H + N0I)–1, yields:

(H*H + N0I)–1H* = H*(HH* + N0I)–1 , (S.602)

the desired result.

hi hi–

1G1 1,

2------------ hi hi– 2–

Page 120: digital communication, barry lee solutions.pdf

s-118

Problem 10-12. We begin with the channel model r = Ha + w, where E[ww*] = σ2I. Letci denote the i-th row of C = (H*H)–1H*, the pseudoinverse of H. Then:

y = Cr = a + n, (S.603)

where n = Cw is correlated noise, and where the i-th noise component is ni = ciw.

(a) the optimal choice for i1 minimizes the variance of ni1:

E[|ni1|2] = tr E[ci1ww*ci1*]

= tr ci1E[ww*]ci1*

= tr ci1σ2Ici1*

= σ2 tr ci1ci1*

= σ2|| ci1 ||2 . (S.604)

So the best choice for i1 is the index of the pseudoinverse row with minimum norm.

(b) The best choice for i2 is the one that minimizes the variance of the noise at the input to the second slicer, assuming that the first decision was correct. From Fig. 10-31, the predicted value of the second noise component is:

n̂i2 = p2,1ni1 = p2,1ci1w . (S.605)

The variance of the prediction error is the variance of the noise at the input to the second slicer, namely:

E[ |ni2 – n̂i2|2 ] = E[ |ci2w – p2,1ci1w|2 ]

= E[ |(ci2 – p2,1ci1)w|2 ]

= σ2|| ci2 – p2,1ci1 ||2 . (S.606)

An optimal choice for the prediction coefficient will force p2,1ci1 to be as close to ci2 as possible, in a Euclidean distance sense. Therefore, the best choice for i2 satisfies (10.160).

(c) The best choice for ik minimizes the variance at the input to the k-th slicer, assuming previous decisions are correct. Specifically, the best choice for ik minimizes:

E[ |nik – n̂ik|2 ] , (S.607)

where

n̂ik = ∑j < k pk, j nij . (S.608)

Substituting (S.608) into (S.607) yields:


E[ |nik – n̂ik|2 ] = E[ |nik – ∑j < k pk, j nij|2 ]

= E[ |(cik – ∑j < k pk, j cij)w|2 ]

= σ2|| cik – ∑j < k pk, j cij ||2

= σ2|| cik – ĉik ||2 , (S.609)

where we have introduced

ĉik = ∑j < k pk, j cij . (S.610)

For a given ik, the prediction coefficients pk,j are chosen to minimize (S.609). In other words, they are chosen so that ĉik is as close to cik as possible, which is achieved when ĉik is the projection of cik onto the linear subspace spanned by ci1, … cik – 1. Therefore, (10.161) follows.


Problem 10-13. The QR decomposition of the matrix H with its columns in the natural order yields a G matrix whose first diagonal element is G11, which is positive and real. If y1 is the first output of the whitened matched filter (10.129), then it may be expressed as: y1 = G11a1 + n1, where n1 is a Gaussian random variable with variance σ2 = 1. The probability that the best-first sphere detector extends the node that does not belong to the transmitted path can be expressed as:

Pe1 = Pr[ | y1 – G11a1 |2 > | y1 + G11a1 |2 ]. (S.611)

After substituting for y1, this probability becomes:

Pr[ |n1|2 > |n1 + 2G11a1|2 ] = Pr[ G11 + n1a1 < 0 ] = Pr[ n1 > G11 ] = Q(G11) . (S.612)

(a) From Example 10-24, G11 = 0.77 with the natural ordering. Therefore, Pe1(a) = 0.22.

(b) From Example 10-24, G11 = 1.22 with the BLAST ordering. Therefore, Pe1(b) = 0.11, about half of that from part (a).

(c) First of all let’s assume that the decision reached by the best-first sphere detector is the correct decision vector. The complexity of the sphere detector is directly related to how many nodes are visited by the search algorithm, and indirectly related to how quickly the cost threshold J is reduced.

If the best-first sphere detector extends to the correct node at the first stage, then it will search the correct side of the tree first, thereby establishing a low cost threshold J for its tentative decision. With this small cost threshold, searching the wrong side of the tree becomes less complex since many of its nodes will have costs greater than J, allowing them and their subbranches to be pruned.

From part (a), the best-first sphere detector is half as likely to extend the wrong node at the first stage when the BLAST ordering is used instead of the natural ordering. Therefore, with the BLAST ordering, the best-first sphere detector is half as likely to waste unnecessary computations searching the wrong side of the tree.
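These two probabilities follow from the standard Q-function, Q(x) = erfc(x ⁄ √2) ⁄ 2; a quick numerical check:

```python
from math import erfc, sqrt

# Verify the two first-stage error probabilities Pe1 = Q(G11) quoted above.
def Q(x):
    return 0.5 * erfc(x / sqrt(2))

assert round(Q(0.77), 2) == 0.22   # natural ordering
assert round(Q(1.22), 2) == 0.11   # BLAST ordering
```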


Chapter 11

Problem 11-1. Starting with the MISO model of (11.36) and (11.37):

r = Ha + n = Hwx + n , (S.613)

the instantaneous SNR per receive antenna is

SNR = E[ |Hwx|2 ] ⁄ E[ |n|2 ] = |Hw|2E ⁄ N0 , (S.614)

where E = E[|x|2] is the average symbol energy. Application of the Cauchy-Schwarz inequality yields:

SNR ≤ || H ||2|| w ||2E ⁄ N0 = || H ||2E ⁄ N0 , (S.615)

where the equality follows from the constraint that || w || = 1. Thus, regardless of how we choose w, the SNR cannot exceed the right-hand side of (S.615). But we can achieve this maximum by setting w = H* ⁄ || H ||, which follows from the Cauchy-Schwarz inequality, as verified below:

SNR = |Hw|2E ⁄ N0 = |HH* ⁄ ||H|| |2E ⁄ N0 = || H ||2E ⁄ N0 . (S.616)

Note that the solution is not unique; setting w = e jφH* ⁄ || H || for any angle φ will also maximize SNR.
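A numerical check of this beamforming argument, with an arbitrary random channel (the dimension and seed are illustrative):

```python
import numpy as np

# For a 1xn MISO channel H, the unit-norm beamformer w = H*/||H|| attains
# SNR = ||H||^2 E/N0 per (S.616), and no other unit-norm w does better.
rng = np.random.default_rng(2)
H = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # row channel
E_over_N0 = 1.0

w_opt = H.conj() / np.linalg.norm(H)
snr_opt = abs(H @ w_opt)**2 * E_over_N0
assert np.isclose(snr_opt, np.linalg.norm(H)**2 * E_over_N0)

for _ in range(100):                      # random unit-norm competitors
    w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    w /= np.linalg.norm(w)
    assert abs(H @ w)**2 * E_over_N0 <= snr_opt + 1e-12
```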

Problem 11-2. Observe that both x = ½[1, –1, 1, –1]T and y = ½[1, 1, –1, –1]T are unit-length, satisfying || x || = || y || = 1. Furthermore, they are orthogonal, since y*x = ∑i xiyi* = 0. Therefore, x and y form an orthonormal basis for their span. The projection of n onto this subspace is then:

n̂ = x(x*n) + y(y*n) = QQ*n , (S.617)

where Q = [ x, y ]. Hitting both sides on the left by Q* yields:

Q*n̂ = Q*n . (S.618)

Since the components of n are i.i.d. CN(0, N0), the components of u = [u1, u2]T = Q*n are zero mean, jointly Gaussian, and correlated according to:

Ruu = E[uu*] = Q*E[nn*]Q = Q*N0IQ = N0Q*Q

E Hw 2 x 2[ ]E n 2[ ]

-----------------------------------

H*H

---------

12--- 1

2---

n

n

n


= N0I . (S.619)

This implies that u1 and u2 are i.i.d. CN(0, N0), just like the components of n. It follows that:

|| n̂ ||2 = || u1x + u2y ||2

= || u1x ||2 + || u2y ||2 + 2Reu1u2*y*x = |u1|2 + |u2|2 = || u ||2 . (S.620)

Since u1 and u2 are i.i.d. CN(0, N0), it follows that || n̂ ||2 is a central chi-squared random variable with mean 2N0 and with four degrees of freedom. Its pdf is given by:

f||n̂||2(x) = (x ⁄ N02) e–x ⁄ N0 u(x) . (S.621)

Problem 11-3. Equation (11.24) gives the following expression for the BER:

BER = (1 ⁄ 2) ∑k = 0..m (–1)k C(m, k) (1 + k ⁄ (Eb ⁄ N0))–1 ⁄ 2 . (S.622)

Using the expansion for (1 + x)–1 ⁄ 2, with x = k ⁄ (Eb ⁄ N0), yields:

(1 + x)–1 ⁄ 2 = ∑l = 0..∞ ((2l – 1)!! ⁄ (2l)!!)(–x)l = ∑l = 0..∞ ((2l – 1)!! ⁄ (2l)!!)(–k ⁄ (Eb ⁄ N0))l . (S.623)

Substituting into (11.24), we get

BER = (1 ⁄ 2) ∑k = 0..m (–1)k C(m, k) ∑l = 0..∞ ((2l – 1)!! ⁄ (2l)!!)(–k ⁄ (Eb ⁄ N0))l

= (1 ⁄ 2) ∑l = 0..∞ ((2l – 1)!! ⁄ (2l)!!)(–Eb ⁄ N0)–l ∑k = 0..m (–1)k C(m, k) kl . (S.624)

At high SNR, the infinite sum in the above expression can be approximated by truncating to the first m terms, since (Eb ⁄ N0)–l → 0 as l → ∞:

BER ≈ (1 ⁄ 2) ∑l = 0..m ((2l – 1)!! ⁄ (2l)!!)(–Eb ⁄ N0)–l ∑k = 0..m (–1)k C(m, k) kl . (S.625)

But the second summation can be written as follows

∑k = 0..m (–1)k C(m, k) kl = 0 for l ∈ {0, 1, …, m – 1}, and = (–1)m m! for l = m . (S.626)

Hence, the BER simplifies:


BER ≈ (1 ⁄ 2)((2m – 1)!! ⁄ (2m)!!) m! (Eb ⁄ N0)–m (S.627)

or equivalently:

BER ≈ Am ⁄ (Eb ⁄ N0)m , (S.628)

where

Am = (1 ⁄ 2)((2m – 1)!! ⁄ (2m)!!) m! = (1 ⁄ 2)(2m – 1)!! ⁄ 2^m . (S.629)

Equivalently,

Am = (m – 1 ⁄ 2)Am – 1 with A1 = 1 ⁄ 4 . (S.630)
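The recursion and the closed form for Am can be cross-checked numerically; the range of m below is an arbitrary choice:

```python
from math import factorial

# Check that Am = (1/2)(2m-1)!!/2^m satisfies Am = (m - 1/2) A_{m-1}
# with A1 = 1/4, as in (S.629)-(S.630).
def double_fact_odd(m):          # (2m-1)!! = (2m)! / (2^m m!)
    return factorial(2 * m) // (2**m * factorial(m))

def A(m):
    return 0.5 * double_fact_odd(m) / 2**m

assert A(1) == 0.25
for m in range(2, 8):
    assert abs(A(m) - (m - 0.5) * A(m - 1)) < 1e-12
```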

Problem 11-4.

(a) The MRC receiver processing is given by

y = H*r = H*(Ha + n). (S.631)

The instantaneous SNR per receive antenna per bit is:

SNRbit = |H*H|2Eb ⁄ (|| H* ||2N0) = || H ||2Eb ⁄ N0 = (Eb ⁄ N0) ∑i = 1..m |hi|2 . (S.632)

The average post-detection SNR is:

E[SNRbit] = E[ (Eb ⁄ N0) ∑i = 1..m |hi|2 ] = (Eb ⁄ N0) ∑i = 1..m E[|hi|2] = (Eb ⁄ N0) ∑i = 1..m 1 = mEb ⁄ N0 . (S.633)

Hence, the average post-detection SNR is increased m-fold using MRC combining.

(b) For selection combining, the instantaneous SNR per receive antenna per bit is given by

SNRbit = (Eb ⁄ N0) maxi |hi|2 . (S.634)

The average post-detection SNR is:

E[SNRbit] = (Eb ⁄ N0) E[ maxi |hi|2 ] . (S.635)

The probability density function of X = maxi |hi|2 is given by (11.22):


fX(x) = ∑k = 1..m (–1)k – 1 C(m, k) k e–kx u(x) . (S.636)

Thus, the average post-detection SNR can be computed by integrating:

E[SNRbit] = (Eb ⁄ N0) ∫0..∞ x fX(x) dx

= (Eb ⁄ N0) ∫0..∞ x ∑k = 1..m (–1)k – 1 C(m, k) k e–kx dx (S.637)

= (Eb ⁄ N0) ∑k = 1..m (–1)k – 1 C(m, k) [ k ∫0..∞ x e–kx dx ] . (S.638)

But the integral inside the brackets is the mean value of an exponential random variable with mean 1 ⁄ k. (Or we can just do the integration directly, it’s simple.) Thus, we have:

E[SNRbit] = (Eb ⁄ N0) ∑k = 1..m (–1)k – 1 C(m, k) (1 ⁄ k)

= (Eb ⁄ N0) ∑k = 1..m 1 ⁄ k . (S.639)

Hence, selection combining increases the average post-detection SNR by a factor of ∑k = 1..m 1 ⁄ k.
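The last step of (S.639) uses the identity ∑k (–1)k – 1 C(m, k) ⁄ k = ∑k 1 ⁄ k, which can be checked numerically:

```python
from math import comb

# Check that the alternating binomial sum equals the harmonic number H_m.
for m in range(1, 10):
    alt = sum((-1)**(k - 1) * comb(m, k) / k for k in range(1, m + 1))
    harmonic = sum(1 / k for k in range(1, m + 1))
    assert abs(alt - harmonic) < 1e-9
```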


Problem 11-5. The starting point is the narrowband two-input one-output channel, with gains h1 and h2 from the two transmit antennas to the single receive antenna.

The received sample is:

r = h1a1 + h2a2 + n , (S.640)

where n ~ CN(0, N0). The premise of this problem is that the transmitter knows the gains h1 and h2, and it transmits a single 4-QAM symbol a ∈ ±1 ± j across the antenna with the higher gain, and transmits nothing across the antenna with the lower gain. With this transmitter strategy, the received sample reduces to:

r = himaxa + n , (S.641)

where imax = arg maxi |hi|. This is exactly the same as the simplified channel model after selection combining at the receiver; see (11.18). Antenna selection at the transmitter leads to exactly the same channel model as antenna selection at the receiver. So we need not analyze this anew, we can reuse the results from section 11.3.2 directly. In particular, the average bit-error rate at high SNR is given by (11.25) with m = 2, namely:

BER ≈ (3 ⁄ 8) ⁄ (Eb ⁄ N0)2 . (S.642)

Problem 11-6. No changes would be needed. From the perspective of the receiver, a second antenna — transmitting a delayed version of the signal transmitted by the first antenna — would appear as an extra multipath component. And an OFDM receiver is already well-equipped to mitigate multipath-induced ISI. The only constraint is that the delay between antenna one and antenna two be small enough so that, when combined with the delay spread of the channel, it does not exceed the cyclic prefix length.


Problem 11-7. All of the space-time codewords have two rows, which implies that there are two transmit antennas, and they all have two columns, which implies that the duration of each codeword is two signaling intervals. If m denotes the number of receive antennas, then a space-time code achieves “full” diversity when the maximum diversity order 2m is achieved.

(a) To show that a given space-time code does not achieve full diversity, it is sufficient to show that it violates the rank criterion of section 11.5.4; i.e., that there exist two valid input vectors x and x′ for which the difference matrix A(x) – A(x′) is not full rank.

• A1 is not full diversity. It fails the rank criterion test when x = [1 + j, 1 + j]T and x′ = [–1 + j, –1 + j]T:

A1(x) – A1(x′) = [ 1+j  1–j ; 1+j  1+j ] – [ –1+j  –1–j ; –1+j  –1+j ] = [ 2  2 ; 2  2 ] , (S.643)

which has rank one.

• A2 is not full diversity. It fails the rank criterion test when x = [1 + j, 1 + j]T and x′ = [–1 + j, 1 – j]T:

A2(x) – A2(x′) = [ 1+j  1+j ; 1+j  –1+j ] – [ –1+j  1–j ; 1–j  1+j ] = [ 2  2j ; 2j  –2 ] , (S.644)

which has rank one.

• A3 is full diversity. Observe that A3 = A*, where A is the Alamouti space-time encoder of section 11.5.3. Since A satisfies the rank criterion, so too does A3. (The Hermitian transpose operator does not change the rank of a matrix.)

• A4 is not full diversity. It fails the rank criterion test when x = [1 + j, 1 – j]T and x′ = [–1 + j, –1 – j]T:

A4(x) – A4(x′) = [ 1+j  1–j ; 1–j  1+j ] – [ –1+j  –1–j ; –1–j  –1+j ] = [ 2  2 ; 2  2 ] , (S.645)

which has rank one.

(b) Only A3 achieves full diversity, so we need only find the coding gain for A3. The coding gain of a full-rank code is the smallest (for all i ≠ j) geometric mean of the eigenvalues of the Hermitian matrix (Ai – Aj)(Ai – Aj)*. In Example 11-5 it was demonstrated that the coding gain of the Alamouti code is 1. Since A3 is the conjugate transpose of the Alamouti code, and since the conjugate transpose does not change the eigenvalues of a Hermitian matrix, it follows that the coding gain of A3 is also 1.
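The three rank-one claims can be confirmed numerically from the difference matrices computed above:

```python
import numpy as np

# Rank check of the difference matrices (S.643)-(S.645); each has rank
# one, so A1, A2, and A4 fail the rank criterion.
diffs = [
    np.array([[2, 2], [2, 2]], dtype=complex),       # (S.643)
    np.array([[2, 2j], [2j, -2]], dtype=complex),    # (S.644)
    np.array([[2, 2], [2, 2]], dtype=complex),       # (S.645)
]
for D in diffs:
    assert np.linalg.matrix_rank(D) == 1
```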


Chapter 12

Problem 12-1.

(a) There are only two codewords 00…0 and 11…1, which have distance n, so dH,min = n.

(b) The (7, 4) Hamming code has dH,min = 3, and its rate is 4 ⁄ 7. A minimum distance of dH,min = 3 in a repetition code requires n = 3, which has rate 1 ⁄ 3, considerably worse than 4 ⁄ 7.

Problem 12-2.

(a) For all c ∈ C, cHT = 0. The product cHT is a linear combination of rows of HT, or columns of H. Hence the minimum number of columns of H that can be added to produce 0 is

min c ∈ C, c ≠ 0 wH(c) , (S.613)

which equals dH,min.

(b) From part (a), no linear combination of dH,min – 1 or fewer columns of H can be zero, so H has rank at least dH,min – 1. Since H has dimension (n – k) × n, its rank cannot be larger than n – k, so

dH,min – 1 ≤ n – k , (S.614)

from which the result follows.

Problem 12-3.

(a) Reverse the order of the first three columns of G.

(b) The resulting parity-check matrix is:

H = [ 1 0 1 0 0 1 0
      1 1 1 0 1 0 0
      0 1 1 1 0 0 0
      1 1 0 0 0 0 1 ] , (S.615)


(c) In each case the most likely error pattern ê is the one with the smallest Hamming weight among those whose syndrome matches the observed syndrome. For those entries with two or three one bits in ê, the choices for ê may not be unique. Syndromes for which this is the case correspond to errors that cannot be reliably corrected, but can be detected.

(d) This code is equivalent to the dual of the (7, 4) Hamming code.

(e) dH,min = 4, so only one bit error can be reliably corrected.

(f) c = 1010011.

Problem 12-4. In this case, m = n – k = 4, and the parity-check matrix has as columns all possible 4-bit patterns except the all-zero pattern. To get it into systematic form, simply arrange the columns so that the last four columns form an identity matrix. The first 11 columns can appear in any order.

(a) One example of a systematic parity-check matrix:

H = [ 1 1 0 1 0 0 0 1 1 1 1 1 0 0 0
      1 1 0 1 1 1 1 1 0 0 0 0 1 0 0
      1 1 1 0 0 1 1 0 0 1 1 0 0 1 0
      1 0 1 1 1 0 1 0 1 0 1 0 0 0 1 ] . (S.616)

Since this is in the form H = [P, I], a valid generator matrix is G = [I, –PT]:

G = [ 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1
      0 1 0 0 0 0 0 0 0 0 0 1 1 1 0
      0 0 1 0 0 0 0 0 0 0 0 0 0 1 1
      0 0 0 1 0 0 0 0 0 0 0 1 1 0 1
      0 0 0 0 1 0 0 0 0 0 0 0 1 0 1
      0 0 0 0 0 1 0 0 0 0 0 0 1 1 0
      0 0 0 0 0 0 1 0 0 0 0 0 1 1 1
      0 0 0 0 0 0 0 1 0 0 0 1 1 0 0
      0 0 0 0 0 0 0 0 1 0 0 1 0 0 1
      0 0 0 0 0 0 0 0 0 1 0 1 0 1 0
      0 0 0 0 0 0 0 0 0 0 1 1 0 1 1 ] . (S.617)


(b) The parity-check matrix is

H = [ 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
      0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
      0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
      1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 ] . (S.618)

If e is all zero except for a 1 in position i, then eHT will be the transpose of the i-th column of H, which from (S.618) is a binary representation of i.
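This syndrome property is easy to verify by constructing H directly (a sketch; the bit ordering with the MSB in the top row matches (S.618)):

```python
import numpy as np

# Column i of H is the binary representation of i, so the syndrome eH^T
# of a single-bit error in position i reads out i itself.
H = np.array([[(i >> b) & 1 for i in range(1, 16)]
              for b in (3, 2, 1, 0)])            # 4x15, MSB in top row

for i in range(1, 16):
    e = np.zeros(15, dtype=int)
    e[i - 1] = 1
    syndrome = (e @ H.T) % 2                     # 4-bit vector
    assert int("".join(map(str, syndrome)), 2) == i
```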

Problem 12-5. This problem was solved in Example 12-11. From (12.31), the asymptotic coding gain is Rdmin = (11 ⁄ 15) × 3 = 33 ⁄ 15 = 2.2, or 3.42 dB. In reality, from Fig. 12-7, we see that the coding gain is only 2.6 dB at a bit-error probability of 10–5.

Problem 12-6. The rate is R = 4 ⁄ 15, and the minimum Hamming distance is dmin = 8, so that the asymptotic coding gain with soft decoding is:

Rdmin = (4 ⁄ 15) × 8 = 32 ⁄ 15 ≈ 2.13, or 3.29 dB. (S.619)
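The dB conversions quoted in Problems 12-5 and 12-6 can be checked with a couple of lines:

```python
from math import log10

# Asymptotic coding gains R * dmin, converted to dB.
gain_12_5 = (11 / 15) * 3          # Problem 12-5: 2.2
gain_12_6 = (4 / 15) * 8           # Problem 12-6: 32/15

assert round(10 * log10(gain_12_5), 2) == 3.42
assert round(10 * log10(gain_12_6), 2) == 3.29
```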

Problem 12-7.

(a) The rate-2 ⁄ 3 system is described by the following equations

C(1)(D) = B(1)(D) ⊕ B(2)(D)(D ⊕ D2) (S.620)

C(2)(D) = B(1)(D)D ⊕ B(2)(D)(1 ⊕ D2) (S.621)

C(3)(D) = B(2)(D)D . (S.622)

These equations can be manipulated to get

0 = C(1)(D)D2 ⊕ C(2)(D)D ⊕ C(3)(D)(1 ⊕ D3) , (S.623)

which means that the parity-check matrix is

H(D) = [D2, D, 1 ⊕ D3] . (S.624)

(b) A similar technique can be used to derive the parity-check matrix, showing that it is the same.

Problem 12-8. The condition mk = 0 for k < 0 implies that the starting state is the (0, 0) state. The condition mk = 0 for k ≥ 3 implies that the fifth state and beyond are zero. Hence


the trellis is shown in the following figure:

The transitions are labeled with their weights, which are the Hamming distances from the observation bit pairs. The minimum path metric corresponds to the bold path. Without errors introduced by the BSC, the observation would have been 1,1,0,1,1,1,0,0,0,0,…, implying that two bit errors were made. The decision is 1,0,0,….

Problem 12-9.

(a) The state transition diagram and trellis are shown in the following figure:

The error event with the minimum Hamming distance is again an event with length K = 2, as shown in the following figure:


The minimum Hamming distance is therefore three. Assume a BSC with crossover probability p. The probability of the error event is

C(3, 2) p2(1 – p) + p3 = 3p2(1 – p) + p3 . (S.625)

For p small this is approximately 3p2, which is significantly worse than the probability computed in (12.86), which for small p is approximately 10p3.

(b) The trellis is shown below re-labeled with the binary antipodal outputs.

The minimum distance error event is shown below.

The ML soft decoder therefore has dmin = √3 ds, where ds = 2a. Hence, assuming the probability of error is dominated by this event,

Pr[error event] ≈ Q(√3 a ⁄ σc) , (S.626)

where σc2 is the variance of the noise of the coded system, which is twice the variance of the noise of the uncoded system, σc2 = 2σu2. Hence, the coded system is 10log10(3 ⁄ 2) ≈ 1.8 dB better than the uncoded system. This is a full 2.2 dB worse than the encoder of Fig. 12-10(a). This is not surprising because the minimum distance is far worse.


Problem 12-10. One stage of the trellis is shown in Fig. S-9.

(a) The minimum Hamming distance error event has distance 3, by inspection of Fig. S-9a. Following the development of (12.86) we get that the probability of this error event is bounded by

Pr[this error event] ≤ C(3, 2) p2(1 – p) + p3 = 3p2(1 – p) + p3 . (S.627)

(b) By inspection of Fig. S-9b we see that the minimum Euclidean distance error event has distance 2√3 a. Hence

Pr[this error event] ≈ Q(2√3 a ⁄ (2σc)) = Q(√3 a ⁄ (√2 σu)) . (S.628)

The uncoded system has probability of error

Pr[error, uncoded] = Q(a ⁄ σu) , (S.629)

so the coding gain is approximately

20log10 √(3 ⁄ 2) ≈ 1.8 dB . (S.630)

Note that this is the same coding gain achieved by the coder in Fig. 12-37 with a soft decoder. This coder is simpler, however, since its trellis has only two states.

Problem 12-11. The code is not linear because it does not include the zero vector. (A linearcode is by definition closed under addition, and adding any codeword to itself produces thezero vector.)

Fig. S-9. One stage of the trellis for the coder in Fig. 12-38. In (a) it is labeled with the binary output. In (b) it is labeled with the channel symbols, assuming binary antipodal signaling.

(1, 01)

(0, 00)

(0,10)(1,11)

(1, [–a, +a])

(0, [–a, –a])

(0,[+a,–a])(1,[+a,+a])

32

33

3

2 3a2σc

--------------- 2 3a

2 2σu------------------

aσc-----

32

-------

Page 137: digital communication, barry lee solutions.pdf

s-129

Problem 12-12. The length is n = 7, the dimension is k = 3, the minimum Hamming distance is dH,min = 4, and the codewords are:

0000000, 1110100, 0111010, 1101001, 1001110, 0011101, 1010011, 0100111.

Note that all of the non-zero codewords have the same weight.

Problem 12-13. Suppose that a message bit of m = 0 gets mapped to the symbols a0 = a1 = a2 = –c, while m = 1 gets mapped to a0 = a1 = a2 = +c.

Consider first the soft (ML) decoder. It decides m̂soft = 1 if and only if ||r – s||2 < ||r + s||2, where s = [c, c, c]T. But:

||r – s||2 – ||r + s||2 = ||r||2 + ||s||2 – 2c∑iri – (||r||2 + ||s||2 + 2c∑iri) = –4c∑iri . (S.631)

Therefore, the soft decoder decides m̂soft = 1 if and only if ∑iri > 0.

On the other hand, the hard decoder decides m̂hard = 0 if and only if at least two of the three observations are negative. Therefore, we can achieve m̂soft = 1 and m̂hard = 0 by choosing:

r0 = –0.1, r1 = –0.1, r2 = +0.3 . (S.632)

The sum is positive, even though the majority are negative.
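A two-line check of this soft/hard disagreement, using the decision rules derived above:

```python
# The observation (S.632) on which the two decoders disagree: the sum is
# positive (soft decodes 1) while two of three samples are negative
# (hard majority decodes 0).
r = [-0.1, -0.1, 0.3]

m_soft = 1 if sum(r) > 0 else 0
m_hard = 1 if sum(1 for ri in r if ri > 0) >= 2 else 0

assert m_soft == 1 and m_hard == 0
```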

Problem 12-14. The generator matrix has one row and n = 2^(µ+1) – 1 columns:

G(D) = [1, D, 1 + D, D2, 1 + D2, … , 1 + D + D2 + … + Dµ]

= [1 0 1 0 1 …] + [0 1 1 0 0 1 1 …]D + … + [0 0 0 … 0 1 1 1 1 … 1]Dµ

= [1, D, D2, … , Dµ]H , (S.633)

where we recognize H as the parity-check matrix for a (n = 2^(µ+1) – 1, k = 2^(µ+1) – µ – 2) Hamming code. Among other things, this implies that the rows of H are linearly independent. Therefore, the k-th output block yk (with dimension 1 × n) of this convolutional encoder will be a linear combination of the rows of H. In other words, it will be in the dual space to the (n,


k) Hamming code. Clearly, the way the rows of H were constructed implies that they all have the same Hamming weight, namely 2^µ. In fact, any nonzero linear combination of these rows will also have a Hamming weight of 2^µ. (This is a well-known property: the nonzero vectors in the dual space of a Hamming code have the same weight.)

The only way to get an output block of zero is if all of the memory elements are zero; this is a consequence of the linear independence of the rows of H. In other words, the only state transition that produces an all-zero block is the self loop from state zero back to state zero. Every one of the other transitions in the encoder state diagram produces an output block of Hamming weight 2^µ. This fact implies that the smallest-weight convolutional codeword will be due to the impulsive message mi = δi. In particular, it will produce µ + 1 nonzero output blocks, all of weight 2^µ. Therefore, the minimum distance of this convolutional code is dmin = (µ + 1)2^µ.

Problem 12-15. The encoder G(D) = [1 + D + D2, 1 + D + D2, 1 + D + D2, 1 + D2] is as sketched below:

It has two memory elements, and thus four states. One stage of the trellis is shown below:

On the left, the transitions are labeled by the output block [ck(0) ck(1) ck(2) ck(3)]. On the right, the labels are the Hamming weight of the output block.


To find the minimum distance, we can apply the Viterbi algorithm to the following semi-infinite trellis:

The metric for the initial branch from state zero to state zero at stage zero is set to infinity. All remaining branch metrics are the Hamming weight of the corresponding output block. The goal is to find the path back to state zero with the smallest path metric. We can abort the Viterbi algorithm after stage 8, since at that point all of the survivors for the nonzero states have a larger survivor metric than the survivor metric for the zero state. Therefore, the highlighted path identifies the minimum-Hamming-weight codeword. The minimum distance is dmin = 10. Interestingly, the minimum-weight codeword is produced by a message [11000…], not the impulsive message. (The impulsive message produces a codeword with Hamming weight 4 + 3 + 4 = 11.)
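The claim dmin = 10 can be confirmed by brute force, multiplying short message polynomials by the four generators over GF(2); representing polynomials as bit masks is an implementation choice:

```python
# Brute-force dmin for G(D) = [1+D+D^2, 1+D+D^2, 1+D+D^2, 1+D^2].
GENS = [0b111, 0b111, 0b111, 0b101]   # bit i of g = coefficient of D^i

def poly_mul_weight(m, g):
    """Hamming weight of m(D)g(D) over GF(2), polynomials as bit masks."""
    prod = 0
    i = 0
    while m >> i:
        if (m >> i) & 1:
            prod ^= g << i
        i += 1
    return bin(prod).count("1")

dmin = min(sum(poly_mul_weight(m, g) for g in GENS)
           for m in range(1, 1 << 8))         # all messages up to length 8
assert dmin == 10
```

The minimizing message is m(D) = 1 + D, i.e. [1, 1, 0, 0, …], matching the highlighted path.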

Problem 12-16. In Fig. S-10 we show the state transition diagram with the zero state broken and the branches labeled with z raised to the square of the Euclidean distance of that branch from the zero branch. The path enumerator polynomial is therefore

T(z) = z^(12a2) + z^(16a2) + … (S.634)

In Problem 12-10 we found that the error event with the minimum Euclidean distance had distance 2√3 a = √(12a2), consistent with this result. The compact form of this polynomial is

T(z) = z^(12a2) ⁄ (1 – z^(4a2)) . (S.635)


5

Fig. S-10. A state transition diagram with the zero state broken and the branches labeledwith z raised to the square of the Euclidean distance of that branch from the zero branch.


Problem 12-17.

(a) By inspection,

T(x, y, z) = x2yz3 + x3y2z4 + x4y3z5 + x5y4z6 + … (S.636)

or equivalently

T(x, y, z) = x2yz3 ⁄ (1 – xyz) . (S.637)

(b) From (S.636) the length-four error event (corresponding to x5) has four bit errors and Hamming distance 6.

(c) The broken state transition diagram of Fig. 12-35 can be modified as shown in Fig. S-11. Since we are only interested in length and distance (and not the number of bit errors) we need only two variables, x and z. The path enumerator polynomial is found using (12.185):

T(x, z) = x3z5 ⁄ (1 – x2z – xz) . (S.638)

By long division, we compute the first few terms

T(x, z) = x3z5 + x4z6 + x5z6 + x5z7 + 2x6z7 + x7z7 + x6z8 + … (S.639)

From this we see that there are two distance-6 error events, one with length 3 and one with length 4. (The length is the exponent of x minus one.) Also, there are two length-four error events, one with distance 6 and one with distance 7.
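These coefficient claims can be verified by expanding (S.638) as a power series; the expansion below sums (x2z + xz)n term by term:

```python
from itertools import product

# Expand T(x,z) = x^3 z^5 / (1 - x^2 z - x z) as
# x^3 z^5 * sum_n (x^2 z + x z)^n and read off low-order coefficients.
coeffs = {}                       # (x-power, z-power) -> coefficient
for n in range(8):
    for picks in product([(2, 1), (1, 1)], repeat=n):
        px = 3 + sum(p[0] for p in picks)
        pz = 5 + sum(p[1] for p in picks)
        coeffs[(px, pz)] = coeffs.get((px, pz), 0) + 1

assert coeffs[(4, 6)] == 1        # length-3 event, distance 6
assert coeffs[(5, 6)] == 1        # length-4 event, distance 6
assert coeffs[(5, 7)] == 1        # length-4 event, distance 7
```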

Problem 12-18. The generator for the (3, 1) repetition code is G = [1, 1, 1]. It spans a one-dimensional subspace. The set of vectors orthogonal to this subspace is 110, 011, 101, 000.


Fig. S-11. The state transition diagram of Fig. 12-35 is modified for enumerating the path lengths as shown.


Any two of the nonzero vectors can be stacked to produce a valid parity-check matrix. For example, taking the first two yields a parity-check matrix of:

H = [ 1 1 0
      0 1 1 ] . (S.640)

(a) The corresponding Tanner graph is a chain, with bit nodes 0 and 1 attached to the first check node and bit nodes 1 and 2 attached to the second.

Clearly it has no cycles.

(b) Suppose that a message bit of m = 0 gets mapped to the symbols a0 = a1 = a2 = –1, while m = 1 gets mapped to a0 = a1 = a2 = +1. The channel adds independent noise N(0, σ2), producing r = [r0, r1, r2]T. First we find the exact a posteriori LLR for the message bit, defined by (12.106):

λ = log( Pr[m = 1|r] ⁄ Pr[m = 0|r] )

= log( f(r|m = 1)Pr[m = 1] ⁄ (f(r|m = 0)Pr[m = 0]) )

= log( f(r|m = 1) ⁄ f(r|m = 0) ) , (S.641)

where the second equality follows from Bayes’ rule, and the last equality follows from our assumption that the message bit is equally likely to be zero and one. In AWGN, this reduces to:

λ = log( (2πσ2)–3 ⁄ 2 e–||r – s||2 ⁄ 2σ2 ⁄ ((2πσ2)–3 ⁄ 2 e–||r + s||2 ⁄ 2σ2) ) (where we introduce s = [1, 1, 1]T)

= (1 ⁄ 2σ2)( ||r + s||2 – ||r – s||2 )

= (1 ⁄ 2σ2)( 2sTr – (–2sTr) )

= (2 ⁄ σ2) sTr

= (2 ⁄ σ2) ∑iri . (S.642)


This is the exact LLR for the message bit. (It is also the exact LLR for each of the three coded bits, since the three coded bits are identical for the repetition code.)

Now let’s look at the LLR’s produced by message passing.

• The initial downward message from the i-th bit node is (2 ⁄ σ2)ri.

• The tanh rule reduces to a “pass through” rule for check nodes of degree two, so that in the first iteration bit node 1 receives the upward messages (2 ⁄ σ2)r0 and (2 ⁄ σ2)r2, while bit nodes 0 and 2 each receive (2 ⁄ σ2)r1.

• The first and last bit nodes receive only one upward message. Their downward message on the same branch must exclude this message, so their downward message is the same as before, namely, (2 ⁄ σ2)ri. The middle bit node, on the other hand, receives two upward messages. According to the message-passing algorithm, its downward messages for the second iteration are (2 ⁄ σ2)(r1 + r2) toward the first check node and (2 ⁄ σ2)(r0 + r1) toward the second.

• Again using the “pass through” rule, the upward messages for the second iteration are (2 ⁄ σ2)(r1 + r2) to bit node 0, (2 ⁄ σ2)(r0 + r1) to bit node 2, and, as before, (2 ⁄ σ2)r0 and (2 ⁄ σ2)r2 to the middle bit node.


After the second iteration, the i-th bit node produces its estimate of λi by adding its channel value (2 ⁄ σ2)ri to its incoming upward messages, yielding:

λ0 = (2 ⁄ σ2)r0 + (2 ⁄ σ2)(r1 + r2)

λ1 = (2 ⁄ σ2)r1 + (2 ⁄ σ2)r0 + (2 ⁄ σ2)r2

λ2 = (2 ⁄ σ2)r2 + (2 ⁄ σ2)(r0 + r1) , (S.643)

for bit nodes 0, 1, and 2, respectively. We see that all three are identical, as they should be for the repetition code, and that all three reduce to the exact LLR of (2 ⁄ σ2)∑iri, as found in (S.642).
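A numeric sketch of this two-iteration schedule, with arbitrary illustrative values for r and σ2:

```python
# Two rounds of message passing on the chain Tanner graph reproduce the
# exact LLR (2/sigma^2)(r0 + r1 + r2) at every bit node.
r = [0.4, -1.3, 0.7]
s2 = 0.5
ch = [2 / s2 * ri for ri in r]            # channel LLRs (2/sigma^2) r_i

# iteration 1: degree-2 checks pass the channel LLRs through, so bit 1
# hears ch[0] and ch[2].  iteration 2: bit 1 sends ch[1] + ch[2] toward
# check (0,1) and ch[1] + ch[0] toward check (1,2), which pass through.
lam0 = ch[0] + (ch[1] + ch[2])            # bit 0: channel + upward message
lam1 = ch[1] + ch[0] + ch[2]              # bit 1: channel + two upward
lam2 = ch[2] + (ch[0] + ch[1])            # bit 2

exact = 2 / s2 * sum(r)
for lam in (lam0, lam1, lam2):
    assert abs(lam - exact) < 1e-12
```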

Problem 12-19. Yes. When m = 2, the Hamming code reduces to the (3, 1) repetition code. A valid parity-check matrix for the repetition code is:

H = [ 1 1 0
      0 1 1 ] . (S.644)

The corresponding Tanner graph is obviously cycle-free.


Chapter 13 – Signal Space Coding

Problem 13-1. In the transmitter, generate two-dimensional symbols in the conventional way. Now group the transmitted baseband symbols into pairs, and transmit the real part of the two-dimensional symbol first, then the imaginary part. In the receiver, group the received samples at the slicer into pairs, and apply them to a complex slicer.

Problem 13-2.

(a) The coding gain is, of course, γΛ = 1. The shaping gain is

γS = π ⁄ 3 . (S.641)

(b) The coding gains are the same. The circular shaping provides 10log10(π ⁄ 3) = 0.2 dB of shaping gain.

Problem 13-3. For the hexagonal constellation, the fundamental volume is the volume of a hexagon with inscribed radius dmin ⁄ 2. Hence,

V(Λ) = 6(dmin ⁄ 2)2 tan(π ⁄ 6) (S.642)

and the coding gain is

γΛ = dmin2 ⁄ V(Λ) = 2 ⁄ (3 tan(π ⁄ 6)) = 1.155 . (S.643)

This is 0.6 dB.
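A quick check of (S.643) and the 0.6 dB figure:

```python
from math import log10, pi, tan

# Hexagonal coding gain: dmin^2 over the fundamental volume
# 6 (dmin/2)^2 tan(pi/6), i.e. 2 / (3 tan(pi/6)).
gain = 2 / (3 * tan(pi / 6))
assert round(gain, 3) == 1.155
assert round(10 * log10(gain), 1) == 0.6
```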

Problem 13-4. Clearly

dmin(α ⋅ Λ) = α ⋅ dmin(Λ) (S.644)

and

V(α ⋅ Λ) = αN ⋅ V(Λ) . (S.645)

Thus, as a function of α, the coding gain is

γα ⋅ Λ = α2dmin2 ⁄ (α2V(Λ)2 ⁄ N) = dmin2 ⁄ V(Λ)2 ⁄ N = γΛ . (S.646)

Problem 13-5. The volume is

V[CN(R)] = dx . (S.647)

2πR2

12R2 2⁄--------------------- π

3---

2

3 π6---

tan-----------------------

α2dmin2

α2V 2 N/ Λ( )------------------------------

dmin2

V 2 N/ Λ( )-----------------------

CN R( )∫
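A quick numeric check of the gains above (a sketch, not part of the original solutions):

```python
import math

# Hexagonal lattice: fundamental volume of a hexagon with inscribed radius d/2.
d = 1.0
V_hex = 6 * (d / 2) ** 2 * math.tan(math.pi / 6)
coding_gain = d ** 2 / V_hex   # N = 2, so gamma = dmin^2 / V^(2/N) = dmin^2 / V
print(round(coding_gain, 3))                      # 1.155
print(round(10 * math.log10(coding_gain), 2))     # ~0.62 dB

# Circular vs. square shaping: gamma_S = pi/3.
shaping_gain_dB = 10 * math.log10(math.pi / 3)
print(round(shaping_gain_dB, 2))                  # ~0.2 dB
```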


The integral can be separated into the product of N integrals, each evaluating to 2R, so V[CN(R)] = (2R)^N. To evaluate the power, the uniform density function over the N-cube has height (2R)^–N, so the power is

P[CN(R)] = (2R)^–N ∫CN(R) ||x||² dx = (2R)^–N ∑i=1..N ∫CN(R) xi² dx . (S.648)

Each of the integrals evaluates to the same value,

∫CN(R) xi² dx = (2R³ ⁄ 3) ⋅ V[CN–1(R)] , (S.649)

and thus

P[CN(R)] = N ⋅ R² ⁄ 3 . (S.650)

Substituting the volume and power into the shaping gain, we get γCN(R) = 1.

Problem 13-6.

(a) Assuming the radius of the N-sphere is R, P is the power of SN(R) divided by the number of complex symbols, N ⁄ 2. Hence P = 2R² ⁄ (N + 2), and for fixed P the radius as a function of N is R² = (N + 2)P ⁄ 2. Assuming N is even, the volume of a sphere of this radius is

V(SN) = [π(N ⁄ 2 + 1)P]^(N ⁄ 2) ⁄ (N ⁄ 2)! . (S.651)

The number of points in the signal constellation is proportional to the volume, since the fundamental volume is assumed to be constant, and hence ν is proportional to the logarithm of the volume divided by N ⁄ 2, or, up to an additive constant,

ν = (2 ⁄ N) ⋅ log2( [π(N ⁄ 2 + 1)P]^(N ⁄ 2) ⁄ (N ⁄ 2)! ) = log2(πP) + log2(N ⁄ 2 + 1) – (2 ⁄ N) ⋅ log2((N ⁄ 2)!) . (S.652)

Thus, the spectral efficiency is a constant plus a term that depends on N, where the latter is

log2(N ⁄ 2 + 1) – (2 ⁄ N) ⋅ log2((N ⁄ 2)!) . (S.653)

When N goes from 2 to 4, the spectral efficiency increases by 0.085 bits. When N goes from 2 to 6, the spectral efficiency increases by 0.138 bits.


(b) By the Stirling approximation, the increase in spectral efficiency as N → ∞ relative to N = 2 is log2(e) – 1 = 0.443 bits per complex symbol. Of course, the absolute (as opposed to relative) spectral efficiency depends on Λ as well as N.
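The increments quoted in Problem 13-6 can be checked numerically (a sketch; the function name is my own):

```python
import math

def extra_bits(N):
    # the N-dependent term in (S.653): log2(N/2 + 1) - (2/N) log2((N/2)!)
    return math.log2(N / 2 + 1) - (2 / N) * math.log2(math.factorial(N // 2))

gain_4 = extra_bits(4) - extra_bits(2)
gain_6 = extra_bits(6) - extra_bits(2)
print(round(gain_4, 3), round(gain_6, 3))    # 0.085 0.138

# As N -> infinity the term tends to log2(e), so the increase over N = 2
# (where the term equals 1) approaches log2(e) - 1:
print(round(math.log2(math.e) - 1, 3))       # 0.443
```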

Problem 13-7. Let us assume that R is chosen such that XN has unit variance components; that is, R² = N + 2. The marginal density of XK will be the density of XN integrated over N – K components. The integral is over a sphere of radius √(R² – ||xK||²) and dimension N – K. The integrand is the density of XN, which is a constant 1 ⁄ VN(R), and thus the integral becomes proportional to the volume of an (N – K)-dimensional sphere. Thus,

fXK(xK) = VN–K(√(R² – ||xK||²)) ⁄ VN(R) = [VN–K(1) ⁄ (VN(1)R^K)] ⋅ (1 – ||xK||² ⁄ R²)^(N ⁄ 2) ⁄ (1 – ||xK||² ⁄ R²)^(K ⁄ 2) . (S.654)

Taking R² = N + 2, or equivalently R² ≈ N when N is large, the only term that is a function of xK as N → ∞ is the numerator, which assumes the functional form

exp(–||xK||² ⁄ 2) , (S.655)

a Gaussian density with unit variance and independent components. The remaining constants are of course the normalization to unit area, and must equal (2π)^(–K ⁄ 2) asymptotically.

Problem 13-8. It is straightforward to calculate this probability,

Pr[ ||X|| ≤ R – ε ] = V(SN(R – ε)) ⁄ V(SN(R)) = (R – ε)^N ⁄ R^N = (1 – ε ⁄ R)^N → 0 . (S.656)
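Both effects can be illustrated by a small Monte Carlo sketch (the sampling method and sample sizes are my own choices, not from the text): a point drawn uniformly inside a high-dimensional sphere has nearly Gaussian marginals, and almost all of the sphere's volume lies near its surface.

```python
import math
import random

random.seed(1)

def uniform_in_sphere(N, R):
    # Gaussian direction plus radius R * U^(1/N) gives a uniform point in the ball.
    g = [random.gauss(0.0, 1.0) for _ in range(N)]
    norm = math.sqrt(sum(x * x for x in g))
    r = R * random.random() ** (1.0 / N)
    return [r * x / norm for x in g]

N = 200
R = math.sqrt(N + 2)          # unit-variance components, as in Problem 13-7
samples = [uniform_in_sphere(N, R)[0] for _ in range(4000)]
var = sum(x * x for x in samples) / len(samples)
print(round(var, 2))          # close to 1: the marginal is nearly unit-variance

# Sphere hardening, (S.656): Pr[||X|| <= R - eps] = (1 - eps/R)^N -> 0.
print((1 - 0.5 / R) ** N)     # already tiny for N = 200
```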


Problem 13-9. The trellis for the convolutional coder is shown in the following figure:

[Figure: one stage of the trellis, with branches labeled (input bit, [output bits]): (1,[0,1]), (0,[1,0]), (1,[1,0]), (0,[0,1]), (1,[0,0]), (0,[1,1]), (1,[1,1]), (0,[0,0]).]

Comparing this to (XREF FIG 14.9)a, if we choose the mapping:

Ck(1) Ck(2) → Ak
  0    0   →  a
  0    1   →  –a
  1    0   →  ja
  1    1   →  –ja

then the trellis is identical. Thus, with a line coder that implements this mapping, the code is equivalent.


Problem 13-10. The trellis for the convolutional coder is shown in the following figure:

[Figure: one stage of the trellis, with branches labeled (input bit, [output bits]): (0,[0,0]), (1,[1,1]), (0,[1,1]), (0,[1,0]), (1,[0,1]), (1,[1,0]), (1,[0,0]), (0,[0,1]).]

To get a trellis equivalent to that in (XREF FIG 14.9) we select the mapping:

Ck(1) Ck(2) → Ak
  0    0   →  a
  0    1   →  ja
  1    0   →  –a
  1    1   →  –ja

Problem 13-11. One stage of the trellis is shown in Fig. S-12a. The minimum-distance error event is shown in Fig. S-12b for three different correct state trajectories, along with its distance. The distance is different for all three.

Fig. S-12. a. One stage of the trellis for the trellis coder in (XREF 14.33), with branches labeled (input bit, amplitude), e.g. (1,+3), (0,–1), (0,–3), (1,+1), for amplitudes in {+3, +1, –1, –3}. b. The minimum-distance error events when the correct state trajectory is the path shown as a dashed line; the labeled distances (among them d = 2√2 and d = 2√5) all differ.

Problem 13-12.

(a) Four bits per symbol are required, so the alphabet needs 16 symbols. 16-QAM will work.


(b) The required alphabet size is 32. The cross constellation in (XREF FIG 14.23) will work, although there are others in Chapter 6.

(c) Coding is required. To get 4 dB total gain, using Ungerboeck's rule of thumb (see the first paragraph in Section 14.2.1), an eight-state trellis code will work. The coder in (XREF FIG 14.23) will do the job, and provide the additional benefit of 90-degree phase invariance. If phase invariance is not an issue, then the coder in (XREF FIG 14.18) will also work, although one additional uncoded bit is required.

Problem 13-13. Consider the error events in (XREF FIG 14.38). There are several error events represented here because of the parallel paths. To find the minimum-distance error event of these, first find the minimum distances between transitions in the same stage of the trellis. These are shown in Fig. S-13. For example, in the first stage, the upper parallel pair of transitions are taken from subset A in (XREF FIG 14.13). The lower pair are taken from subset C. Hence the minimum distance in this stage is the minimum distance between symbols in A and C, or √2. The minimum distance in the third stage is similarly computed. The minimum distance in the second stage is the minimum distance between symbols in A and B, which with some simple geometry can be shown to be √(2 – √2) ≈ 0.77. The minimum distance of all the error events shown is the square root of the sum of the squares of these stage distances, or

√(2 + (2 – √2) + 2) = √(6 – √2) ≈ 2.14 . (S.657)

It is easy to see that any error event with length greater than three stages will have a distance greater than these error events, so 2.14 is the distance of the second closest error event.

Fig. S-13. A set of error events and their minimum distances. [Figure: three trellis stages, labeled MIN.DIST.(A,C) = √2 on the first and third stages and MIN.DIST.(A,B) = √(2 – √2) on the second.]

Problem 13-14.

(a) The partition is shown in (XREF FIG 14.39), along with the minimum distances. Notice that at the final partitioning stage (into 16 subsets) there is no improvement in minimum distance for some of the subsets.

(b) The average power of the 16-QAM constellation has been computed elsewhere and is 10. The 32-cross constellation has all the same points, plus 16 additional points with average power

(8 ⁄ 16)(5² + 3²) + (8 ⁄ 16)(5² + 1²) = 30 (S.658)
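The numbers above are easy to verify (a sketch, not part of the original solutions):

```python
import math

# Error-event distance, (S.657): stage distances sqrt(2), sqrt(2 - sqrt(2)), sqrt(2).
d_event = math.sqrt(2 + (2 - math.sqrt(2)) + 2)
print(round(d_event, 2))                       # 2.14

# 32-cross extra points, (S.658): half at (+-5, +-3)-type, half at (+-5, +-1)-type.
extra = (8 / 16) * (5**2 + 3**2) + (8 / 16) * (5**2 + 1**2)
total = 0.5 * (10 + extra)                     # overall average power, (S.659)
print(extra, total)                            # 30.0 20.0
print(round(10 * math.log10(total / 10), 1))   # 3.0 dB more than 16-QAM
```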


Fig. S-14. Set partitioning for a 32-cross constellation. The minimum distances between points in the subsets are shown.


so the overall average power is

(1 ⁄ 2)(10 + 30) = 20 . (S.659)

This is 10log10(2) = 3 dB more power.

(c) It should be adequate to use the subsets in the third row of (XREF FIG 14.39). The minimum distance between parallel transitions is 4. There are 4 such subsets, so m̃ = 1. Compared to the 16-QAM constellation in part (b), which has minimum distance 2, this is a 6 dB improvement. However, the power of the 32-cross constellation has to be reduced by 3 dB, so if the parallel transitions have the smallest distance (we can assure this with proper coder design) then the total gain will be about 3 dB. Using Ungerboeck's rule of thumb (see the first paragraph of Section 14.2.1), a coder with 4 states should work.

(d) It should be adequate to use the subsets in the fourth row of (XREF FIG 14.39). There are 8 such subsets, so m̃ = 2. The minimum distance between parallel transitions will be 4√2, which is about 9 dB better than the minimum distance of 2 in the 16-QAM constellation. Again, of this 9 dB improvement, 3 dB must be sacrificed to normalize the power, leaving a 6 dB gain. This is more than we need, so we could use a 16-state trellis coder to get about 5 dB gain, and the minimum-distance error event will probably not be the parallel transitions.

Problem 13-15.

(a) The trick here is to compare to an uncoded system with the same average power. If the 8-PSK symbols have amplitude a, then it has average power a². If the 16-QAM symbols have real and imaginary parts that are ±b or ±3b, then they have average power 10b². Hence, for the two systems to have the same average power, we require that a = √10 b ≈ 3.16b. The subsets that are used for parallel transitions are in the third row of (XREF FIG 14.16). The symbols within each subset have minimum distance 4b, so if the parallel transitions dominate the probability of error, then

Pr[error] ≈ Q(2b ⁄ σ) . (S.660)

The minimum distance for the 8-PSK alphabet is a√(2 – √2) ≈ 0.77a, so for the uncoded system

Pr[error] ≈ Q(0.77a ⁄ 2σ) . (S.661)

The difference is

20log10(4b ⁄ 0.77a) = 20log10(4 ⁄ (0.77√10)) = 4.3 dB, (S.662)

which is very good indeed.
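A numeric check of (S.660)–(S.662), using the rounded factor 0.77 as the solution does (the exact factor √(2 – √2) ≈ 0.765 would give closer to 4.4 dB):

```python
import math

b = 1.0
a = math.sqrt(10) * b            # equal average power: a^2 = 10 b^2
d_coded = 4 * b                  # parallel-transition distance in the 16-QAM subsets
d_uncoded = 0.77 * a             # 8-PSK minimum distance, approx a*sqrt(2 - sqrt(2))
gain_dB = 20 * math.log10(d_coded / d_uncoded)
print(round(gain_dB, 1))         # 4.3, as in (S.662)
```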


(b) The trellis is shown in (XREF FIG 14.40). We can show that the distance of the error event with shape as in (XREF FIG 14.38) is √20 b, which is greater than the distance 4b of the parallel transitions. Hence the assumption in (a) seems reasonable.

Problem 13-16. First note that every error event is a minimum-distance error event for all possible actual paths through the trellis. Note further that every error event starts and ends with a symbol error, and has no symbol errors in between, so w(e) = 2 for all e ∈ E. Consequently, (XREF EQN 9.150) becomes

R = 2 ∑e∈E Pr[ψ] . (S.663)

Define E(ψ) to be the set of error events for the actual path ψ through the trellis. Then the summation in (S.663) can be rewritten

∑e∈E Pr[ψ] = ∑ψ ∑e∈E(ψ) Pr[ψ] = (∑ψ Pr[ψ])(∑e∈E(ψ) 1) . (S.664)

The first summation is unity, and the second is infinite because every ψ has an infinite number of error events. Consequently, R is unbounded.

Problem 13-17.

(a) One stage of the trellis is shown in the following figure.

Fig. S-15. Yet another trellis. [Figure: one stage, with branches labeled (input bit, symbol) using the placeholders W, X, Y, Z for the four symbols; with the assignment W, X, Y, Z = 1, j, –1, –j the eight branch labels are (0,1), (0,j), (1,–1), and (1,–j), each appearing twice.]


(b) For different assumed correct state trajectories, the minimum-distance error events are shown below.

[Figure: the minimum-distance error events for several correct state trajectories, with branch labels drawn from (0,1), (0,–j), (1,j), and (1,–1).]

In each case the distance is dE,min = √6. The probability of occurrence is therefore Q(√6 ⁄ 2σ).

(c) The 2-PSK alphabet with the same power is A = {–1, +1}, which has a minimum distance of 2. The coded system is therefore

20log10(√6 ⁄ 2) = 1.76 dB (S.665)

better.
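The 1.76 dB figure is just 10log10(6 ⁄ 4); a one-line check (a sketch):

```python
import math

# Coded minimum distance sqrt(6) versus uncoded 2-PSK minimum distance 2.
gain_dB = 20 * math.log10(math.sqrt(6) / 2)
print(round(gain_dB, 2))   # 1.76
```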

Errata – Last updated: April 5, 2007

Are there more? Please send feedback to [email protected]. Thank you.

page 16: The second –1/2T in Fig. 2-4(b) should be 1/2T.

page 17: The Hilbert transformer gives a – π/2 phase shift, not a –π phase shift.

page 44: Remove the subscript e on the right-hand side of the equation.

page 47: Correct table 2-3: rect( t , T ⁄ 2) ↔ sin(πfT) ⁄ (πfT), and sin(Wk) ⁄ (Wk) ↔ rect( θ , W ).

page 60: In equation (3.15), exchange the order of dα and dβ.

page 60: After (3.21), replace “a complex variable s” by “an imaginary variable s = jω”; replace “Laplace” by “Fourier”; and replace “at –s” by “at – ω”.

page 73: Third line of (3.79) is missing dθ.

page 81, last line, a “j” is missing: replace “H( f ) = – sign( f )” by “H( f ) = – j ⋅sign( f )”.

page 82, (3.109): the minus sign before (–τ) should be a plus sign.

page 83, second line after (3.114): “Xk + 1” should be “Xk”.

page 92, (3.133): the minus sign before λqj( t ) should be a plus sign.

page 94, (3.144): pj( t ) should be pj( t ).

page 99: The first convolution in (3.163) is missing a factor of s.

page 100, (3.172): there is a factor of “H( f )” missing from the right-hand side.

page 103, (3.188): the first summation ∑m=0..∞ should be ∑m=–∞..∞.

page 104, (3.194): The factor of x( t0) is missing a minus sign: it should be e –Λ( t ).

page 104, (3.196): Replace e –Λ(u ) inside the integral by e Λ(u ).

page 107, Problem 3-9: remove the absolute value in fa <|f | < fb.

page 108, (3.210): Replace a by q.

page 120, (4.21): Remove the lower bound “0 ≤”.

page 121, the right-hand side of (4.23) is missing a minus sign.

page 122: The axis labels at the bottom of Fig. 4-4 were cut off during production; they should be –10, –5, 0, 5, 10, …, 35, 40, as shown in the figure at the end of this document.

page 123, (4.25) is missing a minus sign and a dx.

page 132: The summation ∑m in (5.1) should be ∑k.

page 139, al ≠ k should be al ≠ k .

page 149, the middle term of the second line of (5.33) should be Re(m – n)*(E [a] – m).

page 150, the second line of (5.34) is missing a factor of 1/L.

page 159: The factor of K in the third line after (5.55) should be conjugated.

page 160, Exercise 5-1, second line, p( t ) should be h( t ) ∗ h*( –t ).

page 162, in the second line of the second paragraph, “filter” should be “sampler”.



page 171, the convolution near the end of the paragraph after (5.77) should be ak ∗ mk .

page 188, Exercise 5-35 should refer to Fig. 5-34, not Fig. 5-32.

page 189: In the last line of Example 5-36, “4-PAM” should be “4-PSK”.

page 190: In (5.113), the second factor of (1 – 1 ⁄ √M) should be squared, yielding:

Pr[error] = 4(1 – 1 ⁄ √M) Q(√(3(E ⁄ N0) ⁄ (M – 1))) – 4(1 – 1 ⁄ √M)² Q²(√(3(E ⁄ N0) ⁄ (M – 1))) . (E.1)
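The corrected expression is easy to sanity-check numerically; in this sketch (function names mine, with E ⁄ N0 taken as the per-symbol SNR) it stays in [0, 1] and decreases with SNR:

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_error_prob(M, EsN0):
    # Corrected (E.1): the second (1 - 1/sqrt(M)) factor is squared.
    f = 1 - 1 / math.sqrt(M)
    arg = math.sqrt(3 * EsN0 / (M - 1))
    return 4 * f * Q(arg) - 4 * f**2 * Q(arg)**2

p = [qam_error_prob(16, snr) for snr in (1.0, 10.0, 100.0)]
```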

page 194: The label in Fig. 5-38(a) should be 8.4 dB, to match that in part (b).

page 204: After (6.5), insert “assuming that fcT is an integer”.

page 211: The second element of hi after (6.25) should be hi,.

page 225: The x-axis label of Fig. 6-11 should be 1/ν, not 1/b.

page 235: The center frequency in the top figure should be 15/T, not 15/2T.

page 245: The r.h.s. of (6.104) should be 1/N, as should |xm| at top of p. 245.

page 318: In (7.85) and (7.86), both of the “>” should be “<”, so that 0 = = = 0.

page 334: Replace ink by Xk.

page 343: In (7.169) Vl should be Vm.

page 532, Prob. 10-4: a factor of e–t is missing from both impulse responses; they should be h( t ) = e–t u( t – T ) and h( t ) = e–t u( t – 2T ).

page 532, Prob. 10-5: “single-input double-output” should be “double-input single-output”

[Figure: the corrected Fig. 4-4 – CAPACITY (BITS PER REAL SYMBOL), 0 through 6, versus SNR, E[X] ⁄ E[N] (dB), with x-axis labels –10, –5, 0, 5, 10, 15, 20, 25, 30, 35, 40. Curves shown: 2-PAM/4-QAM, 4-PAM/16-QAM, 8-PAM/64-QAM, 16-PAM/256-QAM, 32-PAM/1024-QAM, 64-PAM, and the SHANNON LIMIT (GAUSSIAN INPUTS), log2(1 + SNR).]

a a a