Self Practice Exercises

Upload: huynh-ngoc-tan

Post on 04-Jun-2018


8/13/2019 Self Practice Exercises

    Digital Communication Exercises

    Contents

    1 Converting a Digital Signal to an Analog Signal 2

    2 Decision Criteria and Hypothesis Testing 7

    3 Generalized Decision Criteria 11

    4 Vector Communication Channels 13

    5 Signal Space Representation 17

    6 Optimal Receiver for the Waveform Channel 23

    7 The Probability of Error 28

    8 Bit Error Probability 34

    9 Connection with the Concept of Capacity 39

    10 Continuous Phase Modulations 41

    11 Colored AGN Channel 44

    12 ISI Channels and MLSE 47

    13 Equalization 52

    14 Non-Coherent Reception 58


    1 Converting a Digital Signal to an Analog Signal

    1. [1, Problem 4.15]. Consider a four-phase PSK signal represented by the equivalent lowpass signal

    u(t) = \sum_n I_n g(t - nT)

    where I_n takes on one of the four possible values \frac{1}{\sqrt{2}}(\pm 1 \pm j) with equal probability. The sequence of information symbols \{I_n\} is statistically independent (i.i.d.).

    (a) Determine the power density spectrum of u(t) when

    g(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise}. \end{cases}

    (b) Repeat (1a) when

    g(t) = \begin{cases} A \sin(\pi t/T), & 0 \le t \le T, \\ 0, & \text{otherwise}. \end{cases}

    (c) Compare the spectra obtained in (1a) and (1b) in terms of the 3 dB bandwidth and the bandwidth to the first spectral zero. Here you may find the frequency numerically.

    Solution:

    We have thatSU(f) = 1T|G(f)|2 m= CI(m)ej2fmT, E(In) = 0, E(|In|2) = 1, hence

    CI(m) =

    1, m= 0,

    0, m = 0.

    therefore

    m= CI(m)ej2fmT = 1 SU(f) = 1T|G(f)|2.

    (a) For the rectangular pulse:

    G(f) = AT \frac{\sin(\pi fT)}{\pi fT} e^{-j2\pi fT/2} \quad\Rightarrow\quad |G(f)|^2 = A^2T^2 \frac{\sin^2(\pi fT)}{(\pi fT)^2}

    where the factor e^{-j2\pi fT/2} is due to the T/2 shift of the rectangular pulse from the center. Hence:

    S_U(f) = A^2T \frac{\sin^2(\pi fT)}{(\pi fT)^2}

    (b) For the sinusoidal pulse: G(f) = \int_0^T A\sin(\pi t/T) e^{-j2\pi ft} dt. Using the identity \sin x = \frac{e^{jx} - e^{-jx}}{2j}, it is easily shown that:

    G(f) = \frac{2AT}{\pi} \frac{\cos(\pi Tf)}{1 - 4T^2f^2} e^{-j2\pi fT/2} \quad\Rightarrow\quad |G(f)|^2 = \left(\frac{2AT}{\pi}\right)^2 \frac{\cos^2(\pi Tf)}{(1 - 4T^2f^2)^2}

    Hence:

    S_U(f) = \frac{4A^2T}{\pi^2} \frac{\cos^2(\pi Tf)}{(1 - 4T^2f^2)^2}


    (c) The 3 dB frequency for (1a) satisfies:

    \frac{\sin^2(\pi f_{3dB}T)}{(\pi f_{3dB}T)^2} = \frac{1}{2} \quad\Rightarrow\quad f_{3dB} \approx \frac{0.44}{T}

    (where this solution is obtained graphically or numerically), while the 3 dB frequency for the sinusoidal pulse in (1b) is f_{3dB} \approx \frac{0.59}{T}. The rectangular pulse spectrum has its first spectral null at f = 1/T, whereas the spectrum of the sinusoidal pulse has its first null at f = 3/2T. Clearly the spectrum of the rectangular pulse has a narrower main lobe, but it has higher sidelobes.
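The two 3 dB frequencies quoted above can be checked numerically. A minimal sketch in plain Python (simple bisection, normalization u = fT; the bracketing intervals are assumptions chosen so each function changes sign):

```python
import math

def bisect(fn, lo, hi, tol=1e-10):
    """Find a root of fn on [lo, hi] by bisection (fn must change sign on the bracket)."""
    flo = fn(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if fn(mid) * flo > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Rectangular pulse: solve sin^2(pi u)/(pi u)^2 = 1/2, with u = f*T.
rect = lambda u: (math.sin(math.pi * u) / (math.pi * u)) ** 2 - 0.5
u_rect = bisect(rect, 0.3, 0.6)          # ~0.44 -> f_3dB ~ 0.44/T

# Sinusoidal pulse: solve cos^2(pi u)/(1 - 4 u^2)^2 = 1/2.
sine = lambda u: (math.cos(math.pi * u) / (1.0 - 4.0 * u ** 2)) ** 2 - 0.5
u_sine = bisect(sine, 0.55, 0.65)        # ~0.59 -> f_3dB ~ 0.59/T

print(round(u_rect, 3), round(u_sine, 3))
```

Both roots agree with the graphical values 0.44/T and 0.59/T to the stated precision.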

    2. [1, Problem 4.21]. The lowpass equivalent representation of a PAM signal is

    u(t) = \sum_n I_n g(t - nT)

    Suppose g(t) is a rectangular pulse and

    I_n = a_n - a_{n-2}

    where \{a_n\} is a sequence of uncorrelated¹ binary (\pm 1) random variables that occur with equal probability.

    (a) Determine the autocorrelation function of the sequence \{I_n\}.

    (b) Determine the power density spectrum of u(t).

    (c) Repeat (2b) if the possible values of a_n are (0, 1).

    Solution:

    (a)

    C_I(m) = E\{I_{n+m}I_n\} = E\{(a_{n+m} - a_{n+m-2})(a_n - a_{n-2})\}
    = \begin{cases} 2, & m = 0, \\ -1, & m = \pm 2, \\ 0, & \text{otherwise}, \end{cases}
    = 2\delta(m) - \delta(m - 2) - \delta(m + 2)

    (b) S_U(f) = \frac{1}{T}|G(f)|^2 \sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT}, where

    \sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT} = 2 - 2\cos(4\pi fT) = 4\sin^2(2\pi fT),

    and

    |G(f)|^2 = (AT)^2 \left(\frac{\sin \pi fT}{\pi fT}\right)^2.

    Therefore:

    S_U(f) = 4A^2T \left(\frac{\sin \pi fT}{\pi fT}\right)^2 \sin^2(2\pi fT)

    ¹E\{a_n a_m\} = 0 for n \ne m.


    (c) If \{a_n\} takes the values (0, 1) with equal probability, then E\{a_n\} = 1/2 and E\{a_{n+m}a_n\} = \frac{1}{4}[1 + \delta(m)]. Then:

    C_I(m) = \frac{1}{4}\left[2\delta(m) - \delta(m - 2) - \delta(m + 2)\right]

    \sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT} = \sin^2(2\pi fT)

    S_U(f) = A^2T \left(\frac{\sin \pi fT}{\pi fT}\right)^2 \sin^2(2\pi fT)

    Thus, we obtain the same result as in (2b), but the magnitude of the various quantities is reduced by a factor of 4.
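The autocorrelation C_I(m) derived in (a) can be confirmed by simulation. A minimal sketch (pure Python; the sequence length and seed are arbitrary choices):

```python
import random

random.seed(1)
N = 200_000
a = [random.choice((-1, 1)) for _ in range(N)]     # i.i.d. +/-1 symbols
I = [a[n] - a[n - 2] for n in range(2, N)]         # I_n = a_n - a_{n-2}

def autocorr(x, m):
    """Sample estimate of E{x[n+m] x[n]}."""
    return sum(p * q for p, q in zip(x[m:], x)) / (len(x) - m)

# Expect C_I(0) ~ 2, C_I(1) ~ 0, C_I(2) ~ -1.
print(autocorr(I, 0), autocorr(I, 1), autocorr(I, 2))
```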

    3. [2, Problem 1.16]. A zero-mean stationary process x(t) is applied to a linear filter whose impulse response is a truncated exponential:

    h(t) = \begin{cases} a e^{-at}, & 0 \le t \le T, \\ 0, & \text{otherwise}. \end{cases}

    Show that the power spectral density of the filter output y(t) is

    S_Y(f) = \frac{a^2}{a^2 + 4\pi^2 f^2}\left(1 - 2e^{-aT}\cos 2\pi fT + e^{-2aT}\right) S_X(f)

    where S_X(f) is the power spectral density of the filter input.

    Solution:

    The frequency response of the filter is:

    H(f) = \int_{-\infty}^{\infty} h(t)e^{-j2\pi ft}dt = \int_0^T a e^{-at} e^{-j2\pi ft}dt = a\int_0^T e^{-(a + j2\pi f)t}dt = \frac{a}{a + j2\pi f}\left[1 - e^{-aT}(\cos 2\pi fT - j\sin 2\pi fT)\right].

    The squared magnitude response is:

    |H(f)|^2 = \frac{a^2}{a^2 + 4\pi^2 f^2}\left(1 - 2e^{-aT}\cos 2\pi fT + e^{-2aT}\right)

    and the required PSD follows from S_Y(f) = |H(f)|^2 S_X(f).
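The closed-form H(f) can be sanity-checked against direct numerical integration of the truncated exponential. A minimal sketch (the values a = 2, T = 1.5 and the test frequencies are arbitrary assumptions):

```python
import cmath

a, T = 2.0, 1.5

def H_closed(f):
    """Closed form: H(f) = a/(a + j2*pi*f) * (1 - e^{-(a + j2*pi*f)T})."""
    s = a + 2j * cmath.pi * f
    return a / s * (1 - cmath.exp(-s * T))

def H_numeric(f, steps=20_000):
    """Midpoint-rule approximation of the integral of a e^{-at} e^{-j2*pi*f*t} over [0, T]."""
    dt = T / steps
    return sum(a * cmath.exp(-(a + 2j * cmath.pi * f) * (k + 0.5) * dt) * dt
               for k in range(steps))

for f in (0.0, 0.3, 1.0):
    assert abs(H_closed(f) - H_numeric(f)) < 1e-4
print("ok")
```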

    4. [1, Problem 4.32]. The information sequence \{a_n\} is a sequence of i.i.d. random variables, each taking values +1 and -1 with equal probability. This sequence is to be transmitted at baseband by a biphase coding scheme, described by

    s(t) = \sum_n a_n g(t - nT)

    where g(t) is defined by

    g(t) = \begin{cases} 1, & 0 \le t \le T/2, \\ -1, & T/2 < t \le T. \end{cases}


    (a) Find the power spectral density of s(t).

    (b) Assume that it is desirable to have a zero in the power spectrum at f = 1/T. To this end we use a precoding scheme, introducing b_n = a_n + k a_{n-1}, where k is some constant, and then transmit the \{b_n\} sequence using the same g(t). Is it possible to choose k to produce a frequency null at f = 1/T? If yes, what are the appropriate value of k and the resulting power spectrum?

    (c) Now assume we want to have zeros at all multiples of f_0 = 1/4T. Is it possible to obtain these zeros with an appropriate choice of k in the previous part? If not, what kind of precoding do you suggest to produce the desired nulls?

    Solution:

    (a) Since \mu_a = 0 and \sigma_a^2 = 1, we have S_S(f) = \frac{1}{T}|G(f)|^2, where

    G(f) = \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2} e^{-j\pi fT/2} - \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2} e^{-j3\pi fT/2}
    = \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2} e^{-j\pi fT}\left(2j\sin(\pi fT/2)\right)
    = jT\frac{\sin^2(\pi fT/2)}{\pi fT/2} e^{-j\pi fT}

    |G(f)|^2 = T^2\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2

    S_S(f) = T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2

    (b) For a non-independent information sequence, the power spectrum of s(t) is given by S_S(f) = \frac{1}{T}|G(f)|^2 \sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT}, with

    C_B(m) = E\{b_{n+m}b_n\} = E\{a_{n+m}a_n\} + kE\{a_{n+m-1}a_n\} + kE\{a_{n+m}a_{n-1}\} + k^2E\{a_{n+m-1}a_{n-1}\}
    = \begin{cases} 1 + k^2, & m = 0, \\ k, & m = \pm 1, \\ 0, & \text{otherwise}. \end{cases}

    Hence:

    \sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi fT

    We want:

    S_S(1/T) = 0 \quad\Rightarrow\quad \left.\sum_{m=-\infty}^{\infty} C_B(m)e^{-j2\pi fmT}\right|_{f=1/T} = 0 \quad\Rightarrow\quad 1 + k^2 + 2k = 0 \quad\Rightarrow\quad k = -1

    and the resulting power spectrum is:

    S_S(f) = 4T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2 \sin^2(\pi fT)
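The choice k = -1 can be verified by solving the quadratic 1 + k^2 + 2k = 0 directly and evaluating the symbol-spectrum factor at u = fT = 1. A minimal sketch:

```python
import math

def S_B(k, u):
    """Spectral factor sum_m C_B(m) e^{-j 2 pi u m} = 1 + k^2 + 2 k cos(2 pi u), with u = f*T."""
    return 1 + k * k + 2 * k * math.cos(2 * math.pi * u)

# Null at f = 1/T requires 1 + k^2 + 2k = 0, i.e. (k + 1)^2 = 0: a double root at k = -1.
disc = 2 * 2 - 4 * 1 * 1          # discriminant of k^2 + 2k + 1
k = (-2 + math.sqrt(disc)) / 2
print(k)                          # -1.0
```

At u = 1 the factor vanishes, while at u = 1/2 it equals 4, matching the sin^2(pi f T) shaping above.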


    2 Decision Criteria and Hypothesis Testing

    Remark 1. Hypothesis testing is another common name for a decision problem: one must decide between two or more hypotheses, say H_0, H_1, H_2, \ldots, where H_i can be interpreted as "the unknown parameter has value i". Decoding a constellation with K symbols can be interpreted as selecting the correct hypothesis from H_0, H_1, \ldots, H_{K-1}, where H_i is the hypothesis that s_i was transmitted.

    1. Consider an equiprobable binary source, p(0) = p(1) = 1/2, and a continuous-output channel:

    f_{R|M}(r|1) = a e^{-ar}, \; r \ge 0; \qquad f_{R|M}(r|0) = b e^{-br}, \; r \ge 0; \qquad b > a > 0.

    (a) Find a constant K such that the optimal decision rule is r \gtrless_0^1 K.

    (b) Find the respective error probability.

    Solution:

    (a) The optimal decision rule is:

    p(0)f_{R|M}(r|0) \gtrless_1^0 p(1)f_{R|M}(r|1)

    Using the given channel densities:

    b e^{-br} \gtrless_1^0 a e^{-ar}
    1 \gtrless_1^0 \frac{a}{b} e^{(b-a)r}
    0 \gtrless_1^0 \ln\left(\frac{a}{b}\right) + (b - a)r
    r \gtrless_0^1 \frac{\ln(a/b)}{a - b} = K

    (b)

    p(e) = p(0)\Pr\{r > K|0\} + p(1)\Pr\{r < K|1\}
    = \frac{1}{2}\left[\int_K^{\infty} b e^{-bt}dt + \int_0^K a e^{-at}dt\right]
    = \frac{1}{2}\left[e^{-bK} + 1 - e^{-aK}\right]
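The threshold and error probability above are easy to check by Monte Carlo. A minimal sketch (the parameter values a = 1, b = 3 and the seed are arbitrary assumptions):

```python
import math
import random

a, b = 1.0, 3.0                     # channel parameters, b > a > 0
K = math.log(a / b) / (a - b)       # threshold from part (a)
p_theory = 0.5 * (math.exp(-b * K) + 1 - math.exp(-a * K))

random.seed(7)
trials, errors = 200_000, 0
for _ in range(trials):
    bit = random.getrandbits(1)
    rate = a if bit == 1 else b     # f(r|1) = a e^{-ar}, f(r|0) = b e^{-br}
    r = random.expovariate(rate)
    decided = 1 if r > K else 0
    errors += decided != bit
print(p_theory, errors / trials)
```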

    2. Consider a binary source: \Pr\{x = 2\} = 2/3, \Pr\{x = 1\} = 1/3, and the following channel:

    y = A \cdot x, \quad A \sim N(1, 1)

    where x and A are independent.


    (b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same. The resulting error probabilities are

    P_{FA} = \Pr\{\text{deciding } H_1 | H_0\} = 1 - \frac{\gamma(N, \lambda/2\sigma_0^2)}{\Gamma(N)}

    P_M = \Pr\{\text{decoding } H_0 \text{ if } H_1 \text{ was transmitted}\} = 1 - \Pr\{Y > \lambda | H_1\} = \frac{\gamma(N, \lambda/2\sigma_1^2)}{\Gamma(N)}

    where \gamma(s, x) = \int_0^x t^{s-1}e^{-t}dt is the lower incomplete gamma function.³


    3 Generalized Decision Criteria

    1. Bayes decision criteria. Consider an equiprobable binary symmetric source m \in \{0, 1\}. For the observation R, the conditional probability density function is

    f_{R|M}(r|M = 0) = \begin{cases} 1/2, & |r| \le 1, \\ 0, & |r| > 1. \end{cases}


    2. Non-Gaussian additive noise.

    Consider the source m \in \{-1, 1\}, \Pr\{m = 1\} = 0.9, \Pr\{m = -1\} = 0.1. The observation y obeys

    y = m + N, \quad N \sim U[-2, 2]

    (a) Obtain the decision rule for the minimum probability of error criterion and the minimal probability of error.

    (b) For the cost matrix C = \begin{pmatrix} 0 & 1 \\ 100 & 0 \end{pmatrix}, obtain the optimal Bayes decision rule and the error probability.

    Solution:

    (a) f(y|1) = \frac{1}{4}, \; -1 \le y \le 3; \qquad f(y|-1) = \frac{1}{4}, \; -3 \le y \le 1.


    4 Vector Communication Channels

    Remark 2. Vectors are denoted with boldface letters, e.g. x, y.

    1. General Gaussian vector channel. Consider the Gaussian vector channel with source probabilities p(m_0) = q, p(m_1) = 1 - q, and signals s_0 = [1, 1]^T, s_1 = [-1, -1]^T. For sending m_0 the transmitter sends s_0, and for sending m_1 it sends s_1. The observation r obeys

    r = s_i + n, \quad n = [n_1, n_2]^T, \quad n \sim N(0, \Lambda_n), \quad \Lambda_n = \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix}

    The noise vector n and the messages m_i are independent.

    (a) Obtain the optimal decision rule using the MAP criterion, and examine it for the following cases:

    i. q = 1/2, \sigma_1 = \sigma_2.
    ii. q = 1/2, \sigma_1^2 = 2\sigma_2^2.
    iii. q = 1/3, \sigma_1^2 = 2\sigma_2^2.

    (b) Derive the error probability for the obtained decision rule.

    Solution:

    (a) The conditional probability distribution is R|s_i \sim N(s_i, \Lambda_n):

    f(r|s_i) = \frac{1}{2\pi\sqrt{\det \Lambda_n}} \exp\left(-\frac{1}{2}(r - s_i)^T \Lambda_n^{-1}(r - s_i)\right)

    The MAP optimal decision rule is

    p(m_0)f(r|s_0) \gtrless_{m_1}^{m_0} p(m_1)f(r|s_1)

    \frac{q}{2\pi\sqrt{\det \Lambda_n}}\exp\left(-\frac{1}{2}(r - s_0)^T \Lambda_n^{-1}(r - s_0)\right) \gtrless_{m_1}^{m_0} \frac{1 - q}{2\pi\sqrt{\det \Lambda_n}}\exp\left(-\frac{1}{2}(r - s_1)^T \Lambda_n^{-1}(r - s_1)\right)

    (r - s_1)^T \Lambda_n^{-1}(r - s_1) - (r - s_0)^T \Lambda_n^{-1}(r - s_0) \gtrless_{m_1}^{m_0} 2\ln\frac{1 - q}{q}

    Assigning r^T = [x, y]:

    \frac{(x + 1)^2}{\sigma_1^2} + \frac{(y + 1)^2}{\sigma_2^2} - \frac{(x - 1)^2}{\sigma_1^2} - \frac{(y - 1)^2}{\sigma_2^2} \gtrless_{m_1}^{m_0} 2\ln\frac{1 - q}{q}

    \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2} \gtrless_{m_1}^{m_0} \frac{1}{2}\ln\frac{1 - q}{q}


    i. For the case q = 1/2, \sigma_1 = \sigma_2, the decision rule becomes x + y \gtrless_{m_1}^{m_0} 0.

    ii. For the case q = 1/2, \sigma_1^2 = 2\sigma_2^2, the decision rule becomes x + 2y \gtrless_{m_1}^{m_0} 0.

    iii. For the case q = 1/3, \sigma_1^2 = 2\sigma_2^2, the decision rule becomes x + 2y \gtrless_{m_1}^{m_0} \sigma_2^2 \ln 2.

    (b) Denote K \triangleq \frac{1}{2}\ln\frac{1 - q}{q}, and define z = \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2}. The conditional distribution of Z is

    Z|s_i \sim N\left((-1)^i \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}, \; \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}\right), \quad i = 0, 1

    The decision rule in terms of z and K is

    z \gtrless_{m_1}^{m_0} K

    The error probability is

    p(e) = p(m_0)\Pr\{z < K|m_0\} + p(m_1)\Pr\{z > K|m_1\}

    Assigning the conditional distribution, with \mu = \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}:

    \Pr\{z < K|m_0\} = 1 - Q\left(\frac{K - \mu}{\sqrt{\mu}}\right), \qquad \Pr\{z > K|m_1\} = Q\left(\frac{K + \mu}{\sqrt{\mu}}\right)

    For the case q = 1/2, \sigma_1 = \sigma_2, the error probability equals Q\left(\frac{\sqrt{2}}{\sigma_1}\right).
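For case i above (q = 1/2, equal noise variances), the closed-form Q(sqrt(2)/sigma_1) can be checked by Monte Carlo. A minimal sketch (sigma = 1.3 and the seed are arbitrary assumptions):

```python
import math
import random

def Q(x):
    """Gaussian tail function: Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma = 1.3                              # sigma_1 = sigma_2 = sigma, q = 1/2
p_theory = Q(math.sqrt(2) / sigma)

random.seed(3)
trials, errors = 200_000, 0
for _ in range(trials):
    m = random.getrandbits(1)            # 0 -> s0 = (1, 1), 1 -> s1 = (-1, -1)
    s = 1.0 if m == 0 else -1.0
    x = s + random.gauss(0, sigma)
    y = s + random.gauss(0, sigma)
    decided = 0 if x + y > 0 else 1      # decision rule for case i
    errors += decided != m
print(p_theory, errors / trials)
```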

    2. Non-Gaussian additive vector channel. Consider a binary hypothesis testing problem in which the signals s_0 = [1, 2, 3]^T, s_1 = [1, -1, -3]^T are equiprobable. The observation r obeys

    r = s_i + n, \quad n = [n_0, n_1, n_2]^T

    where the elements of n are i.i.d. with probability density function

    f_{N_k}(n_k) = \frac{1}{2}e^{-|n_k|}


    Obtain the optimal decision rule using the MAP criterion.

    Solution:

    The optimal MAP decision rule is

    p(s_0)f(r|s_0) \gtrless_1^0 p(s_1)f(r|s_1) \quad\Rightarrow\quad f(r|s_0) \gtrless_1^0 f(r|s_1)

    The conditional probability density is

    f(r|s_i) = f_N(r - s_i) = \prod_{k=0}^{2} f_{N_k}(n_k = r_k - s_{i,k}) = \frac{1}{8}e^{-\left[|r_0 - s_{i,0}| + |r_1 - s_{i,1}| + |r_2 - s_{i,2}|\right]}

    Assigning the elements of s_i yields

    |r_0 - 1| + |r_1 - 2| + |r_2 - 3| \gtrless_0^1 |r_0 - 1| + |r_1 + 1| + |r_2 + 3|

    |r_1 - 2| + |r_2 - 3| \gtrless_0^1 |r_1 + 1| + |r_2 + 3|

    Note that the above decision rule compares absolute (ℓ1) distances from the hypotheses, unlike the Gaussian vector channel, in which the Euclidean distance is compared.
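The resulting minimum-ℓ1-distance detector is a one-liner. A minimal sketch with the signals of this problem (the sample observation points are arbitrary assumptions):

```python
def decide(r, s0=(1, 2, 3), s1=(1, -1, -3)):
    """ML rule for i.i.d. Laplacian noise: pick the codeword at smaller L1 distance."""
    d0 = sum(abs(ri - si) for ri, si in zip(r, s0))
    d1 = sum(abs(ri - si) for ri, si in zip(r, s1))
    return 0 if d0 <= d1 else 1

print(decide((1.0, 1.8, 2.5)))    # near s0 -> 0
print(decide((0.9, -0.5, -2.0)))  # near s1 -> 1
```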

    3. Gaussian two-channel. Consider the following two-channel problem, in which the observations under the two hypotheses are

    H_0: \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1/2 \end{pmatrix}\begin{pmatrix} V_1 \\ V_2 \end{pmatrix} + \begin{pmatrix} -1 \\ -1/2 \end{pmatrix}

    H_1: \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1/2 \end{pmatrix}\begin{pmatrix} V_1 \\ V_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1/2 \end{pmatrix}

    where V_1 and V_2 are independent, zero-mean Gaussian variables with variance \sigma^2.

    (a) Find the minimum probability of error receiver if both hypotheses are equally likely. Simplify the receiver structure.

    (b) Find the minimum probability of error.

    Solution:

    Let Z = \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix}. The conditional distribution of Z is

    Z|H_0 \sim N(\mu_0, \Lambda), \quad Z|H_1 \sim N(\mu_1, \Lambda), \quad \mu_0 = \begin{pmatrix} -1 \\ -1/2 \end{pmatrix}, \quad \mu_1 = \begin{pmatrix} 1 \\ 1/2 \end{pmatrix}, \quad \Lambda = \sigma^2\begin{pmatrix} 1 & 0 \\ 0 & 1/4 \end{pmatrix}


    (a) The decision rule is

    \frac{f(z|H_1)}{f(z|H_0)} \gtrless_{H_0}^{H_1} \frac{p(H_0)}{p(H_1)} = 1

    \log f(z|H_1) - \log f(z|H_0) \gtrless_{H_0}^{H_1} 0

    \frac{2}{\sigma^2}(z_1 + 2z_2) \gtrless_{H_0}^{H_1} 0

    z_1 + 2z_2 \gtrless_{H_0}^{H_1} 0

    (b) Define X = Z_1 + 2Z_2. Since V_1, V_2 are independent, Z_1, Z_2 are independent as well. A linear combination of Z_1, Z_2 yields a Gaussian random variable with parameters

    E\{X|H_0\} = -2, \quad E\{X|H_1\} = 2, \quad Var\{X|H_0\} = Var\{X|H_1\} = 2\sigma^2

    The error events have probabilities

    P_{FA} = \Pr\{\hat{H} = H_1|H = H_0\} = \int_0^{\infty} f(x|H_0)dx = Q\left(\frac{2}{\sqrt{2\sigma^2}}\right), \qquad P_M = \Pr\{\hat{H} = H_0|H = H_1\} = \int_{-\infty}^{0} f(x|H_1)dx = Q\left(\frac{2}{\sqrt{2\sigma^2}}\right)

    and hence the minimum probability of error is p(e) = Q\left(\frac{\sqrt{2}}{\sigma}\right).
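The statistics of X = Z_1 + 2Z_2 and the resulting error probability can be confirmed by simulation. A minimal sketch (sigma = 0.8 and the seed are arbitrary assumptions):

```python
import math
import random

sigma = 0.8
random.seed(11)

def sample(h):
    """Draw Z = diag(1, 1/2) V + mu_h, with V ~ N(0, sigma^2 I)."""
    m1, m2 = (1.0, 0.5) if h == 1 else (-1.0, -0.5)
    z1 = random.gauss(0, sigma) + m1
    z2 = 0.5 * random.gauss(0, sigma) + m2
    return z1, z2

trials, errors, xs_h1 = 200_000, 0, []
for _ in range(trials):
    h = random.getrandbits(1)
    z1, z2 = sample(h)
    x = z1 + 2 * z2                       # sufficient statistic
    if h == 1:
        xs_h1.append(x)
    errors += (1 if x > 0 else 0) != h

mean_h1 = sum(xs_h1) / len(xs_h1)         # expect E{X|H1} = 2
p_theory = 0.5 * math.erfc((math.sqrt(2) / sigma) / math.sqrt(2))   # Q(sqrt(2)/sigma)
print(mean_h1, errors / trials, p_theory)
```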


    5 Signal Space Representation

    1. [1, Problem 4.9]. Consider a set of M orthogonal signal waveforms s_m(t), 1 \le m \le M, 0 \le t \le T,⁴ all of which have the same energy \mathcal{E}.⁵ Define a new set of waveforms as

    s_m'(t) = s_m(t) - \frac{1}{M}\sum_{k=1}^{M} s_k(t), \quad 1 \le m \le M, \quad 0 \le t \le T

    Show that the M signal waveforms \{s_m'(t)\} have equal energy, given by

    \mathcal{E}' = \frac{M - 1}{M}\mathcal{E}

    and are equally correlated, with correlation coefficient

    \rho_{mn} = \frac{1}{\mathcal{E}'}\int_0^T s_m'(t)s_n'(t)dt = -\frac{1}{M - 1}

    Solution:

    The energy of the signal waveform s_m'(t) is:

    \mathcal{E}' = \int |s_m'(t)|^2 dt = \int \left|s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right|^2 dt
    = \int_0^T s_m^2(t)dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_0^T s_k(t)s_l(t)dt - \frac{2}{M}\sum_{k=1}^{M}\int_0^T s_m(t)s_k(t)dt
    = \mathcal{E} + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\mathcal{E}\delta_{kl} - \frac{2}{M}\mathcal{E}
    = \mathcal{E} + \frac{\mathcal{E}}{M} - \frac{2\mathcal{E}}{M} = \frac{M - 1}{M}\mathcal{E}

    The correlation coefficient is given by:

    \rho_{mn} = \frac{1}{\mathcal{E}'}\int_0^T s_m'(t)s_n'(t)dt = \frac{1}{\mathcal{E}'}\int_0^T \left(s_m(t) - \frac{1}{M}\sum_{k=1}^{M}s_k(t)\right)\left(s_n(t) - \frac{1}{M}\sum_{l=1}^{M}s_l(t)\right)dt
    = \frac{1}{\mathcal{E}'}\left[\int_0^T s_m(t)s_n(t)dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_0^T s_k(t)s_l(t)dt - \frac{2}{M}\sum_{k=1}^{M}\int_0^T s_m(t)s_k(t)dt\right]
    = \frac{\frac{1}{M^2}M\mathcal{E} - \frac{2}{M}\mathcal{E}}{\frac{M - 1}{M}\mathcal{E}} = -\frac{1}{M - 1}
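The simplex construction above is easy to verify numerically by representing the orthogonal waveforms as scaled unit vectors in R^M (M = 4 and E = 2 are arbitrary assumptions):

```python
import math

M, E = 4, 2.0
# Orthogonal equal-energy signals: s_m = sqrt(E) * e_m in R^M.
s = [[math.sqrt(E) if k == m else 0.0 for k in range(M)] for m in range(M)]
mean = [sum(col) / M for col in zip(*s)]
sp = [[sm[k] - mean[k] for k in range(M)] for sm in s]   # s'_m = s_m - (1/M) sum_k s_k

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
Ep = dot(sp[0], sp[0])            # energy E' = E (M-1)/M
rho = dot(sp[0], sp[1]) / Ep      # correlation -1/(M-1)
print(Ep, rho)                    # 1.5, -1/3
```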

    2. [1, Problem 4.10].

    ⁴\langle s_j(t), s_k(t)\rangle = 0, \; j \ne k, \; j, k \in \{1, 2, \ldots, M\}.
    ⁵The energy of the signal waveform s_m(t) is \mathcal{E} = \int |s_m(t)|^2 dt.


    Thus, the signals f_n(t) are orthogonal. It is also straightforward to verify that the signals have unit energy:

    \int |f_n(t)|^2 dt = 1, \quad n = 1, 2, 3

    Hence, they are orthonormal.

    (b) We first determine the weighting coefficients

    x_n = \int x(t)f_n(t)dt, \quad n = 1, 2, 3

    x_1 = \int_0^4 x(t)f_1(t)dt = 0, \qquad x_2 = \int_0^4 x(t)f_2(t)dt = \frac{1}{2}\int_0^4 x(t)dt = 0, \qquad x_3 = \int_0^4 x(t)f_3(t)dt = 0

    As is observed, x(t) is orthogonal to the signal waveforms f_n(t), n = 1, 2, 3, and thus it cannot be represented as a linear combination of these functions.

    3. [1, Problem 4.11]. Consider the following four waveforms

    s_1(t) = 2, \quad 0 \le t


    In matrix notation, the four waveforms can be represented as

    \begin{pmatrix} s_1(t) \\ s_2(t) \\ s_3(t) \\ s_4(t) \end{pmatrix} = \begin{pmatrix} 2 & -1 & -1 & -1 \\ -2 & 1 & 1 & 0 \\ 1 & -1 & 1 & -1 \\ 1 & -2 & -2 & 2 \end{pmatrix}\begin{pmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \\ f_4(t) \end{pmatrix}

    Note that the rank of the transformation matrix is 4; therefore, the dimensionality of the waveforms is 4.

    (b) The representation vectors are

    s_1 = [2, -1, -1, -1], \quad s_2 = [-2, 1, 1, 0], \quad s_3 = [1, -1, 1, -1], \quad s_4 = [1, -2, -2, 2]

    (c) The distance between the first and the second vector is:

    d_{1,2} = \sqrt{|s_1 - s_2|^2} = \sqrt{4^2 + 2^2 + 2^2 + 1^2} = \sqrt{25} = 5

    Similarly we find that:

    d_{1,3} = \sqrt{|s_1 - s_3|^2} = \sqrt{1^2 + 0^2 + 2^2 + 0^2} = \sqrt{5}
    d_{1,4} = \sqrt{|s_1 - s_4|^2} = \sqrt{1^2 + 1^2 + 1^2 + 3^2} = \sqrt{12}
    d_{2,3} = \sqrt{|s_2 - s_3|^2} = \sqrt{3^2 + 2^2 + 0^2 + 1^2} = \sqrt{14}
    d_{2,4} = \sqrt{|s_2 - s_4|^2} = \sqrt{3^2 + 3^2 + 3^2 + 2^2} = \sqrt{31}
    d_{3,4} = \sqrt{|s_3 - s_4|^2} = \sqrt{0^2 + 1^2 + 3^2 + 3^2} = \sqrt{19}

    Thus, the minimum distance between any pair of vectors is d_{min} = \sqrt{5}.
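The pairwise distances can be tabulated mechanically from the representation vectors above. A minimal sketch:

```python
import itertools
import math

vecs = {
    1: (2, -1, -1, -1),
    2: (-2, 1, 1, 0),
    3: (1, -1, 1, -1),
    4: (1, -2, -2, 2),
}
# Squared Euclidean distance for every unordered pair.
d2 = {(i, j): sum((a - b) ** 2 for a, b in zip(vecs[i], vecs[j]))
      for i, j in itertools.combinations(vecs, 2)}
print(d2)
print(math.sqrt(min(d2.values())))   # d_min = sqrt(5)
```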

    4. [2, Problem 5.4].

    (a) Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the following signals

    s_1(t) = 2, \quad 0 \le t


    Define

    s_{21} = \int_0^T s_2(t)\phi_1(t)dt = \int_0^1 4 \, dt = 4

    g_2(t) = s_2(t) - s_{21}\phi_1(t) = 4, \quad 1 \le t


    The integral is then written as a sum of three integrals,

    \int_0^T |y(t) - x_i(t)|^2 dt = \int_0^{t_1} |y(t) - x_i(t)|^2 dt + \int_{t_1}^{t_2} |y(t) - x_i(t)|^2 dt + \int_{t_2}^{T} |y(t) - x_i(t)|^2 dt

    Since the second integral, over the interval [t_1, t_2], is constant as a function of i, the optimum decision rule reduces to

    \min_i \left[\int_0^{t_1} |y(t) - x_i(t)|^2 dt + \int_{t_2}^{T} |y(t) - x_i(t)|^2 dt\right]

    and therefore the optimum receiver may ignore the interval [t_1, t_2].

    (b) In an appropriate orthonormal basis of dimension N, the vectors x_i and y are given by

    x_i^T = [x_{i1}, x_{i2}, \ldots, x_{iN}], \qquad y^T = [y_1, y_2, \ldots, y_N]

    Assume that x_{im} = x_{1m} for all i; the optimum decision rule becomes

    \min_i \sum_{k=1}^{N} |y_k - x_{ik}|^2 \equiv \min_i \left[\sum_{k=1, k \ne m}^{N} |y_k - x_{ik}|^2 + |y_m - x_{im}|^2\right]

    Since |y_m - x_{im}|^2 is constant for all i, the optimum decision rule becomes

    \min_i \sum_{k=1, k \ne m}^{N} |y_k - x_{ik}|^2

    Therefore, the projection y_m may be ignored by the optimum receiver.

    (c) The result does not hold true if the noise is colored Gaussian noise. This is due to the factthat the noise along one component is correlated with other components and hence mightnot be irrelevant. In such a case, all components turn out to be relevant. Equivalently, byduality, the same result holds in the time domain.


    6 Optimal Receiver for the Waveform Channel

    1. [1, Problem 5.4]. A binary digital communication system employs the signals

    s_0(t) = 0, \; 0 \le t \le T; \qquad s_1(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise}, \end{cases}

    for transmitting the information. This is called on-off signaling. The demodulator cross-correlates the received signal r(t) with s_i(t), i = 0, 1, and samples the output of the correlator at t = T.

    (a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming that the signals are equally probable.

    (b) Determine the probability of error as a function of the SNR. How does on-off signaling compare with antipodal signaling?

    Solution:

    (a) The correlation-type demodulator employs the filter:

    f(t) = \begin{cases} \frac{1}{\sqrt{T}}, & 0 \le t \le T, \\ 0, & \text{otherwise}. \end{cases}

    Hence, the sampled outputs of the cross-correlator are:

    r = s_i + n, \quad i = 0, 1

    where s_0 = 0, s_1 = A\sqrt{T}, and the noise term n is a zero-mean Gaussian random variable with variance \sigma_n^2 = \frac{N_0}{2}. The probability density function of the sampled output is:

    f(r|s_0) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}, \qquad f(r|s_1) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r - A\sqrt{T})^2}{N_0}}

    The minimum-error decision rule is:

    \frac{f(r|s_1)}{f(r|s_0)} \gtrless_{s_0}^{s_1} 1 \quad\Rightarrow\quad r \gtrless_{s_0}^{s_1} \frac{1}{2}A\sqrt{T}

    (b) The average probability of error is:

    p(e) = \frac{1}{2}\int_{\frac{1}{2}A\sqrt{T}}^{\infty} f(r|s_0)dr + \frac{1}{2}\int_{-\infty}^{\frac{1}{2}A\sqrt{T}} f(r|s_1)dr
    = \frac{1}{2}\int_{\frac{1}{2}A\sqrt{T}}^{\infty} \frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}dr + \frac{1}{2}\int_{-\infty}^{\frac{1}{2}A\sqrt{T}} \frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r - A\sqrt{T})^2}{N_0}}dr
    = \frac{1}{2}\int_{\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx + \frac{1}{2}\int_{-\infty}^{-\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}} \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx
    = Q\left(\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}\right) = Q\left(\sqrt{SNR}\right)


    where

    SNR = \frac{A^2T}{2N_0}

    Thus, on-off signaling requires a factor of two more energy to achieve the same probability of error as antipodal signaling.
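The on-off error probability Q(sqrt(SNR)) at the stated threshold can be checked by Monte Carlo on the scalar correlator output. A minimal sketch (the values A, T, N0 and the seed are arbitrary assumptions):

```python
import math
import random

A, T, N0 = 1.0, 2.0, 0.5
snr = A * A * T / (2 * N0)
p_theory = 0.5 * math.erfc(math.sqrt(snr) / math.sqrt(2))   # Q(sqrt(SNR))

random.seed(5)
sigma = math.sqrt(N0 / 2)            # correlator-output noise variance N0/2
thresh = 0.5 * A * math.sqrt(T)      # optimum threshold from part (a)
trials, errors = 200_000, 0
for _ in range(trials):
    bit = random.getrandbits(1)
    r = (A * math.sqrt(T) if bit else 0.0) + random.gauss(0, sigma)
    errors += (1 if r > thresh else 0) != bit
print(p_theory, errors / trials)
```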

    2. [2, Problem 5.11]. Consider the optimal detection of the sinusoidal signal

    s(t) = \sin\left(\frac{8\pi t}{T}\right), \quad 0 \le t \le T

    in additive white Gaussian noise.

    (a) Determine the correlator output (at t = T) assuming a noiseless input.

    (b) Determine the corresponding matched filter output, assuming that the filter includes a delay T to make it causal.

    (c) Hence show that these two outputs are the same at time instant t = T.

    Solution:

    For the noiseless case, the received signal is r(t) = s(t), 0 \le t \le T.

    (a) The correlator output is:

    y(T) = \int_0^T r(\tau)s(\tau)d\tau = \int_0^T s^2(\tau)d\tau = \int_0^T \sin^2\left(\frac{8\pi\tau}{T}\right)d\tau = \frac{T}{2}

    (b) The matched filter is defined by the impulse response h(t) = s(T - t). The matched filter output is therefore:

    y(t) = \int_{-\infty}^{\infty} r(\tau)h(t - \tau)d\tau = \int s(\tau)s(T - t + \tau)d\tau

    For 0 \le t \le T, the two pulses overlap for \tau \in [0, t], so

    y(t) = \int_0^t \sin\left(\frac{8\pi\tau}{T}\right)\sin\left(\frac{8\pi(T - t + \tau)}{T}\right)d\tau
    = \frac{1}{2}\int_0^t \cos\left(\frac{8\pi(T - t)}{T}\right)d\tau - \frac{1}{2}\int_0^t \cos\left(\frac{8\pi(2\tau + T - t)}{T}\right)d\tau
    = \frac{t}{2}\cos\left(\frac{8\pi(T - t)}{T}\right) - \frac{T}{16\pi}\sin\left(\frac{8\pi t}{T}\right).

    (c) When the matched filter output is sampled at t = T, we get

    y(T) = \frac{T}{2}\cos(0) - \frac{T}{16\pi}\sin(8\pi) = \frac{T}{2}

    which is exactly the same as the correlator output determined in item (2a).

    3. SNR Maximization with a Matched Filter. Prove the following theorem: For the real system shown in Figure 1, the filter h(t) that maximizes the signal-to-noise ratio at sample time T_s is given by the matched filter h(t) = x(T_s - t).

    [Figure 1: SNR maximization by matched filter. The input x(t) plus noise n(t) is passed through h(t) and sampled at t = T_s to give y(T_s).]

    Solution:

    Compute the SNR at sample time t = T_s as follows:

    \text{Signal Energy} = \left[x(t) * h(t)|_{t=T_s}\right]^2 = \left[\int x(t)h(T_s - t)dt\right]^2 = \langle x(t), h(T_s - t)\rangle^2

    The sampled noise at the matched filter output has energy, or mean square,

    \text{Noise Energy} = E\left[\int n(t)h(T_s - t)dt \int n(s)h(T_s - s)ds\right] = \frac{N_0}{2}\int\int \delta(t - s)h(T_s - t)h(T_s - s)dt\,ds = \frac{N_0}{2}\int h^2(T_s - t)dt = \frac{N_0}{2}\|h\|^2

    The signal-to-noise ratio, defined as the ratio of the signal power to the noise power, equals

    SNR = \frac{2}{N_0}\frac{\langle x(t), h(T_s - t)\rangle^2}{\|h\|^2}

    The Cauchy-Schwarz inequality states that

    \langle x(t), h(T_s - t)\rangle^2 \le \|x\|^2 \cdot \|h\|^2

    with equality if and only if h(T_s - t) = kx(t), where k is an arbitrary constant. Thus, by inspection, the SNR is maximized over all choices of h(t) when h(t) = x(T_s - t). The filter h(t) is matched to x(t), and the corresponding maximum SNR (for any k) is

    SNR_{max} = \frac{2}{N_0}\|x\|^2
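A discrete-time analogue of this theorem is easy to check numerically: for a fixed pulse, the matched choice h = x beats randomly drawn filters, and it attains (2/N0)||x||^2. A minimal sketch (the pulse shape, N0, and the seed are arbitrary assumptions):

```python
import math
import random

random.seed(2)
N0 = 2.0
x = [math.sin(0.3 * k) * math.exp(-0.05 * k) for k in range(64)]   # arbitrary real pulse

def snr(h):
    """Discrete-time analogue of SNR = (2/N0) <x, h>^2 / ||h||^2."""
    inner = sum(a * b for a, b in zip(x, h))
    return (2 / N0) * inner ** 2 / sum(v * v for v in h)

matched = snr(x)   # the time reversal is absorbed by sampling at T_s
others = [snr([random.gauss(0, 1) for _ in x]) for _ in range(100)]
print(matched, max(others))
```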

    4. The optimal receiver. Consider the signals s_0(t), s_1(t) with respective probabilities p_0, p_1:

    s_0(t) = \begin{cases} \sqrt{\frac{E}{T}}, & 0 \le t < aT, \\ -\sqrt{\frac{E}{T}}, & aT \le t < T, \\ 0, & \text{otherwise}, \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos\left(\frac{2\pi t}{T}\right), & 0 \le t < T, \\ 0, & \text{otherwise}. \end{cases}


    The observation r(t) obeys

    r(t) = s_i(t) + n(t), \quad i = 0, 1, \qquad E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t - \tau)

    (a) Find the optimal receiver for the above two signals; write the solution in terms of s_0(t) and s_1(t).

    (b) Find the error probability of the optimal receiver for equiprobable signals.

    (c) Find the parameter a which minimizes the error probability.

    Solution:

    (a) We will use a type II receiver, which uses filters matched to the signals s_i(t), i = 0, 1. The optimal receiver is depicted in Figure 2.

    [Figure 2: Optimal receiver - II. r(t) is fed to the matched filters h_0(t) and h_1(t), both sampled at t = T; the bias \frac{N_0}{2}\ln p_i - \frac{E}{2} is added to branch i, and the maximum of y_0, y_1 is selected.]

    where h_0(t) = s_0(T - t), h_1(t) = s_1(T - t).

    The Max block in Figure 2 can be implemented as follows:

    y = y_0 - y_1 \gtrless_{s_1(t)}^{s_0(t)} 0

    The random variable y obeys

    y = [h_0(t) * r(t)]_{t=T} + \frac{N_0}{2}\ln p_0 - \frac{E}{2} - [h_1(t) * r(t)]_{t=T} - \frac{N_0}{2}\ln p_1 + \frac{E}{2}
    = \frac{N_0}{2}\ln\frac{p_0}{p_1} + [(h_0(t) - h_1(t)) * r(t)]_{t=T}

    Hence the optimal receiver can be implemented using one convolution operation instead of two, as depicted in Figure 3.

    (b) For an equiprobable binary constellation in an AWGN channel, the probability of error is given by

    p(e) = Q\left(\frac{d/2}{\sigma}\right), \quad d = \|s_0 - s_1\|, \quad d^2 = \|s_0 - s_1\|^2 = \|s_0\|^2 + \|s_1\|^2 - 2\langle s_0, s_1\rangle


    [Figure 3: Optimal receiver - II. r(t) is passed through the single filter h_0(t) - h_1(t), sampled at t = T, the bias \frac{N_0}{2}\ln\frac{p_0}{p_1} is added, and the decision rule is applied.]

    where \sigma^2 is the noise variance.

    The correlation coefficient between the two signals, \rho, equals

    \rho = \frac{\langle s_0, s_1\rangle}{\|s_0\| \cdot \|s_1\|} = \frac{\langle s_0, s_1\rangle}{E}

    and for equal-energy signals

    d^2 = 2E - 2\langle s_0, s_1\rangle \quad\Rightarrow\quad d = \sqrt{2E(1 - \rho)} \quad\Rightarrow\quad p(e) = Q\left(\sqrt{\frac{E(1 - \rho)}{N_0}}\right)

    (c) \rho is the only parameter in p(e) affected by a. An explicit calculation of \rho yields

    \langle s_0, s_1\rangle = \int_0^T s_0(t)s_1(t)dt = \int_0^{aT} \sqrt{\frac{E}{T}}\sqrt{\frac{2E}{T}}\cos\left(\frac{2\pi t}{T}\right)dt - \int_{aT}^{T} \sqrt{\frac{E}{T}}\sqrt{\frac{2E}{T}}\cos\left(\frac{2\pi t}{T}\right)dt
    = \frac{\sqrt{2}E}{2\pi}\sin 2\pi a + \frac{\sqrt{2}E}{2\pi}\sin 2\pi a = \frac{\sqrt{2}E}{\pi}\sin 2\pi a

    \rho = \frac{\sqrt{2}}{\pi}\sin 2\pi a \quad\Rightarrow\quad p(e) = Q\left(\sqrt{\frac{E\left(1 - \frac{\sqrt{2}}{\pi}\sin 2\pi a\right)}{N_0}}\right)

    In order to minimize the probability of error, we maximize the argument of the Q function:

    \sin 2\pi a = -1 \quad\Rightarrow\quad a = \frac{3}{4}
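Since p(e) decreases as rho decreases, minimizing rho(a) = (sqrt(2)/pi) sin(2*pi*a) over a grid confirms the optimum a = 3/4. A minimal sketch:

```python
import math

rho = lambda a: (math.sqrt(2) / math.pi) * math.sin(2 * math.pi * a)

# p(e) = Q(sqrt(E(1 - rho)/N0)) is smallest where rho is smallest.
grid = [k / 10_000 for k in range(10_000)]
a_best = min(grid, key=rho)
print(a_best, rho(a_best))    # a = 0.75, rho = -sqrt(2)/pi
```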


    7 The Probability of Error

    1. [1, Problem 5.10]. A ternary communication system transmits one of three signals, s(t), 0, or -s(t), every T seconds. The received signal is either r(t) = s(t) + z(t), r(t) = z(t), or r(t) = -s(t) + z(t), where z(t) is white Gaussian noise with E\{z(t)\} = 0 and \phi_{zz}(\tau) = \frac{1}{2}E\{z(t)z^*(t - \tau)\} = N_0\delta(\tau). The optimum receiver computes the correlation metric

    U = \text{Re}\left[\int_0^T r(t)s^*(t)dt\right]

    and compares U with a threshold A and a threshold -A. If U > A, the decision is made that s(t) was sent. If U < -A, the decision is made in favor of -s(t). If -A \le U \le A, the decision is made in favor of 0.

    (a) Determine the three conditional probabilities of error: p(e|s(t)), p(e|0), and p(e|-s(t)).

    (b) Determine the average probability of error p(e) as a function of the threshold A, assuming that the three symbols are equally probable a priori.

    (c) Determine the value of A that minimizes p(e).

    Solution:

    (a) U = \text{Re}\left[\int_0^T r(t)s^*(t)dt\right], where r(t) is s(t) + z(t), -s(t) + z(t), or z(t), depending on which signal was sent. If we assume that s(t) was sent:

    U = \text{Re}\left[\int_0^T s(t)s^*(t)dt\right] + \text{Re}\left[\int_0^T z(t)s^*(t)dt\right] = 2E + N

    where E = \frac{1}{2}\int_0^T s(t)s^*(t)dt is a constant, and N = \text{Re}\left[\int_0^T z(t)s^*(t)dt\right] is a Gaussian random variable with zero mean and variance 2EN_0. Hence, given that s(t) was sent, the probability of error is:

    p_1(e) = \Pr\{N < A - 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)

    When -s(t) is transmitted: U = -2E + N, and the corresponding conditional error probability is:

    p_2(e) = \Pr\{N > -A + 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)

    and finally, when 0 is transmitted: U = N, and the corresponding error probability is:

    p_3(e) = \Pr\{N > A \text{ or } N < -A\} = 2Q\left(\frac{A}{\sqrt{2EN_0}}\right)

    (b)

    p(e) = \frac{1}{3}\left[p_1(e) + p_2(e) + p_3(e)\right] = \frac{2}{3}\left[Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right) + Q\left(\frac{A}{\sqrt{2EN_0}}\right)\right]


    (c) In order to minimize p(e):

    \frac{dp(e)}{dA} = 0 \quad\Rightarrow\quad A = E

    where we differentiate Q(x) = \int_x^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}dt with respect to x using the Leibniz rule: \frac{d}{dx}\left[\int_{f(x)}^{\infty} g(a)da\right] = -\frac{df}{dx}g(f(x)). Using this threshold:

    p(e) = \frac{4}{3}Q\left(\sqrt{\frac{E}{2N_0}}\right)
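The optimum threshold A = E can be confirmed by a direct grid search over p(e) from part (b). A minimal sketch (E = 1 and N0 = 0.25 are arbitrary assumptions):

```python
import math

E, N0 = 1.0, 0.25
Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
s = math.sqrt(2 * E * N0)

def p_e(A):
    """Average error probability from part (b)."""
    return (2 / 3) * (Q((2 * E - A) / s) + Q(A / s))

grid = [k * E / 1000 for k in range(2001)]      # A in [0, 2E]
A_best = min(grid, key=p_e)
print(A_best, p_e(A_best))                      # minimized at A = E
```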

    2. [1, Problem 5.19]. Consider a signal detector with an input

    r = \pm A + n, \quad A > 0

    where +A and -A occur with equal probability and the noise variable n is characterized by the Laplacian p.d.f.:

    f(n) = \frac{1}{\sqrt{2}\sigma}e^{-\sqrt{2}|n|/\sigma}

    (a) Determine the probability of error as a function of the parameters A and \sigma.

    (b) Determine the SNR required to achieve an error probability of 10^{-5}. How does the SNR compare with the result for a Gaussian p.d.f.?

    Solution:

    (a) Let \lambda = \frac{\sqrt{2}}{\sigma}. The optimal receiver uses the criterion:

    \frac{f(r|A)}{f(r|-A)} = e^{-\lambda[|r - A| - |r + A|]} \gtrless_{-A}^{A} 1 \quad\Rightarrow\quad r \gtrless_{-A}^{A} 0

    The average probability of error is:

    p(e) = \frac{1}{2}\Pr\{\text{error}|A\} + \frac{1}{2}\Pr\{\text{error}|-A\}
    = \frac{1}{2}\int_{-\infty}^{0} f(r|A)dr + \frac{1}{2}\int_0^{\infty} f(r|-A)dr
    = \frac{1}{2}\int_{-\infty}^{0} \frac{\lambda}{2}e^{-\lambda|r - A|}dr + \frac{1}{2}\int_0^{\infty} \frac{\lambda}{2}e^{-\lambda|r + A|}dr
    = \frac{\lambda}{4}\int_A^{\infty} e^{-\lambda x}dx + \frac{\lambda}{4}\int_A^{\infty} e^{-\lambda x}dx
    = \frac{1}{2}e^{-\lambda A} = \frac{1}{2}e^{-\sqrt{2}A/\sigma}

    (b) The variance of the noise is \sigma^2, hence the SNR is:

    SNR = \frac{A^2}{\sigma^2}

    and the probability of error is given by:

    p(e) = \frac{1}{2}e^{-\sqrt{2 \cdot SNR}}

    For p(e) = 10^{-5} we obtain:

    \ln(2 \times 10^{-5}) = -\sqrt{2 \cdot SNR} \quad\Rightarrow\quad SNR = 58.53 = 17.67 \text{ dB}

    If the noise were Gaussian, the probability of error for antipodal signaling would be

    p(e) = Q\left(\sqrt{SNR}\right)

    where SNR is the signal-to-noise ratio at the output of the matched filter. With p(e) = 10^{-5} we find \sqrt{SNR} = 4.26 and therefore SNR = 12.59 dB. Thus the required signal-to-noise ratio is about 5 dB less when the additive noise is Gaussian.
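The two required SNRs can be reproduced numerically, inverting the Gaussian Q function by bisection. A minimal sketch:

```python
import math

p_target = 1e-5

# Laplacian noise: p(e) = 0.5*exp(-sqrt(2*SNR))  ->  SNR = (ln(1/(2p)))^2 / 2.
snr_lap = math.log(1 / (2 * p_target)) ** 2 / 2

# Gaussian noise: p(e) = Q(sqrt(SNR)); invert Q by bisection on [0, 10].
Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Q(mid) > p_target else (lo, mid)
snr_gauss = lo ** 2

dB = lambda x: 10 * math.log10(x)
print(round(dB(snr_lap), 2), round(dB(snr_gauss), 2))   # ~17.67 dB vs ~12.6 dB
```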

    3. [1, Problem 5.38]. The discrete sequence

    r_k = \sqrt{E_b}c_k + n_k, \quad k = 1, 2, \ldots, n

    represents the output sequence of samples from a demodulator, where c_k = \pm 1 are elements of one of two possible code words, C_1 = [1\ 1\ \ldots\ 1] and C_2 = [1\ \ldots\ 1\ -1\ \ldots\ -1]. The code word C_2 has w elements that are +1 and n - w elements that are -1, where w is a positive integer. The noise sequence \{n_k\} is white Gaussian with variance \sigma^2.

    (a) What is the optimum ML detector for the two possible transmitted signals?

    (b) Determine the probability of error as a function of the parameters \sigma^2, E_b, w.

    (c) What is the value of w that minimizes the error probability?

    Solution:

    (a) The optimal ML detector selects the sequence C_i that minimizes the quantity:

    D(r, C_i) = \sum_{k=1}^{n}\left(r_k - \sqrt{E_b}c_{ik}\right)^2

    The metrics of the two possible transmitted sequences are

    D(r, C_1) = \sum_{k=1}^{w}\left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^{n}\left(r_k - \sqrt{E_b}\right)^2

    D(r, C_2) = \sum_{k=1}^{w}\left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^{n}\left(r_k + \sqrt{E_b}\right)^2

    Since the first term on the right side is common to the two equations, we conclude that the optimal ML detector can base its decision only on the last n - w received elements of r. That is,

    \sum_{k=w+1}^{n}\left(r_k - \sqrt{E_b}\right)^2 - \sum_{k=w+1}^{n}\left(r_k + \sqrt{E_b}\right)^2 \gtrless_{C_1}^{C_2} 0

    or, equivalently,

    \sum_{k=w+1}^{n} r_k \gtrless_{C_2}^{C_1} 0


Figure 4: Optimal receiver type II — r(t) is passed through the matched filters s0(T − t) and s1(T − t), both sampled at t = T, followed by a maximum selector.

    Solution:

(a) The signals are equiprobable and have equal energy. We will use the type II receiver, depicted in Figure 4.

The distance between the signals is

d² = ∫₀ᵀ ( √(2E/T)·sin(2πt/T) − √(2E/T)·cos(2πt/T) )² dt = 2E  ⟹  d = √(2E)

The receiver depicted in Figure 4 is equivalent to the following (and more efficient) receiver, depicted in Figure 5.

Figure 5: Efficient optimal receiver — r(t) is passed through the single filter s0(T − t) − s1(T − t), sampled at t = T, and the sign of the sample selects s0 or s1.

For a binary system with equiprobable signals s0(t) and s1(t) the probability of error is given by

p(e) = Q( d/(2σ) ) = Q( (d/2)/√(N0/2) ) = Q( d/√(2N0) )

where d, the distance between the signals, is given by

d = ‖s0(t) − s1(t)‖ = ‖s0 − s1‖

Hence, the probability of error is

p(e) = Q( √(2E)/√(2N0) )  ⟹  p(e) = Q( √(E/N0) )

(b) Let us define the random variable Y = ∫₀^{T/2} r(t) dt. Y obeys

Y|s0 = ∫₀^{T/2} s0(t) dt + ∫₀^{T/2} n(t) dt

Y|s1 = ∫₀^{T/2} s1(t) dt + ∫₀^{T/2} n(t) dt


Let us define the random variable N = ∫₀^{T/2} n(t) dt. N is a zero-mean Gaussian random variable, with variance

Var{N} = E{ ∫₀^{T/2} ∫₀^{T/2} n(τ)n(σ) dτ dσ } = ∫₀^{T/2} ∫₀^{T/2} (N0/2) δ(τ − σ) dτ dσ = N0·T/4

Y|si is a Gaussian random variable (note that Y is not Gaussian, but a Gaussian mixture!) with mean:

E{Y|s0} = ∫₀^{T/2} s0(t) dt = √(2ET)/π

E{Y|s1} = ∫₀^{T/2} s1(t) dt = 0

The variance of Y|si is identical under both cases, and equal to the variance of N. For the given decision rule the error probability is:

p(e) = p(s0)·Pr{Y ≤ 0|s0} + p(s1)·Pr{Y > 0|s1}
     = (1/2)·Q( (2/π)·√(2E/N0) ) + 1/4

(c) We will use the same derivation procedure as in the previous item. Define the random variables Y, N as follows:

Y = ∫₀^{aT} r(t) dt,  N = ∫₀^{aT} n(t) dt

E{N} = 0,  Var{N} = aT·N0/2

E{Y|s0} = √(2E/T) ∫₀^{aT} sin(2πt/T) dt = ( √(2ET)/(2π) )·(1 − cos 2πa)

E{Y|s1} = √(2E/T) ∫₀^{aT} cos(2πt/T) dt = ( √(2ET)/(2π) )·sin 2πa

Var{Y|s0} = Var{Y|s1} = Var{N}

The distance between E{Y|s0} and E{Y|s1} equals

d = ( √(2ET)/(2π) )·|1 − cos(2πa) − sin(2πa)|

For an optimal decision rule the probability of error equals Q(d/(2σ)). Hence the probability of error equals

p(e) = Q( (1/(2π))·√(E/N0)·√(1/a)·|1 − cos(2πa) − sin(2πa)| )

which is minimized when (1/√a)·|1 − cos 2πa − sin 2πa| is maximized. Let aopt denote the a which maximizes the above expression. Numerical solution yields

aopt = 0.5885
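The numerical maximization that yields aopt can be sketched with a simple grid search in Python (the helper name `margin` is illustrative, not from the original solution):

```python
import math

def margin(a):
    # (1/sqrt(a)) * |1 - cos(2*pi*a) - sin(2*pi*a)|: the scale of the Q-function argument
    return abs(1 - math.cos(2 * math.pi * a) - math.sin(2 * math.pi * a)) / math.sqrt(a)

# coarse grid search over a in (0, 1] with step 1e-4
a_opt = max((k / 10000 for k in range(1, 10001)), key=margin)
```

The grid search lands on a_opt ≈ 0.5885, matching the value quoted above.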


8 Bit Error Probability

1. [3, Example 6.2].
Compare the probability of bit error for 8PSK and 16PSK, in an AWGN channel, assuming γb = Eb/N0 = 15 dB and equal a-priori probabilities. Use the following approximations:

• The nearest neighbor approximation given in class.
• γb ≈ γs / log2 M.
• The approximation for Pe,bit given in class.

Solution:

The nearest neighbor approximation for the probability of error, in an AWGN channel, for an M-PSK constellation is

Pe ≈ 2Q( √(2γs)·sin(π/M) )

The approximation for Pe,bit (under Gray mapping, at high enough SNR) is

Pe,bit ≈ Pe / log2 M

For 8PSK we have γs = (log2 8)·10^{15/10} = 94.87. Hence

Pe ≈ 2Q( √189.74·sin(π/8) ) = 1.355·10⁻⁷

Using the approximation for Pe,bit we get

Pe,bit = Pe/3 = 4.52·10⁻⁸

For 16PSK we have γs = (log2 16)·10^{15/10} = 126.49. Hence

Pe ≈ 2Q( √252.98·sin(π/16) ) = 1.916·10⁻³

Using the approximation for Pe,bit we get

Pe,bit = Pe/4 = 4.79·10⁻⁴

Note that Pe,bit is much larger for 16PSK than for 8PSK at the same γb. This result is expected, since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy-per-bit the minimum distance between constellation points is smaller.
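The four numbers above can be reproduced with a short Python sketch of the nearest-neighbor approximation (function names are illustrative):

```python
import math

def Q(x):
    # Gaussian tail function
    return 0.5 * math.erfc(x / math.sqrt(2))

gamma_b = 10 ** (15 / 10)  # Eb/N0 = 15 dB

def mpsk_nearest_neighbor(M):
    gamma_s = gamma_b * math.log2(M)  # gamma_s = gamma_b * log2(M)
    pe = 2 * Q(math.sqrt(2 * gamma_s) * math.sin(math.pi / M))
    return pe, pe / math.log2(M)      # (Pe, Pe_bit under Gray mapping)

pe8, peb8 = mpsk_nearest_neighbor(8)
pe16, peb16 = mpsk_nearest_neighbor(16)
```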

2. Bit error probability for rectangular constellation.
Let p0(t) and p1(t) be two orthonormal functions, different from zero in the time interval [0, T]. The equiprobable signals defined in Figure 6 are transmitted through a zero-mean AWGN channel with noise PSD equal to N0/2.

(a) Calculate Pe for the optimal receiver.

(b) Calculate Pe,bit for the optimal receiver (optimal in the sense of minimal Pe).

(c) Approximate Pe,bit for high SNR (d² ≫ N0/2). Explain.


Figure 6: 8 signals in rectangular constellation — the points lie at p0 ∈ {−3d/2, −d/2, d/2, 3d/2} and p1 ∈ {−d/2, d/2}, with Gray labels (010), (011), (001), (000) on the upper row (left to right) and (110), (111), (101), (100) on the lower row.

Solution:

Let n0 denote the noise projection on p0(t) and n1 the noise projection on p1(t). Clearly ni ~ N(0, N0/2), i = 0, 1.

(a) Let Pc denote the probability of correct symbol decision; hence Pe = 1 − Pc. For the four outer points,

Pr{correct decision|(000) was transmitted} = ( 1 − Q( (d/2)/√(N0/2) ) )²
= Pr{correct decision|(100)} = Pr{correct decision|(010)} = Pr{correct decision|(110)} = P1

where the equalities are due to the constellation symmetry. For the four inner points,

Pr{correct decision|(001) was transmitted} = ( 1 − Q( (d/2)/√(N0/2) ) )·( 1 − 2Q( (d/2)/√(N0/2) ) )
= Pr{correct decision|(101)} = Pr{correct decision|(011)} = Pr{correct decision|(111)} = P2

again by symmetry. Hence

Pc = (1/2)(P1 + P2) = (1/2)[ (1 − Q( (d/2)/√(N0/2) ))² + (1 − Q( (d/2)/√(N0/2) ))·(1 − 2Q( (d/2)/√(N0/2) )) ]

and

Pe = 1 − Pc  ⟹  Pe = (1/2)[ 5Q( (d/2)/√(N0/2) ) − 3Q²( (d/2)/√(N0/2) ) ]
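As a sanity check on the symbol-error expression, the following Python sketch (the values d = 2 and N0 = 1 are illustrative, not from the problem) compares the closed form against a Monte Carlo simulation of the minimum-distance receiver on the 4×2 grid:

```python
import math
import random

random.seed(1)
d, N0 = 2.0, 1.0
sigma = math.sqrt(N0 / 2)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

q = Q((d / 2) / sigma)
pe_formula = 0.5 * (5 * q - 3 * q ** 2)

# Monte Carlo over the 4x2 grid with independent per-axis minimum-distance decisions
xs = [-1.5 * d, -0.5 * d, 0.5 * d, 1.5 * d]
ys = [-0.5 * d, 0.5 * d]
trials, errors = 200_000, 0
for _ in range(trials):
    x, y = random.choice(xs), random.choice(ys)
    rx = x + random.gauss(0, sigma)
    ry = y + random.gauss(0, sigma)
    xh = min(xs, key=lambda v: abs(v - rx))
    yh = min(ys, key=lambda v: abs(v - ry))
    errors += (xh, yh) != (x, y)
pe_sim = errors / trials
```

The simulated error rate agrees with the closed-form Pe to within Monte Carlo accuracy.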

(b) Let b0 denote the MSB, b2 denote the LSB and b1 denote the middle bit. Let bi(s), i = 0, 1, 2 denote the ith bit of the constellation point s.

Pr{error in b2|(000) was transmitted} = Σ_{s: b2(s)=1} Pr{s was received|(000) was transmitted}
= Pr{ −5d/2 < n0 < −d/2 } = Pr{ d/2 < n0 < 5d/2 }
= Q( (d/2)/√(N0/2) ) − Q( (5d/2)/√(N0/2) )

12 ISI Channels and MLSE

Consider a quaternary (Ik ∈ {±1, ±3}) information sequence transmitted through an ISI channel, producing the received samples

yk = 0.8·Ik − 0.6·Ik−1 + nk,  k ≥ 1 (with I0 = 0)

where {nk} is a sequence of real-valued independent zero-mean Gaussian noise variables with variance σ² = N0.


(a) Sketch the tree structure, showing the possible signal sequences for the received signals y1, y2, and y3.

(b) Suppose the Viterbi algorithm is used to detect the information sequence. How many metrics must be computed at each stage of the algorithm?

(c) How many surviving sequences are there in the Viterbi algorithm for this channel?

(d) Suppose that the received signals are

y1 = 0.5, y2 = 2.0, y3 = −1.0

Determine the surviving sequences through stage y3 and the corresponding metrics.

    Solution:

    (a) Figure 12 depicts part of the tree.

    1I

    2I

    3I

    3

    1

    3-

    1-

    3

    1

    1-

    3-

    3

    1

    1-

    3-

    3

    1

    1-

    3-

    3

    1

    1-

    3-

    3

    1

    1-

    3-

    Figure 12: Tree structure.

(b) There are four states in the trellis (corresponding to the four possible values of the symbol Ik−1), and from each one there are four departing paths (corresponding to the four possible values of the symbol Ik). Hence, 16 metrics must be computed at each stage of the Viterbi algorithm.

(c) Since there are four states, the number of surviving sequences is also four.

(d) The metrics are

μ1(I1) = (y1 − 0.8·I1)²
μk = μk−1 + (yk − 0.8·Ik + 0.6·Ik−1)²,  k > 1


Table 1 details the metric for the first stage.

I1    μ1(I1)
 3    3.61
 1    0.09
−1    1.69
−3    8.41

Table 1: First stage metrics.

Table 2 details the metric for the second stage.

I2   I1   μ2(I2, I1)
 3    3    5.57
 3    1    0.13
 3   −1    2.69
 3   −3   13.25
 1    3   12.61
 1    1    3.33
 1   −1    2.05
 1   −3    8.77
−1    3   24.77
−1    1   11.65
−1   −1    6.53
−1   −3    9.41
−3    3   42.05
−3    1   25.09
−3   −1   16.13
−3   −3   15.17

Table 2: Second stage metrics.

The four surviving paths at this stage are min_{I1} μ2(x, I1), x = 3, 1, −1, −3:

(I2, I1) = (3, 1):    μ2(3, 1) = 0.13
(I2, I1) = (1, −1):   μ2(1, −1) = 2.05
(I2, I1) = (−1, −1):  μ2(−1, −1) = 6.53
(I2, I1) = (−3, −3):  μ2(−3, −3) = 15.17

Table 3 details the metric for the third stage (each entry extends the stage-two survivor ending in I2).

I3   I2   I1   μ3(I3, I2, I1)
 3    3    1    2.69
 3    1   −1    9.89
 3   −1   −1   22.53
 3   −3   −3   42.21
 1    3    1    0.13
 1    1   −1    3.49
 1   −1   −1   12.29
 1   −3   −3   28.13
−1    3    1    2.69
−1    1   −1    2.21
−1   −1   −1    7.17
−1   −3   −3   19.17
−3    3    1   10.37
−3    1   −1    6.05
−3   −1   −1    7.17
−3   −3   −3   15.33

Table 3: Third stage metrics.

The four surviving paths at this stage are min_{I2, I1} μ3(x, I2, I1), x = 3, 1, −1, −3:

(I3, I2, I1) = (3, 3, 1):     μ3(3, 3, 1) = 2.69
(I3, I2, I1) = (1, 3, 1):     μ3(1, 3, 1) = 0.13
(I3, I2, I1) = (−1, 1, −1):   μ3(−1, 1, −1) = 2.21
(I3, I2, I1) = (−3, 1, −1):   μ3(−3, 1, −1) = 6.05

13 Equalization

Solution:

(a) Hzf(f) = 1/H(f) = 1, 0 ≤ |f| ≤ …


Solution:

(a) If by {cn} we denote the coefficients of the FIR equalizer, then the equalized signal is:

qm = Σ_{n=−1}^{1} cn·x_{m−n}

which in matrix notation is written as

[ 0.9  0.3  0   ] [c−1]   [0]
[ 0.3  0.9  0.3 ] [c0 ] = [1]
[ 0    0.3  0.9 ] [c1 ]   [0]

The coefficients of the zero-forcing equalizer can be found by solving the above matrix equation. Thus

[c−1, c0, c1] = [−0.4762, 1.4286, −0.4762]

(b) The values of qm for m = ±2, ±3 are given by

q2 = Σ_{n=−1}^{1} cn·x_{2−n} = c1·x1 = −0.1429

q−2 = Σ_{n=−1}^{1} cn·x_{−2−n} = c−1·x−1 = −0.1429

q3 = q−3 = 0
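The zero-forcing solve and the residual ISI values can be reproduced with numpy (a sketch; the helper name `q` is illustrative):

```python
import numpy as np

x = {-1: 0.3, 0: 0.9, 1: 0.3}  # equivalent discrete-time channel taps

X = np.array([[0.9, 0.3, 0.0],
              [0.3, 0.9, 0.3],
              [0.0, 0.3, 0.9]])
c = np.linalg.solve(X, np.array([0.0, 1.0, 0.0]))  # [c_{-1}, c_0, c_1]

def q(m):
    # equalized pulse q_m = sum_n c_n * x_{m-n}
    return sum(c[n + 1] * x.get(m - n, 0.0) for n in (-1, 0, 1))
```

The solve gives the taps quoted above, with residual ISI only at m = ±2.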

3. [1, Problem 10.15].
Repeat problem (2) using the MMSE as the criterion for optimizing the tap coefficients. Assume that the noise power spectral density is 0.1 W/Hz.

Solution:

A discrete-time transversal filter equivalent to the cascade of the transmitting filter gT(t), the channel c(t), the matched filter at the receiver gR(t) and the sampler has tap gain coefficients {xm}, where

xm = 0.9 (m = 0),  0.3 (m = ±1),  0 (otherwise)

The noise νk at the output of the sampler is a zero-mean Gaussian sequence with autocorrelation function:

E{νk·νl} = σ²·x_{k−l},  |k − l| ≤ 1

If the Z-transform of the sequence {xm}, X(z), assumes the factorization:

X(z) = F(z)·F(1/z)

then the filter 1/F(1/z) can follow the sampler to whiten the noise sequence νk. In this case the output of the whitening filter, and input to the MSE equalizer, is the sequence

un = Σ_k Ik·f_{n−k} + ηk

where ηk is zero-mean white Gaussian with variance σ². The optimum coefficients of the MSE equalizer, ck, satisfy:

Σ_{n=−1}^{1} cn·Γ_{nk} = ξk,  k = −1, 0, 1

where

Γ_{nk} = x_{n−k} + σ²·δ_{n,k}, |n − k| ≤ 1 (0 otherwise)

ξk = f_{−k}, −1 ≤ k ≤ 0 (0 otherwise)

With

X(z) = 0.3z + 0.9 + 0.3z⁻¹ = (f0 + f1·z⁻¹)(f0 + f1·z)

we obtain the parameters f0 and f1 as:

f0 = ±√0.7854 or ±√0.1146,  f1 = ±√0.1146 or ±√0.7854

The parameters f0 and f1 should have the same sign, since f0·f1 = 0.3. To have a stable inverse system 1/F(1/z), we select f0 and f1 in such a way that the zero of the system F(1/z) = f0 + f1·z is inside the unit circle. Thus, we choose f0 = √0.1146 and f1 = √0.7854, and therefore the desired system for the equalizer's coefficients is:

[ 0.9 + 0.1   0.3        0         ] [c−1]   [√0.7854]
[ 0.3         0.9 + 0.1  0.3       ] [c0 ] = [√0.1146]
[ 0           0.3        0.9 + 0.1 ] [c1 ]   [0      ]

Solving this system, we obtain

c−1 = 0.8596, c0 = 0.0886, c1 = −0.0266
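The spectral factorization and the MMSE tap solve can be verified numerically (a numpy sketch, assuming N0 = 0.1 as in the problem):

```python
import math
import numpy as np

N0 = 0.1
# spectral factorization X(z) = 0.3z + 0.9 + 0.3z^{-1} = (f0 + f1*z^{-1})(f0 + f1*z):
# f0^2 and f1^2 are the roots of t^2 - 0.9t + 0.09 = 0; the stable choice puts the
# zero of f0 + f1*z inside the unit circle, i.e. f0 < f1
f0 = math.sqrt(0.1146)
f1 = math.sqrt(0.7854)

G = np.array([[0.9 + N0, 0.3, 0.0],
              [0.3, 0.9 + N0, 0.3],
              [0.0, 0.3, 0.9 + N0]])  # Gamma_{nk} = x_{n-k} + N0*delta_{nk}
xi = np.array([f1, f0, 0.0])          # xi_k = f_{-k}, k = -1, 0, 1
c = np.linalg.solve(G, xi)            # [c_{-1}, c_0, c_1]
```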

4. [1, Problem 10.21].⁸
Consider the following channel

yn = (1/√2)·In + (1/√2)·In−1 + νn

where {νn} is a real-valued zero-mean white Gaussian noise sequence with variance N0. Suppose the channel is to be equalized by a DFE having a two-tap feedforward filter (c−1, c0) and a one-tap feedback filter (c1). The {ci} are optimized using the MSE criterion.

⁸ Read [1, Sub-section 10.3.2] and [1, Example 10.3.1].


(a) Determine exactly the optimum coefficients as a function of N0, and approximate their values for N0 ≪ 1.

(b) Determine the exact value of the minimum MSE and find a first-order approximation (in terms of N0) appropriate to the case N0 ≪ 1. Assume E{In²} = 1.

(c) Determine the exact value of the output SNR for the three-tap equalizer as a function of N0 and find a first-order approximation appropriate to the case N0 ≪ 1.

(d) Compare the results in items (4b) and (4c) with the performance of the infinite-tap DFE.

(e) Evaluate and compare the exact values of the output SNR for the three-tap and infinite-tap DFE in the special cases N0 = 0.1 and N0 = 0.01. Comment on how well the three-tap equalizer performs relative to the infinite-tap equalizer.

Solution:

(a) The tap coefficients of the feedforward filter are given by the following equations:

Σ_{j=−K1}^{0} cj·ψ_{lj} = f_{−l},  −K1 ≤ l ≤ 0

where

ψ_{lj} = Σ_{m=0}^{−l} fm·f_{m+l−j} + N0·δ_{lj},  −K1 ≤ l, j ≤ 0

The tap coefficients of the feedback filter of the DFE are given in terms of the coefficients of the feedforward section by the following equations:

ck = −Σ_{j=−K1}^{0} cj·f_{k−j},  1 ≤ k ≤ K2

In this case K1 = 1, resulting in two equations, which with f0 = f1 = 1/√2 can be written as:

[ 1 + N0   1/2      ] [c−1]   [1/√2]
[ 1/2      1/2 + N0 ] [c0 ] = [1/√2]

so:

c−1 = N0 / ( √2·(N0² + (3/2)N0 + 1/4) ) ≈ 2√2·N0,  for N0 ≪ 1
c0 = (1/2 + N0) / ( √2·(N0² + (3/2)N0 + 1/4) ) ≈ √2,  for N0 ≪ 1

The coefficient for the feedback section is:

c1 = −c0·f1 = −(1/√2)·c0 ≈ −1,  for N0 ≪ 1

(b)

Jmin(1) = 1 − Σ_{j=−K1}^{0} cj·f_{−j} = (2N0² + N0) / ( 2(N0² + (3/2)N0 + 1/4) ) ≈ 2N0,  for N0 ≪ 1


(c)

γ = (1 − Jmin(1)) / Jmin(1) = (1 + 4N0) / ( 2N0(1 + 2N0) ) ≈ 1/(2N0),  for N0 ≪ 1

(d) For the infinite-tap DFE, we have from [1, Example 10.3.1]:

Jmin = 2N0 / ( 1 + N0 + √((1 + N0)² − 1) ) ≈ 2N0,  for N0 ≪ 1

γ∞ = (1 − Jmin) / Jmin = ( 1 + N0 + √((1 + N0)² − 1) − 2N0 ) / (2N0)

(e) For N0 = 0.1 we have:

Jmin(1) = 0.146, γ = 5.83 (7.66 dB)
Jmin = 0.128, γ∞ = 6.8 (8.32 dB)

For N0 = 0.01 we have:

Jmin(1) = 0.0193, γ = 51 (17.1 dB)
Jmin = 0.0174, γ∞ = 56.6 (17.5 dB)

The three-tap equalizer performs very well compared to the infinite-tap equalizer. The difference in performance is 0.6 dB for N0 = 0.1 and 0.4 dB for N0 = 0.01.
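The exact expressions in items (4b)-(4d) can be evaluated directly; the Python sketch below (function names are illustrative) reproduces the figures above and the ≈0.6 dB gap at N0 = 0.1:

```python
import math

def dfe_three_tap(N0):
    # exact J_min and output SNR for the (c_{-1}, c_0) + one-feedback-tap DFE
    D = N0 ** 2 + 1.5 * N0 + 0.25
    jmin = (2 * N0 ** 2 + N0) / (2 * D)
    return jmin, (1 - jmin) / jmin

def dfe_infinite(N0):
    # infinite-tap DFE result from [1, Example 10.3.1]
    jmin = 2 * N0 / (1 + N0 + math.sqrt((1 + N0) ** 2 - 1))
    return jmin, (1 - jmin) / jmin

j3, g3 = dfe_three_tap(0.1)
ji, gi = dfe_infinite(0.1)
gap_db = 10 * math.log10(gi / g3)
```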


14 Non-Coherent Reception

1. Minimal frequency difference for orthogonality.

(a) Consider the signals

si(t) = √(2E/T)·cos(2πfi·t), 0 ≤ t ≤ T (and 0 otherwise),  i = 0, 1

Both frequencies obey fi·T ≫ 1, i = 0, 1. What is the minimal frequency difference, |f0 − f1|, required for the two signals, s0(t) and s1(t), to be orthogonal?

(b) Now an unknown phase is added to one of the signals:

s0(t) = √(2E/T)·cos(2πf0·t),  s1(t) = √(2E/T)·cos(2πf1·t + φ), 0 ≤ t ≤ T (and 0 otherwise)

Find the minimal frequency difference required for the two signals to be orthogonal, for an unknown φ.

Solution:

We first solve the general case, and then set φ = 0 for item 1a.

⟨s0(t), s1(t)⟩ = (2E/T) ∫₀ᵀ cos(2πf0·t)·cos(2πf1·t + φ) dt

= (1/2)(2E/T) ∫₀ᵀ [ cos(2π(f0 + f1)t + φ) + cos(2π(f0 − f1)t − φ) ] dt

= E·[ sin(2π(f0 + f1)t + φ) / (2π(f0 + f1)T) + sin(2π(f0 − f1)t − φ) / (2π(f0 − f1)T) ]₀ᵀ

≈ E·[ sin(2π(f0 − f1)t − φ) / (2π(f0 − f1)T) ]₀ᵀ

where the first term was neglected because fi·T ≫ 1, and we demand that the remaining expression equal zero. We now consider the special cases.

(a) For φ = 0:

⟨s0(t), s1(t)⟩ = 0  ⟹  sin(2π(f0 − f1)T) = 0  ⟹  2π(f0 − f1)T = πn

where n is an integer; hence |f0 − f1|min = 1/(2T).

(b) For unknown φ:

⟨s0(t), s1(t)⟩ = 0  ⟹  [sin(2π(f0 − f1)t − φ)]₀ᵀ = sin(2π(f0 − f1)T − φ) + sin(φ) = 0

which must hold for any φ; hence we require 2π(f0 − f1)T = 2πn, where n is an integer. The minimal frequency difference for the non-coherent scenario is therefore

|f0 − f1|min = 1/T

We conclude that the non-coherent scenario requires double the bandwidth compared to the coherent scenario.
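The two minimal spacings can be checked by numerical integration; in the Python sketch below (illustrative values f0·T = 100 and simple trapezoidal integration, not part of the original solution) Δf = 1/(2T) gives orthogonality only at φ = 0, while Δf = 1/T works for every φ:

```python
import math

def inner_product(f0, f1, T, phi, n=100_000):
    # trapezoidal approximation of (2/T) * int_0^T cos(2*pi*f0*t) cos(2*pi*f1*t + phi) dt
    dt = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(2 * math.pi * f0 * t) * math.cos(2 * math.pi * f1 * t + phi) * dt
    return (2.0 / T) * total

T, f0 = 1.0, 100.0  # f0*T >> 1

# Delta f = 1/(2T): orthogonal when phi = 0 ...
r_coh = inner_product(f0, f0 + 0.5 / T, T, 0.0)
# ... but not for a nonzero phase offset
r_bad = inner_product(f0, f0 + 0.5 / T, T, math.pi / 2)
# Delta f = 1/T: orthogonal for every phi
r_nc = [inner_product(f0, f0 + 1.0 / T, T, phi) for phi in (0.0, 0.7, math.pi / 2, 2.0)]
```

At φ = π/2 the half-spacing inner product is close to −2/π rather than zero, which is exactly the sin-difference term that forces the doubled spacing.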

2. Non-coherent receiver for M orthogonal signals.
Consider the following M orthogonal signals

si(t) = √(2E/T)·sin(ωi·t), 0 ≤ t ≤ T,  i = 0, 1, . . . , M − 1

The received signal is

r(t) = √(2E/T)·sin(ωi·t + θ) + n(t)

where θ ~ U[0, 2π) and n(t) is white Gaussian noise with power spectral density N0/2. The set {rs,i, rc,i}_{i=0}^{M−1} is a sufficient statistic for decoding r(t), where

rc,i = ∫₀ᵀ r(t)·√(2/T)·cos(ωi·t) dt,  rs,i = ∫₀ᵀ r(t)·√(2/T)·sin(ωi·t) dt

In class it was obtained that the optimal receiver for equiprobable a-priori probabilities finds the maximal r̄i² = rc,i² + rs,i², and chooses the respective si(t).

The pdfs of r̄0 and r̄i, i = 1, . . . , M − 1, given that s0(t) was transmitted, are:

f(r̄0|s0) = (2r̄0/N0)·e^{−r̄0²/N0}·e^{−E/N0}·I0( (2√E/N0)·r̄0 ),  r̄0 ≥ 0

f(r̄i|s0) = (2r̄i/N0)·e^{−r̄i²/N0},  r̄i ≥ 0, i = 1, . . . , M − 1

For equiprobable a-priori probabilities and M = 2, the error probability of the optimal receiver is

p(e) = (1/2)·e^{−E/(2N0)}

Show that for equiprobable a-priori probabilities and general M, the error probability of the optimal receiver is

p(e) = Σ_{i=1}^{M−1} C(M−1, i)·(−1)^{i+1}·(1/(i+1))·e^{−(i/(i+1))·E/N0}

Guideline: Let A, B and C be i.i.d RVs with pdf fY(y). Let X = max{A, B, C}. Derive the pdf fX(x).

    Solution:


Due to symmetry

p(e) = Σ_{i=0}^{M−1} p(e|si)·p(si) = p(e|s0)

The probability of error given s0(t) was transmitted obeys

p(e|s0) = Pr{ r̄max = max{r̄1, . . . , r̄M−1} > r̄0 | s0 }

Note: the r̄i, i = 1, . . . , M − 1 are i.i.d.

For i.i.d random variables y1, . . . , yn with pdf fY(y) and cdf FY(y), the cdf of ymax = max{y1, . . . , yn} obeys

FYmax(y) = Pr{ymax ≤ y} = Pr{y1 ≤ y, . . . , yn ≤ y} (a)= [FY(y)]ⁿ  ⟹  fYmax(y) = n·[FY(y)]^{n−1}·fY(y)

where (a) follows from the fact that the random variables are i.i.d.

In order to find f(r̄max|s0) we need F(r̄i|s0):

F(r̄i|s0) = ∫₀^{r̄i} (2t/N0)·e^{−t²/N0} dt = 1 − e^{−r̄i²/N0}

Hence

f(r̄max|s0) = (M − 1)·( 1 − e^{−r̄max²/N0} )^{M−2}·(2r̄max/N0)·e^{−r̄max²/N0}

f(r̄max|s0) can be expanded as follows

f(r̄max|s0) = (M − 1) Σ_{j=0}^{M−2} C(M−2, j)·(−1)^j·e^{−j·r̄max²/N0}·(2r̄max/N0)·e^{−r̄max²/N0}

(substituting i = j + 1)

= Σ_{i=1}^{M−1} (−1)^{i+1}·C(M−1, i)·e^{−i·r̄max²/N0}·(2i·r̄max/N0)

In order to calculate p(e|s0) we need to integrate over the whole region in which r̄max > r̄0:

p(e|s0) = ∫₀^∞ f(r̄0|s0) ∫_{r̄0}^∞ f(r̄max|s0) dr̄max dr̄0

Assigning f(r̄max|s0) to the inner integral yields

∫_{r̄0}^∞ f(r̄max|s0) dr̄max = Σ_{i=1}^{M−1} (−1)^{i+1}·C(M−1, i) ∫_{r̄0}^∞ (2i·r̄max/N0)·e^{−i·r̄max²/N0} dr̄max   (Rayleigh-type integrand)

= Σ_{i=1}^{M−1} (−1)^{i+1}·C(M−1, i)·e^{−i·r̄0²/N0}


Hence

p(e|s0) = ∫₀^∞ (2r̄0/N0)·e^{−r̄0²/N0}·e^{−E/N0}·I0( (2√E/N0)·r̄0 ) Σ_{i=1}^{M−1} (−1)^{i+1}·C(M−1, i)·e^{−i·r̄0²/N0} dr̄0

Multiplying each term of p(e|s0) by

( (i+1)/(i+1) )·e^{E/((i+1)N0)}·e^{−E/((i+1)N0)} = 1

and rearranging the summation elements yields

p(e|s0) = Σ_{i=1}^{M−1} C(M−1, i)·(−1)^{i+1}·(1/(i+1))·e^{−(i/(i+1))·E/N0} ∫₀^∞ (2(i+1)·r̄0/N0)·e^{−(i+1)·r̄0²/N0 − E/((i+1)N0)}·I0( (2√E/N0)·r̄0 ) dr̄0

The remaining integral is over a Rician pdf (with σ² = N0/(2(i+1)) and s = √E/(i+1)) and therefore equals 1. Hence

p(e) = Σ_{i=1}^{M−1} C(M−1, i)·(−1)^{i+1}·(1/(i+1))·e^{−(i/(i+1))·E/N0}
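The closed form can be cross-checked against a direct numerical integration of p(e|s0); the Python/scipy sketch below (N0 is normalized to 1, so snr = E/N0; the exponentially scaled Bessel function i0e is used to keep the Rician integrand numerically stable) agrees with the formula and reduces to (1/2)e^{−E/(2N0)} for M = 2:

```python
import math

from scipy.integrate import quad
from scipy.special import i0e

def pe_closed_form(M, snr):
    # snr = E/N0
    return sum(math.comb(M - 1, i) * (-1) ** (i + 1) / (i + 1)
               * math.exp(-i / (i + 1) * snr) for i in range(1, M))

def pe_numeric(M, snr):
    # integrate f(r0|s0) * Pr{max of M-1 Rayleigh envelopes > r0}, with N0 = 1
    E = snr
    def integrand(r0):
        # Rician pdf written with i0e to avoid overflow: I0(z) = i0e(z) * exp(z)
        rice = 2 * r0 * i0e(2 * math.sqrt(E) * r0) * math.exp(-(r0 - math.sqrt(E)) ** 2)
        tail = 1 - (1 - math.exp(-r0 ** 2)) ** (M - 1)
        return rice * tail
    val, _ = quad(integrand, 0, math.sqrt(E) + 10)
    return val
```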


    References[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.

    [2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.

    [3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.