Wong & Lok: Theory of Digital Communications 1. Review
Chapter 1
Review on Background Materials
We start by reviewing some important concepts which will be needed in the following chapters.
1.1 Signals and Systems
1.1.1 Definitions
Continuous-time signal A continuous-time signal x(t) is a function for which the independent variable, namely time t, takes values in the real numbers.
Discrete-time signal A discrete-time signal x(n) is a sequence, i.e., the independent variable n takes only integer values.
System A system is an operator that takes a signal as its input and produces a signal as its output.
Continuous-time system A continuous-time system is a system whose inputs and outputs are continuous-time signals.
Discrete-time system A discrete-time system is a system whose inputs and outputs are discrete-time signals.
In the following, we consider continuous-time systems; the definitions for discrete-time systems are similar.
Linearity A system T is linear if and only if for any input signals x1(t) and x2(t) and any two scalars α and β,
T[αx1(t) + βx2(t)] = αT[x1(t)] + βT[x2(t)].
Time-invariance A system is time-invariant if and only if for all x(t) and all values of t0, its response to x(t − t0) is y(t − t0), where y(t) is the response of the system to x(t).
Causality A system is causal if and only if its output at any time t0 depends on the input at times up to (and possibly including) t0 only.
1.1.2 Fourier Transform
If a signal x(t) satisfies certain conditions, then the (continuous-time) Fourier transform of x(t), denoted by X(f), is defined by
X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt. (1.1)
The signal x(t) can be recovered from its Fourier transform by taking the inverse Fourier transform
x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df. (1.2)
Properties of the Fourier transform
1. Linearity
αx1(t) + βx2(t) ←→ αX1(f) + βX2(f)
2. Duality
X(t) ←→ x(−f)
3. Convolution
x1 ∗ x2(t) ←→ X1(f)X2(f)
4. Scaling (a ≠ 0)
x(at) ←→ (1/|a|) X(f/a)
5. Time-shift
x(t − t0) ←→ X(f)e−j2πft0
6. Modulation
x(t)ej2πf0t ←→ X(f − f0)
7. Differentiationdn
dtnx(t) ←→ (j2πf)nX(f)
8. Parseval’s relation ∫ ∞−∞
x1(t)x∗2(t)dt =
∫ ∞−∞
X1(f)X∗2 (f)df
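As a quick numerical illustration of the transform definition (1.1) and Parseval's relation, one can discretize the integrals for the standard pair x(t) = e^{−πt²} ←→ X(f) = e^{−πf²}. The sketch below (a Riemann-sum approximation, assuming NumPy is available; grid limits are illustrative) is not part of the original notes:

```python
import numpy as np

# Known continuous-time pair: x(t) = exp(-pi t^2)  <-->  X(f) = exp(-pi f^2).
dt = 0.004
t = np.arange(-8.0, 8.0, dt)          # x(t) is negligible outside [-8, 8]
x = np.exp(-np.pi * t**2)

def X_numeric(f):
    # Riemann-sum approximation of (1.1): integral of x(t) e^{-j 2 pi f t} dt
    return np.sum(x * np.exp(-1j * 2 * np.pi * f * t)) * dt

# The numerical transform matches the closed form at a test frequency ...
err = abs(X_numeric(1.0) - np.exp(-np.pi * 1.0**2))

# ... and both sides of Parseval's relation agree (both equal 1/sqrt(2) here).
f = np.arange(-8.0, 8.0, dt)
energy_time = np.sum(np.abs(x)**2) * dt
energy_freq = np.sum(np.abs(np.exp(-np.pi * f**2))**2) * dt
print(err, energy_time, energy_freq)
```

Because the Gaussian pulse decays so fast, the Riemann sums agree with the exact integrals to many digits.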
1.1.3 Delta function
The delta function is not a function in the strict mathematical sense. Nevertheless, it is convenient to define and use it to obtain various results. The delta function δ(t) is "defined" by the following properties:
δ(t) = 0 for all t ≠ 0 (1.3)
and
∫_{−∞}^{∞} x(t) δ(t) dt = x(0) (1.4)
for any function x(t) that is continuous at t = 0.
Response of a linear time-invariant (LTI) system
The convolution of any signal x(t) with δ(t) is the original x(t):
x ∗ δ(t) = ∫_{−∞}^{∞} x(s) δ(t − s) ds = x(t). (1.5)
Consider an LTI system T[·]:
T[x(t)] = T[x ∗ δ(t)] (1.6)
        = T[∫_{−∞}^{∞} x(s) δ(t − s) ds]
        = ∫_{−∞}^{∞} x(s) T[δ(t − s)] ds
        = ∫_{−∞}^{∞} x(s) h(t − s) ds (1.7)
where h(t) = T[δ(t)] is called the impulse response of the LTI system. Therefore, the response of an
LTI system to an input signal is the convolution of the input signal and the impulse response.
Some useful Fourier transform pairs involving the delta function
δ(t) ←→ 1
1 ←→ δ(f)
δ(t − t0) ←→ e^{−j2πft0}
e^{j2πf0t} ←→ δ(f − f0)
cos(2πf0t) ←→ (1/2) δ(f − f0) + (1/2) δ(f + f0)
sin(2πf0t) ←→ (1/2j) δ(f − f0) − (1/2j) δ(f + f0)
Fourier transform of periodic signals
Let x(t) be a periodic signal of period T0, and let {xn} be its Fourier coefficients, i.e.,
x(t) = Σ_{n=−∞}^{∞} xn e^{j2πnt/T0}. (1.8)
Taking the Fourier transform of both sides, we have
X(f) = Σ_{n=−∞}^{∞} xn δ(f − n/T0). (1.9)
A periodic signal of special interest is the impulse train
x(t) = Σ_{n=−∞}^{∞} δ(t − nT0). (1.10)
Its Fourier coefficients are given by
xn = (1/T0) ∫_{−T0/2}^{T0/2} x(t) e^{−j2πnt/T0} dt = 1/T0. (1.11)
Therefore,
Σ_{n=−∞}^{∞} δ(t − nT0) = (1/T0) Σ_{n=−∞}^{∞} e^{j2πnt/T0}. (1.12)
This result is sometimes called Poisson's sum formula. The Fourier transform of the impulse train is thus given by
Σ_{n=−∞}^{∞} δ(t − nT0) ←→ (1/T0) Σ_{n=−∞}^{∞} δ(f − n/T0). (1.13)
Signum function, unit step function, and integration property
The signum function sgn(t) is defined by
sgn(t) = 1 if t > 0; 0 if t = 0; −1 if t < 0. (1.14)
It can be viewed as the limit of the antisymmetric double exponential pulse
x(t) = e^{−at} if t > 0; 0 if t = 0; −e^{at} if t < 0, (1.15)
where a > 0. The Fourier transform of x(t) can be computed as
X(f) = −j4πf / (a² + 4π²f²). (1.16)
Taking the limit as a ↓ 0, we have
sgn(t) ←→ 1/(jπf). (1.17)
The unit step function u(t) is defined by
u(t) = 1 if t > 0; 1/2 if t = 0; 0 if t < 0. (1.18)
It can be expressed as
u(t) = (1/2) sgn(t) + 1/2. (1.19)
Therefore, we have the Fourier transform pair
u(t) ←→ 1/(j2πf) + (1/2) δ(f). (1.20)
The integral of a function x(t) can be expressed as
∫_{−∞}^{t} x(τ) dτ = x ∗ u(t). (1.21)
By the convolution property and the Fourier transform of u(t), we have
∫_{−∞}^{t} x(τ) dτ ←→ X(f)/(j2πf) + (1/2) X(0) δ(f). (1.22)
1.1.4 Discrete-Time Fourier Transform
Given a discrete-time signal x(n), its discrete-time Fourier transform (DTFT) is given by
X(ω) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}. (1.23)
Notice that X(ω) is always periodic with period 2π. Therefore, the "frequency" range of interest for any discrete-time signal is [−π, π). Conversely, given X(ω), x(n) can be recovered by
x(n) = (1/2π) ∫_{−π}^{π} X(ω) e^{jωn} dω. (1.24)
Properties of the DTFT
1. Linearity
αx1(n) + βx2(n) ←→ αX1(ω) + βX2(ω)
2. Time-shift
x(n − n0) ←→ X(ω) e^{−jωn0}
3. Frequency-shift
x(n) e^{jω0n} ←→ X(ω − ω0)
4. Convolution
x ∗ h(n) ←→ X(ω) H(ω)
5. Multiplication
x1(n) x2(n) ←→ (1/2π) ∫_{−π}^{π} X1(ω − ν) X2(ν) dν
6. Parseval's relation
Σ_{n=−∞}^{∞} x1(n) x2*(n) = (1/2π) ∫_{−π}^{π} X1(ω) X2*(ω) dω
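The DTFT definition (1.23) is easy to check numerically for the sequence x(n) = aⁿu(n) with |a| < 1, whose DTFT is 1/(1 − a e^{−jω}) by summing the geometric series. A truncated-series sketch (assuming NumPy; the values of a and ω are illustrative):

```python
import numpy as np

a, omega = 0.6, 1.3               # illustrative values, |a| < 1
n = np.arange(0, 200)             # a^n decays fast, so truncation is harmless

# Direct evaluation of (1.23) for x(n) = a^n u(n)
X_sum = np.sum(a**n * np.exp(-1j * omega * n))

# Closed form: the geometric series gives X(omega) = 1 / (1 - a e^{-j omega})
X_closed = 1.0 / (1.0 - a * np.exp(-1j * omega))
print(abs(X_sum - X_closed))
```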
Response of an LTI system
The discrete-time delta function δ(n) is defined by
δ(n) = 1 if n = 0; 0 if n ≠ 0. (1.25)
Notice that any discrete-time signal x(n) can be written as
x(n) = Σ_{m=−∞}^{∞} x(m) δ(n − m). (1.26)
Consider an LTI system T[·]. Let h(n) be its impulse response, i.e., its response to δ(n). Then
T[x(n)] = T[Σ_{m=−∞}^{∞} x(m) δ(n − m)]
        = Σ_{m=−∞}^{∞} x(m) T[δ(n − m)]
        = Σ_{m=−∞}^{∞} x(m) h(n − m)
        = x ∗ h(n) (1.27)
Therefore, the response of an LTI system to an input is the convolution of the input and the impulse response.
1.1.5 Z-Transform
Given a discrete-time signal x(n), its Z-transform is given by
X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} (1.28)
if the series converges. The region in the complex plane where the series converges is called the region of convergence (ROC). Notice that by putting z = e^{jω}, we obtain the DTFT of x(n). Consider the following examples:
1. x(n) = δ(n). Then X(z) = 1 and the ROC is the entire complex plane.
2. Let
x(n) = a^n for n ≥ 0; 0 for n < 0.
Then
X(z) = Σ_{n=0}^{∞} a^n z^{−n} = Σ_{n=0}^{∞} (az^{−1})^n = 1/(1 − az^{−1}),
where the last step holds only if |az^{−1}| < 1. Therefore, the ROC is |z| > |a|.
3. Let
x(n) = −a^n for n < 0; 0 for n ≥ 0.
Then
X(z) = −Σ_{n=−∞}^{−1} a^n z^{−n} = −Σ_{n=−∞}^{−1} (az^{−1})^n = −Σ_{n=1}^{∞} (a^{−1}z)^n = −a^{−1}z/(1 − a^{−1}z) = 1/(1 − az^{−1}),
where the second last step holds only if |a^{−1}z| < 1. Therefore, the ROC is |z| < |a|.
Notice that the expressions for X(z) are the same for examples 2 and 3. However, the ROCs are different.
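The role of the ROC can also be seen numerically: the partial sums of Σ(az⁻¹)ⁿ settle to the closed form only when |z| > |a| and blow up otherwise. A small sketch (plain Python; the values of a and z are illustrative):

```python
a = 0.5   # x(n) = a^n u(n); X(z) should equal 1 / (1 - a z^{-1}) for |z| > |a|

def partial_sum(z, N):
    # Partial sums of the series in example 2: sum_{n=0}^{N-1} (a z^{-1})^n
    return sum((a / z) ** n for n in range(N))

inside = partial_sum(2.0, 100)            # |z| = 2 > |a|: series converges
closed = 1.0 / (1.0 - a / 2.0)
outside = partial_sum(0.25, 30)           # |z| = 0.25 < |a|: terms are 2^n
print(inside, closed, outside)
```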
Conversely, given X(z) and its ROC, x(n) can be recovered by
x(n) = (1/2πj) ∮ X(z) z^{n−1} dz (1.29)
where the contour integral is over any simple contour in the interior of the ROC of X(z) that circles the origin exactly once in the counterclockwise direction. Notice that in many cases, there are simpler methods to perform inverse Z-transforms.
Properties of the Z-transform
1. Linearity
αx1(n) + βx2(n) ←→ αX1(z) + βX2(z)
2. Time-shift
x(n − n0) ←→ X(z)z−n0
In particular, a unit delay in time translates into multiplication of the Z-transform by z^{−1}.
3. Convolution
x ∗ h(n) ←→ X(z)H(z)
Z-transforms of some common sequences
Let
u(n) = 1 for n ≥ 0; 0 for n < 0. (1.30)
We consider the Z-transforms of some common sequences:

x(n)                   X(z)                                                  ROC
δ(n)                   1                                                     all z
a^n u(n)               1/(1 − az^{−1})                                       |z| > |a|
−a^n u(−(n + 1))       1/(1 − az^{−1})                                       |z| < |a|
n a^n u(n)             az^{−1}/(1 − az^{−1})²                                |z| > |a|
a^n cos(ω0 n) u(n)     (1 − az^{−1} cos ω0)/(1 − 2az^{−1} cos ω0 + a²z^{−2}) |z| > |a|
a^n sin(ω0 n) u(n)     (az^{−1} sin ω0)/(1 − 2az^{−1} cos ω0 + a²z^{−2})     |z| > |a|
Inverse Z-transform with partial fractions
If X(z) is a rational function, its inverse Z-transform can be conveniently found by partial fraction expansion followed by table look-up, as illustrated by the following example where the ROC of X(z) is |z| > 0.5:
X(z) = (10z² − 3z)/(10z² − 9z + 2)
     = (10 − 3z^{−1})/(10 − 9z^{−1} + 2z^{−2})
     = 4/(2 − z^{−1}) + (−5)/(5 − 2z^{−1})
     = 2/(1 − 0.5z^{−1}) + (−1)/(1 − 0.4z^{−1}) (1.31)
Therefore,
x(n) = 2(0.5)^n u(n) − (0.4)^n u(n). (1.32)
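One way to verify (1.32) is to read X(z) as the transfer function of the difference equation 10y(n) − 9y(n − 1) + 2y(n − 2) = 10x(n) − 3x(n − 1) and drive it with a unit impulse; since the ROC is |z| > 0.5, the causal impulse response is exactly the inverse Z-transform. A plain-Python sketch:

```python
# X(z) = (10 - 3 z^{-1}) / (10 - 9 z^{-1} + 2 z^{-2}) is the transfer function
# of: 10 y(n) - 9 y(n-1) + 2 y(n-2) = 10 x(n) - 3 x(n-1).
# With a unit-impulse input, the causal output is the inverse Z-transform,
# which should match (1.32).
N = 20
x = [1.0] + [0.0] * (N - 1)                      # x(n) = delta(n)
y = []
for n in range(N):
    acc = 10.0 * x[n] - 3.0 * (x[n - 1] if n >= 1 else 0.0)
    acc += 9.0 * (y[n - 1] if n >= 1 else 0.0) - 2.0 * (y[n - 2] if n >= 2 else 0.0)
    y.append(acc / 10.0)                         # solve for y(n)

closed = [2.0 * 0.5**n - 0.4**n for n in range(N)]
max_err = max(abs(u - v) for u, v in zip(y, closed))
print(y[:4], max_err)
```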
Implementation of discrete-time LTI systems
Recall that when a discrete-time signal passes through a discrete-time LTI system, the output is the
convolution of the signal and the impulse response of the system. Consider a finite impulse response (FIR) system, i.e., one whose impulse response h(n) has finite length, say K. Then the output signal y(n) and the input signal x(n) are related by
y(n) = Σ_{k=0}^{K−1} h(k) x(n − k). (1.33)
[Figure 1.1: Direct form implementation of a discrete-time FIR system — a tapped delay line of unit delays z^{−1} with tap coefficients h0, h1, …, h_{K−1} feeding a summer Σ that produces y(n) from x(n).]
One way to implement the discrete-time system is the direct form implementation shown in Figure 1.1. Notice that only delays, multiplications, and additions are required. All these operations can be conveniently implemented.
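A direct form FIR filter as in Figure 1.1 can be sketched in a few lines of Python: a delay line holds the K most recent inputs, and each output is the tap-weighted sum (1.33). The taps and input below are made-up illustrative values:

```python
def fir_direct_form(h, x):
    # Direct form of (1.33): a delay line of the K most recent inputs,
    # multiplied by the taps h(0), ..., h(K-1) and summed.
    K = len(h)
    delay = [0.0] * K                         # delay[k] holds x(n - k)
    y = []
    for sample in x:
        delay = [sample] + delay[:-1]         # unit delays (the z^{-1} boxes)
        y.append(sum(h[k] * delay[k] for k in range(K)))
    return y

h = [1.0, 0.5, 0.25]                  # made-up taps, K = 3
x = [1.0, 2.0, 3.0, 4.0]
print(fir_direct_form(h, x))          # → [1.0, 2.5, 4.25, 6.0]
```

This is exactly the convolution of x with h, truncated to the length of the input.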
1.2 Sampling
To process a continuous-time signal digitally, we first need to convert it to a discrete-time signal. Sampling is a common conversion method. As shown in Figure 1.2, a continuous-time signal x(t) is sampled every Ts seconds to obtain the discrete-time samples x(nTs) for every integer n. These ordered samples form a discrete-time signal.
[Figure 1.2: Sampling of a continuous-time signal x(t) at intervals Ts]
[Figure 1.3: Spectrum X(f) of a signal bandlimited to W]
1.2.1 Sampling Theorem
Main ideas:
• If the signal x(t) is bandlimited to W (Hz), i.e., X(f) = 0 for |f| ≥ W (see Figure 1.3), then it suffices to sample it at intervals Ts = 1/(2W). In other words, the sampling rate fs = 1/Ts can be as low as 2W.
• If x(t) is bandlimited to W and is sampled at intervals Ts ≤ 1/(2W), then it is possible to perfectly reconstruct x(t) from its samples with suitable interpolating signals.
[Figure 1.4: Spectrum Xδ(f) — copies of X(f) centered at multiples of fs, with band edges at ±W, fs − W, fs + W, −fs ± W, etc.]
Theorem:
Let x(t) be bandlimited to W, i.e., X(f) = 0 for |f| ≥ W. Sample x(t) at intervals Ts, where Ts ≤ 1/(2W), to yield the sequence {x(nTs)}_{n=−∞}^{∞}. Then
x(t) = Σ_{n=−∞}^{∞} 2W′Ts x(nTs) sinc[2πW′(t − nTs)] (1.34)
where W′ is any number satisfying W ≤ W′ ≤ 1/Ts − W, and sinc x is defined by (sin x)/x.
Special case: When Ts = 1/(2W),
x(t) = Σ_{n=−∞}^{∞} x(nTs) sinc[π(t/Ts − n)]. (1.35)
Proof:
Let
xδ(t) = Σ_{n=−∞}^{∞} x(nTs) δ(t − nTs). (1.36)
Notice that
xδ(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nTs). (1.37)
Taking the Fourier transform, we have
Xδ(f) = X(f) ∗ [(1/Ts) Σ_{n=−∞}^{∞} δ(f − n/Ts)] = (1/Ts) Σ_{n=−∞}^{∞} X(f − n/Ts). (1.38)
We see from Figure 1.4 that we need Ts ≤ 1/(2W), or fs ≥ 2W. Otherwise, copies of the original signal spectrum would overlap. The overlapping of replicas of the original spectrum is called aliasing. To get back X(f), we can apply a low-pass filter H(f) with a passband from −W′ to W′, where
W ≤ W′ ≤ 1/Ts − W (1.39)
together with an appropriate gain. Take H(f) to be the ideal low-pass filter given by
H(f) = 1 if |f| ≤ W′; 0 if |f| > W′ (1.40)
and Ts as the gain. Then
X(f) = Ts Xδ(f) H(f). (1.41)
Taking the inverse Fourier transform, we have
x(t) = Ts [Σ_{n=−∞}^{∞} x(nTs) δ(t − nTs)] ∗ [2W′ sinc(2πW′t)]
     = Σ_{n=−∞}^{∞} 2W′Ts x(nTs) sinc[2πW′(t − nTs)]. (1.42)
Remark:
• The minimum sampling rate fs = 2W (i.e., the maximum sampling interval Ts = 1/(2W)) is called the Nyquist sampling rate.
• The frequency band between two adjacent copies of X(f) in Xδ(f) is called a guard band. Its size is (1/Ts − W) − W = fs − 2W.
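The interpolation formula (1.35) can be tried numerically: sample a tone of frequency f0 < W at the Nyquist rate 2W and rebuild it at an off-sample instant with a truncated sinc sum. The sketch below (NumPy; W, f0, and the evaluation time are illustrative, and the truncation makes the result approximate):

```python
import numpy as np

W, f0 = 4.0, 1.0                  # bandlimit W and tone frequency f0 < W
Ts = 1.0 / (2.0 * W)              # Nyquist-rate sampling, Ts = 1/(2W)

def sinc(u):
    # sinc x = sin(x)/x, as defined in the theorem (with sinc 0 = 1)
    return np.where(u == 0, 1.0, np.sin(u) / np.where(u == 0, 1.0, u))

n = np.arange(-2000, 2001)                    # truncated version of (1.35)
samples = np.cos(2.0 * np.pi * f0 * n * Ts)

t0 = 0.1234                                   # an off-sample time instant
x_hat = np.sum(samples * sinc(np.pi * (t0 / Ts - n)))
x_true = np.cos(2.0 * np.pi * f0 * t0)
print(x_hat, x_true)
```

With a few thousand terms on each side, the truncated sum already agrees with the true signal value to several decimal places.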
1.2.2 Relationship between CTFT and DTFT
We can represent a continuous-time signal by its continuous-time Fourier transform (CTFT). For the
discrete-time signal obtained by sampling the continuous-time signal, we have a corresponding repre-
sentation, namely, the discrete-time Fourier transform (DTFT). It is natural to ask what the relationship
between the CTFT of the continuous-time signal and the DTFT of its sampled version is.
Consider a continuous-time signal xa(t) with CTFT Xa(Ω), where Ω = 2πf. It is sampled with a sampling interval Ts to give the sequence xa(nTs). We treat this sequence as our discrete-time signal x(n). Recall the modulated impulse train xa,δ(t) defined by
xa,δ(t) = xa(t) Σ_{n=−∞}^{∞} δ(t − nTs) = Σ_{n=−∞}^{∞} xa(nTs) δ(t − nTs). (1.43)
Its CTFT is given by
Xa,δ(Ω) = ∫_{−∞}^{∞} xa,δ(t) e^{−jΩt} dt
        = Σ_{n=−∞}^{∞} xa(nTs) ∫_{−∞}^{∞} δ(t − nTs) e^{−jΩt} dt
        = Σ_{n=−∞}^{∞} x(n) e^{−jΩnTs}
        = X(ΩTs). (1.44)
Equivalently,
X(ω) = Xa,δ(ω/Ts). (1.45)
Hence, the DTFT of the sample sequence x(n) is just a normalized version of the CTFT of the modulated impulse train xa,δ(t). By the previous expression for Xa,δ(Ω) (see (1.38)), we have
X(ω) = (1/Ts) Σ_{k=−∞}^{∞} Xa((ω − 2πk)/Ts). (1.46)
Therefore, the DTFT of the sequence of samples has the same shape as the folded spectrum of the
continuous-time signal.
1.3 Gaussian Random Variables
1.3.1 Gaussian Random Variables
Definition:
A Gaussian random variable is any continuous random variable with a probability density function of the form
fX(x) = (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)}, (1.47)
where µ is a constant and σ > 0.
Properties:
Let X be a Gaussian random variable with the density function shown in (1.47).
1. The mean and the variance of X are µ and σ², respectively. Hence, a Gaussian random variable is completely specified by its mean and variance.
2. A zero-mean unit-variance Gaussian random variable has the density function
Z(x) = (1/√(2π)) e^{−x²/2}. (1.48)
The probability distribution function of a zero-mean unit-variance Gaussian random variable is given by
Φ(x) = ∫_{−∞}^{x} Z(u) du. (1.49)
Very often, it is convenient to use the Q-function defined by
Q(x) = 1 − Φ(x). (1.50)
The Q-function gives the area under the tail of Z(x). It will be used frequently in the following chapters. We note that the Q-function is monotone decreasing and is bounded by
Q(x) ≤ (1/2) e^{−x²/2} (1.51)
for x ≥ 0. Moreover, the bound in (1.51) is the best of this type in the sense that
lim_{x→∞} Q(x)/exp{−ax²/2} = ∞ if a > 1; 0 if a ≤ 1. (1.52)
3. For the Gaussian random variable X with mean µ and variance σ²,
Pr(X ≤ x) = FX(x) = ∫_{−∞}^{x} fX(u) du. (1.53)
Changing the dummy variable by putting v = (u − µ)/σ, we get
Pr(X ≤ x) = Φ((x − µ)/σ). (1.54)
Similarly, we have
Pr(X > x) = Q((x − µ)/σ). (1.55)
4. The Gaussian distribution is widely tabulated and is also available in most mathematical software. For example, in Matlab, we can use the error function (erf) and the complementary error function (erfc) to find values of Φ(·) and Q(·). The relationships are given by
Φ(x) = (1/2)[1 + erf(x/√2)] (1.56)
and
Q(x) = (1/2) erfc(x/√2). (1.57)
5. For any constant a ≠ 0, aX is also a Gaussian random variable with mean aµ and variance a²σ².
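Relations (1.56) and (1.57) carry over directly to any environment that provides erf/erfc; for instance, Python's math module (a sketch mirroring the Matlab usage mentioned above; the numeric parameters are illustrative):

```python
import math

def Phi(x):
    # (1.56): standard Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Q(x):
    # (1.57): Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(Q(0.0))                        # 0.5: half the mass lies above the mean

# Standardization (1.54)-(1.55) for X ~ N(mu, sigma^2), illustrative values:
mu, sigma = 1.0, 2.0
p_le_3 = Phi((3.0 - mu) / sigma)     # Pr(X <= 3)
p_gt_3 = Q((3.0 - mu) / sigma)       # Pr(X > 3)
print(p_le_3, p_gt_3)

# Tail bound (1.51): Q(x) <= (1/2) exp(-x^2 / 2) for x >= 0
bound_holds = all(Q(x) <= 0.5 * math.exp(-x * x / 2.0)
                  for x in [0.0, 0.5, 1.0, 2.0, 3.0])
print(bound_holds)
```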
1.3.2 Jointly Gaussian Random Variables
Definition:
Two random variables X and Y are jointly Gaussian if their joint probability density function is of the form
fXY(x, y) = 1/(2πσXσY√(1 − ρ²)) · exp{−[((x − µX)/σX)² − 2ρ((x − µX)/σX)((y − µY)/σY) + ((y − µY)/σY)²] / [2(1 − ρ²)]}, (1.58)
where µX, µY, σX > 0, σY > 0, and −1 < ρ < 1 are constants.
Properties:
1. X and Y are Gaussian random variables with means µX and µY, and variances σX² and σY².
2. The parameter ρ is the correlation coefficient given by
ρ = E[(X − µX)(Y − µY)] / (σXσY). (1.59)
Therefore, two jointly Gaussian random variables are completely specified by their means, variances, and correlation coefficient.
3. The random variables X and Y are uncorrelated if and only if ρ = 0. In this case, the joint density function reduces to
fXY(x, y) = (1/√(2πσX²)) e^{−(x−µX)²/(2σX²)} · (1/√(2πσY²)) e^{−(y−µY)²/(2σY²)} = fX(x) fY(y). (1.60)
Hence, uncorrelated jointly Gaussian random variables are also independent.
4. Given any two constants a and b (not both zero), aX + bY is also a Gaussian random variable with mean aµX + bµY and variance a²σX² + 2abρσXσY + b²σY².
5. Given any invertible 2 × 2 matrix A, the random variables U and V defined by
[U, V]ᵀ = A [X, Y]ᵀ (1.61)
are jointly Gaussian.
6. Suppose X and Y are uncorrelated jointly Gaussian random variables with the same variance σ². If we write X and Y as the real and imaginary parts, respectively, of the complex notation
Z = X + jY, (1.62)
then Z is a (symmetric) complex Gaussian random variable. Strictly speaking, Z is not a random variable; its probabilistic behavior is actually governed by the joint density function of X and Y. However, for convenience, we usually associate Z with a mean and a variance as below:
E[Z] ≜ µX + jµY = µZ (1.63)
var[Z] ≜ (1/2) E[(Z − µZ)(Z − µZ)*]
       = (1/2) E[((X − µX) + j(Y − µY))((X − µX) + j(Y − µY))*]
       = σ². (1.64)
We also associate Z with the "density function"
fZ(z) = (1/(2πσ²)) exp(−|z − µZ|²/(2σ²)), (1.65)
which is just a compact form of (1.58) when ρ = 0.
Sometimes, we also use the notion of a "complex-valued random variable" for other, non-Gaussian random variables.
1.3.3 Gaussian Random Vectors
Definition:
The random variables X1, X2, …, Xn are jointly Gaussian if their joint probability density function is of the form
fX(x) = 1/((2π)^{n/2}√det(CX)) exp[−(1/2)(x − µX)ᵀ CX^{−1} (x − µX)], (1.66)
where X = [X1, X2, …, Xn]ᵀ, x = [x1, x2, …, xn]ᵀ, µX = [µX1, µX2, …, µXn]ᵀ is a constant n × 1 vector, det(·) is the determinant operation, and CX is an n × n positive-definite symmetric matrix. Instead of saying that X1, X2, …, Xn are jointly Gaussian random variables, it is also customary to say that X is a Gaussian random vector.
Properties:
1. Each Xi is a Gaussian random variable with mean equal to µXi and variance equal to the (i, i)-th entry of CX.
2. The covariance of Xi and Xj, E[(Xi − µXi)(Xj − µXj)], is given by the (i, j)-th entry of CX, which is thus called the covariance matrix. Hence, a Gaussian random vector is completely specified by its mean and its covariance matrix.
3. The random variables X1, X2, …, Xn are uncorrelated if and only if CX is a diagonal matrix. In this case, the joint density function factors into the product of the marginal density functions. Hence, uncorrelated jointly Gaussian random variables are also independent.
4. Given any non-zero n × 1 constant vector a, aᵀX is a Gaussian random variable with mean aᵀµX and variance aᵀCXa.
5. Given any invertible n × n matrix A, the random vector Y defined by
Y = AX (1.67)
is a Gaussian random vector with mean AµX and covariance matrix ACXAᵀ.
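Property 5 is easy to check by simulation: generate X with a chosen covariance CX (via a Cholesky factor), apply an invertible A, and compare the empirical covariance of Y = AX with ACXAᵀ. A NumPy sketch with made-up matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
C_X = np.array([[2.0, 1.0],
                [1.0, 3.0]])          # made-up positive-definite covariance
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # made-up invertible matrix

# Draw zero-mean Gaussian vectors with covariance C_X via a Cholesky factor
L = np.linalg.cholesky(C_X)           # L @ L.T == C_X
X = L @ rng.standard_normal((2, 200_000))
Y = A @ X                             # property 5: Y is Gaussian as well

C_Y_empirical = np.cov(Y)
C_Y_theory = A @ C_X @ A.T            # = [[7, 4], [4, 3]] for these choices
print(C_Y_empirical)
print(C_Y_theory)
```

With 200,000 draws the empirical covariance matches the theoretical one to about two decimal places.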
1.3.4 Related Random Variables
Let X and Y be zero-mean uncorrelated (hence independent) jointly Gaussian random variables with the same variance σ². Define the random variables
R = √(X² + Y²)
Θ = arctan(Y/X) (1.68)
where the ranges of R and Θ are [0, ∞) and (−π, π], respectively. We note that the transformation from (X, Y) to (R, Θ) is non-singular and
X = R cos Θ
Y = R sin Θ. (1.69)
If we interpret (X, Y) as rectangular coordinates, then (R, Θ) gives the corresponding polar coordinates. If we interpret (X, Y) as the complex number Z in (1.62), then R and Θ are |Z| and arg(Z), respectively.
Properties:
1. R and Θ are independent.
2. R is Rayleigh distributed, i.e., its density function is
fR(r) = (r/σ²) exp(−r²/(2σ²)), r ≥ 0. (1.70)
3. Θ is a uniform random variable, i.e., its density function is
fΘ(θ) = 1/(2π), −π < θ ≤ π. (1.71)
4. The random variable W = R² is exponentially distributed, i.e., its density function is
fW(w) = (1/(2σ²)) exp(−w/(2σ²)), w ≥ 0. (1.72)
5. If X and/or Y are not zero-mean, then R is Rician distributed, i.e.,
fR(r) = (r/σ²) exp(−(r² + s²)/(2σ²)) I0(rs/σ²), r ≥ 0, (1.73)
where s² = µX² + µY² and I0(x) = (1/2π) ∫_0^{2π} exp(x cos θ) dθ is the zeroth-order modified Bessel function of the first kind. We note that Θ is neither uniform nor independent of R in this case.
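These distributional facts can be checked by simulation; for instance, the exponential variable W = R² of (1.72) has mean 2σ², and Θ (computed with the four-quadrant arctangent) is symmetric about zero. A NumPy sketch with an illustrative σ:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.5                           # illustrative common standard deviation
N = 200_000
X = sigma * rng.standard_normal(N)
Y = sigma * rng.standard_normal(N)

R = np.sqrt(X**2 + Y**2)              # Rayleigh distributed
Theta = np.arctan2(Y, X)              # four-quadrant arctan(Y/X), in (-pi, pi]
W = R**2                              # exponential with mean 2 sigma^2

print(W.mean(), 2.0 * sigma**2)       # should be close
print(Theta.mean())                   # approximately 0 by symmetry
```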
1.3.5 Central Limit Theorem
There are many different versions of this well-known and extremely useful theorem. Here we state the simplest version:
Suppose {Xn} is a sequence of independent and identically distributed (iid) random variables with finite mean µ and finite positive variance σ². If Sn = X1 + X2 + ··· + Xn, then the random variables
Yn = (Sn − nµ)/(σ√n) (1.74)
converge to a zero-mean unit-variance Gaussian random variable in distribution, i.e., the distribution functions of Yn converge (pointwise) to Φ(x) defined in (1.49).
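A quick simulation illustrates the theorem: standardizing sums of iid uniform variables (µ = 1/2, σ² = 1/12) as in (1.74) yields samples whose mean, variance, and median behave like those of a standard Gaussian. A NumPy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 50, 100_000
# Uniform(0, 1) summands: mu = 1/2, sigma^2 = 1/12
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)

S_n = rng.uniform(0.0, 1.0, size=(trials, n)).sum(axis=1)
Y_n = (S_n - n * mu) / (sigma * np.sqrt(n))     # standardization (1.74)

print(Y_n.mean(), Y_n.var())      # approximately 0 and 1
print(np.mean(Y_n <= 0.0))        # approximately Phi(0) = 0.5
```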
1.4 Random Processes
1.4.1 General Definitions
Random process
A random process is an indexed set of random variables defined on the same probability space.
• In communications, the index is usually a time index.
• If n(t) is a random process, then at any given time t0, n(t0) is a random variable.
• In many cases, it is convenient to view a random process as a mapping defined on the sample space: it maps an outcome to a function of time. For example, let Ω = {0, 1} with
0 → cos(t), 1 → sin(t).
Notice that at any given time t0, we have a random variable n(t0) that maps 0 to cos(t0) and 1 to sin(t0).
Mean of a random process
At any given time t0, n(t0) is a random variable. We can determine the mean of this random variable. We denote this mean by µn(t0). If we calculate the mean at each time t, we get the mean of the process
µn(t) = E[n(t)] (1.75)
which is a function of time. It gives the mean of the process at time t.
Autocorrelation function
At any given t1 and t2, n(t1) and n(t2) are random variables. We can determine the correlation of these random variables. We denote this correlation by Rn(t1, t2). If we calculate the correlation at each pair of time instants t and s, we get the autocorrelation function of the process
Rn(t, s) = E[n(t)n(s)] (1.76)
which is a function of two time variables t and s. Setting s = t, we get
Rn(t, t) = E[n²(t)]. (1.77)
It represents the (ensemble average) power of the process at time t.
Autocovariance function
At any given t1 and t2, n(t1) and n(t2) are random variables. We can determine the covariance of these random variables. We denote this covariance by Cn(t1, t2). If we calculate the covariance at each pair of time instants t and s, we get the autocovariance function of the process
Cn(t, s) = E[(n(t) − µn(t))(n(s) − µn(s))] (1.78)
which is a function of two time variables t and s.
Cross-correlation function and cross-covariance function
Suppose that we are given two random processes n(t) and m(t). At any given t1 and t2, n(t1) and m(t2) are random variables. We can determine the correlation and the covariance of these random variables. We denote them by Rnm(t1, t2) and Cnm(t1, t2). If we perform the calculations at each pair of time instants t and s, we get the cross-correlation function
Rnm(t, s) = E[n(t)m(s)], (1.79)
and the cross-covariance function
Cnm(t, s) = E[(n(t) − µn(t))(m(s) − µm(s))] (1.80)
of the processes. Rnm(t, s) and Cnm(t, s) are functions of the two variables t and s.
1.4.2 Wide-Sense Stationary (WSS) Random Processes
Definitions
• A random process n(t) is wide-sense stationary (WSS) if the following conditions hold:
1. µn(t) does not depend on t, i.e., µn(t) takes the same value for all t. (We may just represent it by µn.)
2. Rn(t, s) depends only on the difference τ = t − s, but not on t or s individually. (We may just write Rn(τ).)
Notice that if n(t) is WSS, then the (ensemble average) power of the process at any time t is Rn(t − t) = Rn(0), and does not depend on the time t.
• Two random processes n(t) and m(t) are jointly WSS if the following conditions hold:
1. The processes n(t) and m(t) are each WSS.
2. Rnm(t, s) depends only on the difference τ = t − s, but not on t or s individually. (We may just write Rnm(τ).)
• A random process n(t) is wide-sense cyclostationary (WSCS) with period T if the following conditions hold:
1. µn(t + kT) = µn(t)
2. Rn(t + kT, s + kT) = Rn(t, s)
for every t, s and every integer k.
Suppose n̄(t) = n(t − ∆), where n(t) is a WSCS process and ∆ is a uniform random variable on the interval [0, T) independent of n(t). Then n̄(t) is a WSS process and
µn̄ = (1/T) ∫_0^T µn(t) dt (1.81)
Rn̄(τ) = (1/T) ∫_0^T Rn(t + τ, t) dt. (1.82)
Response of an LTI system to a WSS random process
Suppose x(t) is a WSS random process with mean µx and autocorrelation function Rx(τ). We pass x(t) through an LTI filter with impulse response h(t) and obtain the output process y(t). Then the output is
y(t) = x ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ. (1.83)
We can determine the mean of y(t):
µy(t) = E[y(t)]
      = E[∫_{−∞}^{∞} x(τ) h(t − τ) dτ]
      = ∫_{−∞}^{∞} E[x(τ)] h(t − τ) dτ
      = µx ∫_{−∞}^{∞} h(t − τ) dτ
      = µx ∫_{−∞}^{∞} h(τ) dτ. (1.84)
Moreover, the autocorrelation function of y(t) is
Ry(t, s) = E[y(t)y(s)]
         = E[∫_{−∞}^{∞} ∫_{−∞}^{∞} x(τ) x(ν) h(t − τ) h(s − ν) dτ dν]
         = ∫_{−∞}^{∞} ∫_{−∞}^{∞} E[x(τ)x(ν)] h(t − τ) h(s − ν) dτ dν
         = ∫_{−∞}^{∞} ∫_{−∞}^{∞} Rx(τ − ν) h(t − τ) h(s − ν) dτ dν
         = Rx ∗ h ∗ h̃(t − s) (1.85)
where h̃(t) = h(−t). Since Ry(t, s) depends only on the difference τ = t − s but not on t or s individually, we may just write Ry(τ). From (1.84) and (1.85), we see that y(t) is also WSS. It is straightforward to check that Rxy(t, s) = Rx ∗ h̃(t − s) depends only on the difference τ = t − s, but not on t or s individually. (We may just write Rxy(τ).) Therefore, x(t) and y(t) are jointly WSS.
1.4.3 Power Spectral Densities of WSS Random Processes
Definition:
Let x(t) be a WSS random process. Its power spectral density, denoted by Φx(f), is defined as the Fourier transform of Rx(τ), i.e.,
Φx(f) = ∫_{−∞}^{∞} Rx(τ) e^{−j2πfτ} dτ. (1.86)
Properties:
1. Rx(τ) can be obtained from the inverse Fourier transform of Φx(f):
Rx(τ) = ∫_{−∞}^{∞} Φx(f) e^{j2πfτ} df. (1.87)
2. The power of x(t) is given by
Rx(0) = ∫_{−∞}^{∞} Φx(f) df. (1.88)
3. Let y(t) be the output when x(t) is passed through an LTI filter with impulse response h(t). Similarly, we define the power spectral density of y(t), Φy(f), to be the Fourier transform of Ry(τ), and the power of y(t) can be obtained by integrating Φy(f). Since Ry(τ) = Rx ∗ h ∗ h̃(τ),
Φy(f) = Φx(f) |H(f)|², (1.89)
where H(f) is the Fourier transform of h(t).
4. Consider the case in which the LTI filter above is a bandpass filter with a very narrow passband around f0, i.e.,
H(f) = 1 if f0 − ∆f/2 < f < f0 + ∆f/2; 0 otherwise. (1.90)
The output power is given by
Ry(0) = ∫_{−∞}^{∞} Φy(f) df = ∫_{−∞}^{∞} Φx(f) |H(f)|² df ≈ Φx(f0) ∆f. (1.91)
The last approximation is valid if ∆f is small enough that Φx(f) is essentially constant in the passband. The output power per Hz is given by
Ry(0)/∆f = Φx(f0). (1.92)
Physically, the output power should be the power of x(t) around f0 (more precisely, the power of x(t) from f0 − ∆f/2 to f0 + ∆f/2). The output power per Hz should be the power of x(t) per Hz around f0. Hence, Φx(f0) is the power of x(t) per Hz around f0, and is therefore called the power spectral density.
1.4.4 Special processes
White processes
A WSS random process n(t) is white if µn = 0 and Rn(τ) is a scalar multiple of the delta function δ(τ).
1. Equivalently, a WSS random process n(t) is white if µn = 0 and Φn(f) is a constant, because the Fourier transform of a delta function is a constant.
2. For t1 ≠ t2, the random variables n(t1) and n(t2) are uncorrelated.
3. The power of the process is Rn(0) = ∞. Hence, a white process has infinite power. Of course, this is just a convenient mathematical model and does not exist in reality. Nevertheless, the model is useful, especially in modeling thermal noise.
Gaussian random processes
A random process x(t) is a Gaussian random process if for all k and for all (t1, t2, …, tk), the random variables (x(t1), x(t2), …, x(tk)) are jointly Gaussian.
Two random processes x(t) and y(t) are jointly Gaussian if for all k1 and k2 and for all (t1, t2, …, tk1) and (s1, s2, …, sk2), the random variables (x(t1), x(t2), …, x(tk1), y(s1), y(s2), …, y(sk2)) are jointly Gaussian.
1. It can be shown that if a Gaussian random process x(t) is passed through an LTI system, then the output process y(t) is also a Gaussian random process. Actually, x(t) and y(t) are jointly Gaussian random processes.
2. In particular, a time sample of the output process y(t) is a Gaussian random variable.
3. More generally, if a Gaussian random process x(t) is passed through a bank of LTI filters, the outputs of the filters are jointly Gaussian random processes.
4. Very often, we model thermal noise as an additive white Gaussian random process. This means that the noise is assumed to add onto the signal, to be a white process, and to be a Gaussian random process. We then call the noise Additive White Gaussian Noise (AWGN).
Complex-valued random processes
Given two random processes x(t) and y(t), we can define the complex-valued random process z(t) as
z(t) = x(t) + jy(t). (1.93)
Like complex-valued random variables, z(t) is not strictly a random process. However, it is sometimes convenient to treat it as if it were. We can define the mean, autocorrelation function, and autocovariance function of z(t) by extending the corresponding definitions for a random process. For example, the autocorrelation function of z(t) is
Rz(t, s) = (1/2) E[z(t)z(s)*]
         = (1/2) E[(x(t) + jy(t))(x(s) − jy(s))]
         = (1/2) {Rx(t, s) + Ry(t, s) + j[Ryx(t, s) − Rxy(t, s)]}. (1.94)
If x(t) and y(t) are jointly WSS, Rz(t, s) depends only on the difference τ = t − s, but not on t or s individually. (We may just write Rz(τ).) In this case, we say z(t) is WSS. Cross-correlation and cross-covariance functions of two complex-valued random processes can be defined in a similar way. We note that our convention is to include the factor 1/2 in all second-order statistics involving complex-valued random variables and processes. The power spectral density of a WSS complex-valued random process is defined as the Fourier transform of its autocorrelation function.
1.4.5 Sampling Theorem for Bandlimited WSS Random Processes
Suppose x(t) is a WSS random process. If its power spectral density Φx(f) = 0 for |f| ≥ W, then x(t) is called bandlimited to W.
As an immediate consequence of (1.34), the autocorrelation function satisfies
Rx(τ) = Σ_{n=−∞}^{∞} 2W′Ts Rx(nTs) sinc[2πW′(τ − nTs)] (1.95)
where W′ is any number satisfying W ≤ W′ ≤ 1/Ts − W. Based on this, if we construct the random process
x̃(t) = Σ_{n=−∞}^{∞} 2W′Ts x(nTs) sinc[2πW′(t − nTs)] (1.96)
from the time samples of x(t), then x̃(t) is WSS and
E[|x(t) − x̃(t)|²] = 0. (1.97)
We note that the equality in (1.97) should be interpreted as the mean-square convergence, at each fixed time t0, of the series of random variables represented by (1.96) to the random variable x(t0).