Digital Communication and Probability of Error


Chapter 3: Probability, Random Variables, Random Processes

A First Course in Digital Communications
Ha H. Nguyen and E. Shwedyk
February 2009


Introduction

The main objective of a communication system is the transfer of information over a channel.

A message signal is best modeled by a random signal: any signal that conveys information must have some uncertainty in it, otherwise its transmission is of no interest.

There are two types of imperfections in a communication channel: deterministic imperfections, such as linear and nonlinear distortions, inter-symbol interference, etc.; and nondeterministic imperfections, such as the addition of noise, interference, multipath fading, etc.

We are concerned with the methods used to describe and characterize a random signal, generally referred to as a random process (also commonly called a stochastic process). In essence, a random process is a random variable evolving in time.


    Sample Space and Probability

Random experiment: its outcome, for some reason, cannot be predicted with certainty. Examples: throwing a die, flipping a coin, and drawing a card from a deck.

Sample space: the set of all possible outcomes, denoted by Ω. Outcomes are denoted by ω's, and each ω lies in Ω, i.e., ω ∈ Ω. A sample space can be discrete or continuous.

Events are subsets of the sample space for which measures of their occurrences, called probabilities, can be defined or determined.


    Three Axioms of Probability

For a discrete sample space Ω, define a probability measure P on Ω as a set function that assigns nonnegative values to all events, denoted by E, in Ω such that the following conditions are satisfied:

Axiom 1: 0 ≤ P(E) ≤ 1 for all E ⊆ Ω (on a percentage scale, probability ranges from 0 to 100%; despite popular sports lore, it is impossible to give more than 100%).

Axiom 2: P(Ω) = 1 (when an experiment is conducted, there has to be an outcome).

Axiom 3: For mutually exclusive events¹ E₁, E₂, E₃, ..., we have P(∪_{i=1}^{∞} E_i) = Σ_{i=1}^{∞} P(E_i).

¹The events E₁, E₂, E₃, ... are mutually exclusive if E_i ∩ E_j = ∅ for all i ≠ j, where ∅ is the null set.


    Important Properties of the Probability Measure

1. P(Eᶜ) = 1 − P(E), where Eᶜ denotes the complement of E. This property implies that P(Eᶜ) + P(E) = 1, i.e., something has to happen.

2. P(∅) = 0 (again, something has to happen).

3. P(E₁ ∪ E₂) = P(E₁) + P(E₂) − P(E₁ ∩ E₂). Note that if two events E₁ and E₂ are mutually exclusive, then P(E₁ ∪ E₂) = P(E₁) + P(E₂); otherwise the nonzero common probability P(E₁ ∩ E₂) needs to be subtracted off.

4. If E₁ ⊆ E₂, then P(E₁) ≤ P(E₂). This says that if event E₁ is contained in E₂, then occurrence of E₁ means E₂ has occurred, but the converse is not true.
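These properties are straightforward to verify numerically. Below is a minimal Python sketch (the events are hypothetical choices, not from the slides: E₁ = "die shows an even number", E₂ = "die shows 4 or more") that estimates the probabilities for a fair die by Monte Carlo and checks property 3.

```python
import random

N = 100_000
count_e1 = count_e2 = count_union = count_inter = 0

for _ in range(N):
    roll = random.randint(1, 6)      # fair six-sided die
    e1 = (roll % 2 == 0)             # E1: even outcome, P(E1) = 1/2
    e2 = (roll >= 4)                 # E2: outcome of at least 4, P(E2) = 1/2
    count_e1 += e1
    count_e2 += e2
    count_union += (e1 or e2)
    count_inter += (e1 and e2)

p1, p2 = count_e1 / N, count_e2 / N
p_union, p_inter = count_union / N, count_inter / N

# Property 3: P(E1 u E2) = P(E1) + P(E2) - P(E1 n E2)
print(p_union, p1 + p2 - p_inter)    # both ~ 2/3
```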


    Conditional Probability

We observe or are told that event E₁ has occurred but are actually interested in event E₂: knowledge that E₁ has occurred changes the probability of E₂ occurring. If it was P(E₂) before, it now becomes P(E₂|E₁), the probability of E₂ occurring given that event E₁ has occurred.

This conditional probability is given by

P(E₂|E₁) = P(E₂ ∩ E₁)/P(E₁) if P(E₁) ≠ 0, and 0 otherwise.

If P(E₂|E₁) = P(E₂), or equivalently P(E₂ ∩ E₁) = P(E₁)P(E₂), then E₁ and E₂ are said to be statistically independent.

Bayes' rule:

P(E₂|E₁) = P(E₁|E₂)P(E₂)/P(E₁).


    Total Probability Theorem

The events {E_i}, i = 1, ..., n, partition the sample space Ω if:

(i) ∪_{i=1}^{n} E_i = Ω,  (1a)
(ii) E_i ∩ E_j = ∅ for all 1 ≤ i, j ≤ n and i ≠ j.  (1b)

If for an event A we have the conditional probabilities {P(A|E_i)}, i = 1, ..., n, then P(A) can be obtained as

P(A) = Σ_{i=1}^{n} P(E_i) P(A|E_i).

Bayes' rule:

P(E_i|A) = P(A|E_i)P(E_i)/P(A) = P(A|E_i)P(E_i) / Σ_{j=1}^{n} P(A|E_j)P(E_j).
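As a worked sketch of both results (the numbers are assumed for illustration): let E₀ and E₁ be the events that a binary source emits 0 or 1, with priors 0.6 and 0.4, and let the symbols pass through a binary symmetric channel with crossover probability 0.1. The partition {E₀, E₁} gives P(A) for the event A = "a 1 is received", and Bayes' rule then gives the posterior probability of what was sent.

```python
# Assumed numbers: priors of sending bit 0 / bit 1, and a binary
# symmetric channel with crossover probability eps.
p_e0, p_e1 = 0.6, 0.4
eps = 0.1

# Conditional probabilities of observing A = "receive a 1"
p_a_given_e0 = eps          # 0 sent, flipped to 1
p_a_given_e1 = 1 - eps      # 1 sent, received correctly

# Total probability theorem: P(A) = sum_i P(E_i) P(A|E_i)
p_a = p_e0 * p_a_given_e0 + p_e1 * p_a_given_e1   # 0.6*0.1 + 0.4*0.9 = 0.42

# Bayes' rule: posterior probability that a 1 was actually sent
p_e1_given_a = p_a_given_e1 * p_e1 / p_a           # 0.9*0.4/0.42 ~ 0.857
print(p_a, p_e1_given_a)
```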


    Random Variables

(Figure: a random variable maps each outcome ω in Ω to a real number x(ω) on the real line R.)

A random variable is a mapping from the sample space Ω to the set of real numbers. We shall denote random variables by boldface, i.e., x, y, etc., while individual or specific values of the mapping x are denoted by x(ω).


    Cumulative Distribution Function (cdf)

The cdf gives a complete description of the random variable. It is defined as:

F_x(x) = P(ω ∈ Ω : x(ω) ≤ x) = P(x ≤ x).

The cdf has the following properties:

1. 0 ≤ F_x(x) ≤ 1 (this follows from Axiom 1 of the probability measure).
2. F_x(x) is nondecreasing: F_x(x₁) ≤ F_x(x₂) if x₁ ≤ x₂ (this is because the event x(ω) ≤ x₁ is contained in the event x(ω) ≤ x₂).
3. F_x(−∞) = 0 and F_x(+∞) = 1 (x(ω) ≤ −∞ is the empty set, hence an impossible event, while x(ω) ≤ ∞ is the whole sample space, i.e., a certain event).
4. P(a < x ≤ b) = F_x(b) − F_x(a).
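As a quick numerical illustration of properties 3 and 4 (a sketch; the standard normal distribution is an arbitrary choice, as the slides fix no distribution here):

```python
from scipy.stats import norm

a, b = -1.0, 2.0
# Property 4: P(a < x <= b) = F_x(b) - F_x(a), here for x ~ N(0, 1)
p = norm.cdf(b) - norm.cdf(a)
print(p)   # ~ 0.8186

# Property 3 (limiting values): F_x(-inf) = 0, F_x(+inf) = 1
print(norm.cdf(-1e9), norm.cdf(1e9))   # ~ 0.0, 1.0
```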


    Typical Plots of cdf I

A random variable can be discrete, continuous, or mixed.

(Figure (a): a typical cdf F_x(x), rising from 0 at −∞ to 1 at +∞.)


    Typical Plots of cdf II

(Figures (b) and (c): two more typical cdfs F_x(x), each rising from 0 at −∞ to 1 at +∞.)


    Probability Density Function (pdf)

The pdf is defined as the derivative of the cdf:

f_x(x) = dF_x(x)/dx.

It follows that:

P(x₁ ≤ x ≤ x₂) = P(x ≤ x₂) − P(x ≤ x₁) = F_x(x₂) − F_x(x₁) = ∫_{x₁}^{x₂} f_x(x) dx.

Basic properties of the pdf:

1. f_x(x) ≥ 0.
2. ∫_{−∞}^{∞} f_x(x) dx = 1.
3. In general, P(x ∈ A) = ∫_A f_x(x) dx.

For discrete random variables, it is more common to define the probability mass function (pmf): p_i = P(x = x_i). Note that, for all i, one has p_i ≥ 0 and Σ_i p_i = 1.


Bernoulli Random Variable

(Figure: pdf f_x(x) with impulses of weight 1 − p at x = 0 and p at x = 1, and the corresponding staircase cdf F_x(x).)

A discrete random variable that takes two values, 1 and 0, with probabilities p and 1 − p, respectively. It is a good model for a binary data source whose output is 1 or 0, and can also be used to model channel errors.


    Binomial Random Variable

(Figure: a typical binomial pdf f_x(x), with impulses at x = 0, 1, ..., n.)

A discrete random variable that gives the number of 1's in a sequence of n independent Bernoulli trials:

f_x(x) = Σ_{k=0}^{n} (n choose k) p^k (1 − p)^{n−k} δ(x − k), where (n choose k) = n!/(k!(n − k)!).
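A short sketch tying the two random variables together (n = 6 and p = 0.3 are assumed parameters): summing n independent Bernoulli draws gives a binomial count, and the empirical frequencies match the pmf.

```python
import math
import random

n, p = 6, 0.3                 # assumed example parameters
N = 200_000

def bernoulli(p):
    # 1 with probability p, 0 with probability 1 - p
    return 1 if random.random() < p else 0

# Number of 1's in n independent Bernoulli trials -> binomial
counts = [0] * (n + 1)
for _ in range(N):
    k = sum(bernoulli(p) for _ in range(n))
    counts[k] += 1

for k in range(n + 1):
    pmf = math.comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, counts[k] / N, round(pmf, 4))   # empirical vs theoretical
```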


    Uniform Random Variable

(Figure: uniform pdf f_x(x) of height 1/(b − a) on [a, b], and the corresponding cdf F_x(x) ramping from 0 to 1 over [a, b].)

A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length. The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.


    Gaussian (or Normal) Random Variable

(Figure: Gaussian pdf f_x(x) with peak value 1/√(2πσ²), and the corresponding cdf F_x(x) passing through 1/2 at the mean.)

A continuous random variable whose pdf is

f_x(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)),

where µ and σ² are parameters. It is usually denoted as N(µ, σ²), and is the most important and most frequently encountered random variable in communications.
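As a minimal sketch (µ = 0 and σ = 2 are arbitrary values), the pdf can be evaluated on a grid and checked to integrate to one:

```python
import numpy as np

mu, sigma = 0.0, 2.0          # assumed example parameters

def gaussian_pdf(x, mu, sigma):
    # f_x(x) = (1/sqrt(2*pi*sigma^2)) * exp(-(x - mu)^2 / (2*sigma^2))
    return np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

x = np.linspace(-20, 20, 10_001)
f = gaussian_pdf(x, mu, sigma)
print((f * (x[1] - x[0])).sum())   # ~ 1.0 (the pdf integrates to one)
```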


    Expectation of Random Variables I

Statistical averages, or moments, play an important role in the characterization of the random variable.

The expected value (also called the mean value, or first moment) of the random variable x is defined as

m_x = E{x} ≡ ∫_{−∞}^{∞} x f_x(x) dx,

where E denotes the statistical expectation operator.

In general, the nth moment of x is defined as

E{xⁿ} ≡ ∫_{−∞}^{∞} xⁿ f_x(x) dx.

For n = 2, E{x²} is known as the mean-squared value of the random variable.


    Expectation of Random Variables II

The nth central moment of the random variable x is

E{(x − m_x)ⁿ} = ∫_{−∞}^{∞} (x − m_x)ⁿ f_x(x) dx.

When n = 2 the central moment is called the variance, commonly denoted as σ_x²:

σ_x² = var(x) = E{(x − m_x)²} = ∫_{−∞}^{∞} (x − m_x)² f_x(x) dx.

The variance provides a measure of the variable's "randomness". The mean and variance of a random variable give a partial description of its pdf.


    Expectation of Random Variables III

Relationship between the variance and the first and second moments:

σ_x² = E{x²} − [E{x}]² = E{x²} − m_x².

An electrical engineering interpretation: the AC power equals the total power minus the DC power. The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
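A short sketch (samples drawn from an assumed N(1.5, 4) distribution) estimating these quantities and checking σ_x² = E{x²} − m_x², i.e., AC power = total power − DC power:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.5 + 2.0 * rng.standard_normal(1_000_000)   # assumed N(1.5, 4) samples

mean = x.mean()                  # first moment ("DC value")
msv = (x**2).mean()              # second moment (total power)
var = ((x - mean) ** 2).mean()   # second central moment (AC power)

# sigma^2 = E{x^2} - m_x^2: total power minus DC power
print(mean, msv, var, msv - mean**2)
```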


    The Gaussian Random Variable

(Figure (a): a muscle (emg) signal; signal amplitude in volts versus t in seconds.)


(Figure (b): histogram of the emg signal amplitudes, x in volts, with Gaussian and Laplacian pdf fits.)

f_x(x) = (1/√(2πσ_x²)) e^{−(x − m_x)²/(2σ_x²)}  (Gaussian)

f_x(x) = (a/2) e^{−a|x|}  (Laplacian)


    Gaussian Distribution (Univariate)

(Figure: zero-mean Gaussian pdfs f_x(x) with σ_x = 1, 2, and 5.)

Range (±kσ_x):                     k = 1    k = 2    k = 3    k = 4
P(m_x − kσ_x < x ≤ m_x + kσ_x):    0.683    0.955    0.997    0.999

Error probability:                  10⁻³     10⁻⁴     10⁻⁶     10⁻⁸
Distance from the mean (in σ_x):    3.09     3.72     4.75     5.61
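These table entries can be reproduced from the standard normal cdf (a sketch using scipy.stats; the second loop inverts the tail probability, i.e., evaluates the inverse Q-function):

```python
from scipy.stats import norm

# P(m - k*sigma < x <= m + k*sigma) for a Gaussian: independent of m and sigma
for k in (1, 2, 3, 4):
    print(k, norm.cdf(k) - norm.cdf(-k))   # ~ 0.683, 0.955, 0.997, 0.9999

# Distance from the mean (in sigmas) giving a target tail probability,
# i.e., the inverse Q-function: Q^{-1}(p) = norm.isf(p)
for p in (1e-3, 1e-4, 1e-6, 1e-8):
    print(p, norm.isf(p))                  # ~ 3.09, 3.72, 4.75, 5.61
```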


    Multiple Random Variables I

Often encountered when dealing with combined experiments or repeated trials of a single experiment. Multiple random variables are basically multidimensional functions defined on a sample space of a combined experiment.

Let x and y be two random variables defined on the same sample space Ω. The joint cumulative distribution function is defined as

F_{x,y}(x, y) = P(x ≤ x, y ≤ y).

Similarly, the joint probability density function is

f_{x,y}(x, y) = ∂²F_{x,y}(x, y)/(∂x ∂y).


    Multiple Random Variables II

When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable, called the marginal pdf:

∫_{−∞}^{∞} f_{x,y}(x, y) dx = f_y(y),  ∫_{−∞}^{∞} f_{x,y}(x, y) dy = f_x(x).

Note that

∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{x,y}(x, y) dx dy = F_{x,y}(∞, ∞) = 1,

F_{x,y}(−∞, −∞) = F_{x,y}(−∞, y) = F_{x,y}(x, −∞) = 0.


    Multiple Random Variables III

The conditional pdf of the random variable y, given that the value of the random variable x is equal to x, is defined as

f_y(y|x) = f_{x,y}(x, y)/f_x(x) if f_x(x) ≠ 0, and 0 otherwise.

Two random variables x and y are statistically independent if and only if

f_y(y|x) = f_y(y), or equivalently, f_{x,y}(x, y) = f_x(x) f_y(y).

The joint moment is defined as

E{x^j y^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^j y^k f_{x,y}(x, y) dx dy.


    Multiple Random Variables IV

The joint central moment is

E{(x − m_x)^j (y − m_y)^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − m_x)^j (y − m_y)^k f_{x,y}(x, y) dx dy,

where m_x = E{x} and m_y = E{y}.

The most important moments are

E{xy} ≡ ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{x,y}(x, y) dx dy  (correlation);

cov{x, y} ≡ E{(x − m_x)(y − m_y)} = E{xy} − m_x m_y  (covariance).
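A quick sketch estimating E{xy} and cov{x, y} from samples (the linear model y = 2x + noise is an arbitrary choice that makes the two variables correlated):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(500_000)
y = 2.0 * x + 0.5 * rng.standard_normal(500_000)    # assumed dependence

corr = (x * y).mean()                               # E{xy}
cov = ((x - x.mean()) * (y - y.mean())).mean()      # E{(x-mx)(y-my)}

# cov{x,y} = E{xy} - mx*my (here mx = my = 0, so all three agree, ~ 2.0)
print(corr, cov, corr - x.mean() * y.mean())
```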


    Examples of Uncorrelated Dependent Random Variables

Example 1: Let x be a discrete random variable that takes on {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}, respectively. The random variables y = x³ and z = x² are uncorrelated but dependent.

Example 2: Let x be a random variable uniformly distributed over [−1, 1]. Then the random variables y = x and z = x² are uncorrelated but dependent.

Example 3: Let x be a Gaussian random variable with zero mean and unit variance (standard normal distribution). The random variables y = x and z = |x| are uncorrelated but dependent.

Example 4: Let u and v be two random variables (discrete or continuous) with the same probability density function. Then x = u − v and y = u + v are uncorrelated dependent random variables.


    Jointly Gaussian Distribution (Bivariate)

f_{x,y}(x, y) = [1/(2πσ_x σ_y √(1 − ρ_{x,y}²))] exp{ −[1/(2(1 − ρ_{x,y}²))] [ (x − m_x)²/σ_x² − 2ρ_{x,y}(x − m_x)(y − m_y)/(σ_x σ_y) + (y − m_y)²/σ_y² ] },

where m_x, m_y are the means, σ_x², σ_y² are the variances, and ρ_{x,y} is the correlation coefficient.

The marginal densities are Gaussian: f_x(x) ∼ N(m_x, σ_x²) and f_y(y) ∼ N(m_y, σ_y²).

When ρ_{x,y} = 0 → f_{x,y}(x, y) = f_x(x) f_y(y) → the random variables x and y are statistically independent. Thus, for jointly Gaussian random variables, uncorrelatedness implies statistical independence; for general random variables this is not true.

A weighted sum of two jointly Gaussian random variables is also Gaussian.


Joint pdf and Contours for σ_x = σ_y = 1 and ρ_{x,y} = 0

(Figure: joint pdf f_{x,y}(x, y) surface and circular contours for ρ_{x,y} = 0.)


Joint pdf and Contours for σ_x = σ_y = 1 and ρ_{x,y} = 0.3

(Figure: joint pdf f_{x,y}(x, y) surface, a cross-section, and elliptical contours for ρ_{x,y} = 0.30.)


Joint pdf and Contours for σ_x = σ_y = 1 and ρ_{x,y} = 0.95

(Figure: joint pdf f_{x,y}(x, y) surface and narrow elliptical contours for ρ_{x,y} = 0.95.)


    Multivariate Gaussian pdf

Define the vector x = [x₁, x₂, ..., x_n], the vector of means m = [m₁, m₂, ..., m_n], and the n × n covariance matrix C with C_{i,j} = cov(x_i, x_j) = E{(x_i − m_i)(x_j − m_j)}. The random variables {x_i}, i = 1, ..., n, are jointly Gaussian if:

f_{x₁,x₂,...,x_n}(x₁, x₂, ..., x_n) = [1/√((2π)ⁿ det(C))] exp{ −(1/2)(x − m) C⁻¹ (x − m)ᵀ }.

If C is diagonal (i.e., the random variables {x_i} are all uncorrelated), the joint pdf is the product of the marginal pdfs: uncorrelatedness implies statistical independence for multiple Gaussian random variables.
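As a sketch, the pdf can be evaluated directly with NumPy (the mean vector and 2 × 2 covariance matrix are hypothetical), with scipy.stats.multivariate_normal as a cross-check:

```python
import numpy as np
from scipy.stats import multivariate_normal

m = np.array([1.0, -1.0])                  # assumed mean vector
C = np.array([[2.0, 0.6],
              [0.6, 1.0]])                 # assumed covariance matrix
x = np.array([0.5, 0.0])

n = len(m)
d = x - m
# f(x) = exp(-0.5 * (x-m) C^{-1} (x-m)^T) / sqrt((2*pi)^n det C)
f = np.exp(-0.5 * d @ np.linalg.inv(C) @ d) \
    / np.sqrt((2 * np.pi) ** n * np.linalg.det(C))

print(f, multivariate_normal(mean=m, cov=C).pdf(x))   # identical values
```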


    Random Processes I

(Figure: an ensemble of time functions x₁(t, ω₁), x₂(t, ω₂), ..., x_M(t, ω_M) plotted against time; sampling every member at a time instant t_k yields the random variable x(t_k, ω).)

A mapping from a sample space to a set of time functions.


    Random Processes II

Ensemble: the set of possible time functions that one sees. Denote this set by x(t), where the time functions x₁(t, ω₁), x₂(t, ω₂), x₃(t, ω₃), ... are specific members of the ensemble.

At any time instant, t = t_k, we have a random variable x(t_k). At any two time instants, say t₁ and t₂, we have two different random variables x(t₁) and x(t₂); any relationship between them is described by the joint pdf f_{x(t₁),x(t₂)}(x₁, x₂; t₁, t₂).

A complete description of the random process is determined by the joint pdf f_{x(t₁),x(t₂),...,x(t_N)}(x₁, x₂, ..., x_N; t₁, t₂, ..., t_N).

The most important joint pdfs are the first-order pdf f_{x(t)}(x; t) and the second-order pdf f_{x(t₁),x(t₂)}(x₁, x₂; t₁, t₂).


    Examples of Random Processes I

(Figure: sample functions of (a) a thermal noise process and (b) a uniform-phase sinusoidal process.)


    Examples of Random Processes II

(Figure: sample functions of (c) a Rayleigh fading process and (d) binary random data taking values ±V with bit duration T_b.)


Classification of Random Processes

Based on whether its statistics change with time: the process is non-stationary or stationary.

Different levels of stationarity:

Strictly stationary: the joint pdf of any order is independent of a shift in time.

Nth-order stationarity: the joint pdf does not depend on the time shift, but depends on the time spacings:

f_{x(t₁),x(t₂),...,x(t_N)}(x₁, ..., x_N; t₁, ..., t_N) = f_{x(t₁+t),x(t₂+t),...,x(t_N+t)}(x₁, ..., x_N; t₁ + t, ..., t_N + t).

First- and second-order stationarity:

f_{x(t₁)}(x; t₁) = f_{x(t₁+t)}(x; t₁ + t) = f_{x(t)}(x),

f_{x(t₁),x(t₂)}(x₁, x₂; t₁, t₂) = f_{x(t₁+t),x(t₂+t)}(x₁, x₂; t₁ + t, t₂ + t) = f_{x(t₁),x(t₂)}(x₁, x₂; τ), where τ = t₂ − t₁.


    Statistical Averages or Joint Moments

Consider N random variables x(t₁), x(t₂), ..., x(t_N). The joint moments of these random variables are

E{x^{k₁}(t₁) x^{k₂}(t₂) ··· x^{k_N}(t_N)} = ∫_{x₁=−∞}^{∞} ··· ∫_{x_N=−∞}^{∞} x₁^{k₁} x₂^{k₂} ··· x_N^{k_N} f_{x(t₁),x(t₂),...,x(t_N)}(x₁, x₂, ..., x_N; t₁, t₂, ..., t_N) dx₁ dx₂ ... dx_N,

for all integers k_j ≥ 1 and N ≥ 1.

We shall only consider the first- and second-order moments, i.e., E{x(t)}, E{x²(t)}, and E{x(t₁)x(t₂)}. They are the mean value, the mean-squared value, and the (auto)correlation.


    Mean Value or the First Moment

The mean value of the process at time t is

m_x(t) = E{x(t)} = ∫_{−∞}^{∞} x f_{x(t)}(x; t) dx.

The average is across the ensemble, and if the pdf varies with time then the mean value is a (deterministic) function of time. If the process is stationary then the mean is independent of t, a constant:

m_x = E{x(t)} = ∫_{−∞}^{∞} x f_x(x) dx.


    Mean Squared Value or the Second Moment

This is defined as

MSV_x(t) = E{x²(t)} = ∫_{−∞}^{∞} x² f_{x(t)}(x; t) dx  (non-stationary),

MSV_x = E{x²(t)} = ∫_{−∞}^{∞} x² f_x(x) dx  (stationary).

The second central moment (or the variance) is

σ_x²(t) = E{[x(t) − m_x(t)]²} = MSV_x(t) − m_x²(t)  (non-stationary),

σ_x² = E{[x(t) − m_x]²} = MSV_x − m_x²  (stationary).


    Correlation

The autocorrelation function completely describes the power spectral density of the random process. It is defined as the correlation between the two random variables x₁ = x(t₁) and x₂ = x(t₂):

R_x(t₁, t₂) = E{x(t₁)x(t₂)} = ∫_{x₁=−∞}^{∞} ∫_{x₂=−∞}^{∞} x₁ x₂ f_{x₁,x₂}(x₁, x₂; t₁, t₂) dx₁ dx₂.

For a stationary process:

R_x(τ) = E{x(t)x(t + τ)} = ∫_{x₁=−∞}^{∞} ∫_{x₂=−∞}^{∞} x₁ x₂ f_{x₁,x₂}(x₁, x₂; τ) dx₁ dx₂.

Wide-sense stationary (WSS) process: E{x(t)} = m_x for any t, and R_x(t₁, t₂) = R_x(τ) for τ = t₂ − t₁.
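A sketch estimating the autocorrelation by a time average over one realization of the random-phase sinusoid x(t) = A cos(2πf₀t + Θ) of Example 3.4 below (A, f₀, and the sample rate are arbitrary choices); the theoretical result for this process is R_x(τ) = (A²/2) cos(2πf₀τ):

```python
import numpy as np

rng = np.random.default_rng(4)
A, f0, fs = 1.0, 5.0, 1000.0           # assumed amplitude, frequency, sample rate
t = np.arange(0, 100, 1 / fs)
x = A * np.cos(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))

# R_x(tau) = E{x(t) x(t + tau)}, estimated by a time average over one realization
for lag in (0, 25, 50):                # lags in samples, tau = lag / fs
    r_hat = np.mean(x[: -lag or None] * x[lag:])
    r_theory = 0.5 * A**2 * np.cos(2 * np.pi * f0 * lag / fs)
    print(lag / fs, r_hat, r_theory)
```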


    Properties of the Autocorrelation Function

1. R_x(τ) = R_x(−τ). It is an even function of τ because the same set of product values is averaged across the ensemble, regardless of the direction of translation.

2. |R_x(τ)| ≤ R_x(0). The maximum always occurs at τ = 0, though there may be other values of τ for which it is as big. Further, R_x(0) is the mean-squared value of the random process.

3. If for some τ₀ we have R_x(τ₀) = R_x(0), then for all integers k, R_x(kτ₀) = R_x(0).

4. If m_x ≠ 0, then R_x(τ) will have a constant component equal to m_x².

5. Autocorrelation functions cannot have an arbitrary shape. The restriction on the shape arises from the fact that the Fourier transform of an autocorrelation function must be greater than or equal to zero, i.e., F{R_x(τ)} ≥ 0.


Power Spectral Density of a Random Process (I)

Taking the Fourier transform of the random process does not work.

(Figure: a time-domain ensemble of sample functions x₁(t, ω₁), x₂(t, ω₂), ..., x_M(t, ω_M) and the corresponding frequency-domain ensemble of magnitude spectra |X₁(f, ω₁)|, |X₂(f, ω₂)|, ..., |X_M(f, ω_M)|; each member has its own spectrum, so no single deterministic transform describes the process.)

Power Spectral Density of a Random Process (II, III)

Let x_T(t) denote the truncation of the random process to [−T, T], with Fourier transform X_T(f), and let P = (1/2T) ∫_{−T}^{T} x_T²(t) dt be the power in the truncated realization.

Find the average value of P:

E{P} = E{ (1/2T) ∫_{−T}^{T} x_T²(t) dt } = E{ (1/2T) ∫_{−∞}^{∞} |X_T(f)|² df }.

Take the limit as T → ∞:

lim_{T→∞} (1/2T) ∫_{−T}^{T} E{x_T²(t)} dt = lim_{T→∞} (1/2T) ∫_{−∞}^{∞} E{|X_T(f)|²} df.

It follows that

MSV_x = lim_{T→∞} (1/2T) ∫_{−T}^{T} E{x_T²(t)} dt = ∫_{−∞}^{∞} lim_{T→∞} [E{|X_T(f)|²}/(2T)] df  (watts).

Power Spectral Density of a Random Process (IV)

Finally,

S_x(f) = lim_{T→∞} E{|X_T(f)|²}/(2T)  (watts/Hz)

is the power spectral density of the process.

It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:

R_x(τ) ←→ S_x(f) = ∫_{τ=−∞}^{∞} R_x(τ) e^{−j2πfτ} dτ.
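A sketch checking this Fourier-transform-pair relation numerically (the process, white Gaussian noise smoothed by a 4-tap moving average, and all sizes are arbitrary choices): averaging periodograms over many realizations estimates S_x(f) = lim E{|X_T(f)|²}/2T, and the DFT of the process autocorrelation gives the same curve.

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 256, 2000
h = np.ones(4) / 4                    # 4-tap moving average (assumed process model)

# Average periodograms over the ensemble: estimates E{|X_T(f)|^2}/2T
psd = np.zeros(n)
for _ in range(trials):
    w = rng.standard_normal(n + len(h) - 1)      # white Gaussian noise
    x = np.convolve(w, h, mode="valid")          # filtered process, length n
    psd += np.abs(np.fft.fft(x)) ** 2 / n
psd /= trials

# Wiener-Khinchin: the DFT of R_x(tau) = sum_m h[m] h[m+tau] gives the same PSD
r = np.correlate(h, h, mode="full")              # lags -3..3
R_circ = np.zeros(n)
R_circ[:4] = r[3:]                               # lags 0..3
R_circ[-3:] = r[:3]                              # lags -3..-1
S_from_R = np.fft.fft(R_circ).real               # = |H(f)|^2 >= 0

print(np.max(np.abs(psd - S_from_R)))            # small estimation error
```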

Time Averaging and Ergodicity

An ergodic process is one where any member of the ensemble exhibits the same statistical behavior as the whole ensemble. All time averages on a single ensemble member are equal to the corresponding ensemble averages:

E{xⁿ(t)} = ∫_{−∞}^{∞} xⁿ f_x(x) dx = lim_{T→∞} (1/2T) ∫_{−T}^{T} [x_k(t, ω_k)]ⁿ dt, for all n and k.

For an ergodic process: to measure various statistical averages, it is sufficient to look at only one realization of the process and find the corresponding time average.

For a process to be ergodic it must be stationary. The converse is not true.

Examples of Random Processes

(Example 3.4) x(t) = A cos(2πf₀t + Θ), where Θ is a random variable uniformly distributed on [0, 2π]. This process is both stationary and ergodic.

(Example 3.5) x(t) = x, where x is a random variable uniformly distributed on [−A, A], where A > 0. This process is WSS, but not ergodic.

(Example 3.6) x(t) = A cos(2πf₀t + Θ), where A is a zero-mean random variable with variance σ_A², and Θ is uniform in [0, 2π]. Furthermore, A and Θ are statistically independent. This process is not ergodic, but strictly stationary.
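A sketch contrasting Examples 3.4 and 3.5 (A = 1 and f₀ = 5 Hz are assumed): the time average of one realization matches the ensemble mean for the random-phase sinusoid, but not for the constant process x(t) = x.

```python
import numpy as np

rng = np.random.default_rng(6)
A, f0 = 1.0, 5.0
t = np.arange(0, 200, 1e-3)

# Example 3.4: x(t) = A cos(2*pi*f0*t + Theta) -- ergodic
theta = rng.uniform(0, 2 * np.pi)
x = A * np.cos(2 * np.pi * f0 * t + theta)
print(x.mean())        # time average ~ 0 = ensemble mean

# Example 3.5: x(t) = x, a constant sample path -- WSS but not ergodic
x_const = rng.uniform(-A, A)
print(x_const)         # time average = x_const, generally != ensemble mean 0
```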


    Thermal Noise in Communication Systems


A natural noise source is thermal noise, whose amplitude statistics are well modeled as Gaussian with zero mean. The autocorrelation and PSD are well modeled as:

R_w(τ) = kθG e^{−|τ|/t₀}/t₀  (watts),

S_w(f) = 2kθG/(1 + (2πf t₀)²)  (watts/Hz),

where k = 1.38 × 10⁻²³ joules/°K is Boltzmann's constant, G is the conductance of the resistor (mhos), θ is the temperature in degrees Kelvin, and t₀ is the statistical average of time intervals between collisions of free electrons in the resistor (on the order of 10⁻¹² sec).


(Figure: (a) power spectral density S_w(f) in watts/Hz versus f in GHz, and (b) autocorrelation R_w(τ) in watts versus τ in picoseconds; the thermal-noise curves are compared with the white-noise approximations N₀/2 and (N₀/2)δ(τ).)

The noise PSD is approximately flat over the frequency range of 0 to 10 GHz ⇒ let the spectrum be flat from 0 to ∞:

S_w(f) = N₀/2  (watts/Hz),

where N₀ = 4kθG is a constant.

Noise that has a uniform spectrum over the entire frequency range is referred to as white noise. The autocorrelation of white noise is

R_w(τ) = (N₀/2) δ(τ)  (watts).

Since R_w(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated. Since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise) then the noise samples are also independent.
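A tiny numeric sketch of N₀ (room temperature θ = 290°K and conductance G = 0.02 mhos, i.e., a 50-ohm resistor, are assumed values):

```python
k = 1.38e-23          # Boltzmann's constant, joules per degree Kelvin
theta = 290.0         # assumed room temperature, degrees Kelvin
G = 0.02              # assumed resistor conductance, mhos (a 50-ohm resistor)

N0 = 4 * k * theta * G
print(N0, N0 / 2)     # N0 ~ 3.2e-22 watts/Hz; two-sided PSD is N0/2
```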


    Example


Suppose that a (WSS) white noise process, x(t), of zero mean and power spectral density N₀/2 is applied to the input of the filter.

(a) Find and sketch the power spectral density and autocorrelation function of the random process y(t) at the output of the filter.

(b) What are the mean and variance of the output process y(t)?

(Figure: an RL filter with series inductance L, output taken across the resistance R; input x(t), output y(t).)


H(f) = R/(R + j2πfL) = 1/(1 + j2πfL/R).

S_y(f) = (N₀/2) · 1/(1 + (2πfL/R)²)  ←→  R_y(τ) = (N₀R/(4L)) e^{−(R/L)|τ|}.

Since the input is zero-mean, the output mean is m_y = m_x H(0) = 0, and the output variance is σ_y² = R_y(0) = N₀R/(4L).

(Figure: S_y(f) in watts/Hz, with peak value N₀/2 at f = 0, and R_y(τ) in watts, with peak value N₀R/(4L) at τ = 0.)
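As a closing numerical sketch (assumed values N₀ = 2 watts/Hz, R = 1 Ω, L = 0.5 H, and a sample rate fast compared with R/L), discrete white Gaussian noise is passed through a first-order recursion approximating this RL filter, and the measured output mean and variance are compared with m_y = 0 and R_y(0) = N₀R/(4L):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(7)
N0, R, L = 2.0, 1.0, 0.5        # assumed example values
fs = 100_000.0                  # sample rate >> R/L (filter bandwidth)
dt = 1 / fs

# Discrete approximation of white noise with two-sided PSD N0/2:
# variance (N0/2) * fs within the simulated bandwidth
x = np.sqrt(N0 / 2 * fs) * rng.standard_normal(2_000_000)

# One-pole recursion approximating H(f) = 1/(1 + j*2*pi*f*L/R)
alpha = np.exp(-(R / L) * dt)
y = lfilter([1 - alpha], [1, -alpha], x)

print(y.mean())                   # ~ 0 = m_y
print(y.var(), N0 * R / (4 * L))  # measured variance vs R_y(0) = 1.0
```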
