Print Copy for EEE 586 Spring 2011, Ch. 3-4, for the Students



CHAPTER THREE

OPERATIONS ON ONE RANDOM VARIABLE: EXPECTATION

    3.0 INTRODUCTION

The random variable was introduced in Chapter 2 as a means of providing a systematic definition of events defined on a sample space. Specifically, it formed a mathematical model for describing characteristics of some real, physical-world random phenomenon. In this chapter we extend our work to include some important operations that may be performed on a random variable. Most of these operations are based on a single concept: expectation.

    3.1 EXPECTATION

Expectation is the name given to the process of averaging when a random variable is involved. For a random variable X, we use the notation E[X], which may be read "the expected value of X," "the mean value of X," or "the statistical average of X." Occasionally we also use the notation X̄, which is read the same way as E[X]; that is, X̄ = E[X]. Nearly everyone is familiar with averaging, so relating a familiar everyday averaging problem to the new concept of expectation may be the easiest way to proceed.

    Expected Value of a Random Variable

The everyday averaging procedure used in the above example carries over directly to random variables. In fact, if X is the discrete random variable "fractional dollar value of pocket coins," it has 100 discrete values x_i that occur with probabilities P(x_i), and its expected value E[X] is found in the same way as in the example:

E[X] = \sum_{i=1}^{100} x_i P(x_i)        (3.1-1)

The values x_i identify with the fractional dollar values in the example, while P(x_i) is identified with the ratio of the number of people for the given dollar value to the total number of people. If an unlimited number of people had been used in the sample of the example, all fractional dollar values would have shown up and the ratios would have approached P(x_i). Thus, the average in the example would have become more like (3.1-1) for many more than 90 people.

In general, the expected value of any random variable X is defined by

E[X] = \bar{X} = \int_{-\infty}^{\infty} x f_X(x)\, dx        (3.1-2)


If X happens to be discrete with N possible values x_i having probabilities P(x_i) of occurrence, then, from (2.3-5),

f_X(x) = \sum_{i=1}^{N} P(x_i)\, \delta(x - x_i)        (3.1-3)

Upon substitution of (3.1-3) into (3.1-2), we have

E[X] = \sum_{i=1}^{N} x_i P(x_i)        discrete random variable        (3.1-4)

Hence, (3.1-1) is a special case of (3.1-4) when N = 100. For some discrete random variables, N may be infinite in (3.1-3) and (3.1-4).
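As a quick numerical illustration of (3.1-4), the short Python sketch below computes E[X] for a small discrete random variable; the values and probabilities are invented for illustration.

```python
import numpy as np

# Invented discrete random variable: values x_i and probabilities P(x_i)
x = np.array([0.05, 0.10, 0.25, 0.50])   # e.g., fractional dollar values
P = np.array([0.10, 0.40, 0.30, 0.20])   # must sum to 1

assert np.isclose(P.sum(), 1.0)

# Expected value per (3.1-4): E[X] = sum_i x_i P(x_i)
EX = np.sum(x * P)
print(EX)  # 0.22
```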

Example 3.1-2 We determine the mean value of the continuous, exponentially distributed random variable for which (2.5-9) applies:

f_X(x) = \begin{cases} \frac{1}{b} e^{-(x-a)/b} & x > a \\ 0 & x < a \end{cases}

From (3.1-2) and an integral from Appendix C:

E[X] = \int_{a}^{\infty} \frac{x}{b}\, e^{-(x-a)/b}\, dx = \frac{e^{a/b}}{b} \int_{a}^{\infty} x\, e^{-x/b}\, dx = a + b
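The result E[X] = a + b is easy to confirm by simulation; a minimal sketch (the choices a = 1, b = 2 are arbitrary):

```python
import numpy as np

a, b = 1.0, 2.0
rng = np.random.default_rng(0)

# Samples of the shifted exponential density f_X(x) = (1/b) e^{-(x-a)/b}, x > a
samples = a + rng.exponential(scale=b, size=1_000_000)

print(samples.mean())  # close to a + b = 3
```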


3.2 MOMENTS

Moments About the Origin

The function g(X) = X^n, n = 0, 1, 2, ..., when used in (3.1-6), gives the moments about the origin of the random variable X. Denote the nth moment by m_n. Then,

m_n = E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\, dx        (3.2-2)

Clearly m_0 = 1, the area of the function f_X(x), while m_1 = X̄, the expected value of X.


Central Moments

Moments about the mean value of X are called central moments and are given the symbol μ_n. They are defined as the expected value of the function

g(X) = (X - \bar{X})^n        n = 0, 1, 2, ...        (3.2-3)

which is

\mu_n = E[(X - \bar{X})^n] = \int_{-\infty}^{\infty} (x - \bar{X})^n f_X(x)\, dx        (3.2-4)

The moment μ_0 = 1, the area of f_X(x), while μ_1 = 0. (Why?)

Variance and Skew

The second central moment μ_2 is so important that we shall give it the name variance and the special notation σ_X^2. Thus, the variance is given by

\sigma_X^2 = \mu_2 = E[(X - \bar{X})^2] = \int_{-\infty}^{\infty} (x - \bar{X})^2 f_X(x)\, dx        (3.2-5)

The positive square root σ_X of the variance is called the standard deviation of X; it is a measure of the spread in the function f_X(x) about the mean.

Variance can be found from a knowledge of the first and second moments, since

\sigma_X^2 = E[X^2 - 2\bar{X}X + \bar{X}^2] = E[X^2] - 2\bar{X}^2 + \bar{X}^2 = E[X^2] - \bar{X}^2 = m_2 - m_1^2        (3.2-6)
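Relation (3.2-6) lends itself to a quick numerical check; a sketch reusing the shifted exponential density of Example 3.1-2 (arbitrary a and b):

```python
import numpy as np

a, b = 1.0, 2.0
rng = np.random.default_rng(1)
x = a + rng.exponential(scale=b, size=1_000_000)

m1 = np.mean(x)       # first moment about the origin
m2 = np.mean(x**2)    # second moment about the origin

# Variance from (3.2-6): sigma_X^2 = m2 - m1^2; for this density it equals b^2 = 4
print(m2 - m1**2, np.var(x))
```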


Example 3.2-1 Let X have the exponential density function given in Example 3.1-2. By substitution into (3.2-5), the variance of X is

\sigma_X^2 = \int_{a}^{\infty} (x - \bar{X})^2 \frac{1}{b} e^{-(x-a)/b}\, dx

By making the change of variable ξ = x − X̄ we obtain

\sigma_X^2 = \frac{e^{(a - \bar{X})/b}}{b} \int_{a - \bar{X}}^{\infty} \xi^2 e^{-\xi/b}\, d\xi

The subscript indicates that σ_X^2 is the variance of a random variable X; for a random variable Y, its variance would be σ_Y^2.

(In expanding (3.2-6) we used the fact that the expected value of a sum of functions of X equals the sum of the expected values of the individual functions, as the reader can readily verify as an exercise.)


After using an integral from Appendix C and noting, from Example 3.1-2, that X̄ = E[X] = a + b, we find

\sigma_X^2 = b^2

The reader may wish to verify this result by finding the second moment E[X^2] and using (3.2-6).

The third central moment μ_3 = E[(X − X̄)^3] is a measure of the asymmetry of f_X(x) about x = X̄ = m_1. It will be called the skew of the density function. If a density is symmetric about x = X̄, it has zero skew; in fact, for this case μ_n = 0 for all odd values of n. (Why?) The normalized third central moment μ_3/σ_X^3 is known as the skewness of the density function, or, alternatively, as the coefficient of skewness.

Example 3.2-2 We continue Example 3.2-1 and compute the skew and coefficient of skewness for the exponential density. From (3.2-4) with n = 3 we have

\mu_3 = E[(X - \bar{X})^3] = E[X^3 - 3\bar{X}X^2 + 3\bar{X}^2 X - \bar{X}^3] = E[X^3] - 3\bar{X}\sigma_X^2 - \bar{X}^3

Next, we have

E[X^3] = \int_{a}^{\infty} \frac{x^3}{b}\, e^{-(x-a)/b}\, dx = a^3 + 3a^2 b + 6ab^2 + 6b^3

after using (C-48). On substituting X̄ = a + b and σ_X^2 = b^2 from the earlier example and reducing the algebra, we find

\mu_3 = 2b^3        \frac{\mu_3}{\sigma_X^3} = 2

This density has a relatively large coefficient of skewness, as can be seen intuitively from Figure 2.5-3.
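The coefficient of skewness of 2 can likewise be checked by simulation; a rough sketch:

```python
import numpy as np

a, b = 1.0, 2.0
rng = np.random.default_rng(2)
x = a + rng.exponential(scale=b, size=2_000_000)

mu = x.mean()
mu3 = np.mean((x - mu)**3)   # third central moment, ~ 2 b^3 = 16
sigma = x.std()

print(mu3, mu3 / sigma**3)   # coefficient of skewness ~ 2
```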


3.3 FUNCTIONS THAT GIVE MOMENTS

Two functions can be defined that allow moments to be calculated for a random variable X. They are the characteristic function and the moment generating function.

Characteristic Function

The characteristic function of a random variable X is defined by

\Phi_X(\omega) = E[e^{j\omega X}]        (3.3-1)

where j = \sqrt{-1}. It is a function of the real number ω, −∞ < ω < ∞. If (3.3-1) is written in terms of the density function, Φ_X(ω) is seen to be the Fourier transform (with the sign of ω reversed) of f_X(x):

\Phi_X(\omega) = \int_{-\infty}^{\infty} f_X(x)\, e^{j\omega x}\, dx        (3.3-2)

Because of this fact, if Φ_X(ω) is known, f_X(x) can be found from the inverse Fourier transform (with the sign of x reversed)

f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \Phi_X(\omega)\, e^{-j\omega x}\, d\omega        (3.3-3)

By formal differentiation of (3.3-2) n times with respect to ω and setting ω = 0 in the derivative, we may show that the nth moment of X is given by

m_n = (-j)^n \left. \frac{d^n \Phi_X(\omega)}{d\omega^n} \right|_{\omega = 0}        (3.3-4)

A major advantage of using Φ_X(ω) to find moments is that Φ_X(ω) always exists (Davenport, 1970, p. 426), so the moments can always be found if Φ_X(ω) is known, provided, of course, the derivatives of Φ_X(ω) exist.

It can be shown that the maximum magnitude of a characteristic function is unity and occurs at ω = 0; that is,

|\Phi_X(\omega)| \le \Phi_X(0) = 1        (3.3-5)


Example 3.3-1 Again we consider the random variable with the exponential density of Example 3.1-2 and find its characteristic function and first moment. By substituting the density function into (3.3-2), we get

\Phi_X(\omega) = \int_{a}^{\infty} \frac{1}{b}\, e^{-(x-a)/b}\, e^{j\omega x}\, dx = \frac{e^{a/b}}{b} \int_{a}^{\infty} e^{-x[(1/b) - j\omega]}\, dx

Evaluation of the integral follows the use of an integral from Appendix C:

\Phi_X(\omega) = \frac{e^{a/b}}{b} \cdot \frac{e^{-a[(1/b) - j\omega]}}{(1/b) - j\omega} = \frac{e^{j\omega a}}{1 - j\omega b}

The derivative of Φ_X(ω) is

\frac{d\Phi_X(\omega)}{d\omega} = \frac{ja\, e^{j\omega a}(1 - j\omega b) + jb\, e^{j\omega a}}{(1 - j\omega b)^2}

so the first moment becomes

m_1 = (-j) \left. \frac{d\Phi_X(\omega)}{d\omega} \right|_{\omega = 0} = (-j)(ja + jb) = a + b

in agreement with m_1 found in Example 3.1-2.
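Relation (3.3-4) can also be checked numerically for this characteristic function; the sketch below approximates the derivative at ω = 0 with a central difference (a and b arbitrary):

```python
import numpy as np

a, b = 1.0, 2.0

def phi(w):
    # Characteristic function of the shifted exponential: e^{j w a} / (1 - j w b)
    return np.exp(1j * w * a) / (1 - 1j * w * b)

h = 1e-6
dphi = (phi(h) - phi(-h)) / (2 * h)   # numerical d(phi)/dw at w = 0

m1 = (-1j) * dphi                      # (3.3-4) with n = 1
print(m1.real)                         # close to a + b = 3
```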


Moment Generating Function

Another statistical average closely related to the characteristic function is the moment generating function, defined by

M_X(v) = E[e^{vX}]        (3.3-6)

where v is a real number, −∞ < v < ∞. Thus, M_X(v) is given by

M_X(v) = \int_{-\infty}^{\infty} f_X(x)\, e^{vx}\, dx        (3.3-7)

The main advantage of the moment generating function derives from its ability to give the moments. Moments are related to M_X(v) by the expression

m_n = \left. \frac{d^n M_X(v)}{dv^n} \right|_{v = 0}        (3.3-8)

The main disadvantage of the moment generating function, as opposed to the characteristic function, is that it may not exist for all random variables. In fact, M_X(v) exists only if all the moments exist (Davenport and Root, 1958, p. 52).

Example 3.3-2 To illustrate the calculation and use of the moment generating function, let us reconsider the exponential density of the earlier examples. On use of (3.3-7) we have

M_X(v) = \int_{a}^{\infty} \frac{1}{b}\, e^{-(x-a)/b}\, e^{vx}\, dx = \frac{e^{a/b}}{b} \int_{a}^{\infty} e^{-x[(1/b) - v]}\, dx = \frac{e^{av}}{1 - bv}


In evaluating M_X(v) we have used an integral from Appendix C. By differentiation we have the first moment

m_1 = \left. \frac{dM_X(v)}{dv} \right|_{v = 0} = \left. \frac{a\, e^{av}(1 - bv) + b\, e^{av}}{(1 - bv)^2} \right|_{v = 0} = a + b

which, of course, is the same as previously found.
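A corresponding numerical check on (3.3-8), differentiating M_X(v) = e^{av}/(1 − bv) of Example 3.3-2 at v = 0:

```python
import numpy as np

a, b = 1.0, 2.0

def M(v):
    # Moment generating function of the shifted exponential (Example 3.3-2)
    return np.exp(a * v) / (1 - b * v)

h = 1e-6
m1 = (M(h) - M(-h)) / (2 * h)   # (3.3-8) with n = 1
print(m1)                        # close to a + b = 3
```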

    3.4 TRANSFORMATIONS OF A RANDOM VARIABLE

    Quite often one may wish to transform (change) one random variable X into a

    new random variable Y by means of a transformation

Y = T(X)        (3.4-1)


We shall consider three cases: T monotonic and X continuous; T nonmonotonic and X continuous; and X discrete with T continuous. Note that the transformation in all three cases is assumed continuous. The concepts introduced in these three situations are broad enough that the reader should have no difficulty in extending them to other cases (see Problem 3-32).

Monotonic Transformations of a Continuous Random Variable

A transformation T is called monotonically increasing if T(x_1) < T(x_2) for any x_1 < x_2, and monotonically decreasing if T(x_1) > T(x_2) for any x_1 < x_2. Consider first the increasing transformation, and assume that T is continuous, so that a value y_0 corresponds to the unique value x_0 = T^{-1}(y_0),


where T^{-1} represents the inverse of the transformation T. Now the probability of the event {Y ≤ y_0} must equal the probability of the event {X ≤ x_0} because of the one-to-one correspondence between X and Y. Thus,

F_Y(y_0) = P\{Y \le y_0\} = P\{X \le x_0\} = F_X(x_0)

Figure 3.4-2 Monotonic transformations: (a) increasing, and (b) decreasing. [Adapted from Peebles (1976) with permission of publishers Addison-Wesley, Advanced Book Program.]


Leibniz's rule, after the great German mathematician Gottfried Wilhelm von Leibniz (1646-1716), states that, if H(x,u) is continuous in x and u and

G(u) = \int_{\alpha(u)}^{\beta(u)} H(x, u)\, dx

then the derivative of the integral with respect to the parameter u is

\frac{dG(u)}{du} = H(\beta(u), u)\, \frac{d\beta(u)}{du} - H(\alpha(u), u)\, \frac{d\alpha(u)}{du} + \int_{\alpha(u)}^{\beta(u)} \frac{\partial H(x, u)}{\partial u}\, dx

Example 3.4-1 If we take T to be the linear transformation Y = T(X) = aX + b, where a and b are any real constants, then X = T^{-1}(Y) = (Y − b)/a and dx/dy = 1/a. From (3.4-9)

f_Y(y) = f_X\!\left( \frac{y - b}{a} \right) \left| \frac{1}{a} \right|

If X is assumed to be gaussian with the density function given by (2.4-1), we get

f_Y(y) = \frac{1}{\sqrt{2\pi \sigma_X^2}}\, e^{-[(y-b)/a - \bar{X}]^2 / 2\sigma_X^2}\, \frac{1}{|a|} = \frac{1}{\sqrt{2\pi a^2 \sigma_X^2}}\, e^{-[y - (a\bar{X} + b)]^2 / 2a^2 \sigma_X^2}


which is the density function of another gaussian random variable having

\bar{Y} = a\bar{X} + b        \sigma_Y^2 = a^2 \sigma_X^2

Thus, a linear transformation of a gaussian random variable produces another gaussian random variable. A linear amplifier having a random voltage X as its input is one example of a linear transformation.
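A simulation sketch of this result, with arbitrary parameter values:

```python
import numpy as np

a, b = 2.0, -1.0            # transformation Y = aX + b
mean_x, sigma_x = 0.5, 1.5  # gaussian X

rng = np.random.default_rng(3)
x = rng.normal(mean_x, sigma_x, size=1_000_000)
y = a * x + b

# Expect mean a*mean_x + b = 0 and variance a^2 sigma_x^2 = 9
print(y.mean(), y.var())
```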

Nonmonotonic Transformations of a Continuous Random Variable

A transformation may not be monotonic in the more general case. Figure 3.4-3 illustrates one such transformation. There may now be more than one interval of values of X that corresponds to the event {Y ≤ y_0}. For the value of y_0 shown in the figure, the event {Y ≤ y_0} corresponds to the event {X ≤ x_1 and x_2 ≤ X ≤ x_3}. Thus, the probability of the event {Y ≤ y_0} now equals the probability

Figure 3.4-3 A nonmonotonic transformation. [Adapted from Peebles (1976) with permission of publishers Addison-Wesley, Advanced Book Program.]


of the event {x values yielding Y ≤ y_0}, which we shall write as {x | Y ≤ y_0}. In other words,

F_Y(y_0) = P\{Y \le y_0\} = P\{x \mid Y \le y_0\} = \int_{\{x \mid Y \le y_0\}} f_X(x)\, dx        (3.4-11)

Formally, one may differentiate to obtain the density function of Y:

f_Y(y_0) = \frac{d}{dy_0} \int_{\{x \mid Y \le y_0\}} f_X(x)\, dx        (3.4-12)

Although we shall not give a proof, the density function is also given by (Papoulis, 1965, p. 126)

f_Y(y) = \sum_{n} \frac{f_X(x_n)}{|dT(x)/dx|_{x = x_n}}        (3.4-13)

where the sum is taken so as to include all the roots x_n, n = 1, 2, ..., which are the real solutions of the equation

y = T(x)        (3.4-14)

    We illustrate the above concepts by an example.

Example 3.4-2 We find f_Y(y) for the square-law transformation

Y = T(X) = cX^2

shown in Figure 3.4-4, where c is a real constant, c > 0. We shall use both the procedure leading to (3.4-12) and that leading to (3.4-13).

In the former case, the event {Y ≤ y} occurs when {x | Y ≤ y} = \{-\sqrt{y/c} \le x \le \sqrt{y/c}\}, so (3.4-12) becomes

f_Y(y) = \frac{d}{dy} \int_{-\sqrt{y/c}}^{\sqrt{y/c}} f_X(x)\, dx        y ≥ 0

Upon use of Leibniz's rule we obtain

f_Y(y) = f_X(\sqrt{y/c})\, \frac{d\sqrt{y/c}}{dy} - f_X(-\sqrt{y/c})\, \frac{d(-\sqrt{y/c})}{dy} = \frac{f_X(\sqrt{y/c}) + f_X(-\sqrt{y/c})}{2\sqrt{cy}}        y ≥ 0


If y = T(x) has no real roots for a given value of y, then f_Y(y) = 0.

Figure 3.4-4 The square-law transformation. [Adapted from Peebles (1976) with permission of publishers Addison-Wesley, Advanced Book Program.]

In the latter case where we use (3.4-13), we have X = \pm\sqrt{Y/c}, Y ≥ 0, so the two real roots are x_1 = \sqrt{y/c} and x_2 = -\sqrt{y/c}. Furthermore, dT(x)/dx = 2cx, so

\left| \frac{dT(x)}{dx} \right|_{x = x_n} = 2c|x_n| = 2c\sqrt{y/c} = 2\sqrt{cy}

From (3.4-13) we again have

f_Y(y) = \frac{f_X(\sqrt{y/c}) + f_X(-\sqrt{y/c})}{2\sqrt{cy}}        y ≥ 0
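The square-law density can be sanity-checked against a histogram; a sketch assuming X gaussian with zero mean and unit variance and c = 1:

```python
import numpy as np

c = 1.0
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, size=1_000_000)
y = c * x**2

def f_x(x):
    # Standard gaussian density
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# f_Y(y) = [f_X(sqrt(y/c)) + f_X(-sqrt(y/c))] / (2 sqrt(c y)),  y > 0
yy = np.linspace(0.1, 4.0, 5)
fy = (f_x(np.sqrt(yy / c)) + f_x(-np.sqrt(yy / c))) / (2 * np.sqrt(c * yy))

hist, edges = np.histogram(y, bins=400, range=(0, 8), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Compare the formula with the empirical density at a few points
print(fy)
print(np.interp(yy, centers, hist))
```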


Transformation of a Discrete Random Variable

If X is a discrete random variable while Y = T(X) is a continuous transformation, the problem is especially simple. Here

f_X(x) = \sum_{n} P(x_n)\, \delta(x - x_n)        (3.4-15)

F_X(x) = \sum_{n} P(x_n)\, u(x - x_n)        (3.4-16)

where the sum is taken to include all the possible values x_n, n = 1, 2, ..., of X.

If the transformation is monotonic, there is a one-to-one correspondence between X and Y so that a set {y_n} corresponds to the set {x_n} through the equation y_n = T(x_n). The probability P(y_n) equals P(x_n). Thus,

f_Y(y) = \sum_{n} P(y_n)\, \delta(y - y_n)        (3.4-17)

F_Y(y) = \sum_{n} P(y_n)\, u(y - y_n)        (3.4-18)


CHAPTER FOUR

MULTIPLE RANDOM VARIABLES

4.0 INTRODUCTION

In Chapters 2 and 3, various aspects of the theory of a single random variable were studied. The random variable was found to be a powerful concept. It enabled many realistic problems to be described in a probabilistic way such that practical measures could be applied to the problem even though it was random. We have seen that shell impact position along the line of fire from a cannon to a target can be described by a random variable. From knowledge of the probability distribution or density function of impact position, we can solve for such practical measures as the mean value of impact position, its variance, and skew.

These measures are not, however, a complete enough description of the problem in most cases. Naturally, we may also be interested in how much the impact positions deviate from the line of fire in, say, the perpendicular (cross-fire) direction. In other words, we prefer to describe impact position as a point in a plane as opposed to being a point along a line. To handle such situations it is necessary to extend the theory to include several random variables. We accomplish these extensions in this and the next chapter. Fortunately, many situations of interest in engineering can be handled by the theory of two random variables. Because of this fact, we emphasize the two-variable case, although the more general theory is also stated in most discussions to follow.


Figure 4.1-2 Comparisons of events in S with those in S_J.

Figure 4.1-2 illustrates the correspondence between events in the two spaces. Event A corresponds to all points in S_J for which the X coordinate values are not greater than x. Similarly, event B corresponds to the Y coordinate values in S_J not exceeding y. Of special interest is to observe that the event A ∩ B defined on S corresponds to the joint event {X ≤ x and Y ≤ y} defined on S_J, which we write {X ≤ x, Y ≤ y}. This joint event is shown crosshatched in Figure 4.1-2.

In the more general case where N random variables X_1, X_2, ..., X_N are defined on a sample space S, we consider them to be components of an N-dimensional random vector or N-dimensional random variable. The joint sample space S_J is now N-dimensional.

4.2 JOINT DISTRIBUTION AND ITS PROPERTIES

The probabilities of the two events A = {X ≤ x} and B = {Y ≤ y} have already been defined as functions of x and y, respectively, called probability distribution functions:

F_X(x) = P\{X \le x\}        (4.2-1)

F_Y(y) = P\{Y \le y\}        (4.2-2)


We must introduce a new concept to include the probability of the joint event {X ≤ x, Y ≤ y}.

Joint Distribution Function

We define the probability of the joint event {X ≤ x, Y ≤ y}, which is a function of the numbers x and y, by a joint probability distribution function and denote it by the symbol F_{X,Y}(x, y). Hence,

F_{X,Y}(x, y) = P\{X \le x, Y \le y\}        (4.2-3)

It should be clear that P{X ≤ x, Y ≤ y} = P(A ∩ B), where the joint event A ∩ B is defined on S.

To illustrate the joint distribution, we take an example where both random variables X and Y are discrete.

Example 4.2-1 Assume that the joint sample space S_J has only three possible elements: (1,1), (2,1), and (3,3). The probabilities of these elements are assumed to be P(1,1) = 0.2, P(2,1) = 0.3, and P(3,3) = 0.5. We find F_{X,Y}(x, y).

In constructing the joint distribution function, we observe that the event {X ≤ x, Y ≤ y} has no elements for any x < 1 or y < 1. The function then rises in steps of 0.2, 0.3, and 0.5 as the points (1,1), (2,1), and (3,3) are successively included in the event.
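A small Python sketch that evaluates F_{X,Y}(x, y) for this example directly from (4.2-4), i.e., by summing the point-mass probabilities satisfying x_n ≤ x and y_m ≤ y:

```python
points = [(1, 1, 0.2), (2, 1, 0.3), (3, 3, 0.5)]  # (x_n, y_m, P(x_n, y_m))

def F_xy(x, y):
    # F_{X,Y}(x,y) = sum of P(x_n, y_m) over points with x_n <= x and y_m <= y
    return sum(p for xn, ym, p in points if xn <= x and ym <= y)

print(F_xy(0.5, 0.5))  # 0.0
print(F_xy(2.0, 1.0))  # 0.5
print(F_xy(3.0, 3.0))  # 1.0
```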


For a single random variable X, we found in Chapter 2 that F_X(x) could be expressed in general as the sum of a function of stairstep form (due to the discrete portion of a mixed random variable X) and a function that was continuous (due to the continuous portion of X). Such a simple decomposition of the joint distribution when N > 1 is not generally true [Cramér, 1946, Section 8.4]. However,

Figure 4.2-1 A joint distribution function (a), and its corresponding joint density function (b), that apply to Example 4.2-2.

it is true that joint density functions in practice often correspond to all random variables being either discrete or continuous. Therefore, we shall limit our consideration in this book almost entirely to these two cases when N > 1.


Example 4.2-2 We find explicit expressions for F_{X,Y}(x, y), and the marginal distributions F_X(x) and F_Y(y), for the joint sample space of Example 4.2-1.

The joint distribution derives from (4.2-4) if we recognize that only three probabilities are nonzero:

F_{X,Y}(x, y) = P(1,1)\, u(x-1)\, u(y-1) + P(2,1)\, u(x-2)\, u(y-1) + P(3,3)\, u(x-3)\, u(y-3)

where P(1,1) = 0.2, P(2,1) = 0.3, and P(3,3) = 0.5. If we set y = ∞:

F_X(x) = F_{X,Y}(x, \infty) = P(1,1)\, u(x-1) + P(2,1)\, u(x-2) + P(3,3)\, u(x-3) = 0.2u(x-1) + 0.3u(x-2) + 0.5u(x-3)

If we set x = ∞:

F_Y(y) = F_{X,Y}(\infty, y) = 0.2u(y-1) + 0.3u(y-1) + 0.5u(y-3) = 0.5u(y-1) + 0.5u(y-3)

Plots of these marginal distributions are shown in Figure 4.2-2.


Figure 4.2-2 Marginal distributions applicable to Figure 4.2-1 and Example 4.2-2: (a) F_X(x) and (b) F_Y(y).

From an N-dimensional joint distribution function we may obtain a k-dimensional marginal distribution function, for any selected group of k of the N random variables, by setting the values of the other N − k random variables to infinity. Here k can be any integer 1, 2, 3, ..., N − 1.

4.3 JOINT DENSITY AND ITS PROPERTIES

In this section the concept of a probability density function is extended to include multiple random variables.

Joint Density Function

For two random variables X and Y, the joint probability density function, denoted f_{X,Y}(x, y), is defined by the second derivative of the joint distribution function wherever it exists:

f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}(x, y)}{\partial x\, \partial y}        (4.3-1)

We shall refer often to f_{X,Y}(x, y) as the joint density function.

If X and Y are discrete random variables, F_{X,Y}(x, y) will possess step discontinuities (see Example 4.2-1 and Figure 4.2-1). Derivatives at these discontinuities


are normally undefined. However, by admitting impulse functions (see Appendix A), we are able to define f_{X,Y}(x, y) at these points. Therefore, the joint density function may be found for any two discrete random variables by substitution of (4.2-4) into (4.3-1):

f_{X,Y}(x, y) = \sum_{n=1}^{N} \sum_{m=1}^{M} P(x_n, y_m)\, \delta(x - x_n)\, \delta(y - y_m)        (4.3-2)

An example of the joint density function of two discrete random variables is shown in Figure 4.2-1b.

When N random variables X_1, X_2, ..., X_N are involved, the joint density function becomes the N-fold partial derivative of the N-dimensional distribution function:

f_{X_1, \ldots, X_N}(x_1, \ldots, x_N) = \frac{\partial^N F_{X_1, \ldots, X_N}(x_1, \ldots, x_N)}{\partial x_1\, \partial x_2 \cdots \partial x_N}        (4.3-3)

By direct integration this result is equivalent to

F_{X_1, \ldots, X_N}(x_1, \ldots, x_N) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_N} f_{X_1, \ldots, X_N}(\xi_1, \ldots, \xi_N)\, d\xi_N \cdots d\xi_1        (4.3-4)

Properties of the Joint Density

Several properties of a joint density function may be listed that derive from its definition (4.3-1) and the properties (4.2-6) of the joint distribution function:

(1)  f_{X,Y}(x, y) \ge 0        (4.3-5a)

(2)  \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1        (4.3-5b)

(3)  F_{X,Y}(x, y) = \int_{-\infty}^{y} \int_{-\infty}^{x} f_{X,Y}(\xi_1, \xi_2)\, d\xi_1\, d\xi_2        (4.3-5c)

(4)  F_X(x) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{X,Y}(\xi_1, \xi_2)\, d\xi_2\, d\xi_1        (4.3-5d)

     F_Y(y) = \int_{-\infty}^{y} \int_{-\infty}^{\infty} f_{X,Y}(\xi_1, \xi_2)\, d\xi_1\, d\xi_2        (4.3-5e)

(5)  P\{x_1 < X \le x_2,\ y_1 < Y \le y_2\} = \int_{y_1}^{y_2} \int_{x_1}^{x_2} f_{X,Y}(x, y)\, dx\, dy        (4.3-5f)



For N random variables X_1, X_2, ..., X_N, the k-dimensional marginal density function is defined as the k-fold partial derivative of the k-dimensional marginal distribution function. It can also be found from the joint density function by integrating out all variables except the k variables of interest X_1, X_2, ..., X_k:

f_{X_1, \ldots, X_k}(x_1, \ldots, x_k) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1, \ldots, X_N}(x_1, \ldots, x_N)\, dx_{k+1} \cdots dx_N        (4.3-8)

4.4 CONDITIONAL DISTRIBUTION AND DENSITY

In Section 2.6, the conditional distribution function of a random variable X, given some event B, was defined as

F_X(x \mid B) = P\{X \le x \mid B\} = \frac{P\{X \le x \cap B\}}{P(B)}        (4.4-1)

for any event B with nonzero probability. The corresponding conditional density function was defined through the derivative

f_X(x \mid B) = \frac{dF_X(x \mid B)}{dx}        (4.4-2)

In this section these two functions are extended to include a second random variable through suitable definitions of event B.

Conditional Distribution and Density: Point Conditioning

Often in practical problems we are interested in the distribution function of one random variable X conditioned by the fact that a second random variable Y has some specific value y. This is called point conditioning, and we can handle such problems by defining event B by

B = \{ y - \Delta y < Y \le y + \Delta y \}        (4.4-3)

where Δy is a small quantity that we eventually let approach 0. For this event, (4.4-1) can be written

F_X(x \mid y - \Delta y < Y \le y + \Delta y) = \frac{\int_{-\infty}^{x} \int_{y - \Delta y}^{y + \Delta y} f_{X,Y}(\xi_1, \xi_2)\, d\xi_2\, d\xi_1}{\int_{y - \Delta y}^{y + \Delta y} f_Y(\xi)\, d\xi}        (4.4-4)

where we have used (4.3-5f) and (2.3-6d).

Consider two cases of (4.4-4). In the first case, assume X and Y are both discrete random variables with values x_i, i = 1, 2, ..., N, and y_j, j = 1, 2, ..., M, respectively, while the probabilities of these values are denoted P(x_i) and P(y_j),


respectively. The probability of the joint occurrence of x_i and y_j is denoted P(x_i, y_j). Thus,

f_Y(y) = \sum_{j=1}^{M} P(y_j)\, \delta(y - y_j)        (4.4-5)

f_{X,Y}(x, y) = \sum_{i=1}^{N} \sum_{j=1}^{M} P(x_i, y_j)\, \delta(x - x_i)\, \delta(y - y_j)        (4.4-6)

Now suppose that the specific value of y of interest is y_j. With substitution of (4.4-5) and (4.4-6) into (4.4-4) and allowing Δy → 0, we obtain

F_X(x \mid Y = y_j) = \sum_{i=1}^{N} \frac{P(x_i, y_j)}{P(y_j)}\, u(x - x_i)        (4.4-7)

After differentiation we have

f_X(x \mid Y = y_j) = \sum_{i=1}^{N} \frac{P(x_i, y_j)}{P(y_j)}\, \delta(x - x_i)        (4.4-8)


Example 4.4-1 To illustrate the use of (4.4-8), assume a joint density function as given in Figure 4.4-1a. Here P(x_1, y_1) = 2/15, P(x_2, y_1) = 3/15, etc. Since P(y_3) = (4/15) + (5/15) = 9/15, use of (4.4-8) will give f_X(x | Y = y_3) as shown in Figure 4.4-1b.

The second case of (4.4-4) that is of interest corresponds to X and Y both continuous random variables. As Δy → 0 the denominator in (4.4-4) becomes 0. However, we can still show that the conditional density f_X(x | Y = y) may exist. If Δy is very small, (4.4-4) can be written as

F_X(x \mid y - \Delta y < Y \le y + \Delta y) \approx \frac{\int_{-\infty}^{x} f_{X,Y}(\xi, y)\, 2\Delta y\, d\xi}{f_Y(y)\, 2\Delta y}        (4.4-9)

and, in the limit as Δy → 0,

F_X(x \mid Y = y) = \frac{\int_{-\infty}^{x} f_{X,Y}(\xi, y)\, d\xi}{f_Y(y)}        (4.4-10)

for every y such that f_Y(y) ≠ 0. After differentiation of both sides of (4.4-10) with respect to x:

f_X(x \mid Y = y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}        (4.4-11)


Figure 4.4-1 A joint density function (a) and a conditional density function (b) applicable to Example 4.4-1.

When there is no confusion as to meaning, we shall often write (4.4-11) as

f_X(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}        (4.4-12)

It can also be shown that

f_Y(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)}        (4.4-13)

Example 4.4-2 We find f_Y(y | x) for the density functions defined in Example 4.3-1. Since

f_{X,Y}(x, y) = u(x)\, u(y)\, x\, e^{-x(y+1)}

and

f_X(x) = u(x)\, e^{-x}

are nonzero only for 0 < x and 0 < y, the conditional density is

f_Y(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = u(y)\, x\, e^{-xy}

for any x > 0; it is zero for x < 0, where f_X(x) = 0.
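A numerical sketch of (4.4-13) for this example: at an arbitrary conditioning value x0, the computed conditional density should integrate to one and equal x0 e^{-x0 y}:

```python
import numpy as np

def f_xy(x, y):
    # Joint density of Example 4.4-2: u(x) u(y) x e^{-x(y+1)}
    return np.where((x > 0) & (y > 0), x * np.exp(-x * (y + 1)), 0.0)

def f_x(x):
    # Marginal density of X: u(x) e^{-x}
    return np.exp(-x) if x > 0 else 0.0

x0 = 1.7                        # arbitrary conditioning value
y = np.linspace(0.0, 40.0, 200_001)
dy = y[1] - y[0]
cond = f_xy(x0, y) / f_x(x0)    # (4.4-13): f_Y(y | x0)

print(np.sum(cond) * dy)                                  # ~ 1, a valid density
print(np.allclose(cond, x0 * np.exp(-x0 * y) * (y > 0)))  # matches x0 e^{-x0 y}
```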


Conditional Distribution and Density: Interval Conditioning

For interval conditioning, event B is defined as B = {y_a < Y ≤ y_b}, with y_a < y_b. Using B in (4.4-1), in a manner similar to the point-conditioning case, leads to

F_X(x \mid y_a < Y \le y_b) = \frac{\int_{-\infty}^{x} \int_{y_a}^{y_b} f_{X,Y}(\xi_1, \xi_2)\, d\xi_2\, d\xi_1}{\int_{y_a}^{y_b} f_Y(\xi)\, d\xi}        (4.4-15)

and, after differentiation with respect to x,

f_X(x \mid y_a < Y \le y_b) = \frac{\int_{y_a}^{y_b} f_{X,Y}(x, \xi)\, d\xi}{\int_{y_a}^{y_b} f_Y(\xi)\, d\xi}        (4.4-16)

These last two expressions hold for X and Y either continuous or discrete; in the discrete case, the joint density is given by (4.3-2). The resulting distribution and density will be defined, however, only for y_a and y_b such that the denominators of (4.4-15) and (4.4-16) are nonzero. This requirement is satisfied so long as the interval y_a < y ≤ y_b has nonzero probability.

Figure 4.4-2 Conditional probability density functions applicable to Example 4.4-3.




4.5 STATISTICAL INDEPENDENCE OF RANDOM VARIABLES

The form of the conditional distribution function for independent events is found by use of (4.4-1) with B = {Y ≤ y}:

F_X(x \mid Y \le y) = \frac{P\{X \le x, Y \le y\}}{P\{Y \le y\}} = \frac{F_{X,Y}(x, y)}{F_Y(y)}        (4.5-5)

By substituting (4.5-3) into (4.5-5), we have

F_X(x \mid Y \le y) = F_X(x)        (4.5-6)

In other words, the conditional distribution ceases to be conditional and simply equals the marginal distribution for independent random variables. It can also be shown that

F_Y(y \mid X \le x) = F_Y(y)        (4.5-7)

Conditional density function forms, for independent X and Y, are found by differentiation of (4.5-6) and (4.5-7):

f_X(x \mid Y \le y) = f_X(x)        (4.5-8)

f_Y(y \mid X \le x) = f_Y(y)        (4.5-9)

Example 4.5-1 For the densities of Example 4.3-1,

f_{X,Y}(x, y) = u(x)\, u(y)\, x\, e^{-x(y+1)}

while

f_X(x)\, f_Y(y) = u(x)\, u(y)\, e^{-x}\, \frac{1}{(1+y)^2} \ne f_{X,Y}(x, y)

Therefore the random variables X and Y are not independent.

In the more general study of the statistical independence of N random variables X_1, X_2, ..., X_N, the variables are said to be statistically independent if (1.5-6) is satisfied.

It can be shown that if X_1, X_2, ..., X_N are statistically independent, then any group of these random variables is independent of any other group. Furthermore, a function of any group is independent of any function of any other group of the random variables. For example, with N = 4 random variables: X_4 is independent of X_3 + X_2 + X_1; X_3 is independent of X_2 + X_1; etc. (see Papoulis, 1965, p. 238).
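A quick numerical confirmation that the joint density of Example 4.5-1 does not factor into the product of its marginals (the test point is arbitrary):

```python
import numpy as np

def f_xy(x, y):  # joint density of Example 4.3-1 / 4.5-1
    return x * np.exp(-x * (y + 1)) if x > 0 and y > 0 else 0.0

def f_x(x):      # marginal of X: e^{-x}, x > 0
    return np.exp(-x) if x > 0 else 0.0

def f_y(y):      # marginal of Y: 1/(1+y)^2, y > 0
    return 1.0 / (1.0 + y) ** 2 if y > 0 else 0.0

x0, y0 = 1.0, 2.0
print(f_xy(x0, y0), f_x(x0) * f_y(y0))  # unequal, so X and Y are dependent
```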

4.6 DISTRIBUTION AND DENSITY OF A SUM OF RANDOM VARIABLES

The problem of finding the distribution and density functions for a sum of statistically independent random variables is considered in this section.



Sum of Two Random Variables

Let W be a random variable equal to the sum of two independent random variables X and Y:

W = X + Y        (4.6-1)

This is a very practical problem because X might represent a random signal voltage and Y could represent random noise at some instant in time. The sum W would then represent a signal-plus-noise voltage available to some receiver.

The probability distribution function we seek is defined by

F_W(w) = P\{W \le w\} = P\{X + Y \le w\}        (4.6-2)

Figure 4.6-1 illustrates the region in the xy plane where x + y ≤ w. Now from (4.3-5f), the probability corresponding to an elemental area dx dy in the xy plane located at the point (x, y) is f_{X,Y}(x, y) dx dy. If we sum all such probabilities over the region where x + y ≤ w we will obtain F_W(w). Thus,

F_W(w) = \int_{-\infty}^{\infty} \int_{-\infty}^{w - y} f_{X,Y}(x, y)\, dx\, dy        (4.6-3)

and, after using (4.5-4):

F_W(w) = \int_{-\infty}^{\infty} f_Y(y) \int_{-\infty}^{w - y} f_X(x)\, dx\, dy        (4.6-4)

Figure 4.6-1 Region in the xy plane where x + y ≤ w.


By differentiating (4.6-4), using Leibniz's rule, we get the desired density function

f_W(w) = \int_{-\infty}^{\infty} f_Y(y)\, f_X(w - y)\, dy        (4.6-5)

This expression is recognized as a convolution integral. Consequently, we have shown that the density function of the sum of two statistically independent random variables is the convolution of their individual density functions.

Example 4.6-1 We use (4.6-5) to find the density of W = X + Y, where the densities of X and Y are assumed to be

f_X(x) = \frac{1}{a}\, [u(x) - u(x - a)]

f_Y(y) = \frac{1}{b}\, [u(y) - u(y - b)]

with 0 < a < b. From (4.6-5),

f_W(w) = \frac{1}{ab} \int_{-\infty}^{\infty} [u(y) - u(y - b)]\, [u(w - y) - u(w - y - a)]\, dy = \frac{1}{ab} \left\{ \int_{0}^{b} u(w - y)\, dy - \int_{0}^{b} u(w - y - a)\, dy \right\}


Figure 4.6-2 Two density functions (a) and (b) and their convolution (c).

All these integrands are unity; the values of the integrals are determined by the unit-step functions through their control over the limits of integration. After straightforward evaluation we get

f_W(w) = \begin{cases} 0 & w < 0 \\ w/ab & 0 < w \le a \\ 1/b & a < w \le b \\ (a + b - w)/ab & b < w \le a + b \\ 0 & w > a + b \end{cases}

which is sketched in Figure 4.6-2c.
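A simulation sketch of this convolution result (a = 1, b = 2 chosen arbitrarily):

```python
import numpy as np

a, b = 1.0, 2.0
rng = np.random.default_rng(5)

# W = X + Y with X uniform on (0, a) and Y uniform on (0, b), independent
w = rng.uniform(0, a, 1_000_000) + rng.uniform(0, b, 1_000_000)

# Piecewise density from Example 4.6-1: rises on (0, a), flat at 1/b on (a, b),
# falls linearly to zero on (b, a + b)
for w0 in (0.5, 1.5, 2.5):
    if w0 < a:
        f = w0 / (a * b)
    elif w0 < b:
        f = 1 / b
    else:
        f = (a + b - w0) / (a * b)
    empirical = np.mean((w > w0 - 0.01) & (w < w0 + 0.01)) / 0.02
    print(w0, f, empirical)
```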

Sum of Several Random Variables

When the sum Y of N independent random variables X_1, X_2, ..., X_N is to be considered, we may extend the above analysis for two random variables. Let Y_1 = X_1 + X_2. Then we know from the preceding work that f_{Y_1}(y_1) =


f_{X_2}(x_2) * f_{X_1}(x_1), where the asterisk denotes convolution. Next, we know that X_3 will be independent of Y_1 = X_1 + X_2 because X_3 is independent of both X_1 and X_2. Thus, by applying (4.6-5) to find the density function of Y_2 = X_3 + Y_1, we get

f_{Y_2}(y_2) = f_{X_3}(x_3) * f_{Y_1}(y_1) = f_{X_3}(x_3) * f_{X_2}(x_2) * f_{X_1}(x_1)        (4.6-6)

By continuing the process we find that the density function of Y = X_1 + X_2 + ... + X_N is the (N−1)-fold convolution of the N individual density functions:

f_Y(y) = f_{X_N}(x_N) * f_{X_{N-1}}(x_{N-1}) * \cdots * f_{X_1}(x_1)        (4.6-7)

The distribution function of Y is found from the integral of f_Y(y) using (2.3-6c).

4.7 CENTRAL LIMIT THEOREM

Broadly defined, the central limit theorem says that the probability distribution function of the sum of a large number of random variables approaches a gaussian distribution. Although the theorem is known to apply to some cases of statistically dependent random variables, most applications, and the largest body of knowledge, are directed toward statistically independent random variables. Thus, in all succeeding discussions we assume statistically independent random variables.

Unequal Distributions

Let X̄_i and σ_{X_i}^2 be the means and variances, respectively, of N random variables X_i, i = 1, 2, ..., N, which may have arbitrary probability densities. The central limit theorem states that the sum Y_N = X_1 + X_2 + ... + X_N, which has mean Ȳ_N = X̄_1 + X̄_2 + ... + X̄_N and variance σ_{Y_N}^2 = σ_{X_1}^2 + σ_{X_2}^2 + ... + σ_{X_N}^2, has a probability distribution that asymptotically approaches gaussian as N → ∞. Necessary conditions for the theorem's validity are difficult to state, but sufficient conditions are known to be (Cramér, 1946; Thomas, 1969)

\sigma_{X_i}^2 > B_1 > 0        i = 1, 2, ..., N        (4.7-1a)

E\left[ |X_i - \bar{X}_i|^3 \right] < B_2        i = 1, 2, ..., N        (4.7-1b)

where B_1 and B_2 are positive numbers. These conditions guarantee that no one random variable in the sum dominates.

The reader should observe that the central limit theorem guarantees only that the distribution of the sum of random variables becomes gaussian. It does not follow that the probability density is always gaussian. For continuous random variables the density does become gaussian, but for discrete random variables the density remains a train of impulses, although their weights take on a gaussian envelope.
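A simulation sketch of the theorem for unequal distributions, mixing uniform and exponential components with varying parameters (all choices arbitrary):

```python
import numpy as np

N = 50
rng = np.random.default_rng(6)

# Sum of N independent, non-identically distributed variables:
# even-indexed terms uniform, odd-indexed terms exponential
parts = [rng.uniform(0, 1 + i / N, 100_000) if i % 2 == 0
         else rng.exponential(1 + i / N, 100_000) for i in range(N)]
y = np.sum(parts, axis=0)

z = (y - y.mean()) / y.std()   # normalize the sum

# Compare a few empirical quantiles with standard gaussian values
print(np.quantile(z, [0.16, 0.5, 0.84]))  # roughly [-1, 0, 1]
```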


from Problem 3-28. If this is proved, the density of W_N must be gaussian from (3.3-3) and the fact that Fourier transforms are unique. The characteristic function of W_N = (1/(σ_x √N)) Σ_{i=1}^{N} (X_i − X̄) is

\Phi_{W_N}(\omega) = E\left[ e^{j\omega W_N} \right] = E\left[ \exp\left( \frac{j\omega}{\sigma_x \sqrt{N}} \sum_{i=1}^{N} (X_i - \bar{X}) \right) \right] = \left\{ E\left[ \exp\left( \frac{j\omega (X - \bar{X})}{\sigma_x \sqrt{N}} \right) \right] \right\}^N        (4.7-6)

The last step in (4.7-6) follows from the independence and equal distribution of the X_i. Next, the exponential in (4.7-6) is expanded in a Taylor polynomial with a remainder term R_N/N:

E\left[ \exp\left( \frac{j\omega (X - \bar{X})}{\sigma_x \sqrt{N}} \right) \right] = E\left[ 1 + \frac{j\omega (X - \bar{X})}{\sigma_x \sqrt{N}} - \frac{\omega^2 (X - \bar{X})^2}{2 \sigma_x^2 N} \right] + \frac{E[R_N]}{N} = 1 - \frac{\omega^2}{2N} + \frac{E[R_N]}{N}        (4.7-7)

where E[R_N] approaches zero as N → ∞ (Davenport, 1970, p. 442). On substitution of (4.7-7) into (4.7-6) and forming the natural logarithm, we have

\ln[\Phi_{W_N}(\omega)] = N \ln\left( 1 - \frac{\omega^2}{2N} + \frac{E[R_N]}{N} \right)        (4.7-8)

Since

\ln(1 + z) = z - \frac{z^2}{2} + \frac{z^3}{3} - \cdots        |z| < 1        (4.7-9)

we identify z with (−ω²/2N) + E[R_N]/N and write (4.7-8) as

\ln[\Phi_{W_N}(\omega)] = N \left\{ -\frac{\omega^2}{2N} + \frac{E[R_N]}{N} \right\} + \cdots = -\frac{\omega^2}{2} + E[R_N] + \cdots        (4.7-10)

so

\lim_{N \to \infty} \ln[\Phi_{W_N}(\omega)] = -\frac{\omega^2}{2}        (4.7-11)

Finally, we have

\lim_{N \to \infty} \Phi_{W_N}(\omega) = e^{-\omega^2 / 2}        (4.7-12)

which was to be shown.

We illustrate the use of the central limit theorem through an example.
