Practice probability theory exam


DESCRIPTION

Covers one year of 4th-year undergraduate-level probability theory coursework.

TRANSCRIPT

  • 1. There are n balls labeled 1, 2, ..., n and n bags labeled 1, 2, ..., n.

    (a) Randomly put the n balls into the n bags (allowing more than one ball in one bag). What is the probability that at least one ball is in the bag with the same label?

    (b) Randomly put the n balls into the n bags such that there is exactly one ball in each bag. What is the probability that no ball is in the bag with the same label?

    2. The random vector $(X, Y, Z)^T$ follows a multivariate Normal distribution with mean vector $\mathbf{0} = (0, 0, 0)^T$ and covariance matrix
    $$\Sigma = \begin{pmatrix} 1 & \rho & 0 \\ \rho & 1 & \rho \\ 0 & \rho & 1 \end{pmatrix}.$$
    In particular, X and Z are independent.

    (a) Define $U = Y - Z$ and $W = Y + Z$. What are the marginal distributions of U and W, respectively?

    (b) Compute $\mathrm{Cov}(U, W)$. Are U and W independent? Explain your answer.

    (c) Obtain the conditional distribution of X, given $W = Y + Z$.

    3. Let $X_1, X_2, \ldots$ be IID variables from the uniform distribution on (1, 2), and let $H_n$ denote the harmonic average of the first n variables,
    $$H_n = \frac{n}{\sum_{i=1}^{n} 1/X_i}.$$

    (a) Show that $H_n \xrightarrow{P} c$ as $n \to \infty$, and identify the constant c.

    (b) Show that $\sqrt{n}(H_n - c)$ converges in distribution, and identify the limit distribution.


  • Solutions:

    1. (a) $P(\text{desired event}) = 1 - P(\text{no ball is in the bag of the same label}) = 1 - (n-1)^n / n^n$.

    (b) This is the same as the envelope matching problem.
    $$P(\text{desired event}) = 1 - P(\text{at least one match}) = 1 - \left( \frac{\binom{n}{1}(n-1)!}{n!} - \frac{\binom{n}{2}(n-2)!}{n!} + \cdots + (-1)^{n-1}\frac{\binom{n}{n}(n-n)!}{n!} \right) = 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + (-1)^n \frac{1}{n!}.$$
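    A quick Monte Carlo sanity check of both answers; the choices n = 6 and the number of trials are arbitrary, and NumPy is assumed to be available.

```python
# Monte Carlo check of the Problem 1 answers; n and the trial count are
# arbitrary illustration choices.
import math
import numpy as np

rng = np.random.default_rng(0)
n, trials = 6, 100_000
labels = np.arange(n)

# (a) each ball goes independently into a uniformly chosen bag
bags = rng.integers(0, n, size=(trials, n))          # bags[t, i] = bag of ball i
p_a_sim = np.mean((bags == labels).any(axis=1))      # at least one label match
p_a_formula = 1 - ((n - 1) / n) ** n                 # 1 - (n-1)^n / n^n

# (b) exactly one ball per bag, i.e. a uniform random permutation
perms = np.array([rng.permutation(n) for _ in range(trials)])
p_b_sim = np.mean(~(perms == labels).any(axis=1))    # no label match (derangement)
p_b_formula = sum((-1) ** k / math.factorial(k) for k in range(n + 1))

print(f"(a) simulated {p_a_sim:.4f}  formula {p_a_formula:.4f}")
print(f"(b) simulated {p_b_sim:.4f}  formula {p_b_formula:.4f}")
```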

    2. (a) U is distributed as $N(0, 2 - 2\rho)$; W is distributed as $N(0, 2 + 2\rho)$.

    (b) $\mathrm{Cov}(U, W) = \mathrm{Var}(Y) - \mathrm{Var}(Z) = 0$. Since (U, W) is jointly bivariate normal, they are independent.

    (c) The joint distribution of X and W is bivariate Normal with mean zero and variance-covariance matrix
    $$\begin{pmatrix} 1 & \rho \\ \rho & 2 + 2\rho \end{pmatrix}.$$
    Hence, X conditional on W follows a Normal distribution with mean $\dfrac{\rho W}{2 + 2\rho}$ and variance $1 - \dfrac{\rho^2}{2 + 2\rho}$.
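    A small simulation sketch of the claims in 2(a)-(c), using an illustrative value $\rho = 0.4$ for the off-diagonal entry of the covariance matrix (the specific value of $\rho$ is an assumption; any value keeping $\Sigma$ positive definite would do).

```python
# Simulation sketch for Problem 2 with an illustrative rho = 0.4.
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.4, 500_000
Sigma = np.array([[1.0, rho, 0.0],
                  [rho, 1.0, rho],
                  [0.0, rho, 1.0]])
X, Y, Z = rng.multivariate_normal(np.zeros(3), Sigma, size=n).T

U, W = Y - Z, Y + Z
print("Var(U):", U.var(), " expected 2 - 2*rho:", 2 - 2 * rho)
print("Var(W):", W.var(), " expected 2 + 2*rho:", 2 + 2 * rho)
print("Cov(U, W):", np.cov(U, W)[0, 1], " expected 0")

# Conditional distribution of X given W: slope rho/(2+2rho), residual variance 1 - rho^2/(2+2rho)
slope = np.cov(X, W)[0, 1] / W.var()
resid_var = (X - slope * W).var()
print("slope:", slope, " expected:", rho / (2 + 2 * rho))
print("residual var:", resid_var, " expected:", 1 - rho ** 2 / (2 + 2 * rho))
```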

    3. (a) It is easy to find that $1/X_1, 1/X_2, \ldots, 1/X_n$ is a random sample from the density $1/y^2$, $1/2 < y < 1$, with mean $E(1/X_1) = \int_{1/2}^{1} y \cdot y^{-2}\, dy = \ln 2$. By the law of large numbers, $\frac{1}{n}\sum_{i=1}^n 1/X_i \xrightarrow{P} \ln 2$, hence $H_n \xrightarrow{P} c = 1/\ln 2$.

    (b) By the CLT, $\sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n 1/X_i - \ln 2\right) \xrightarrow{d} N(0, \sigma^2)$ with $\sigma^2 = \mathrm{Var}(1/X_1) = \frac{1}{2} - (\ln 2)^2$. Applying the delta method with $g(t) = 1/t$ and $g'(\ln 2) = -1/(\ln 2)^2$ gives
    $$\sqrt{n}(H_n - c) \xrightarrow{d} N\!\left(0, \frac{1/2 - (\ln 2)^2}{(\ln 2)^4}\right).$$
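    A simulation sketch for Problem 3: the harmonic averages should concentrate around $c = 1/\ln 2$, and the scaled deviations should have variance close to the limit in (b); the sample size and replication count are arbitrary choices.

```python
# Simulation check for Problem 3: H_n concentrates around c = 1/ln(2) and
# sqrt(n)(H_n - c) has variance close to (1/2 - (ln 2)^2) / (ln 2)^4.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 2_000, 20_000
X = rng.uniform(1, 2, size=(reps, n))
Hn = n / (1 / X).sum(axis=1)                       # harmonic average of each sample

c = 1 / np.log(2)
limit_var = (0.5 - np.log(2) ** 2) / np.log(2) ** 4
print("mean of H_n:", Hn.mean(), " c:", c)
print("var of sqrt(n)(H_n - c):", n * Hn.var(), " limit:", limit_var)
```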

    4. Let $X_1, \ldots, X_n$ be a random sample from the density $f(x; \theta) = \theta x^{\theta - 1}$, $0 < x < 1$, $\theta > 0$.

    (a) The MLE of $\theta$ is $\hat{\theta} = -n / \sum_{i=1}^n \log X_i$. By the invariance principle, the MLE of $1/\theta$ is $1/\hat{\theta} = -\sum_{i=1}^n \log X_i / n$.

    (b) Since $E(-\log X_1) = 1/\theta$, the MLE $1/\hat{\theta}$ is unbiased. The Cramér-Rao variance lower bound is $1/(n\theta^2)$. Since $\mathrm{Var}(\log X_1) = 1/\theta^2$, the MLE achieves the CR bound.

    (c) $E(X) = \frac{\theta}{\theta+1}$, so $\bar{X}$ is unbiased for $\frac{\theta}{\theta+1}$. And $\mathrm{Var}(\bar{X}) = \frac{\theta}{n(\theta+2)(\theta+1)^2}$. The CR bound for estimating $\frac{\theta}{\theta+1}$ is $\frac{\theta^2}{n(\theta+1)^4}$. The estimator $\bar{X}$ does not achieve the bound.
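    A simulation sketch for Problem 4, assuming the density $\theta x^{\theta-1}$ on (0, 1) used above (i.e. a Beta(θ, 1) distribution); the true value θ = 2.5 and the sample sizes are arbitrary illustration choices.

```python
# Simulation sketch for Problem 4: theta * x^(theta-1) on (0,1) is Beta(theta, 1).
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 2.5, 50, 100_000
X = rng.beta(theta, 1, size=(reps, n))

inv_theta_hat = (-np.log(X)).mean(axis=1)          # MLE of 1/theta on each sample
print("E[1/theta_hat]:", inv_theta_hat.mean(), " target:", 1 / theta)
print("Var(1/theta_hat):", inv_theta_hat.var(), " CR bound:", 1 / (n * theta ** 2))

xbar = X.mean(axis=1)                              # unbiased for theta/(theta+1)
print("E[xbar]:", xbar.mean(), " target:", theta / (theta + 1))
print("Var(xbar):", xbar.var(),
      " exact:", theta / (n * (theta + 2) * (theta + 1) ** 2),
      " CR bound:", theta ** 2 / (n * (theta + 1) ** 4))
```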

    5. Let $X_1, \ldots, X_n$ be a random sample from $N(\mu, \sigma^2)$.

    (a) $(\bar{X}, S^2)$ is complete and sufficient. (need to show details)

    (b) Note $T = S^2 \sim \mathrm{Gamma}\!\left(\frac{n-1}{2}, \frac{2\sigma^2}{n-1}\right)$. Denote $\alpha = \frac{n-1}{2}$ and $\beta = \frac{2\sigma^2}{n-1}$. We have
    $$E(S^r) = E[(S^2)^{r/2}] = \int_0^{\infty} t^{r/2} \, \frac{1}{\Gamma(\alpha)\beta^{\alpha}} \, e^{-t/\beta} \, t^{\alpha-1}\, dt = \sigma^r \, \frac{\Gamma\!\left(\frac{r+n-1}{2}\right)}{\Gamma\!\left(\frac{n-1}{2}\right)} \left[\frac{2}{n-1}\right]^{r/2}.$$
    Using the Rao-Blackwell theorem, the estimator
    $$S^r \, \frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{r+n-1}{2}\right)} \left[\frac{n-1}{2}\right]^{r/2}$$
    is the UMVUE for $\sigma^r$.

    (c) $\bar{X}$ is independent of $S^2$. Therefore
    $$E\!\left(\frac{\bar{X}}{S}\right) = E(\bar{X})\, E\!\left(\frac{1}{S}\right) = \mu \, \frac{\Gamma\!\left(\frac{n-2}{2}\right)}{\Gamma\!\left(\frac{n-1}{2}\right)} \left[\frac{2\sigma^2}{n-1}\right]^{-1/2}.$$
    The UMVUE of $\mu/\sigma$ is $\dfrac{\bar{X}}{S} \, \dfrac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n-2}{2}\right)} \left[\dfrac{2}{n-1}\right]^{1/2}$.
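    A numerical check of the $E(S^r)$ formula and of the unbiasedness of the resulting UMVUE, shown for r = 1 (estimating σ); the values of μ, σ, n, and the replication count are arbitrary.

```python
# Numerical check of the E(S^r) formula in Problem 5 for r = 1.
from math import lgamma, exp
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, reps, r = 1.0, 2.0, 10, 200_000, 1
X = rng.normal(mu, sigma, size=(reps, n))
S = X.std(axis=1, ddof=1)                                  # sample standard deviation

# E(S^r) = Gamma((r+n-1)/2) / Gamma((n-1)/2) * (2*sigma^2/(n-1))^(r/2)
log_ratio = lgamma((r + n - 1) / 2) - lgamma((n - 1) / 2)
ESr_formula = exp(log_ratio) * (2 * sigma ** 2 / (n - 1)) ** (r / 2)
print("simulated E(S^r):", (S ** r).mean(), " formula:", ESr_formula)

# UMVUE of sigma^r: S^r * Gamma((n-1)/2)/Gamma((r+n-1)/2) * ((n-1)/2)^(r/2)
umvue = S ** r * exp(-log_ratio) * ((n - 1) / 2) ** (r / 2)
print("mean of UMVUE:", umvue.mean(), " target sigma^r:", sigma ** r)
```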

    6. (a) $Y_i$ follows $\mathrm{Bin}(1, p)$ with $p = P(X_i > 0) = \Phi(\theta)$, where $\Phi(\cdot)$ is the CDF of $N(0, 1)$. Then $\hat{p}_{MLE} = \bar{Y}$ and $\hat{\theta}_{MLE} = \Phi^{-1}(\bar{Y})$.

    (b) $Y_i \sim \mathrm{Bin}(1, p)$. By the factorization theorem, $T = \sum_{i=1}^n Y_i$ is sufficient for p. The conditional distribution $P(Y \mid T)$ is free of p, and free of $\theta$ as well. So $T = \sum_{i=1}^n Y_i$ is sufficient for $\theta$.

    (c) Since $T = \sum_{i=1}^n Y_i \sim \mathrm{Bin}(n, p)$, the distribution family $\{\mathrm{Binomial}(n, p),\, 0 < p < 1\}$ has a monotone likelihood ratio in T: for $\theta_2 > \theta_1$, we have $p_2 = \Phi(\theta_2) > p_1 = \Phi(\theta_1)$, and
    $$\frac{f(y \mid \theta_2)}{f(y \mid \theta_1)} = \left\{\frac{p_2}{p_1} \cdot \frac{1 - p_1}{1 - p_2}\right\}^{T} \left(\frac{1 - p_2}{1 - p_1}\right)^{n},$$
    which is non-decreasing in T. Define $p_0 = \Phi(\theta_0)$. By the Karlin-Rubin theorem, the size-$\alpha$ test is: reject $H_0$ if $T > t_0$, where $t_0$ is chosen such that
    $$P_{p_0}(T > t_0) = \alpha.$$
    Using the normal approximation $T \approx N(np_0, np_0(1 - p_0))$, we can get $t_0 = np_0 + z_{\alpha}\sqrt{np_0(1 - p_0)}$.
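    A sketch for Problem 6 combining parts (a) and (c), assuming $X_i \sim N(\theta, 1)$ as implied by $p = \Phi(\theta)$; the values of θ, n, and α, and the null value θ₀ = 0, are hypothetical illustration choices.

```python
# Sketch for Problem 6: MLE of theta from Y_i = 1{X_i > 0} and the approximate
# cutoff t0 of the one-sided test; theta, theta0, n, alpha are illustration choices.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(5)
Phi = NormalDist()                                 # standard normal CDF / quantiles
theta, theta0, n, alpha = 0.3, 0.0, 400, 0.05

X = rng.normal(theta, 1, size=n)
Y = (X > 0).astype(int)
p_hat = Y.mean()                                   # MLE of p = Phi(theta)
theta_hat = Phi.inv_cdf(p_hat)                     # MLE of theta by invariance
print("p_hat:", p_hat, " theta_hat:", theta_hat)

p0 = Phi.cdf(theta0)
z_alpha = Phi.inv_cdf(1 - alpha)                   # upper-alpha normal quantile
t0 = n * p0 + z_alpha * np.sqrt(n * p0 * (1 - p0))
print("T =", Y.sum(), " reject H0 if T >", t0)
```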

