
Statistics & Probability Letters 52 (2001) 9–19

Estimating quantiles of a selected exponential population

Somesh Kumar ∗, Aditi Kar
Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, India

Received April 1999; received in revised form July 2000

Abstract

Suppose independent random samples are available from $k$ exponential populations with a common location $\mu$ and scale parameters $\sigma_1, \sigma_2, \ldots, \sigma_k$, respectively. The population corresponding to the largest sample mean is selected. The problem is to estimate a quantile of the selected population. In this paper, we derive the uniformly minimum variance unbiased estimator (UMVUE) using the (U-V) method of Robbins (in: Gupta and Berger (Eds.), Statistical Decision Theory and Related Topics IV, Vol. 1, Springer, New York, 1987, pp. 265–270) and Rao–Blackwellization. Further, a general inadmissibility result for affine equivariant estimators is proved. As a consequence, an estimator improving upon the UMVUE with respect to the squared error and the scale invariant loss functions is obtained. © 2001 Elsevier Science B.V. All rights reserved.

MSC: 62F10; 62C15

Keywords: Selection rule; Quantiles; Uniformly minimum variance unbiased estimator; Affine equivariant estimator; Inadmissible estimator

1. Introduction

The problem of estimating parameters of a selected population has received considerable attention in recent years. This type of problem arises in various agricultural, industrial, medical or economic experiments. For example, a farmer experiments with $k$ fertilizers and selects the best one. Naturally, he would be interested in having an estimate of the average yield if the selected fertilizer is used. A car manufacturer, who has selected the most reliable model of engine for his cars, would like to know the reliability of the selected engine during actual use. One may refer to Cohen and Sackrowitz (1982, 1989), Sackrowitz and Samuel-Cahn (1984), Sharma and Vellaisamy (1989) and Vellaisamy (1992a) for more examples of situations where one may be interested in parameters of the selected populations. Initial attempts to formulate and investigate this problem were made by Rubinstein (1961, 1965), Stein (1964) and Sarkadi (1967). Stein (1964) considered estimation of the mean of the selected normal population and proved that the largest sample mean is an admissible and

∗ Corresponding author. Tel.: +91-(0)3222-83662; fax: +91-(0)3222-55303/82700. E-mail address: [email protected] (S. Kumar).

0167-7152/01/$ - see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0167-7152(00)00174-7


minimax estimator with respect to the squared error loss. However, its bias performance is not good, and the bias tends to infinity as the number of populations increases. Further classes of estimators for the normal populations have been proposed and studied by Dahiya (1974), Hsieh (1981), Cohen and Sackrowitz (1982), Hwang (1987) and Venter (1988).

Estimation of the mean of the selected population for nonnormal distributions has also received considerable attention of researchers in the recent past. One may refer to Sackrowitz and Samuel-Cahn (1984), Vellaisamy (1992a) and Song (1992), who have investigated this problem for the negative exponential, gamma and uniform distributions, respectively. A more general problem, where one may select more than one population, has been considered by Jayaratnam and Panchapakesan (1986) and Vellaisamy (1992b).

In this paper, we consider estimation of quantiles of a selected exponential population. Estimation of quantiles is widely used in reliability, life testing and related areas. Epstein (1962), Epstein and Sobel (1954) and Saleh (1981) discuss various practical aspects of quantile estimation. In Section 2, we introduce the notation and derive the uniformly minimum variance unbiased estimator (UMVUE). In Section 3, a new estimator is obtained which dominates the UMVUE when the loss function is quadratic. In Section 4, the bias and risk functions of these estimators are compared numerically with those of the maximum likelihood estimator (MLE).

2. Deriving the UMVUE

Let $\pi_1,\ldots,\pi_k$ be $k$ ($\geq 2$) populations, with $\pi_i$ having associated probability density
\[
f_i(x) = \frac{1}{\sigma_i}\, e^{-(x-\mu)/\sigma_i}, \qquad x > \mu,\ \sigma_i > 0,\ i = 1,\ldots,k. \tag{2.1}
\]
We call the population with the largest mean the best. Suppose independent random samples $(X_{i1},\ldots,X_{in})$, $i = 1,\ldots,k$, are available from $\pi_1,\ldots,\pi_k$, respectively. Let $X_i$ and $Y_i$ denote the minimum and the mean, respectively, of the $i$th sample. A natural selection rule is to choose the population corresponding to the largest sample mean; that is, choose the population $\pi_i$ if $Y_i$ is the largest among $(Y_1,\ldots,Y_k)$. Here one may ignore the case of ties, where more than one $Y_i$ may be jointly largest, since the distributions are continuous. Optimal properties of the natural selection rule have been investigated in detail by Bahadur and Goodman (1952), Lehmann (1966) and Eaton (1967). In the present problem, we use this rule for the selection of the best population. A quantile of the selected population is then
\[
\theta_J = \mu + b\sigma_J, \qquad b > 0, \tag{2.2}
\]
where $J = i$ if $Y_i \geq Y_j$ for all $j \neq i$, $j = 1,\ldots,k$, $i = 1,\ldots,k$. If $e^{-b} = p$, then $\theta_J$ is the $p$th quantile of the selected exponential population. An unbiased estimator $\delta$ for $\theta_J$ will satisfy
\[
E(\delta - \theta_J) = 0 \quad \text{for all } (\mu, \sigma_1,\ldots,\sigma_k). \tag{2.3}
\]
Thus $\delta$ will also be unbiased for $E(\theta_J)$, and a UMVUE for $\theta_J$ will be a UMVUE for $E(\theta_J)$. Further, define $Z_i = Y_i - X_i$; then $X_i$ and $Z_i$ are statistically independent, and $2n\sigma_i^{-1} Z_i$ has a chi-square distribution with $2(n-1)$ degrees of freedom. The density of $X_i$ is
\[
f_{X_i}(x) = n\sigma_i^{-1} \exp\{-n(x-\mu)\sigma_i^{-1}\}, \qquad x > \mu,\ \sigma_i > 0. \tag{2.4}
\]
In order to derive an unbiased estimator for $\theta_J$, we use the (U-V) method of Robbins. For this purpose we first present two lemmas.
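The sampling setup and the natural selection rule above are straightforward to simulate. The following sketch (Python, standard library only; the function names are ours, not the paper's) draws the $k$ samples, applies the rule, and evaluates the target quantile $\theta_J$ of (2.2):

```python
import random

def draw_sample(mu, sigma, n, rng):
    """One sample of size n from the shifted exponential density (2.1)."""
    return [mu + rng.expovariate(1.0 / sigma) for _ in range(n)]

def select_population(samples):
    """Natural selection rule: index J of the largest sample mean Y_i."""
    means = [sum(s) / len(s) for s in samples]
    return max(range(len(samples)), key=lambda i: means[i])

rng = random.Random(0)
mu, sigmas, n, b = 0.0, [1.0, 2.0, 3.0], 30, 0.5
samples = [draw_sample(mu, s, n, rng) for s in sigmas]
J = select_population(samples)
theta_J = mu + b * sigmas[J]  # quantile (2.2) of the selected population
```

The parameter choices here are purely illustrative; in the paper's convention $e^{-b} = p$, so $b = 0.5$ corresponds to $p = e^{-0.5} \approx 0.61$.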


Lemma 2.1. Suppose the random vector $(X, Y)$ has the joint probability density
\[
f(x,y) = \left(\frac{n}{\sigma}\right)^{n} \frac{1}{\Gamma(n-1)}\, e^{-(n/\sigma)(y-\mu)} (y-x)^{n-2}, \qquad y > x > \mu,\ \sigma > 0.
\]
Then, for any given function $u(x,y)$ satisfying the condition
\[
\lim_{y\to\infty} e^{-\lambda(y-\mu)} \int_x^y u(x,t)(t-x)^{n-2}\,dt = 0 \quad \text{for all } x > \mu,\ \lambda > 0, \tag{2.5}
\]
the function $v(x,y)$ defined by
\[
v(x,y) = \frac{1}{(y-x)^{n-2}} \int_x^y u(x,t)(t-x)^{n-2}\,dt
\]
satisfies $E\,v(X,Y) = (\sigma/n)\, E\,u(X,Y)$ for all $(\mu,\sigma)$.

Proof. The proof follows by using an integration by parts technique in the expression for $E\,u(X,Y)$. The integration is done with respect to $y$, and the two functions taken are
\[
(n\sigma^{-1})^{n} (\Gamma(n-1))^{-1} e^{-n(y-\mu)/\sigma} \quad \text{and} \quad u(x,y)(y-x)^{n-2},
\]
respectively.

Next, suppose $(X_1,Y_1),\ldots,(X_k,Y_k)$ are independent random vectors with $(X_i,Y_i)$ having the probability density
\[
f_{X_i,Y_i}(x,y) = \left(\frac{n}{\sigma_i}\right)^{n} \frac{1}{\Gamma(n-1)}\, e^{-(n/\sigma_i)(y-\mu)} (y-x)^{n-2}, \qquad y > x > \mu,\ \sigma_i > 0. \tag{2.6}
\]
For convenience, we write
\[
x = (x_1,\ldots,x_k), \quad y = (y_1,\ldots,y_k), \quad y^{(i)} = (y_1,\ldots,y_{i-1},y_{i+1},\ldots,y_k) \quad \text{and} \quad \theta = (\mu,\sigma_1,\ldots,\sigma_k).
\]
The following lemma is a straightforward generalization of Lemma 2.1.

Lemma 2.2. For any $u_i(x,y)$ satisfying the condition
\[
\lim_{y_i\to\infty} e^{-\lambda(y_i-\mu)} \int_{x_i}^{y_i} u_i(x_1,y_1,\ldots,x_i,t,\ldots,x_k,y_k)(t-x_i)^{n-2}\,dt = 0 \tag{2.7}
\]
for all $(x, y^{(i)})$ and $\lambda > 0$, the function $v_i(x,y)$ given by
\[
v_i(x,y) = \frac{1}{(y_i-x_i)^{n-2}} \int_{x_i}^{y_i} u_i(x_1,y_1,\ldots,x_i,t,\ldots,x_k,y_k)(t-x_i)^{n-2}\,dt \tag{2.8}
\]
satisfies $E\,v_i(X,Y) = (\sigma_i/n)\, E\,u_i(X,Y)$ for all $\theta$.

Let us define
\[
u_i(x,y) =
\begin{cases}
n & \text{if } y_i > y_j \text{ for all } j \neq i, \\
0 & \text{otherwise.}
\end{cases} \tag{2.9}
\]
Then $u_i$ satisfies condition (2.7) and
\[
\sum_{i=1}^{k} \frac{\sigma_i}{n}\, u_i(X,Y) = \sigma_J.
\]
Therefore $\sum_{i=1}^{k} v_i(X,Y)$, where $v_i$ is obtained using relation (2.8), will be unbiased for $\sigma_J$.


It is easily seen that for $u_i$ of (2.9), $v_i$ is given by
\[
v_i(x,y) =
\begin{cases}
\dfrac{n}{(n-1)(y_i - x_i)^{n-2}} \left[ (y_i - x_i)^{n-1} - \{\max(x_i, y^{(i)}) - x_i\}^{n-1} \right] & \text{if } y_i > y_j \text{ for all } j \neq i, \\
0 & \text{otherwise,}
\end{cases} \tag{2.10}
\]
where $\max(x_i, y^{(i)})$ denotes the maximum of $x_i$ and the components of $y^{(i)}$. Let $(y_{(1)},\ldots,y_{(k)})$ denote the order statistics of $(y_1,\ldots,y_k)$, where $y_{(1)}$ is the smallest and $y_{(k)}$ the largest of all the $y_i$'s. Further, let $x_{(j)}$ be the $x_i$ corresponding to $y_{(j)}$. Then we can write
\[
\sum_{i=1}^{k} v_i(x,y) = \frac{n}{n-1}\, (y_{(k)} - x_{(k)}) \left[ 1 - \left\{ \frac{\max(x_{(k)}, y_{(k-1)}) - x_{(k)}}{y_{(k)} - x_{(k)}} \right\}^{n-1} \right]. \tag{2.11}
\]
Using the fact that $E((nX_1 - Y_1)/(n-1)) = \mu$, an unbiased estimator for $\theta_J$ is obtained as
\[
\delta(X,Y) = \frac{nX_1 - Y_1}{n-1} + \frac{bn}{n-1}\, (Y_{(k)} - X_{(k)}) \left[ 1 - \left\{ \frac{\max(X_{(k)}, Y_{(k-1)}) - X_{(k)}}{Y_{(k)} - X_{(k)}} \right\}^{n-1} \right]. \tag{2.12}
\]

Remark 2.3. The term $(nX_1 - Y_1)/(n-1)$ in $\delta$ can be replaced by any other unbiased estimator of $\mu$ to get different unbiased estimators for $\theta_J$.
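As a quick sanity check on (2.12), one can verify the unbiasedness $E\,\delta = E\,\theta_J$ by Monte Carlo. The sketch below is illustrative only (our function names, arbitrary parameter choices), not part of the paper:

```python
import random

def delta_unbiased(samples, b):
    """Initial unbiased estimator (2.12) of theta_J = mu + b*sigma_J."""
    n = len(samples[0])
    mins = [min(s) for s in samples]
    means = [sum(s) / n for s in samples]
    order = sorted(range(len(samples)), key=lambda i: means[i])
    J, J2 = order[-1], order[-2]          # selected population and runner-up
    xk, yk, yk1 = mins[J], means[J], means[J2]
    ratio = (max(xk, yk1) - xk) / (yk - xk)
    return ((n * mins[0] - means[0]) / (n - 1)
            + b * n / (n - 1) * (yk - xk) * (1.0 - ratio ** (n - 1)))

rng = random.Random(1)
mu, sigmas, n, b, N = 0.0, [1.0, 2.0], 10, 0.5, 50000
acc_delta = acc_theta = 0.0
for _ in range(N):
    samples = [[mu + rng.expovariate(1.0 / s) for _ in range(n)] for s in sigmas]
    means = [sum(s) / n for s in samples]
    J = max(range(len(sigmas)), key=lambda i: means[i])
    acc_delta += delta_unbiased(samples, b)
    acc_theta += mu + b * sigmas[J]
bias = (acc_delta - acc_theta) / N  # should be near 0 by unbiasedness
```

With 50 000 replications the Monte Carlo bias estimate should be close to zero, up to simulation noise.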

Next, let us define $X = \min\{X_1,\ldots,X_k\}$ and $T_i = Y_i - X$, $i = 1,\ldots,k$. Then $(X, T_1,\ldots,T_k)$ is complete and sufficient (see Ghosh and Razmpour, 1984). Also, $X$ and $T = (T_1,\ldots,T_k)$ are independently distributed, with probability densities
\[
f_X(x) = n \left( \sum \sigma_i^{-1} \right) \exp\left\{ -n(x-\mu) \left( \sum \sigma_i^{-1} \right) \right\}, \qquad x > \mu,\ \sigma_i > 0, \tag{2.13}
\]
and
\[
f_T(t) = n^{kn-1}(n-1) \left( \sum \sigma_i^{-1} \right)^{-1} \left( \prod \sigma_i^{-n} \right) (\Gamma(n))^{-k} \left( \prod t_i^{n-1} \right) \left( \sum t_i^{-1} \right) \exp\left\{ -n \sum t_i \sigma_i^{-1} \right\},
\]
\[
t_i > 0,\ \sigma_i > 0,\ i = 1,\ldots,k. \tag{2.14}
\]

The UMVUE of $\theta_J$ is then obtained simply by Rao–Blackwellization of $\delta(X,Y)$ with respect to $(X,T)$. Writing $t = (t_1,\ldots,t_k)$ and $y = (y_1,\ldots,y_k)$, where $y_i = t_i + x$, $i = 1,\ldots,k$, we have
\[
E\{\delta(X,Y) \mid X = x,\ T = t\} = \frac{n}{n-1}\, E(X_1 \mid X = x,\ T = t) - \frac{y_1}{n-1} + \frac{bn}{n-1}\, y_{(k)} - \frac{bn}{n-1}\, E(X_{(k)} \mid X = x,\ T = t)
\]
\[
\qquad - \frac{bn}{n-1}\, E\left[ \frac{\{\max(X_{(k)}, Y_{(k-1)}) - X_{(k)}\}^{n-1}}{(Y_{(k)} - X_{(k)})^{n-2}} \;\Big|\; X = x,\ T = t \right]. \tag{2.15}
\]

The first expectation on the right-hand side of (2.15) is evaluated in Sharma and Kumar (1994) and is given by
\[
E(X_1 \mid X = x,\ T = t) = x + \frac{1}{n} \left( t_1 - \frac{1}{\sum_{l=1}^{k} t_l^{-1}} \right). \tag{2.16}
\]
Further, when $t_{(k)} = t_i$, one obtains
\[
E(X_{(k)} \mid X = x,\ T = t) = E(X_i \mid X = x,\ T = t) = x + \frac{1}{n} \left( t_i - \frac{1}{\sum_{l=1}^{k} t_l^{-1}} \right), \tag{2.17}
\]
using arguments similar to those for calculating the conditional expectation of $X_1$.


Denote the last expectation on the right-hand side of (2.15) by $P$. When $t_{(k)} = t_i$ and $t_{(k-1)} = t_j$, $i \neq j$, we can write
\[
P = E\left[ \frac{\{\max(X_i, Y_j) - X_i\}^{n-1}}{(Y_i - X_i)^{n-2}} \;\Big|\; X = x,\ T = t \right]
\]
\[
= \Bigg[ \frac{\{\max(x, y_j) - x\}^{n-1}}{(y_i - x)^{n-2}}\, f_{X_i,Y_i}(x, y_i) \int_x^{y_1} \cdots \int_x^{y_{i-1}} \int_x^{y_{i+1}} \cdots \int_x^{y_k} \prod_{\substack{l=1 \\ l \neq i}}^{k} f_{X_l,Y_l}(x_l, y_l)\; dx_k \ldots dx_{i+1}\, dx_{i-1} \ldots dx_1
\]
\[
\quad + \int_x^{y_2} \cdots \int_x^{y_k} \frac{\{\max(z, y_j) - z\}^{n-1}}{(y_i - z)^{n-2}}\, f_{X_i,Y_i}(z, y_i) \prod_{\substack{l=2 \\ l \neq i}}^{k} f_{X_l,Y_l}(x_l, y_l)\; dx_k \ldots dx_{i+1}\, dz\, dx_{i-1} \ldots dx_2
\]
\[
\quad + (k-2) \text{ similar terms} \Bigg] \Big/ \{ f_X(x)\, f_T(t) \}.
\]
Here the first term corresponds to the event that the overall minimum $X = x$ is attained by the selected sample $i$, the second to its attainment by sample 1, and the remaining $k-2$ terms to its attainment by one of the other samples. Integrating and simplifying, we get
\[
P = \left( \frac{t_j}{t_i} \right)^{n-1} \left( \sum_{l=1}^{k} t_l^{-1} \right)^{-1} \left[ 1 + \frac{n-1}{n}\, t_j \left( \sum_{l=1}^{k} t_l^{-1} - t_i^{-1} \right) \right]. \tag{2.18}
\]

Substituting expressions (2.16)–(2.18) into (2.15), we obtain the UMVUE of $\theta_J$ as
\[
\delta^*(X,T) = X + bT_{(k)} + \left( \sum T_i^{-1} \right)^{-1} \left[ \frac{b-1}{n-1} - \frac{bn}{n-1} \left( \frac{T_{(k-1)}}{T_{(k)}} \right)^{n-1} + b \left( \frac{T_{(k-1)}}{T_{(k)}} \right)^{n} \left( 1 - T_{(k)} \sum T_i^{-1} \right) \right]. \tag{2.19}
\]
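The closed form (2.19) can likewise be checked numerically: computed from the sufficient statistic $(X, T)$, it should again average to $E\,\theta_J$, since Rao–Blackwellization preserves unbiasedness. A sketch under illustrative parameter choices (function names ours):

```python
import random

def umvue_quantile(x, t, b, n):
    """UMVUE (2.19) of theta_J, from X = x and T = (T_1, ..., T_k)."""
    S = sum(1.0 / ti for ti in t)
    ts = sorted(t)
    tk, u = ts[-1], ts[-2] / ts[-1]       # T_(k) and T_(k-1)/T_(k)
    return (x + b * tk + (1.0 / S) * ((b - 1.0) / (n - 1)
            - b * n / (n - 1) * u ** (n - 1)
            + b * u ** n * (1.0 - tk * S)))

rng = random.Random(2)
mu, sigmas, n, b, N = 0.0, [1.0, 2.0], 10, 0.5, 50000
acc_u = acc_theta = 0.0
for _ in range(N):
    samples = [[mu + rng.expovariate(1.0 / s) for _ in range(n)] for s in sigmas]
    means = [sum(s) / n for s in samples]
    x = min(min(s) for s in samples)
    t = [m - x for m in means]
    J = max(range(len(sigmas)), key=lambda i: means[i])
    acc_u += umvue_quantile(x, t, b, n)
    acc_theta += mu + b * sigmas[J]
bias = (acc_u - acc_theta) / N  # near 0 by unbiasedness of the UMVUE
```

Note that selecting the largest mean $Y_i$ is the same as selecting the largest $T_i$, since $T_i = Y_i - X$ with a common $X$.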

3. Improving upon the UMVUE

Consider the group $G = \{g_{a,c} : g_{a,c}(x) = ax + c,\ a > 0,\ c \in \mathbb{R}\}$ of affine transformations. Under these transformations, $X_{ij} \to aX_{ij} + c$, $X_i \to aX_i + c$, $Y_i \to aY_i + c$, $X \to aX + c$, $T_i \to aT_i$ and $\theta_J \to a\theta_J + c$. The estimation problem is invariant if we take the scale invariant loss
\[
L(a, \theta_J) = (a - \theta_J)^2 / \sigma_J^2. \tag{3.1}
\]
An affine equivariant estimator of $\theta_J$ based on $(X,T)$ has the form
\[
\delta_\psi(X,T) = X + T_1\, \psi(W), \tag{3.2}
\]
where $W = (W_1,\ldots,W_{k-1})$ and $W_i = T_{i+1}/T_1$, $i = 1,\ldots,k-1$. We prove a general inadmissibility result for affine equivariant estimators in the following theorem.

Theorem 3.1. Let $\delta_\psi$ be an affine equivariant estimator of the form (3.2). Further, define the estimator $\delta_\psi^*$ by
\[
\delta_\psi^* =
\begin{cases}
\delta_\psi & \text{if } \psi(w) \geq \psi_0(w), \\
\delta_{\psi_0} & \text{if } \psi(w) < \psi_0(w),
\end{cases}
\]
where $\psi_0(w) = \dfrac{bn-1}{nk} \max\{1, w_1,\ldots,w_{k-1}\}$. Then $\delta_\psi^*$ improves upon $\delta_\psi$ with respect to the scale invariant loss (3.1) if $P(\psi(W) < \psi_0(W)) > 0$ for some $\theta$.

Proof. Consider the conditional risk function of $\delta_\psi$ given $W = w$, $w = (w_1,\ldots,w_{k-1})$:
\[
R(\theta, \delta_\psi \mid w) = E\left[ (X + T_1\psi(W) - \theta_J)^2 / \sigma_J^2 \mid W = w \right]. \tag{3.3}
\]
The expression on the right-hand side of (3.3) is a convex function of $\psi(w)$, with minimum attained at
\[
\hat{\psi}(w, \theta) = \frac{-E[(X - \theta_J)\,T_1/\sigma_J^2 \mid W = w]}{E[T_1^2/\sigma_J^2 \mid W = w]}. \tag{3.4}
\]
In order to evaluate $\hat{\psi}$, we partition $\mathbb{R}_+^{k-1}$ into the disjoint sets $A_1,\ldots,A_k$, where
\[
A_j = \{ w = (w_1,\ldots,w_{k-1}) : w_{j-1} > 1,\ w_{j-1}/w_1 > 1,\ \ldots,\ w_{j-1}/w_{j-2} > 1,\ w_j/w_{j-1} < 1,\ \ldots,\ w_{k-1}/w_{j-1} < 1 \}.
\]
For $w \in A_j$, $T_{(k)} = T_j$. Therefore, when $w \in A_j$,
\[
\hat{\psi}(w, \theta) = \frac{-E\{(X - \theta_j)\,T_1 \mid W = w\}}{E(T_1^2 \mid W = w)} = E(\theta_j - X)\, \frac{E(T_1 \mid W = w)}{E(T_1^2 \mid W = w)}, \tag{3.5}
\]
since $X$ and $T$ are independently distributed. The conditional density of $T_1$ given $W = w$ is

\[
f_{T_1 \mid W=w}(t_1 \mid w) = \frac{n^{kn-1}}{\Gamma(kn-1)}\, t_1^{kn-2}\, e^{-n t_1 (1/\sigma_1 + w_1/\sigma_2 + \cdots + w_{k-1}/\sigma_k)} \left[ \frac{1}{\sigma_1} + \frac{w_1}{\sigma_2} + \cdots + \frac{w_{k-1}}{\sigma_k} \right]^{kn-1},
\]
\[
t_1 > 0,\ w_i > 0,\ i = 1,\ldots,k-1.
\]
One can easily see that
\[
E(T_1 \mid W = w) = \frac{kn-1}{n} \left[ \sigma_1^{-1} + \sum_{i=1}^{k-1} w_i \sigma_{i+1}^{-1} \right]^{-1}
\quad \text{and} \quad
E(T_1^2 \mid W = w) = \frac{k(kn-1)}{n} \left[ \sigma_1^{-1} + \sum_{i=1}^{k-1} w_i \sigma_{i+1}^{-1} \right]^{-2}.
\]
Substituting these expressions in (3.5) and using the fact that $E(X) = \mu + (n \sum \sigma_i^{-1})^{-1}$, we obtain
\[
\hat{\psi}(w, \theta) = \frac{1}{k} \left( b\sigma_j - \frac{1}{n \sum \sigma_i^{-1}} \right) \left( \sigma_1^{-1} + \sum_{i=1}^{k-1} w_i \sigma_{i+1}^{-1} \right) \tag{3.6}
\]

when $w \in A_j$, $j = 1,\ldots,k$.

In order to apply the orbit-by-orbit improvement technique of Brewster and Zidek (1974), we need the infimum and supremum of $\hat{\psi}(w,\theta)$ over $\theta$ for fixed values of $w$. For $j = 1$, $\hat{\psi}(w,\theta)$ may be written as
\[
\psi_1(w, \eta) = \frac{1}{k} \left( b - \frac{1}{n(1 + \sum \eta_i)} \right) \left( 1 + \sum \eta_i w_i \right), \tag{3.7}
\]
where $\eta = (\eta_1,\ldots,\eta_{k-1})$ and $\eta_i = \sigma_1/\sigma_{i+1}$, $i = 1,\ldots,k-1$. The function $\psi_1$ is the same as in Kumar and Sharma (1996), where it is shown that
\[
\inf_{\eta} \psi_1(w, \eta) = \inf_{c} \psi_2(w, c),
\]


where
\[
\psi_2(w, c) = \frac{1}{k} \left( b - \frac{1}{n(1+c)} \right) \left( 1 + c\, w_{(k-1)} \right),
\]
$w_{(k-1)}$ is the largest of $w_1,\ldots,w_{k-1}$ and $c = \sum_{i=1}^{k-1} \eta_i$.

Now on $A_1$, $w_{(k-1)} \leq 1$, and so one can easily check that $\psi_2(w,c)$ is an increasing function of $c$. Therefore,
\[
\inf_{c} \psi_2(w, c) = \lim_{c \to 0} \psi_2(w, c) = \frac{nb - 1}{nk}.
\]
The supremum of $\psi_1(w,\eta)$ is $+\infty$, attained as $\eta_i \to +\infty$ for any $i$.

On $A_j$, $j = 2,\ldots,k$, we obtain the infimum and supremum of $\hat{\psi}(w,\theta)$ by a similar reparametrization. For example, on $A_2$ we can write $\hat{\psi}(w,\theta)$ as
\[
\psi_3(w, \eta^*) = \frac{1}{k} \left\{ b - \frac{1}{n(1 + \sum_{i \neq 2} \eta^*_i)} \right\} \left( w_1 + \eta^*_1 + \sum_{i=2}^{k-1} w_i \eta^*_{i+1} \right),
\]

where $\eta^*_i = \sigma_2/\sigma_i$, $i \neq 2$. On $A_2$, $T_{(k)} = T_2$, and it can be shown that
\[
\inf_{\eta^*} \psi_3(w, \eta^*) =
\begin{cases}
\inf_{c^*} \psi_4(w, c^*) & \text{if } T_{(k-1)} = T_1, \\
\inf_{c^*} \psi_5(w, c^*) & \text{if } T_{(k-1)} = T_{j+1},\ j = 2,\ldots,k-1,
\end{cases}
\]
where
\[
\psi_4(w, c^*) = \frac{1}{k} \left( b - \frac{1}{n(1+c^*)} \right) (w_1 + c^*),
\]
\[
\psi_5(w, c^*) = \frac{1}{k} \left( b - \frac{1}{n(1+c^*)} \right) (w_1 + c^* w_j) \quad \text{if } T_{(k-1)} = T_{j+1},\ j = 2,\ldots,k-1,
\]
and $c^* = \sum_{i \neq 2} \eta^*_i$.

Finally, using the fact that on $A_2$, $w_1 = \max\{1, w_1,\ldots,w_{k-1}\}$, one can easily prove that $\psi_4(w,c^*)$ and $\psi_5(w,c^*)$ are increasing functions of $c^*$, and so
\[
\inf_{c^*} \psi_4(w, c^*) = \frac{bn-1}{nk}\, w_1 = \inf_{c^*} \psi_5(w, c^*).
\]
The supremum of $\psi_3(w,\eta^*)$ is again $+\infty$. Using similar arguments for $j = 3,\ldots,k$, we finally conclude that
\[
\inf_{\theta} \hat{\psi}(w, \theta) = \frac{bn-1}{nk} \max\{1, w_1,\ldots,w_{k-1}\}
\quad \text{and} \quad
\sup_{\theta} \hat{\psi}(w, \theta) = +\infty.
\]
An application of the Brewster–Zidek (1974) technique then completes the proof of the theorem.


Remark 3.2. The result of Theorem 3.1 continues to hold if the scale invariant loss function (3.1) is replaced by the squared error loss.

Corollary 3.3. The UMVUE $\delta^*(X,T)$ of $\theta_J$ is inadmissible.

Proof. The UMVUE $\delta^*$ given by (2.19) is of the form (3.2), and the corresponding function $\psi$ is
\[
\psi^*(w) = \frac{1}{T_1} \left[ bT_{(k)} + P^{-1} \left\{ \frac{b-1}{n-1} - \frac{bn}{n-1}\, u^{n-1} + b\, u^{n} \left( 1 - T_{(k)} P \right) \right\} \right],
\]
where $P = \sum T_i^{-1}$ and $u = T_{(k-1)}/T_{(k)}$. The inadmissibility of $\delta^*$ follows if one can show that there is a region of the $w$-space with positive probability on which $\psi_0(w) > \psi^*(w)$. That this is indeed true can be seen by observing that
\[
\psi_0(w) - \psi^*(w) = \frac{n(n-1)b + 1}{kn(n-1)} > 0
\]
when all the $T_i$'s are equal. A continuity argument then shows that the inequality holds in a neighbourhood of $T_1 = \cdots = T_k$.
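The positive gap used in this proof can be checked numerically. The sketch below (our notation) evaluates $\psi^*$ and $\psi_0$ at a point with all $T_i$ equal and compares the difference with the closed form $\{n(n-1)b+1\}/\{kn(n-1)\}$:

```python
def psi_star(t, b, n):
    """The function psi of the UMVUE delta* in the equivariant form (3.2)."""
    P = sum(1.0 / ti for ti in t)
    ts = sorted(t)
    tk, u = ts[-1], ts[-2] / ts[-1]
    return (b * tk + (1.0 / P) * ((b - 1.0) / (n - 1)
            - b * n / (n - 1) * u ** (n - 1)
            + b * u ** n * (1.0 - tk * P))) / t[0]

def psi_0(t, b, n):
    """Truncation point of Theorem 3.1: (bn-1)/(nk) * max{1, w_1, ..., w_{k-1}}."""
    k = len(t)
    w = [ti / t[0] for ti in t[1:]]
    return (b * n - 1.0) / (n * k) * max([1.0] + w)

k, n, b = 3, 30, 0.5
t = [2.0] * k                             # all T_i equal
gap = psi_0(t, b, n) - psi_star(t, b, n)
closed_form = (n * (n - 1) * b + 1.0) / (k * n * (n - 1))
```

For $b = 0.5$, $n = 30$, $k = 3$ the gap works out to $436/2610 \approx 0.167$, so the truncation region indeed has positive probability around $T_1 = \cdots = T_k$.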

Remark 3.4. We have also considered the quantile estimation problem when samples of unequal sizes, say $n_1, n_2, \ldots, n_k$, are drawn from the $k$ exponential populations. The derivation of the initial unbiased estimator, and consequently that of the UMVUE, follows steps similar to those in Section 2. The UMVUE $\delta^*_{UN}(X,T)$ in this case is given by
\[
\delta^*_{UN}(X,T) = X + bT_{(k)} + \left( \sum \frac{n_i - 1}{T_i} \right)^{-1} \left[ (b-1) - b\, n_{(k)} \left( \frac{T_{(k-1)}}{T_{(k)}} \right)^{n_{(k)}-1} + b \left( \frac{T_{(k-1)}}{T_{(k)}} \right)^{n_{(k)}} \left\{ (n_{(k)} - 1) - T_{(k)} \sum \frac{n_i - 1}{T_i} \right\} \right],
\]
where $n_{(k)}$ is the $n_i$ corresponding to $T_{(k)}$. Further, the inadmissibility of $\delta^*_{UN}(X,T)$ can be established along lines similar to the proofs of Theorem 3.1 and Corollary 3.3. The value of $\psi_0(w)$ in this case is obtained as
\[
\psi_{0(UN)}(w) = \frac{b\, n_{(k)} - 1}{\sum n_i} \max\{1, w_1,\ldots,w_{k-1}\}.
\]
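As a consistency check, the unequal-sample-size UMVUE of Remark 3.4 should reduce to (2.19) when all the $n_i$ are equal. A sketch (our function names, arbitrary test values):

```python
def umvue_equal(x, t, b, n):
    """UMVUE (2.19) for equal sample sizes."""
    S = sum(1.0 / ti for ti in t)
    ts = sorted(t)
    tk, u = ts[-1], ts[-2] / ts[-1]
    return (x + b * tk + (1.0 / S) * ((b - 1.0) / (n - 1)
            - b * n / (n - 1) * u ** (n - 1)
            + b * u ** n * (1.0 - tk * S)))

def umvue_unequal(x, t, b, sizes):
    """UMVUE of Remark 3.4 for sample sizes n_1, ..., n_k."""
    S = sum((ni - 1.0) / ti for ni, ti in zip(sizes, t))
    order = sorted(range(len(t)), key=lambda i: t[i])
    tk, tk1 = t[order[-1]], t[order[-2]]
    nk = sizes[order[-1]]                 # the n_i attached to T_(k)
    u = tk1 / tk
    return (x + b * tk + (1.0 / S) * ((b - 1.0)
            - b * nk * u ** (nk - 1)
            + b * u ** nk * ((nk - 1.0) - tk * S)))

x, t, b, n = 0.3, [1.0, 2.5, 1.7], 0.25, 12
```

With all sizes equal to $n$, $\sum (n_i-1)/T_i = (n-1)\sum T_i^{-1}$, and a short calculation shows the two formulas agree term by term.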

4. Numerical comparisons

A natural estimator of $\theta_J$ is the maximum likelihood estimator (MLE) $d(X,T) = X + bT_{(k)}$. Exact expressions for the bias and the risk of $d$ can be obtained; however, they are quite tedious and hence omitted. In this section, we compare numerically the bias and the mean squared error risk of $d$ with those of the UMVUE $\delta^*$ and its improvement $\delta^*_\psi$ for $k = 2$; these quantities do not depend on $\mu$. The numerical values are calculated using simulations based on 5000 samples of size $n = 30$ for $\mu = 0$ and selected values of $b$, $\sigma_1$ and $\sigma_2$, and are presented in Tables 1–4. The values of $b$ considered are 0.1, 0.25, 0.5 and 1.0. As $e^{-b} = p$, lower values of $b$ correspond to higher-order quantiles. We observe that for small $b$, $\delta^*$ and $\delta^*_\psi$ perform much better than $d$ in terms of risk. However, as $b$ increases the performance of $d$ improves, and it becomes better than both $\delta^*$ and $\delta^*_\psi$; the risk performance of the MLE also improves near $\sigma_1 = \sigma_2$. The absolute bias of the MLE is almost always larger than that of $\delta^*_\psi$. Another point to be noted is that as $\sigma_1/\sigma_2$ or $\sigma_2/\sigma_1$ increases, $\delta^*$ and $\delta^*_\psi$ tend to have the same risk. Similar behaviour has been observed for $n = 40$ and 50.
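The kind of comparison reported in the tables can be reproduced with a short Monte Carlo study. The sketch below (illustrative, standard library only, and much smaller than the paper's study) estimates the squared error risks of the UMVUE $\delta^*$ and of the improved estimator $\delta^*_\psi$ of Theorem 3.1 for $k = 2$, $n = 30$, $b = 0.25$, $\sigma_1 = \sigma_2 = 1$; by the dominance result, the improved estimator should show the smaller estimated risk:

```python
import random

def umvue_and_improved(samples, b):
    """UMVUE (2.19) and its truncation-improved version (Theorem 3.1)."""
    k, n = len(samples), len(samples[0])
    means = [sum(s) / n for s in samples]
    x = min(min(s) for s in samples)
    t = [m - x for m in means]
    S = sum(1.0 / ti for ti in t)
    ts = sorted(t)
    tk, u = ts[-1], ts[-2] / ts[-1]
    bracket = ((b - 1.0) / (n - 1) - b * n / (n - 1) * u ** (n - 1)
               + b * u ** n * (1.0 - tk * S))
    delta_star = x + b * tk + bracket / S
    psi = (delta_star - x) / t[0]                 # equivariant form (3.2)
    psi0 = (b * n - 1.0) / (n * k) * tk / t[0]    # max{1, w} equals T_(k)/T_1
    return delta_star, x + t[0] * max(psi, psi0)

rng = random.Random(3)
mu, sigmas, n, b, N = 0.0, [1.0, 1.0], 30, 0.25, 40000
sse_umvue = sse_improved = 0.0
for _ in range(N):
    samples = [[mu + rng.expovariate(1.0 / s) for _ in range(n)] for s in sigmas]
    means = [sum(s) / n for s in samples]
    J = max(range(len(sigmas)), key=lambda i: means[i])
    theta = mu + b * sigmas[J]
    d_star, d_improved = umvue_and_improved(samples, b)
    sse_umvue += (d_star - theta) ** 2
    sse_improved += (d_improved - theta) ** 2
risk_gain = (sse_umvue - sse_improved) / N  # expected > 0 by Theorem 3.1
```

The estimated risks should be of the same order as the $(\sigma_1, \sigma_2) = (1.0, 1.0)$ row of Table 2, though the exact values depend on the random seed.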


Table 1. k = 2, n = 30, b = 0.1

σ1   σ2   B1         B2          R1         R2         R3
1.0  1.0  0.025248    0.001354   0.001074   0.000867   0.001164
1.0  2.0  0.020155   -0.000306   0.001788   0.001775   0.002205
1.0  3.0  0.022061   -0.000426   0.003519   0.003519   0.004096
2.0  1.0  0.020451   -0.000011   0.001798   0.001774   0.002225
2.0  2.0  0.050497    0.002707   0.004298   0.003470   0.004657
2.0  3.0  0.043296    0.000601   0.005176   0.004817   0.006130
2.0  4.0  0.040310   -0.000612   0.007150   0.007099   0.008820
3.0  2.0  0.043715    0.001682   0.005100   0.004802   0.006257
3.0  3.0  0.075745    0.004061   0.009669   0.007807   0.010479
3.0  4.0  0.070848    0.003175   0.011012   0.009629   0.012671
4.0  3.0  0.071050    0.003195   0.010933   0.009783   0.012862
4.0  4.0  0.100993    0.005414   0.017190   0.013880   0.018629

Table 2. k = 2, n = 30, b = 0.25

σ1   σ2   B1         B2          R1         R2         R3
1.0  1.0  0.038128    0.003310   0.005393   0.004143   0.003358
1.0  2.0  0.017125   -0.000730   0.008950   0.008872   0.009025
1.0  3.0  0.017740   -0.001008   0.019180   0.019180   0.019692
2.0  1.0  0.017759   -0.000114   0.009068   0.008914   0.009099
2.0  2.0  0.076257    0.006621   0.021571   0.016573   0.013432
2.0  4.0  0.034250   -0.001461   0.035800   0.035490   0.036101
3.0  1.0  0.018539   -0.000200   0.019238   0.019238   0.019782
3.0  3.0  0.114386    0.009931   0.048535   0.037288   0.030222
3.0  4.0  0.091701    0.008046   0.055206   0.046464   0.040326
4.0  2.0  0.035518   -0.000227   0.036272   0.035658   0.036395
4.0  3.0  0.091899    0.007778   0.053683   0.046707   0.040709
4.0  4.0  0.152514    0.013241   0.086285   0.066290   0.053728

Table 3. k = 2, n = 30, b = 0.5

σ1   σ2   B1         B2          R1         R2         R3
1.0  1.0  0.059595    0.006571   0.020886   0.015945   0.010374
1.0  2.0  0.012075   -0.001438   0.034778   0.034471   0.033624
1.0  3.0  0.010539   -0.001979   0.075469   0.075469   0.075822
2.0  1.0  0.13272    -0.000285   0.035356   0.034734   0.033899
2.0  2.0  0.119191    0.013143   0.083545   0.063780   0.041496
2.0  3.0  0.057052    0.003321   0.098579   0.089880   0.076206
3.0  1.0  0.012051   -0.000463   0.075819   0.075819   0.076206
3.0  3.0  0.178787    0.019714   0.187976   0.143504   0.093367
3.0  4.0  0.016163    0.126454   0.215149   0.180049   0.134682
4.0  1.0  0.012717   -0.000598   0.134633   0.134633   0.135119
4.0  2.0  0.026544   -0.000570   0.141426   0.138935   0.135597
4.0  3.0  0.126648    0.015418   0.207989   0.180362   0.135530


Table 4. k = 2, n = 30, b = 1.0

σ1   σ2   B1         B2          R1         R2         R3
1.0  1.0  0.102530    0.013094   0.082984   0.063332   0.037006
1.0  3.0  -0.003865  -0.003921   0.301248   0.301248   0.301084
1.0  4.0  -0.005153  -0.005153   0.535260   0.535260   0.535260
2.0  2.0  0.205059    0.026188   0.331934   0.253329   0.148025
2.0  4.0  0.003951   -0.005704   0.554083   0.549212   0.529770
3.0  1.0  -0.000925  -0.000989   0.302914   0.302914   0.302717
3.0  2.0  0.076988    0.015978   0.384554   0.354614   0.287241
3.0  4.0  0.195963    0.032397   0.858604   0.717948   0.504391
4.0  1.0  -0.001234  -0.001234   0.538164   0.538164   0.538164
4.0  2.0  0.008595   -0.001254   0.564302   0.554264   0.534165
4.0  3.0  0.196145    0.030697   0.828063   0.718106   0.506715
4.0  4.0  0.410118    0.052375   1.327737   1.013317   0.592101

We have also compared numerically the bias and risk values of the UMVUE $\delta^*$, its improvement $\delta^*_\psi$ and the MLE $d$ for $k = 3$ and $4$. The numerical values were calculated using simulations based on 3000 samples of size $n = 30$, 40 and 50 for $\mu = 0$ and selected values of $b$, $\sigma_1$, $\sigma_2$, $\sigma_3$ (and $\sigma_4$). The observations regarding bias and risk comparisons are broadly similar to those for the case $k = 2$.

The following notation has been used in the tables: B1 = bias of $d$; B2 = bias of $\delta^*_\psi$; R1 = risk of $\delta^*$; R2 = risk of $\delta^*_\psi$; R3 = risk of $d$.

References

Bahadur, R.R., Goodman, A.L., 1952. Impartial decision rules and sufficient statistics. Ann. Math. Statist. 23, 553–562.
Brewster, J.F., Zidek, J.V., 1974. Improving on equivariant estimators. Ann. Statist. 2, 21–38.
Cohen, A., Sackrowitz, H.B., 1982. Estimating the mean of the selected population. In: Gupta, S.S., Berger, J.O. (Eds.), Statistical Decision Theory and Related Topics III, Vol. 1. Academic Press, New York, pp. 243–270.
Cohen, A., Sackrowitz, H.B., 1989. Two stage conditionally unbiased estimators of the selected mean. Statist. Probab. Lett. 8, 273–278.
Dahiya, R.C., 1974. Estimation of the mean of the selected population. J. Amer. Statist. Assoc. 69, 226–230.
Eaton, M.L., 1967. Some optimum properties of ranking procedures. Ann. Math. Statist. 38, 124–137.
Epstein, B., 1962. Simple estimates of the parameters of exponential distributions. In: Sarhan, A.E., Greenberg, B.G. (Eds.), Contributions to Order Statistics. Wiley, New York, pp. 361–371.
Epstein, B., Sobel, M., 1954. Some theorems relevant to life testing from an exponential distribution. Ann. Math. Statist. 25, 373–381.
Ghosh, M., Razmpour, A., 1984. Estimation of the common location parameter of several exponentials. Sankhya Ser. A 48, 383–394.
Hsieh, H.K., 1981. On estimating the mean of the selected population with unknown variance. Comm. Statist.-Theory Methods 10, 1869–1878.
Hwang, J.T., 1987. Empirical Bayes estimators for the means of the selected populations. Tech. Report, Cornell University.
Jayaratnam, S., Panchapakesan, S., 1986. Estimation after subset selection from exponential populations. Comm. Statist.-Theory Methods 15, 3459–3473.
Kumar, S., Sharma, D., 1996. A note on estimating quantiles of exponential populations. Statist. Probab. Lett. 26, 115–118.
Lehmann, E.L., 1966. On a theorem of Bahadur and Goodman. Ann. Math. Statist. 37, 1–6.
Rubinstein, D., 1961. Estimation of the reliability of a system in development. Report R-61-ELC-1, Tech. Info. Service, General Electric Co.
Rubinstein, D., 1965. Estimation of failure rates in a dynamic reliability program. Report 65-RGO-7, Tech. Info. Service, General Electric Co.
Sackrowitz, H.B., Samuel-Cahn, E., 1984. Estimation of the mean of a selected negative exponential population. J. Roy. Statist. Soc. Ser. B 46, 242–249.
Saleh, A.K.Md.E., 1981. Estimating quantiles of exponential distributions. In: Csorgo, M., Dawson, D., Rao, J.N.K., Saleh, A.K.Md.E. (Eds.), Statistics and Related Topics. North-Holland, Amsterdam, pp. 279–283.
Sarkadi, K., 1967. Estimation after selection. Studia Sci. Math. Hung. 2, 341–350.
Sharma, D., Kumar, S., 1994. Estimating quantiles of exponential populations. Statist. Decisions 12, 343–352.
Sharma, D., Vellaisamy, P., 1989. Quantile estimation of the selected normal population. Sankhya Ser. B 52, 311–322.
Song, R., 1992. On estimating the mean of the selected uniform population. Comm. Statist.-Theory Methods 21, 2707–2719.
Stein, C., 1964. Contribution to the discussion of Bayesian and non-Bayesian decision theory. Handout, IMS Meeting, Amherst, MA.
Vellaisamy, P., 1992a. Inadmissibility results for the selected scale parameters. Ann. Statist. 20, 2183–2191.
Vellaisamy, P., 1992b. Average worth and simultaneous estimation of the selected subset. Ann. Inst. Statist. Math. 44, 551–562.
Venter, J.H., 1988. Estimation of the mean of the selected population. Comm. Statist.-Theory Methods 17, 797–805.