
Statistics and Probability Letters 79 (2009) 1372–1377

Contents lists available at ScienceDirect

Statistics and Probability Letters

journal homepage: www.elsevier.com/locate/stapro

Reliability estimation of the selected exponential populations

Somesh Kumar a,∗, Ajaya Kumar Mahapatra a, P. Vellaisamy b,∗∗

a Department of Mathematics, Indian Institute of Technology, Kharagpur-721302, India
b Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai-400076, India

Article info

Article history:
Received 6 January 2009
Received in revised form 22 February 2009
Accepted 22 February 2009
Available online 6 March 2009

MSC:
62F10
62C20

Abstract

Let Π_1, Π_2, ..., Π_k be k populations with Π_i being exponential with an unknown location parameter µ_i and a common but known scale parameter σ, i = 1, ..., k. Suppose independent random samples are drawn from the populations Π_1, Π_2, ..., Π_k. Let {X_{i1}, X_{i2}, ..., X_{in}} denote the sample drawn from the ith population, i = 1, ..., k. A subset of the populations with high reliabilities is selected according to Gupta's [Gupta, S.S., 1965. On some multiple decision (Selection and Ranking) rules. Technometrics 7, 225–245] subset selection procedure. We consider the problem of estimating simultaneously the reliability functions of the populations in the selected subset. The uniformly minimum variance unbiased estimator (UMVUE) is derived and its inadmissibility is established. An estimator improving the natural estimator is also obtained by using the differential inequality approach used by Vellaisamy and Punnen [Vellaisamy, P., Punnen, A.P., 2002. Improved estimators for the selected location parameters. Statist. Papers 43, 291–299].

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

The problem of estimating parameters after selection has been widely discussed in the literature. Many researchers have studied the estimation of the location and scale parameters of exponential populations. Recent references include Sackrowitz and Samuel-Cahn (1984), Cohen and Sackrowitz (1989), Vellaisamy (1992, 1996), Kumar and Kar (2001a,b), Vellaisamy (2003), and Vellaisamy and Jain (2008). The problem of estimation after selection seems to have been initially formulated and investigated by Rubinstein (1961, 1965) in the context of reliability estimation. Rubinstein used a sequential scheme for selecting the components in a manufacturing process. He derived unbiased estimators for the failure rates of the selected components. His methods also yield unbiased estimators of the selected Poisson parameters for a wide class of selection procedures. Vellaisamy and Punnen (2002) considered the simultaneous estimation of location parameters after subset selection from exponential populations with a known common scale parameter σ. It was shown that the natural estimator dominates the uniformly minimum variance unbiased estimator (UMVUE) in terms of mean squared error (MSE). Further, they obtained an improvement over the natural estimator using a differential inequality approach due to Berger (1980) and Dasgupta (1986). In this paper, we consider simultaneous estimation of reliability functions for the model considered by Vellaisamy and Punnen (2002). To the best of our knowledge, the problem of estimating the reliability functions of the selected populations has not been addressed in the literature so far.

Let Π_1, Π_2, ..., Π_k be k independent populations, where Π_i follows a two-parameter exponential distribution with

density

f_i(x|µ_i, σ) = (1/σ) e^{−(x−µ_i)/σ}, x > µ_i, µ_i ∈ R, i = 1, ..., k. (1)

∗ Corresponding author. Tel.: +91 3222283662; fax: +91 3222255303.
∗∗ Corresponding author.
E-mail addresses: [email protected] (S. Kumar), [email protected] (A.K. Mahapatra), [email protected] (P. Vellaisamy).

0167-7152/$ – see front matter © 2009 Elsevier B.V. All rights reserved.
doi:10.1016/j.spl.2009.02.012


We assume throughout that the scale parameters are known and equal and that the location parameters µ_i are unknown and possibly unequal. In reliability and life testing situations, we refer to the location parameter µ_i as the minimum guarantee time. So, without loss of generality, we can take µ_i > 0, i = 1, ..., k, and the common scale parameter σ = 1. Let {X_{i1}, X_{i2}, ..., X_{in}} be a random sample of size n drawn from the ith population, i = 1, ..., k. The reliability function θ_i(t) of the ith population at time t > 0 is given by

θ_i(t) = P(X_{ij} > t) = 1, if t < µ_i; e^{−(t−µ_i)}, if t ≥ µ_i,

for j = 1, ..., n and i = 1, ..., k. Since t is a positive constant, the problem of estimating θ_i(t) is equivalent to that of estimating θ_i = e^{µ_i}. First we aim to select a subset of the populations with high reliabilities. We call the population associated with the maximum µ_i the best population. Let X_i = min{X_{i1}, X_{i2}, ..., X_{in}}, i = 1, ..., k. Then clearly X_i follows an exponential distribution with density

f_{X_i}(x|µ_i) = n e^{−n(x−µ_i)}, x > µ_i > 0. (2)
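As a quick numerical sanity check of (2), one can verify by simulation that the minimum of n unit-exponential observations above µ_i is again exponential with rate n, hence has mean µ_i + 1/n (the values of µ, n, and the number of replications below are arbitrary choices for illustration):

```python
import numpy as np

# Sketch: the minimum of n i.i.d. observations from a unit exponential
# shifted by mu has density n * exp(-n(x - mu)) for x > mu,
# i.e. mean mu + 1/n, as in Eq. (2).
rng = np.random.default_rng(0)
mu, n, reps = 0.7, 12, 200_000
mins = mu + rng.exponential(1.0, size=(reps, n)).min(axis=1)
print(mins.mean())  # close to mu + 1/n
```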

Let X_(1) > X_(2) > ··· > X_(k) be the ordered values of {X_1, ..., X_k}. Suppose a subset of the given k populations is selected according to Gupta's subset selection procedure (Gupta, 1965), that is, select Π_i if and only if X_i ≥ X_(1) − d, for some d such that the probability of correct selection (CS) is at least P∗, a specified quantity (Gupta and Panchapakesan, 1979). That is,

Prob(CS) ≥ P∗, 1/k < P∗ < 1,

and d satisfies the relation

∫_0^∞ F^{k−1}(z + d) f(z) dz = P∗,

where f(u) and F(u) are the density function and cumulative distribution function of X_i, respectively. We are interested in estimating θ = {θ_1 I_1, θ_2 I_2, ..., θ_k I_k}, where θ_i = e^{µ_i},

I_i = 1, if X_i > X_{(1)i} − d, and I_i = 0 otherwise, i = 1, ..., k,

and

X_{(1)i} = max{X_1, ..., X_{i−1}, X_{i+1}, ..., X_k}.
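The selection rule and the indicators I_i can be sketched in code as follows (a minimal illustration; the populations, sample size, and d below are arbitrary choices, and in practice d would be calibrated so that the P∗ requirement above is met):

```python
import numpy as np

def gupta_select(samples, d):
    """Gupta's subset selection rule: with X_i the minimum of the i-th
    sample, select population i iff X_i >= max_j X_j - d.
    Returns the minima X and the 0/1 indicators I_i."""
    X = np.array([np.min(s) for s in samples])
    I = (X >= X.max() - d).astype(int)
    return X, I

# Illustration with k = 3 shifted-exponential populations (sigma = 1).
rng = np.random.default_rng(1)
mus = [0.5, 1.0, 1.5]
samples = [mu + rng.exponential(1.0, size=20) for mu in mus]
X, I = gupta_select(samples, d=0.2)
```

Note that the population attaining the largest minimum is always in the selected subset, so the subset is never empty.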

Note that the dimension of θ is random. We consider the squared error loss defined by

L(θ̂, θ) = ‖θ̂ − θ‖² = Σ_{i=1}^k (θ̂_i − θ_i)² I_i, (3)

where θ̂ is any estimator of θ. Note that X = (X_1, ..., X_k) is a complete and sufficient statistic.

This paper is organized as follows. In Section 2, a natural estimator of θ is proposed and the UMVUE is derived using the UV method of Robbins (1988). In Section 3, the UMVUE and the natural estimator are shown to be inadmissible. Further, in Section 4, we derive some improved estimators by solving a differential inequality in the light of Vellaisamy (1992) and Vellaisamy and Punnen (2002).

2. A natural estimator and the UMVUE of θ

It is straightforward to check that θ̂_i = ((n−1)/n) e^{X_i} is the UMVUE of θ_i for the component problem (that is, based on the ith sample alone). So a natural estimator of θ is given by

δ_N = {δ_N^1 I_1, ..., δ_N^k I_k}, where δ_N^i = ((n−1)/n) e^{X_i} for i = 1, ..., k. (4)

To derive the UMVUE of θ_i I_i, we follow the UV method of Robbins. The following lemma is useful for the derivation of the UMVUE.

Lemma 1. Let X_1, ..., X_k be independent random variables, where the density of X_i is as given in (2), and let µ = {µ_1, ..., µ_k}. Further, suppose U : R^k → R is a real-valued function such that for all µ with µ_i > 0, i = 1, ..., k,
(i) E_µ[U(X)] < ∞,
(ii) E_µ[e^{X_i} U(X)] < ∞.

1374 S. Kumar et al. / Statistics and Probability Letters 79 (2009) 1372–1377

Then

V(X) = e^{X_i} U(X) − e^{(n+1)X_i} ∫_{X_i}^∞ U(X_1, ..., X_{i−1}, z, X_{i+1}, ..., X_k) e^{−nz} dz

satisfies

E_µ[V(X)] = θ_i E_µ[U(X)] for all µ.

Proof. The proof follows directly by using an integration by parts in the second expectation above. ∎

An application of Lemma 1 yields the unbiased estimator of θ as δ_U = {U_1, ..., U_k}, where

U_i(X) = e^{X_i} I_i − (1/n) e^{(n+1)X_i − nZ_i}, (5)

and Z_i = max{X_i, X_{(1)i} − d}.
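A direct implementation of (5) may help fix ideas (a sketch; the function name and the numeric inputs are illustrative):

```python
import numpy as np

def umvue(X, d, n):
    """UMVUE components from Eq. (5):
    U_i = e^{X_i} I_i - (1/n) e^{(n+1) X_i - n Z_i},
    with Z_i = max(X_i, X_{(1)i} - d), where X_{(1)i} is the largest of
    the other sample minima.  X is the vector of sample minima."""
    X = np.asarray(X, dtype=float)
    U = np.empty(len(X))
    for i in range(len(X)):
        X1i = np.max(np.delete(X, i))
        Ii = int(X[i] > X1i - d)
        Zi = max(X[i], X1i - d)
        U[i] = np.exp(X[i]) * Ii - np.exp((n + 1) * X[i] - n * Zi) / n
    return U

# On selected coordinates Z_i = X_i, so U_i reduces to (1 - 1/n) e^{X_i}.
U = umvue([1.0, 0.4, 0.9], d=0.2, n=10)
```

On unselected coordinates the estimator carries a small negative correction term, which is what costs δ_U in Theorem 3 below.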

Remark 2. Let d = 0. In this case only the population corresponding to X_(1) is selected and θ_M = Σ_{i=1}^k θ_i I_i denotes the reliability function of the selected (best) population. Then the UMVUE of θ_M is given by

δ_U = e^{X_(1)} − (1/n) Σ_{i=1}^k e^{(n+1)X_i − nX_(1)}. (6)

3. Inadmissibility of the UMVUE and the natural estimator

First, we show that the natural estimator δ_N dominates the UMVUE δ_U.

Theorem 3. For estimating θ with respect to the squared error loss given in (3), the natural estimator δ_N dominates the UMVUE δ_U.

Proof. Let I_i^c = 1 − I_i. Then the risk of the unbiased estimator δ_U = {U_1, ..., U_k}, where U_i is as in (5), is given by

R(δ_U, θ) = Σ_{i=1}^k E_µ{e^{X_i} I_i − (1/n) e^{X_i} I_i − (1/n) e^{X_i} e^{−n(X_{(1)i} − d − X_i)} I_i^c − θ_i I_i}²

= Σ_{i=1}^k E_µ{(((n−1)/n) e^{X_i} − θ_i) I_i − (1/n) e^{X_i} e^{−n(X_{(1)i} − d − X_i)} I_i^c}²

= Σ_{i=1}^k E_µ{(((n−1)/n) e^{X_i} − θ_i)² I_i + (1/n²) e^{2X_i} e^{−2n(X_{(1)i} − d − X_i)} I_i^c}

≥ Σ_{i=1}^k E_µ{(((n−1)/n) e^{X_i} − θ_i)² I_i} = R(δ_N, θ), (7)

where the cross term vanishes since I_i I_i^c = 0, and the dropped term is nonnegative. Here R(δ_N, θ) is the risk of the natural estimator δ_N. Thus, under the mean squared error criterion, the natural estimator δ_N dominates the UMVUE δ_U of θ. ∎
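The domination in Theorem 3 is easy to see numerically: on selected coordinates the two estimators coincide (Z_i = X_i in (5)), and δ_U pays an extra nonnegative penalty on unselected ones. A Monte Carlo sketch (all numeric settings are arbitrary illustrations):

```python
import numpy as np

def mc_risks(mus, n, d, reps=20_000, seed=3):
    """Monte Carlo estimates of the risks of the UMVUE delta_U and the
    natural estimator delta_N under loss (3), using the fact that each
    sample minimum is distributed as mu_i + Exp(rate n)."""
    rng = np.random.default_rng(seed)
    mus = np.asarray(mus, dtype=float)
    theta = np.exp(mus)
    loss_U = loss_N = 0.0
    for _ in range(reps):
        X = mus + rng.exponential(1.0 / n, size=len(mus))
        for i in range(len(mus)):
            X1i = np.max(np.delete(X, i))
            Ii = float(X[i] > X1i - d)
            Zi = max(X[i], X1i - d)
            dU = np.exp(X[i]) * Ii - np.exp((n + 1) * X[i] - n * Zi) / n
            dN = (n - 1) / n * np.exp(X[i]) * Ii
            loss_U += (dU - theta[i] * Ii) ** 2
            loss_N += (dN - theta[i] * Ii) ** 2
    return loss_U / reps, loss_N / reps

risk_U, risk_N = mc_risks([0.2, 0.5, 0.8], n=5, d=0.3)
```

Since the per-replication losses satisfy loss_N ≤ loss_U pathwise here, the simulated risk ordering matches (7) for any seed.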

We further try to find an alternative estimator for θ. For this, we consider d = 0. In this case the estimation problem reduces to that of the reliability function of the selected best population, that is, θ_M. Consider the non-informative prior τ(µ) given by

τ(µ) = 1, if µ_i ∈ R, i = 1, ..., k, and τ(µ) = 0 otherwise.

The generalized Bayes estimator of θ_M with respect to the quadratic loss

L(θ, θ̂) = ((θ̂ − θ_M)/θ_M)² (8)

is given by


δ_GB = [∫ (1/θ_M) f(x|µ) τ(µ) dµ] / [∫ (1/θ_M²) f(x|µ) τ(µ) dµ]

= [∫_{−∞}^{x_k} ··· ∫_{−∞}^{x_1} e^{−µ_1} e^{−n(x_1−µ_1)} ··· e^{−n(x_k−µ_k)} dµ_1 ··· dµ_k] / [∫_{−∞}^{x_k} ··· ∫_{−∞}^{x_1} e^{−2µ_1} e^{−n(x_1−µ_1)} ··· e^{−n(x_k−µ_k)} dµ_1 ··· dµ_k]

= ((n−2)/(n−1)) e^{X_1}, if X_1 ≥ X_j, j ≠ 1. (9)

The following remark immediately follows.

Remark 4. The estimator ((n−2)/(n−1)) e^{X_(1)} is generalized Bayes for the estimation of θ_M with respect to the non-informative prior τ(µ) given by

τ(µ) = 1, if µ_i ∈ R, i = 1, ..., k, and τ(µ) = 0 otherwise.

The loss function is the quadratic loss given in (8).

The above remark motivates us to consider a competing estimator δ_N^∗ of θ given by

δ_N^∗ = {δ_N^{∗1} I_1, ..., δ_N^{∗k} I_k},

where δ_N^{∗i} = ((n−2)/(n−1)) e^{X_i} for i = 1, ..., k. We now prove the following theorem.

Theorem 5. For estimating θ with respect to the squared error loss (3), the estimator δ_N^∗ dominates δ_N.

Proof. Consider the risk difference

R(θ, δ_N^∗) − R(θ, δ_N) = Σ_{i=1}^k E_µ[{((δ_N^{∗i})² − (δ_N^i)²) − 2(δ_N^{∗i} − δ_N^i) θ_i} I_i].

Using Lemma 1, the above expectation can be written as

R(θ, δ_N^∗) − R(θ, δ_N) = Σ_{i=1}^k E_µ[{((δ_N^{∗i})² − (δ_N^i)²) − 2 e^{X_i} (δ_N^{∗i} − δ_N^i)} I_i + 2 e^{(n+1)X_i} ∫_{Z_i}^∞ (δ_N^{∗i}(z) − δ_N^i(z)) e^{−nz} dz]

= Σ_{i=1}^k E_µ[{(δ_N^{∗i} − δ_N^i)² + 2(δ_N^i − e^{X_i})(δ_N^{∗i} − δ_N^i)} I_i + 2 e^{(n+1)X_i} ∫_{Z_i}^∞ (δ_N^{∗i}(z) − δ_N^i(z)) e^{−nz} dz]

= Σ_{i=1}^k E_µ[((2n−1)/(n²(n−1)²)) e^{2X_i} I_i − (2/(n(n−1)²)) e^{(n+1)X_i} e^{−(n−1)Z_i}]

= − Σ_{i=1}^k E_µ[(1/(n²(n−1)²)) e^{2X_i} I_i + (2/(n(n−1)²)) e^{(n+1)X_i} e^{−(n−1)(X_{(1)i} − d)} I_i^c] < 0,

where inside the integrals δ_N^{∗i}(z) − δ_N^i(z) = −(1/(n(n−1))) e^z, and the last step uses Z_i = X_i on the set {I_i = 1} and Z_i = X_{(1)i} − d on {I_i = 0}.

Hence the theorem. ∎
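Theorem 5 can also be checked by simulation; reusing the same random draws for both values of c makes the ordering of the two Monte Carlo risks stable (a sketch with arbitrary illustrative settings):

```python
import numpy as np

def risk_delta_c(c, mus, n, d, reps=60_000, seed=4):
    """Monte Carlo risk under loss (3) of the estimator whose i-th
    component is c * e^{X_i} I_i.  Reusing the seed across calls gives
    a common-random-numbers comparison between choices of c."""
    rng = np.random.default_rng(seed)
    mus = np.asarray(mus, dtype=float)
    theta = np.exp(mus)
    # each sample minimum is mu_i + Exp(rate n)
    X = mus + rng.exponential(1.0 / n, size=(reps, len(mus)))
    loss = 0.0
    for i in range(len(mus)):
        X1i = np.max(np.delete(X, i, axis=1), axis=1)
        sel = X[:, i] > X1i - d
        loss += np.sum((c * np.exp(X[sel, i]) - theta[i]) ** 2)
    return loss / reps

n = 5
risk_N = risk_delta_c((n - 1) / n, [0.2, 0.5, 0.8], n, d=0.3)
risk_S = risk_delta_c((n - 2) / (n - 1), [0.2, 0.5, 0.8], n, d=0.3)
```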

4. Some improved estimators

In this section, we shall obtain a class of improved estimators which dominate estimators of the form δ_c = {c e^{X_1} I_1, ..., c e^{X_k} I_k}, where 0 < c < 1. Note that for c = (n−2)/(n−1) and c = (n−1)/n, δ_c corresponds to the estimators δ_N^∗ and δ_N, respectively.

Theorem 6. Let Y = e^{−(n−1) Σ_{i=1}^k X_i} and, for y ∈ (0, 1], let s(y) > 0 be a real-valued function such that

(i) y s(y) is decreasing, and
(ii) −2(1−c)/(n−1) ≤ y s′(y) + s(y) ≤ 0.

Then, under the squared error loss (3), the estimator T_2 = {T_{21} I_1, ..., T_{2k} I_k}, where

T_{2i} = [c − (n−1)(Y s′(Y) + s(Y))] e^{X_i},

dominates the estimator δ_c as an estimator of θ.


Proof. Let T_r = {T_{r1} I_1, ..., T_{rk} I_k}, r = 1, 2, be any two estimators of θ of the form T_{ri} = T_{ri}(X_i, X_{(1)i}, ..., X_{(k−1)i}), for i = 1, ..., k. Then the risk difference of T_1 and T_2 is

R(T_2, θ) − R(T_1, θ) = Σ_{i=1}^k E_µ[{(T_{2i}² − T_{1i}²) − 2(T_{2i} − T_{1i}) θ_i} I_i].

Using Lemma 1, we get

E[θ_i T_{2i} I_i] = E[e^{X_i} T_{2i} I_i − e^{(n+1)X_i} η_{2i}(Z_i, X_{(1)i}, ..., X_{(k−1)i})], (10)

where η_{2i}(x_1, x_2, ..., x_k) = ∫_{x_1}^∞ T_{2i}(z, x_2, ..., x_k) e^{−nz} dz. Hereafter, we write η_{2i}(x_1, x_2, ..., x_k) = η_{2i} for notational simplicity. Hence the unbiased estimator of the ith term of the risk difference is given by

D_i(X) = (T_{2i}² − T_{1i}²) I_i − 2(e^{X_i} T_{2i} I_i − e^{(n+1)X_i} η_{2i} − e^{X_i} T_{1i} I_i + e^{(n+1)X_i} η_{1i})

= {(T_{2i}² − T_{1i}²) − 2 e^{X_i}(T_{2i} − T_{1i})} I_i + 2 e^{(n+1)X_i}(η_{2i} − η_{1i})

= {(T_{2i} − T_{1i})² + 2(T_{1i} − e^{X_i})(T_{2i} − T_{1i})} I_i − 2 e^{(n+1)X_i}(η_{1i} − η_{2i}). (11)

Next, let F_i(x_1, ..., x_k) = η_{1i}(x_1, ..., x_k) − η_{2i}(x_1, ..., x_k). Then (11) can be written as

D_i(X) = {(F_i^{1(i)})² e^{2nX_i} + 2(T_{1i} − e^{X_i}) F_i^{1(i)} e^{nX_i}} I_i − 2 e^{(n+1)X_i} F_i(Z_i, X_{(1)i}, ..., X_{(k−1)i}), (12)

where F_i^{1(i)}(x_1, ..., x_k) = (∂/∂x_i) F_i(x_1, ..., x_k).

Now let T_{1i}(X_i, X_{(1)i}, ..., X_{(k−1)i}) = c e^{X_i}, for 0 < c < 1. Substituting this value in (12), we have

D_i(X) = {(F_i^{1(i)})² e^{2nX_i} − 2(1−c) F_i^{1(i)} e^{(n+1)X_i}} I_i − 2 e^{(n+1)X_i} F_i(Z_i, X_{(1)i}, ..., X_{(k−1)i}).

So our problem reduces to solving the differential inequality

{(F_i^{1(i)})² e^{(n−1)X_i} − 2(1−c) F_i^{1(i)}} I_i − 2 F_i(Z_i, X_{(1)i}, ..., X_{(k−1)i}) ≤ 0. (13)

In order to solve the above differential inequality, we proceed as follows. Let

F_i(Z_i, X_{(1)i}, ..., X_{(k−1)i}) = e^{−(n−1)Z_i} s(y),

so that

F_i^{1(i)} = −(n−1)[e^{−(n−1)Z_i} y s′(y) + e^{−(n−1)X_i} s(y)].

Note that for s(y) > 0, we have e^{−(n−1)Z_i} s(y) > 0. Substituting these values in (13), its left-hand side becomes

[(n−1)²(y s′(y) + s(y))² + 2(n−1)(1−c)(y s′(y) + s(y))] e^{−(n−1)X_i} I_i − 2 e^{−(n−1)Z_i} s(y) ≤ 0,

provided −2(1−c)/(n−1) ≤ y s′(y) + s(y) < 0; under this condition the bracketed term is nonpositive and the last term is strictly negative.

Now, from

F_i(X_i, X_{(1)i}, ..., X_{(k−1)i}) = η_{1i}(X_i, X_{(1)i}, ..., X_{(k−1)i}) − η_{2i}(X_i, X_{(1)i}, ..., X_{(k−1)i}),

we get T_{2i} = T_{1i} + F_i^{1(i)} e^{nX_i}. Therefore the ith coordinate of the improved estimator is given by

T_{2i} I_i = [c − (n−1) Y s′(Y) − (n−1) s(Y)] e^{X_i} I_i. (14)

This proves the theorem. ∎

We now give some examples of s(y) which satisfy the conditions of Theorem 6.

Remark 7. (i) Define s(y) = α/(y(1+y)), for some α > 0 and y ∈ (0, 1]. For this choice, s(y) satisfies all the conditions of Theorem 6. Also, −α < y s′(y) + s(y) ≤ −α/4 < 0.

(ii) Define s(y) = α e^{−y}/y, for some α > 0 and y ∈ (0, 1]. Then s(y) also satisfies all the conditions of Theorem 6. Also, −α < y s′(y) + s(y) ≤ −α e^{−1} < 0.


In both the cases we can take α = 2(1−c)/(n−1). Note that since X_i > µ_i > 0, we have 0 < y ≤ 1. Therefore the choice of s(y) is reasonable.
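With the choice in Remark 7(i), y s′(y) + s(y) = −α/(1+y)², so the improved estimator (14) takes a simple closed form. A sketch (the function name and the numeric inputs are illustrative):

```python
import numpy as np

def improved_estimator(X, d, n, c):
    """Estimator T_2 of Theorem 6 with s(y) = alpha/(y(1+y)) and
    alpha = 2(1-c)/(n-1), as in Remark 7(i).  Then
    Y s'(Y) + s(Y) = -alpha/(1+Y)^2, so the multiplier of e^{X_i}
    becomes c + 2(1-c)/(1+Y)^2.  Returns the vector {T_2i I_i}."""
    X = np.asarray(X, dtype=float)
    Y = np.exp(-(n - 1) * X.sum())
    factor = c + 2.0 * (1.0 - c) / (1.0 + Y) ** 2
    T = np.zeros(len(X))
    for i in range(len(X)):
        X1i = np.max(np.delete(X, i))
        if X[i] > X1i - d:  # indicator I_i
            T[i] = factor * np.exp(X[i])
    return T

T = improved_estimator([1.0, 0.4, 0.9], d=0.2, n=10, c=9 / 10)
```

Since −(n−1)(Y s′(Y) + s(Y)) ≥ 0 under condition (ii), the multiplier always exceeds c, so T_2 expands each selected coordinate of δ_c slightly, by an amount driven by Y.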

Corollary 8. Consider the estimation of θ_M. Let y and s(y) be as defined in Theorem 6. Then the estimator

T_2 = [c − (n−1)(Y s′(Y) + s(Y))] e^{X_(1)}

dominates the estimator T_1 = c e^{X_(1)} for 0 < c < 1 under the squared error loss.

The following theorem is similar to Theorem 3.2 of Vellaisamy and Punnen (2002). Assume v = nk and c = (n−2)/(n−1).

Theorem 9. Let T_2, defined in Theorem 6, be an estimator of θ. Then the risk of T_2 is O(k/n²).

Proof. The risk of T_2 under quadratic loss is given by

R(T_2, θ) ≤ Σ_{i=1}^k E[(((n−2)/(n−1)) e^{X_i−µ_i} − 1)² + (n−1)² e^{2(X_i−µ_i)} (Y s′(Y) + s(Y))² − 2(n−1) e^{X_i−µ_i} (Y s′(Y) + s(Y)) (((n−2)/(n−1)) e^{X_i−µ_i} − 1)]

≤ k/(n−1)² + 4v/((n−1)²(n−2)) + (4/(n−1)) E[Σ_{i=1}^k e^{X_i−µ_i} (((n−2)/(n−1)) e^{X_i−µ_i} − 1) J(X_i)], (15)

where

J(X_i) = 1, if X_i > µ_i + log((n−1)/(n−2)), and J(X_i) = 0 otherwise.

A straightforward calculation proves the theorem. ∎

As a consequence, the improved estimators T_2 for δ_N and δ_N^∗ are consistent.

5. Conclusion

We have dealt with an interesting problem, namely the simultaneous estimation of M reliability functions, where M is random. Surprisingly, we are able to obtain shrinkage estimators improving upon the natural estimator, the UMVUE and a generalized Bayes estimator using a differential inequality approach. This result resembles Stein's phenomenon for the simultaneous estimation of location and scale parameters, where a differential inequality leads to shrinkage estimators improving upon the usual estimators (see Berger (1980) and Dasgupta (1986)).

References

Berger, J.O., 1980. Improving on inadmissible estimators in continuous exponential families with application to simultaneous estimation of gamma scale parameters. Ann. Statist. 8, 545–575.
Cohen, A., Sackrowitz, H., 1989. Two-stage conditionally unbiased estimators of the selected mean. Statist. Probab. Lett. 8, 273–278.
Dasgupta, A., 1986. Simultaneous estimation of multiparameter gamma distribution under quadratic losses. Ann. Statist. 14, 206–219.
Gupta, S.S., 1965. On some multiple decision (Selection and Ranking) rules. Technometrics 7, 225–245.
Gupta, S.S., Panchapakesan, S., 1979. Multiple Decision Procedures: Theory and Methodology of Selecting and Ranking Populations. John Wiley, New York.
Kumar, S., Kar, A., 2001a. Estimating quantiles of a selected exponential population. Statist. Probab. Lett. 52, 9–19.
Kumar, S., Kar, A., 2001b. Minimum variance unbiased estimation of quantile of a selected exponential population. Amer. J. Math. Management Sci. 21 (1-2), 183–191.
Robbins, H., 1988. The UV method of estimation. In: Gupta, S.S., Berger, J.O. (Eds.), Statistical Decision Theory and Related Topics-IV, Vol. 1. Springer-Verlag, New York, pp. 265–270.
Rubinstein, D., 1961. Estimation of the reliability of a system in development. Rep. R-61-ELC-1, Tech. Info. Service, GE Co.
Rubinstein, D., 1965. Estimation of failure rates in a dynamic reliability program. Rep. 65-RGO-7, Tech. Info. Service, GE Co.
Sackrowitz, H., Samuel-Cahn, E., 1984. Estimation of the mean of a selected negative exponential population. J. R. Statist. Soc., Ser. B 46, 242–249.
Vellaisamy, P., 1992. Inadmissibility results for the selected scale parameters. Ann. Statist. 20, 2183–2191.
Vellaisamy, P., 1996. A note on the estimation of the selected scale parameters. J. Statist. Plann. Inference 55, 39–46.
Vellaisamy, P., Punnen, A.P., 2002. Improved estimators for the selected location parameters. Statist. Papers 43, 291–299.
Vellaisamy, P., 2003. Quantile estimation of a selected exponential population. J. Statist. Plann. Inference 115, 461–470.
Vellaisamy, P., Jain, S., 2008. Estimating the parameter of the population selected from discrete exponential family. Statist. Probab. Lett. 78, 1076–1087.