
Estimation of the matrix variate elliptical model

Janet Van Niekerk

[email protected]

Andriette Bekker

[email protected]

Mohammad Arashi

m arashi [email protected]

and

Daan De Waal

[email protected]

Abstract: The problem of estimation within the matrix variate elliptical model is addressed. In this paper a subjective Bayesian approach is followed to derive new estimators for the parameters of the matrix variate elliptical model by assuming the previously intractable normal-Wishart prior. These new estimators are compared, using different measures, to the estimators derived under a normal-inverse Wishart prior as well as under the objective Jeffreys' prior, which results in the maximum likelihood estimators. A valuable contribution is the development of algorithms for the simulation of the posterior distributions of the matrix variate parameters, with emphasis on the new proposed estimators. A simulation study as well as Fisher's Iris data set are used to illustrate the novelty of these new estimators and to investigate the accuracy gained by assuming the normal-Wishart prior.

Keywords and phrases: Bayesian inference; Bessel function of matrix argument; characteristic matrix; matrix variate elliptical model; normal-inverse Wishart; normal-Wishart; squared error loss; maximum posterior mode.

AMS(2000) subject classification: 62F15, 62H12, 62H10, 60E05

1. Introduction

The objective of this paper is to derive estimators for the parameters of the matrix variate elliptical model from a subjective Bayesian viewpoint. Subjective analysis generally produces more admissible results than objective analysis, since added information is used. Although objective Bayesian analysis for the matrix variate elliptical model was considered by [8], very few results and estimators for the Bayesian analysis of model (1) exist. This paper contributes to the literature by presenting a more general subjective Bayesian estimation framework for the matrix case (1). Prior information will be reflected by using the normal-inverse Wishart and the normal-Wishart prior distributions respectively (see [20] for the multivariate elliptical model). The squared error loss function as well as the loss function defined by [5] will be used for the Bayesian inference. The matrix variate elliptical model with density function

f(X) = c_{s,p} |Σ|^{−s/2} |Ω|^{−p/2} h[tr((X − µ)′Σ^{−1}(X − µ)Ω^{−1})]   (1)


imsart-generic ver. 2014/10/16 file: Appendix.tex date: April 20, 2015


is the underlying data generating model, where ⊗ denotes the Kronecker product. The aim is to estimate the location and scale matrices, µ_{p×s} and Σ_{p×p} respectively, where Ω_{s×s} is assumed to be a known hyperparameter (see Section 6 for a discussion of this assumption). The considered prior distributions will thus be for µ_{p×s} and Σ_{p×p} (see [9]). The density function (1) can also be written as (see [4]):

f(X|µ, Σ) = ∫_0^∞ w(z) f_{N_{µ, z^{−1}Σ⊗Ω}}(X|µ, Σ) dz   (2)

for a scalar function w(z), where f_{N_{µ, z^{−1}Σ⊗Ω}}(·) is the matrix variate normal density function having the same expected value as X and scale matrix z^{−1}Σ ⊗ Ω. Note that ∫_0^∞ w(z) dz = 1 and that (2) is different from the class of normal scale mixtures. For normal scale mixtures, the weighting function is actually a density function with 0 ≤ w(z) ≤ 1 for all values of z. However, for the elliptical model given in (1), the weighting function does not have the same restriction but rather −1 ≤ w(z) ≤ 1 for all values of z. Note that w(z) is derived from the inverse Laplace transform of the density function of the particular elliptical model. See [1] and [2] for more details.

In Section 2, an incomplete type II Bessel function of matrix argument is defined. This follows from representing the cumulative distribution of the largest characteristic root of Σ for the Wishart prior. Section 3 deals with derivations of the posterior distributions and Bayes estimators of the parameters of the matrix variate elliptical model for the normal-inverse Wishart prior. The normal-Wishart prior forms the base of Section 4. The newly developed results of Sections 3 and 4 will subsequently be applied to particular subfamilies of the matrix variate elliptical distribution in Section 5. In Section 6, algorithms for simulating these posterior distributions and calculating the proposed estimators are presented as an illustration of the theoretical results, followed by the application to the well-known Fisher's Iris data set.

The benefit of this approach is that the Wishart distribution can now be utilized as a prior for the scale matrix, as opposed to the inverse Wishart distribution applied previously. This is demonstrated clearly in Section 6 by the comparative simulation study, where new algorithms are introduced for the calculation of the proposed estimators from a simulated data set, followed by the application to Fisher's Iris data set.

2. An incomplete Bessel function

The type II Bessel function of [14], B_δ(N), of matrix argument is defined by

B_δ(N) = ∫_{U>0} |U|^{−δ−(p+1)/2} etr(−NU − U^{−1}) dU   (3)

The integral is absolutely convergent for Re(N) > 0 if and only if −Re(δ) > (p−1)/2. We now present an extension to this function, as follows:

Definition 1. The incomplete type II Bessel function of matrix argument, denoted by B_δ(N; Q), is defined by

B_δ(N; Q) = ∫_{0<U<Q} |U|^{−δ−(p+1)/2} etr(−NU − U^{−1}) dU   (4)

where Re(N) > 0, Q > 0 and −Re(δ) > (p−1)/2.

Remark 1. Replacing N by ∆^{1/2}N∆^{1/2}, ∆ > 0, and changing the matrix variate M = ∆^{1/2}U∆^{1/2} with Jacobian |∆|^{−(p+1)/2} gives the following more general form:

B_δ(∆, N; R) = |∆|^δ ∫_{0<M<R} |M|^{−δ−(p+1)/2} etr(−NM − ∆M^{−1}) dM   (5)


with R = ∆^{1/2}Q∆^{1/2}.
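For p = 1, (3) reduces to the scalar integral B_δ(n) = ∫_0^∞ u^{−δ−1} exp(−nu − u^{−1}) du, which has the standard closed form 2 n^{δ/2} K_δ(2√n) in terms of the modified Bessel function of the second kind. The following sketch checks this numerically for arbitrary illustrative values of δ and n; the code is an independent check, not from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def bessel_type2(delta, n):
    """Type II Bessel function B_delta(n) of (3) for p = 1, by quadrature."""
    val, _ = quad(lambda u: u ** (-delta - 1.0) * np.exp(-n * u - 1.0 / u),
                  0.0, np.inf)
    return val

delta, n = -1.7, 2.5          # p = 1 requires -Re(delta) > 0; values are illustrative
closed_form = 2.0 * n ** (delta / 2.0) * kv(delta, 2.0 * np.sqrt(n))
assert np.isclose(bessel_type2(delta, n), closed_form, rtol=1e-6)
```

The incomplete version (4) is obtained by simply truncating the quadrature range at q instead of integrating to infinity.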

3. Normal-inverse Wishart prior

Let z be a known positive scalar and Σ a positive definite random matrix of dimension p. Let Ψ = z^{−1}Σ follow an inverse Wishart distribution with parameter Φ and m degrees of freedom. For any z > 0, a generated variate of Ψ will produce a generated variate of Σ since Σ = zΨ. Now assume a normal-inverse Wishart prior for (µ, Ψ) with prior distributions for µ and Ψ respectively, µ|Ψ ∼ N_{p,s}(θ_{p×s}, (1/n_0)Ψ_{p×p} ⊗ Ω_{s×s}) and Ψ ∼ W^{−1}(Φ, p, m). From eq. 2.2.1, p.55 and Definition 4.2.1, p.111 of [12] the prior densities are

π(µ|Ψ) = (2π)^{−sp/2} |(1/n_0)Ψ|^{−s/2} |Ω|^{−p/2} etr[−(n_0/2)Ψ^{−1}(µ−θ)Ω^{−1}(µ−θ)′],  µ ∈ R^{p×s}   (6)

and

π(Ψ) = [Γ_p((m−p−1)/2)]^{−1} |(1/2)Φ|^{(m−p−1)/2} |Ψ|^{−m/2} etr[−(1/2)Ψ^{−1}Φ],  Ψ > 0, Φ > 0 and m > 2p,   (7)

with the joint prior density function

π(µ, Ψ) ∝ |Ψ|^{−(m+s)/2} etr[−(1/2)Ψ^{−1}(n_0(µ−θ)Ω^{−1}(µ−θ)′ + Φ)]   (8)

with known hyperparameters θ, Ω, n_0 and m, where etr(·) denotes exp[tr(·)]. Now

π(µ, Σ|z) ∝ π(µ, Ψ)|J(Ψ → Σ)|
∝ z^{−p(p+1−m−s)/2} |Σ|^{−(m+s)/2} etr[−(z/2)Σ^{−1}(n_0(µ−θ)Ω^{−1}(µ−θ)′ + Φ)]   (9)

since the Jacobian is J(Ψ → Σ) = z^{−p(p+1)/2}. Following [1], to extend the results of [16] and [18], the conjugate prior for the matrix variate elliptical model can be obtained as

π(µ, Σ) ∝ ∫_0^∞ π(µ, Σ|z) w(z) dz   (10)

This representation of a prior distribution coincides with the representation in (2). It should be noted that in (2), z is not a random variable.

3.1. Posterior distributions

The likelihood function is obtained from (2) as follows:

L(µ, Σ|X̄, V) = ∏_{i=1}^n ∫_0^∞ w(z) f_{N_{µ, z^{−1}Σ⊗Ω}}(X_i|µ, Σ) dz
∝ ∫_0^∞ w(z) |z^{−1}Σ|^{−ns/2} |Ω|^{−np/2} etr[−(1/2) ∑_{i=1}^n (X_i − µ)′ zΣ^{−1}(X_i − µ)Ω^{−1}] dz
= ∫_0^∞ w(z) z^{snp/2} |Σ|^{−sn/2} |Ω|^{−np/2} etr[−(z/2)Σ^{−1}(V + n(X̄ − µ)Ω^{−1}(X̄ − µ)′)] dz   (11)


where

V = ∑_{i=1}^n (X_i − X̄)Ω^{−1}(X_i − X̄)′   (12)

From (9) and (11) the joint posterior density function is

q(µ, Σ|X̄, V) ∝ π(µ, Σ) L(µ, Σ|X̄, V)
= ∫_0^∞ z^{−p(p+1−m−s−ns)/2} w(z) |Σ|^{−(m+sn+s)/2}
× etr[−(z/2)Σ^{−1}(n_0(µ−θ)Ω^{−1}(µ−θ)′ + Φ)]
× etr[−(z/2)Σ^{−1}(V + n(X̄−µ)Ω^{−1}(X̄−µ)′)] dz   (13)

Theorem 1. The marginal posterior distribution of the location matrix, µ, under model (1) and prior (9) is a matrix variate t-distribution with parameters b, W and (1/(n+n_0))Ω and degrees of freedom (m+ns−2p), with density function

q(µ|X̄, V) = [Γ_p((1/2)(m+ns+s−p−1)) / (π^{sp/2} Γ_p((1/2)(m+ns−p−1)))] |(1/(n+n_0))Ω|^{−p/2} |W|^{−s/2}
× |I_p + W^{−1}(µ−b)((1/(n+n_0))Ω)^{−1}(µ−b)′|^{−(1/2)(m+ns+s−p−1)}   (14)

with W = (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + Φ + V, b = (1/(n+n_0))(nX̄ + n_0θ) and µ ∈ R^{p×s}.

Let

ς(α) = ∫_0^∞ z^α w(z) dz   (15)

provided this integral exists.
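For instance, if the weighting function happened to be a gamma density, w(z) = z^{a−1}e^{−z}/Γ(a) (an assumed toy case, not one of the paper's subfamilies), then ς(α) = Γ(a+α)/Γ(a); a quadrature sketch confirms (15) in that case:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a = 3.0                                   # assumed shape for the illustrative w(z)
w = lambda z: z ** (a - 1.0) * np.exp(-z) / gamma(a)

def varsigma(alpha):
    """varsigma(alpha) = integral_0^inf z^alpha w(z) dz, as in (15)."""
    val, _ = quad(lambda z: z ** alpha * w(z), 0.0, np.inf)
    return val

alpha = 1.5
assert np.isclose(varsigma(alpha), gamma(a + alpha) / gamma(a), rtol=1e-6)
```

Existence of ς(α) here just requires a + α > 0, mirroring the paper's "provided this integral exists" caveat.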

Theorem 2. The marginal posterior density function of the characteristic matrix, Σ, under model (1) and prior (9) is

q(Σ|X̄, V) = [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} |Σ|^{−(m+ns)/2}
× ∑_{k=0}^∞ [(−(1/2)tr[Σ^{−1}W])^k / k!] ς(−p(p+1−m−ns)/2 + k)   (16)

with W as defined in Theorem 1 and Σ > 0, provided ς(−p(p+1−m−ns)/2 + k) exists.

3.2. Statistical properties

In this section some statistical properties of the posterior distribution of Σ are derived. The characteristic function of Σ is obtained, followed by the joint density function of the eigenvalues of Σ, as well as the density function of the largest eigenvalue of Σ, amongst others.


Theorem 3. The characteristic function of Σ under model (1) and prior (9) is

ϕ(T) = [2^{−p(m+ns−p−1)/2} (−i)^{p(m+ns)/2 − p(p+1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} |T|^{(m+ns)/2 − (p+1)/2}
× ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z) B_{(m+ns−p−1)/2}(−(i/2)zTW) dz   (17)

with W as defined in Theorem 1, T_{p×p} a real positive definite arbitrary matrix and B_δ(·) the type II Bessel function of matrix argument (see (3)).

Proof. From Theorem 2 it follows that the characteristic function of Σ is

ϕ(T) = E[etr(iTΣ)|X̄, V]
= [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× ∫_{Σ>0} |Σ|^{−(m+ns)/2} etr[iTΣ] etr[−(z/2)Σ^{−1}W] dΣ dz

Using eq. 1.6.18 on p.39 of [12], (17) follows.

Theorem 4. The joint density function of the eigenvalues Λ = diag(λ_1, ..., λ_p), λ_p > ... > λ_1 > 0, of Σ under model (1) and prior (9) is

g(Λ) = [π^{p²/2} / Γ_p(p/2)] ∏_{i<j}(λ_i − λ_j) · [1 / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∏_{i=1}^p λ_i^{−(m+ns)/2}
× ∑_{k=0}^∞ ∑_κ (−1)^k 2^{−p(m+ns−p−1)/2 − k} [C_κ(Λ^{−1}) C_κ(W) / (k! C_κ(I_p))] ς(−p(p+1−m−ns)/2 + k)   (18)

with W as defined in Theorem 1 and C_κ(·) the zonal polynomial (see [12]) corresponding to κ, provided ς(−p(p+1−m−ns)/2 + k) exists.

Theorem 5. The cumulative distribution function of Σ under model (1) and prior (9) is given by

P(Σ < A) = ∑_{l=0}^∞ ∑_{k=0}^∞ ∑_κ 2^{−k−l} (−1)^l [tr(A^{−1}W)]^l / (k! l!) ς(l+k) C_κ(A^{−1}W)   (19)

for any A > 0, provided ς(l+k) exists, with W as defined in Theorem 1.

Remark 2. Note that λ_(p) < a is equivalent to Σ < aI_p since HΛH′ = Σ. To obtain the cumulative distribution function of λ_(p), the largest eigenvalue of Σ under model (1) and prior (9), the previous theorem can be used with A = aI_p. Also, λ_(1) > b is equivalent to Σ > bI_p.
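The Loewner-order equivalence in Remark 2 is easy to confirm numerically; the sketch below uses an arbitrary random positive definite Σ (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p))
Sigma = A @ A.T + np.eye(p)               # arbitrary positive definite matrix

lam_max = np.linalg.eigvalsh(Sigma).max() # largest eigenvalue lambda_(p)
for a in (0.5 * lam_max, 2.0 * lam_max):
    # Sigma < a I_p in the Loewner order iff a I_p - Sigma is positive definite.
    loewner_less = np.all(np.linalg.eigvalsh(a * np.eye(p) - Sigma) > 0)
    assert loewner_less == (lam_max < a)
```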

Theorem 6. The cumulative distribution function of λ_(p), the largest eigenvalue of Σ under model (1) and prior (9), is

F_{λ(p)}(a) = ∑_{l=0}^∞ ∑_{k=0}^∞ ∑_κ (2a)^{−k−l} (−1)^l [tr(W)]^l / (k! l!) ς(l+k) C_κ(W)   (20)

provided ς(l+k) exists and with W as defined in Theorem 1.


Proof. From (19) the cumulative distribution function of λ_(p), the largest eigenvalue of Σ under model (1) and prior (9), is

F_{λ(p)}(a) = P(Σ < aI_p)
= ∑_{l=0}^∞ ∑_{k=0}^∞ ∑_κ 2^{−k−l} (−1)^l [tr((1/a)W)]^l / (k! l!) ς(l+k) C_κ((1/a)W)

3.3. Bayesian inference

In this section Bayes estimators are derived for the matrix parameters utilizing the two loss functions mentioned in Section 1.

Theorem 7. Under the squared error loss function, the Bayes estimator of the location matrix, µ, under model (1) and prior (9) is the posterior mean (PM estimator) of µ, therefore

µ̂ = b = (1/(n+n_0))(nX̄ + n_0θ)   (21)

Proof. From Theorem 4.3.1, p.135 of [12], the marginal posterior distribution of the location matrix, µ, is a matrix variate t-distribution with parameters b, W and (1/(n+n_0))Ω and degrees of freedom (m+ns−2p) from (14). Under the squared error loss function the Bayes estimator for µ is

µ̂ = E[µ|X̄, V] = b   (22)
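Computationally, the PM estimator (21) and the matrix W of Theorem 1 involve only sample quantities. A minimal sketch, assuming simulated data and arbitrary hyperparameters θ, Ω, Φ and n_0 (all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
p, s, n = 2, 3, 50
n0 = 4.0                                  # assumed prior sample size
theta = np.zeros((p, s))                  # assumed prior location
Omega = np.eye(s)                         # known column scale (assumption)
Phi = np.eye(p)                           # assumed prior scale parameter

X = rng.standard_normal((n, p, s))        # simulated sample X_1, ..., X_n
Xbar = X.mean(axis=0)
Omega_inv = np.linalg.inv(Omega)

# V of (12) and W of Theorem 1.
V = sum((Xi - Xbar) @ Omega_inv @ (Xi - Xbar).T for Xi in X)
W = (n * n0 / (n + n0)) * (Xbar - theta) @ Omega_inv @ (Xbar - theta).T + Phi + V

# PM estimator: posterior mean b combining sample mean and prior location (theta = 0 here).
b = (n * Xbar + n0 * theta) / (n + n0)

assert b.shape == (p, s) and W.shape == (p, p)
```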

Remark 3. Under the squared error loss function, the Bayes estimator of Σ under model (1) and prior (9) is the posterior mean (PM estimator) of Σ, hence

Σ̂ = E[Σ|X̄, V]   (23)

Remark 4. Under the loss function L(ω, ω̂) = log[q(ω̂|X̄,V) / q(ω|X̄,V)] (see Theorem 1 of [5]), the Bayes estimators of µ and Σ, respectively, are the modes of the respective posterior distributions (MAP estimators).

Lemma 8. The hth posterior moment of |Σ| under model (1) and prior (9) is

m_h = E[|Σ|^h | X̄, V] = 2^{−ph} ς(ph) [Γ_p((m+ns−p−1−2h)/2) / Γ_p((m+ns−p−1)/2)] |W|^h   (24)

with W as defined in Theorem 1, provided ς(ph) exists.

Proof. From (16) and Definition 4.2.1, p.111 of [12] the hth posterior moment is given by

E[|Σ|^h | X̄, V] = [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× ∫_{Σ>0} |Σ|^{−(m+ns−2h)/2} etr[−(z/2)Σ^{−1}W] dΣ dz   (25)

and (24) follows.


Theorem 9. Under the squared error loss function, the Bayes estimator of |Σ| under model (1) and prior (9) is

\widehat{|Σ|} = 2^{−p} ς(p) [Γ_p((m+ns−p−3)/2) / Γ_p((m+ns−p−1)/2)] |W|   (26)

with W as defined in Theorem 1, provided ς(p) exists.

Proof. The result is immediate from (24) with h = 1.
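For the matrix variate normal subfamily, w(z) is a point mass at z = 1, so ς(p) = 1 and (26) can be evaluated directly with the multivariate gamma function. A sketch under that assumption, with arbitrary illustrative values of p, s, n, m and W:

```python
import numpy as np
from scipy.special import multigammaln

p, s, n, m = 2, 3, 50, 10                 # illustrative values with m + ns large enough
W = np.array([[3.0, 0.5], [0.5, 2.0]])    # an arbitrary positive definite W

# (26) with varsigma(p) = 1 (normal subfamily): evaluate the Gamma_p ratio on the log scale
# to avoid overflow, then multiply by 2^{-p} |W|.
log_ratio = (multigammaln((m + n * s - p - 3) / 2.0, p)
             - multigammaln((m + n * s - p - 1) / 2.0, p))
det_Sigma_hat = 2.0 ** (-p) * np.exp(log_ratio) * np.linalg.det(W)

assert det_Sigma_hat > 0
```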

Remark 5. The results derived for model (1) and prior (9) simplify for s = 1 to the results obtained for the multivariate elliptical model, and for s = 1, p = 1 to the univariate elliptical model (see [20]).

The question arises whether the normal-Wishart prior performs as well as or better than the normal-inverse Wishart prior. To this end, we need to study the Bayesian analysis of the elliptical model under the latter prior structure, which is the focus of the forthcoming section.

4. Normal-Wishart prior

In this section the normal-Wishart prior for the matrix variate elliptical model is considered. Similarly to Section 3, the prior distributions for µ and Ψ = z^{−1}Σ are, from eq. 2.2.1 on p.55 and eq. 3.2.1 on p.87 of [12], µ|Ψ ∼ N_{p,s}(θ_{p×s}, (1/n_0)Ψ_{p×p} ⊗ Ω_{s×s}) and Ψ ∼ W(Φ, p, m), with density functions

π(µ|Ψ) = (2π)^{−sp/2} |(1/n_0)Ψ|^{−s/2} |Ω|^{−p/2} etr[−(n_0/2)Ψ^{−1}(µ−θ)Ω^{−1}(µ−θ)′],  µ ∈ R^{p×s}   (27)

and

π(Ψ) = 2^{−mp/2} [Γ_p(m/2)]^{−1} |Φ|^{−m/2} |Ψ|^{(m−p−1)/2} etr[−(1/2)ΨΦ^{−1}],  Ψ > 0, Φ > 0 and m ≥ p,   (28)

with the joint prior density function

π(µ, Ψ) ∝ |Ψ|^{(m−p−1−s)/2} etr[−(n_0/2)Ψ^{−1}(µ−θ)Ω^{−1}(µ−θ)′] etr[−(1/2)ΨΦ^{−1}]   (29)

It then follows that

π(µ, Σ|z) ∝ z^{−p(m−s)/2} |Σ|^{(m−p−1−s)/2} etr[−(n_0 z/2)Σ^{−1}(µ−θ)Ω^{−1}(µ−θ)′] etr[−(1/(2z))ΣΦ^{−1}]   (30)

4.1. Posterior distributions

From (11) and (30) the joint posterior density function follows as

q(µ, Σ|X̄, V) ∝ ∫_0^∞ z^{−p(m−s−sn)/2} w(z) |Σ|^{(m−p−1−s−ns)/2} etr[−(1/(2z))ΣΦ^{−1}]
× etr[−(n_0 z/2)Σ^{−1}(µ−θ)Ω^{−1}(µ−θ)′]
× etr[−(z/2)Σ^{−1}(V + n(X̄−µ)Ω^{−1}(X̄−µ)′)] dz   (31)

with V as defined in (12).


Theorem 10. The marginal posterior density function of the location matrix, µ, under model (1) and prior (30) is

q(µ|X̄, V) = [|Ω|^{−p/2} (n+n_0)^{sp/2} / ((2π)^{sp/2} B_{(−m+ns)/2}((1/4)Φ^{−1}Y))] |2Φ|^{−s/2}
× B_{(−m+s+ns)/2}((1/4)Φ^{−1}(Y + (n+n_0)(µ−b)Ω^{−1}(µ−b)′))   (32)

with Y = (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + V and µ ∈ R^{p×s}.

Remark 6. The marginal posterior distribution of the location matrix in matrix elliptical models is robust with respect to departures from normality, under the non-conjugate normal-Wishart prior.

Theorem 11. The marginal posterior density function of the characteristic matrix, Σ, under model (1) and prior (30) is

q(Σ|X̄, V) = [2^{p(−m+ns)/2} / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2} |Σ|^{(m−p−1−ns)/2}
× ∑_{k=0}^∞ ∑_{l=0}^∞ (−2)^{−k−l} [(tr[ΣΦ^{−1}])^k (tr[Σ^{−1}Y])^l / (k! l!)] ς(−p(m−sn)/2 − k + l)   (33)

with Y as defined in Theorem 10 and Σ > 0, provided ς(−p(m−sn)/2 − k + l) exists.

4.2. Statistical properties

Theorem 12. The characteristic function of Σ under model (1) and prior (30) is

ϕ(T) = [2^{p(−m+ns)/2} / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2} ∫_0^∞ z^{−p(m−sn)/2} w(z)
× |(1/(2z))Φ^{−1} − iT′|^{(−m+ns)/2} B_{(−m+ns)/2}[(1/4)(Φ^{−1} − 2izT′)Y] dz   (34)

with Y as defined in Theorem 10 and T_{p×p} a real positive definite arbitrary matrix.

Proof. From (33) the characteristic function of Σ|X̄, V is given by

ϕ(T) = [2^{p(−m+ns)/2} / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2} ∫_0^∞ z^{−p(m−sn)/2} w(z)
× ∫_{Σ>0} |Σ|^{(m−p−1−ns)/2} etr[−(1/(2z))ΣΦ^{−1} + iT′Σ] etr[−(z/2)Σ^{−1}Y] dΣ dz

Applying eq. 1.6.18, p.39 of [12] the proof is complete.


Theorem 13. The joint density function of the eigenvalues Λ = diag(λ_1, ..., λ_p), λ_p > ... > λ_1 > 0, of Σ under model (1) and prior (30) has the form

g(Λ) = [π^{p²/2} / Γ_p(p/2)] ∏_{i<j}(λ_i − λ_j) · [2^{p(−m+ns)/2} / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2} ∏_{i=1}^p λ_i^{(m−p−1−ns)/2}
× ∑_{k=0}^∞ ∑_κ ∑_{l=0}^∞ ∑_λ (1/(k! l!)) (−1/2)^{k+l} ∑_{φ∈κ·λ} [C_{κ,λ}^φ(Λ, Λ^{−1}) C_{κ,λ}^φ(Φ^{−1}, Y) / C_φ(I_p)] ς(−p(m−sn)/2 − k + l)   (35)

with Y as defined in Theorem 10, provided ς(−p(m−sn)/2 − k + l) exists.

Theorem 14. The cumulative distribution function of Σ under model (1) and prior (30) is

P(Σ < A) = [2^{p(−m+ns)/2} ς(−p(m−sn)/2) B_{(−m+ns)/2}((1/4)Φ^{−1}Y; I_p) / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2} |A|^{(m−ns)/2}   (36)

for any A > 0 and B_{(−m+ns)/2}(·; ·) the incomplete type II Bessel function of matrix argument (see (5)), with Y as defined in Theorem 10 and provided ς(−p(m−sn)/2) exists.

Theorem 15. The cumulative distribution function of λ_(p), the largest eigenvalue of Σ under model (1) and prior (30), has the form

F_{λ(p)}(a) = [2^{p(−m+ns)/2} a^{p(m−ns)/2} ς(−p(m−sn)/2) B_{(−m+ns)/2}((1/4)Φ^{−1}Y; I_p) / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2}   (37)

with Y as defined in Theorem 10 and provided ς(−p(m−sn)/2) exists.

Proof. From (36) with A = aI_p, the cumulative distribution function of λ_(p) follows easily.

4.3. Bayesian inference

Theorem 16. Under the squared error loss function, the Bayes estimator (PM estimator) of the location matrix, µ, under model (1) and prior (30) is

µ̂ = b = (1/(n+n_0))(nX̄ + n_0θ)   (38)

Proof. Under the squared error loss function the Bayes estimator of µ is the posterior mean, i.e. µ̂ = E[µ|X̄, V]. Note that the expected value of (µ − b) is

E[µ − b|X̄, V] = [|Ω|^{−p/2} (n+n_0)^{sp/2} / ((2π)^{sp/2} B_{(−m+ns)/2}((1/4)Φ^{−1}Y))] |2Φ|^{−s/2} ∫_µ (µ − b)
× B_{(−m+s+ns)/2}((1/4)Φ^{−1}(Y + (n+n_0)(µ−b)Ω^{−1}(µ−b)′)) dµ

This is an integral of an odd function and hence E[µ − b|X̄, V] = 0. Therefore from Theorem 10 the Bayes estimator of the location matrix, µ, is µ̂ = b = (1/(n+n_0))(nX̄ + n_0θ).

Lemma 17. The hth posterior moment of |Σ| under model (1) and prior (30) is

m_h = E[|Σ|^h | X̄, V] = 2^{ph} ς(ph) [B_{(−m+ns−2h)/2}((1/4)Φ^{−1}Y) / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^h   (39)

with Y as defined in Theorem 10, provided ς(ph) exists.

Proof. From (33), the hth posterior moment is given by

E[|Σ|^h | X̄, V] = [2^{p(−m+ns)/2} / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|^{(−m+ns)/2} ∫_0^∞ z^{−p(m−sn)/2} w(z)
× ∫_{Σ>0} |Σ|^{(m−p−1−ns+2h)/2} etr[−(1/(2z))ΣΦ^{−1}] etr[−(z/2)Σ^{−1}Y] dΣ dz   (40)

Applying eq. 1.6.18, p.39 of [12], the proof is complete.

Theorem 18. Under the squared error loss function, the Bayes estimator of |Σ| under model (1) and prior (30) is

\widehat{|Σ|} = 2^p ς(p) [B_{(−m+ns−2)/2}((1/4)Φ^{−1}Y) / B_{(−m+ns)/2}((1/4)Φ^{−1}Y)] |Φ|   (41)

with Y as defined in Theorem 10, provided ς(p) exists.

Proof. The result is immediate from (39) with h = 1.
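For p = 1 the Bessel ratio in (41) is computable from the scalar identity B_δ(t) = 2 t^{δ/2} K_δ(2√t), the p = 1 case of (3). The sketch below assumes the normal subfamily (so ς(1) = 1) and arbitrary scalar values of Φ, Y, m, n and s; it is illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.special import kv

def bessel_type2_p1(delta, t):
    """Scalar (p = 1) type II Bessel function: B_delta(t) = 2 t^{delta/2} K_delta(2 sqrt(t))."""
    return 2.0 * t ** (delta / 2.0) * kv(delta, 2.0 * np.sqrt(t))

m, n, s = 6, 40, 1                        # illustrative values
Phi, Y = 2.0, 35.0                        # arbitrary positive scalars
t = Y / (4.0 * Phi)                       # argument (1/4) Phi^{-1} Y of (41)

delta = (-m + n * s) / 2.0
# (41) for p = 1 with varsigma(1) = 1 (normal subfamily): 2 * B_{delta-1}(t)/B_delta(t) * Phi.
sigma_hat = 2.0 * bessel_type2_p1(delta - 1.0, t) / bessel_type2_p1(delta, t) * Phi

assert sigma_hat > 0
```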

Remark 7. The results derived for model (1) and prior (30) simplify for s = 1 to the results obtained for the multivariate elliptical model, and for s = 1, p = 1 to the univariate elliptical model (see [20]).

5. Appendix A

5.1. Proof of Theorem 1.

Proof. From (13) it follows that

q(µ|X̄, V) ∝ ∫_0^∞ z^{−p(p+1−m−s−ns)/2} w(z) ∫_{Σ>0} |Σ|^{−(m+sn+s)/2}
× etr[−(z/2)Σ^{−1}(n_0(µ−θ)Ω^{−1}(µ−θ)′ + Φ)]
× etr[−(z/2)Σ^{−1}(V + n(X̄−µ)Ω^{−1}(X̄−µ)′)] dΣ dz   (42)

Let

A* = V + n(X̄ − µ)Ω^{−1}(X̄ − µ)′   (43)


with V as defined in (12) and

B = n_0(µ − θ)Ω^{−1}(µ − θ)′ + Φ,   (44)

then from (42) it follows that

q(µ|X̄, V) ∝ ∫_0^∞ z^{−p(p+1−m−s−ns)/2} w(z) ∫_{Σ>0} |Σ|^{−(m+ns+s)/2} etr[−(z/2)Σ^{−1}(B + A*)] dΣ dz

Note that one can identify, from the above, the kernel of an inverse Wishart distribution with parameters z(B + A*), p and m+ns+s. The marginal posterior density function of the location matrix µ is then given by

q(µ|X̄, V) ∝ ∫_0^∞ z^{−p(p+1−m−s−ns)/2} z^{−p(m+ns+s−p−1)/2} w(z) |B + A*|^{−(m+ns+s−p−1)/2} dz
= |B + A*|^{−(m+ns+s−p−1)/2}   (45)

since ∫_0^∞ w(z) dz = 1. We note that

n(X̄−µ)Ω^{−1}(X̄−µ)′ + n_0(µ−θ)Ω^{−1}(µ−θ)′
= (n+n_0)(µ−b)Ω^{−1}(µ−b)′ + (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′   (46)

with

b = (1/(n+n_0))(nX̄ + n_0θ)   (47)

Hence from (43), (44) and (46) it follows that

B + A* = (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + Φ + V + (n+n_0)(µ−b)Ω^{−1}(µ−b)′
= W + (n+n_0)(µ−b)Ω^{−1}(µ−b)′   (48)

with

W = (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + Φ + V   (49)
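The completing-the-square identity (46) underlying this step can be verified numerically with random matrices; the sketch uses b = (nX̄ + n_0θ)/(n + n_0), the centering for which the two sides match (an illustrative check, not part of the paper).

```python
import numpy as np

rng = np.random.default_rng(3)
p, s, n, n0 = 2, 3, 20.0, 5.0
Xbar = rng.standard_normal((p, s))
theta = rng.standard_normal((p, s))
mu = rng.standard_normal((p, s))
B = rng.standard_normal((s, s))
Omega_inv = np.linalg.inv(B @ B.T + s * np.eye(s))   # arbitrary positive definite Omega

b = (n * Xbar + n0 * theta) / (n + n0)

lhs = (n * (Xbar - mu) @ Omega_inv @ (Xbar - mu).T
       + n0 * (mu - theta) @ Omega_inv @ (mu - theta).T)
rhs = ((n + n0) * (mu - b) @ Omega_inv @ (mu - b).T
       + (n * n0 / (n + n0)) * (Xbar - theta) @ Omega_inv @ (Xbar - theta).T)

assert np.allclose(lhs, rhs)
```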

Therefore from (45) and (48) the marginal posterior density kernel is

q(µ|X̄, V) ∝ |W + (n+n_0)(µ−b)Ω^{−1}(µ−b)′|^{−(m+ns+s−p−1)/2}

From eq. 4.2.1, p.134 of [12], the kernel can be identified as that of a matrix variate t-distribution with parameters b, W and (1/(n+n_0))Ω and degrees of freedom (m+ns−2p).

5.2. Proof of Theorem 2.

Proof. From (13) it follows that

q(Σ|X̄, V) ∝ ∫_0^∞ z^{−p(p+1−m−s−ns)/2} w(z) ∫_µ |Σ|^{−(m+sn+s)/2}
× etr[−(z/2)Σ^{−1}(n_0(µ−θ)Ω^{−1}(µ−θ)′ + Φ)]
× etr[−(z/2)Σ^{−1}(V + n(X̄−µ)Ω^{−1}(X̄−µ)′)] dµ dz


From (46) and (47) it follows that

q(Σ|X̄, V) = ∫_0^∞ z^{−p(p+1−m−s−ns)/2} w(z) |Σ|^{−(m+ns+s)/2}
× etr[−(z/2)Σ^{−1}((nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + Φ + V)]
× ∫_µ etr[−(1/2)((1/(n+n_0))z^{−1}Σ)^{−1}(µ−b)Ω^{−1}(µ−b)′] dµ dz   (50)

From eq. 2.2.1, p.55 of [12] the latter integral can be solved as

∫_µ etr[−(1/2)((1/(n+n_0))z^{−1}Σ)^{−1}(µ−b)Ω^{−1}(µ−b)′] dµ = (2π)^{sp/2} |(1/(n+n_0))z^{−1}Σ|^{s/2} |Ω|^{p/2}   (51)

Therefore from (49), (50), (51) and using the Taylor series expansion it follows that

q(Σ|X̄, V) ∝ ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z) |Σ|^{−(m+ns)/2}
× etr[−(z/2)Σ^{−1}((nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + Φ + V)] dz   (52)
= |Σ|^{−(m+ns)/2} ∑_{k=0}^∞ [(−(1/2)tr[Σ^{−1}W])^k / k!] ∫_0^∞ z^{−p(p+1−m−ns)/2 + k} w(z) dz

Hence

q(Σ|X̄, V) ∝ |Σ|^{−(m+ns)/2} ∑_{k=0}^∞ [(−(1/2)tr[Σ^{−1}W])^k / k!] ς(−p(p+1−m−ns)/2 + k)   (53)

and the normalizing constant, c, equals

(∫_0^∞ z^{−p(p+1−m−ns)/2} w(z) ∫_{Σ>0} |Σ|^{−(m+ns)/2} etr[−(z/2)Σ^{−1}W] dΣ dz)^{−1}

From Definition 3.4.1, p.111 of [12] and the fact that ∫_0^∞ w(z) dz = 1 it follows that

c^{−1} = 2^{p(m+ns−p−1)/2} Γ_p((m+ns−p−1)/2) |W|^{−(m+ns−p−1)/2}   (54)

Therefore from (53) and (54), (16) follows immediately.

5.3. Proof of Theorem 4

Proof. From Theorem 3.2.17, p.104 of [17] the joint density function of the eigenvalues of Σ is given by

g(Λ) = [π^{p²/2} / Γ_p(p/2)] ∏_{i<j}(λ_i − λ_j) ∫_{O(p)} q(HΛH′|X̄, V) dH   (55)


for any H ∈ O(p), with dH the normalized Haar invariant measure on O(p) (see [17], p.72). Note that from (52)

∫_{O(p)} q(HΛH′|X̄, V) dH = [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} |Λ|^{−(m+ns)/2}
× ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z) ∫_{O(p)} etr[−(z/2)HΛ^{−1}H′W] dH dz

We note that ∫_{O(p)} etr[−(z/2)HΛ^{−1}H′W] dH = ₀F₀^{(p)}(Λ^{−1}, −(z/2)W), where ₀F₀^{(p)}(·,·) is the hypergeometric function of two matrix arguments (see Definition 7.3.2, p.259 of [17]), and hence

∫_0^∞ z^{−p(p+1−m−ns)/2} w(z) ∫_{O(p)} etr[−(z/2)HΛ^{−1}H′W] dH dz
= ∑_{k=0}^∞ ∑_κ (−1)^k 2^{−k} [C_κ(Λ^{−1}) C_κ(W) / (k! C_κ(I_p))] ς(−p(p+1−m−ns)/2 + k)

The proof is complete since C_κ(aΣ^{−1}) = a^k C_κ(Σ^{−1}) from eq. 2.7, p.467 of [6].

5.4. Proof of Theorem 5

Proof. From Theorem 2 it follows that

P(Σ < A) = [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× ∫_{0<Σ<A} |Σ|^{−(m+ns)/2} etr[−(z/2)Σ^{−1}W] dΣ dz

Now make the consecutive transformations Θ = Σ^{−1}, with Jacobian J(Σ → Θ) = |Θ|^{−(p+1)}, and U = Θ − A^{−1} to obtain

P(Σ < A) = [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× ∫_{Θ>A^{−1}} |Θ|^{(m+ns)/2 − p − 1} etr[−(z/2)ΘW] dΘ dz
= [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× |A|^{−(m+ns)/2 + p + 1} etr[−(z/2)A^{−1}W] ∫_{U>0} |AU + I|^{(m+ns)/2 − p − 1} etr[−(z/2)UW] dU dz


Using the fact that

|AU + I|^{(m+ns)/2 − p − 1} = |AU|^{(m+ns)/2 − p − 1} |I + (AU)^{−1}|^{(m+ns)/2 − p − 1}

and

|I + (AU)^{−1}|^{(m+ns)/2 − p − 1} = ∑_{k=0}^∞ ∑_κ (−(m+ns)/2 + p + 1)_κ [(−1)^k / k!] C_κ(U^{−1}A^{−1})

we get

P(Σ < A) = [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× etr[−(z/2)A^{−1}W] ∑_{k=0}^∞ ∑_κ (−(m+ns)/2 + p + 1)_κ [(−1)^k / k!]
× ∫_{U>0} |U|^{(m+ns)/2 − p − 1} etr[−(z/2)UW] C_κ(U^{−1}A^{−1}) dU dz
= [2^{−p(m+ns−p−1)/2} / Γ_p((m+ns−p−1)/2)] |W|^{(m+ns−p−1)/2} ∫_0^∞ z^{−p(p+1−m−ns)/2} w(z)
× etr[−(z/2)A^{−1}W] ∑_{k=0}^∞ ∑_κ (−(m+ns)/2 + p + 1)_κ [(−1)^k / k!]
× Γ_p((m+ns)/2 − (p+1)/2, −κ) |(1/2)zW|^{−[(m+ns)/2 − (p+1)/2]} C_κ((1/2)zA^{−1}W) dz   (56)
= ∑_{l=0}^∞ ∑_{k=0}^∞ ∑_κ 2^{−k−l} (−1)^l [tr(A^{−1}W)]^l / (k! l!) ς(l+k) C_κ(A^{−1}W)

From eq. 3.18, p.12 of [15] and eq. 1.5.9 of [12] the proof is complete.

6. Appendix B

6.1. Proof of Theorem 10

Proof. With V as defined in (12), the marginal posterior density for µ follows from (31) as

q(µ|X̄, V) ∝ ∫_0^∞ z^{−p(m−s−sn)/2} w(z) ∫_{Σ>0} |Σ|^{(m−p−1−s−ns)/2} etr[−(1/(2z))ΣΦ^{−1}]
× etr[−(n_0 z/2)Σ^{−1}(µ−θ)Ω^{−1}(µ−θ)′]
× etr[−(z/2)Σ^{−1}(V + n(X̄−µ)Ω^{−1}(X̄−µ)′)] dΣ dz   (57)

With A* as defined in (43),

D = n_0(µ − θ)Ω^{−1}(µ − θ)′   (58)


and applying eq. 1.6.18, p.39 of [12], it follows from (57) that

q(µ|X̄, V) ∝ ∫_0^∞ z^{−p(m−s−sn)/2} w(z) ∫_{Σ>0} |Σ|^{(m−p−1−s−ns)/2}
× etr[−(1/(2z))ΣΦ^{−1}] etr[−(z/2)Σ^{−1}(D + A*)] dΣ dz
= ∫_0^∞ z^{−p(m−s−sn)/2} w(z) B_{(−m+s+ns)/2}((1/4)Φ^{−1}(D + A*)) |2zΦ|^{(m−s−ns)/2} dz
∝ ∫_0^∞ z^{−p(m−s−sn)/2} z^{p(m−s−ns)/2} w(z) B_{(−m+s+ns)/2}((1/4)Φ^{−1}(D + A*)) dz
= B_{(−m+s+ns)/2}((1/4)Φ^{−1}(D + A*))   (59)

since ∫_0^∞ w(z) dz = 1. From (43), (46) and (58) it follows that

D + A* = (n+n_0)(µ−b)Ω^{−1}(µ−b)′ + (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + V
= Y + (n+n_0)(µ−b)Ω^{−1}(µ−b)′   (60)

where

Y = (nn_0/(n+n_0))(X̄−θ)Ω^{−1}(X̄−θ)′ + V   (61)

with b as defined in (47). Therefore from (59) and (60) we get

q(µ|X,V) = cB−m+s+ns2

(1

4Φ−1

(Y+(n+ n0)(µ− b)Ω−1(µ− b)′

))The normalizing constant, c, is calculated by using the integral presentation of the Bessel function asfollows (eq. 1.6.18, p.39 of [12])

\[
\begin{aligned}
c^{-1} ={}& \int_{\boldsymbol{\mu}} B_{\frac{-m+s+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\left(\mathbf{Y}+(n+n_{0})(\boldsymbol{\mu}-\mathbf{b})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\mathbf{b})'\right)\right) d\boldsymbol{\mu} \\
={}& \int_{\boldsymbol{\mu}} \int_{\mathbf{U}>0} \mathrm{etr}\!\left(-\frac{1}{2}\mathbf{U}^{-1}\left(\mathbf{Y}+(n+n_{0})(\boldsymbol{\mu}-\mathbf{b})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\mathbf{b})'\right)\right) \mathrm{etr}\!\left(-\frac{1}{2}\boldsymbol{\Phi}^{-1}\mathbf{U}\right) \\
&\times |\mathbf{U}|^{-\frac{1}{2}(-m+s+ns)-\frac{p+1}{2}}\, \left|\tfrac{1}{2}\boldsymbol{\Phi}^{-1}\right|^{-\frac{1}{2}(-m+s+ns)} d\mathbf{U}\, d\boldsymbol{\mu} \\
={}& \left|\tfrac{1}{2}\boldsymbol{\Phi}^{-1}\right|^{-\frac{1}{2}(-m+s+ns)} \int_{\mathbf{U}>0} \mathrm{etr}\!\left(-\frac{1}{2}\boldsymbol{\Phi}^{-1}\mathbf{U}\right) \mathrm{etr}\!\left(-\frac{1}{2}\mathbf{U}^{-1}\mathbf{Y}\right) |\mathbf{U}|^{-\frac{1}{2}(-m+s+ns)-\frac{p+1}{2}} \\
&\times \int_{\boldsymbol{\mu}} \mathrm{etr}\!\left(-\frac{n+n_{0}}{2}\mathbf{U}^{-1}(\boldsymbol{\mu}-\mathbf{b})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\mathbf{b})'\right) d\boldsymbol{\mu}\, d\mathbf{U}.
\end{aligned}
\]
We note from eq. 2.2.1, p. 55 of [12] that
\[
\int_{\boldsymbol{\mu}} \mathrm{etr}\!\left(-\frac{n+n_{0}}{2}\mathbf{U}^{-1}(\boldsymbol{\mu}-\mathbf{b})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\mathbf{b})'\right) d\boldsymbol{\mu} = (2\pi)^{\frac{sp}{2}} \left|\frac{1}{n+n_{0}}\mathbf{U}\right|^{\frac{s}{2}} |\boldsymbol{\Omega}|^{\frac{p}{2}}.
\]


Using eq. 1.6.18, p. 39 of [12], it follows that
\[
\begin{aligned}
c^{-1} ={}& (2\pi)^{\frac{sp}{2}}\, |\boldsymbol{\Omega}|^{\frac{p}{2}}\, (n+n_{0})^{-\frac{sp}{2}} \left|\tfrac{1}{2}\boldsymbol{\Phi}^{-1}\right|^{-\frac{1}{2}(-m+s+ns)} \int_{\mathbf{U}>0} \mathrm{etr}\!\left(-\frac{1}{2}\boldsymbol{\Phi}^{-1}\mathbf{U}\right) \mathrm{etr}\!\left(-\frac{1}{2}\mathbf{U}^{-1}\mathbf{Y}\right) |\mathbf{U}|^{-\frac{1}{2}(-m+ns)-\frac{p+1}{2}}\, d\mathbf{U} \\
={}& (2\pi)^{\frac{sp}{2}}\, |\boldsymbol{\Omega}|^{\frac{p}{2}}\, (n+n_{0})^{-\frac{sp}{2}} \left|\tfrac{1}{2}\boldsymbol{\Phi}^{-1}\right|^{-\frac{s}{2}} B_{\frac{-m+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\mathbf{Y}\right).
\end{aligned}
\]
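The identity (60) is a matrix completion of squares. Taking \(\mathbf{b} = (n\mathbf{X} + n_{0}\boldsymbol{\theta})/(n+n_{0})\), which we read as the definition in (47), the omitted algebra is the identity

```latex
\[
n_{0}(\boldsymbol{\mu}-\boldsymbol{\theta})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\boldsymbol{\theta})'
+ n(\mathbf{X}-\boldsymbol{\mu})\boldsymbol{\Omega}^{-1}(\mathbf{X}-\boldsymbol{\mu})'
= (n+n_{0})(\boldsymbol{\mu}-\mathbf{b})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\mathbf{b})'
+ \frac{nn_{0}}{n+n_{0}}(\mathbf{X}-\boldsymbol{\theta})\boldsymbol{\Omega}^{-1}(\mathbf{X}-\boldsymbol{\theta})'.
\]
```

Expanding both sides confirms it: the coefficient of \(\boldsymbol{\mu}\boldsymbol{\Omega}^{-1}\boldsymbol{\mu}'\) is \(n+n_{0}\) on each side, the terms linear in \(\boldsymbol{\mu}\) match by the choice of \(\mathbf{b}\), and the remaining constant terms reduce to \(n\mathbf{X}\boldsymbol{\Omega}^{-1}\mathbf{X}' + n_{0}\boldsymbol{\theta}\boldsymbol{\Omega}^{-1}\boldsymbol{\theta}'\) on both sides.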

6.2. Proof of Theorem 11

Proof. Let \(\mathbf{V}\) and \(\mathbf{Y}\) be defined as in (12) and (61), respectively. Then, from (31) and (46), applying eq. 2.2.1, p. 55 of [12], we have
\[
\begin{aligned}
q(\boldsymbol{\Sigma}|\mathbf{X},\mathbf{V}) \propto{}& \int_{0}^{\infty} z^{-\frac{p(m-s-ns)}{2}}\, w(z)\, |\boldsymbol{\Sigma}|^{\frac{1}{2}(m-p-1-s-ns)}\, \mathrm{etr}\!\left[-\frac{1}{2z}\boldsymbol{\Sigma}\boldsymbol{\Phi}^{-1}\right] \int_{\boldsymbol{\mu}} \mathrm{etr}\!\left[-\frac{n_{0}z}{2}\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}-\boldsymbol{\theta})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\boldsymbol{\theta})'\right] \\
&\times \mathrm{etr}\!\left[-\frac{z}{2}\boldsymbol{\Sigma}^{-1}\left(\mathbf{V}+n(\mathbf{X}-\boldsymbol{\mu})\boldsymbol{\Omega}^{-1}(\mathbf{X}-\boldsymbol{\mu})'\right)\right] d\boldsymbol{\mu}\, dz \\
={}& \int_{0}^{\infty} z^{-\frac{p(m-s-ns)}{2}}\, w(z)\, |\boldsymbol{\Sigma}|^{\frac{1}{2}(m-p-1-s-ns)}\, \mathrm{etr}\!\left[-\frac{1}{2z}\boldsymbol{\Sigma}\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\boldsymbol{\Sigma}^{-1}\left(\frac{nn_{0}}{n+n_{0}}(\mathbf{X}-\boldsymbol{\theta})\boldsymbol{\Omega}^{-1}(\mathbf{X}-\boldsymbol{\theta})'+\mathbf{V}\right)\right] \\
&\times \int_{\boldsymbol{\mu}} \mathrm{etr}\!\left[-\frac{1}{2}\left(\frac{1}{n+n_{0}}z^{-1}\boldsymbol{\Sigma}\right)^{-1}(\boldsymbol{\mu}-\mathbf{b})\boldsymbol{\Omega}^{-1}(\boldsymbol{\mu}-\mathbf{b})'\right] d\boldsymbol{\mu}\, dz \\
={}& \int_{0}^{\infty} z^{-\frac{p(m-ns)}{2}}\, w(z)\, |\boldsymbol{\Sigma}|^{\frac{1}{2}(m-p-1-ns)}\, \mathrm{etr}\!\left[-\frac{1}{2z}\boldsymbol{\Sigma}\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right] dz \qquad (62) \\
={}& \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} \frac{(-2)^{-k-l}\left(\mathrm{tr}\!\left[\boldsymbol{\Sigma}\boldsymbol{\Phi}^{-1}\right]\right)^{k}\left(\mathrm{tr}\!\left[\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right]\right)^{l}|\boldsymbol{\Sigma}|^{\frac{1}{2}(m-p-1-ns)}}{k!\,l!} \qquad (63) \\
&\times \varsigma\!\left(-\frac{p(m-ns)}{2}-k+l\right) \qquad (64)
\end{aligned}
\]
from the Taylor series expansion and (15), where
\[
c^{-1} = \int_{0}^{\infty} z^{-\frac{p(m-ns)}{2}}\, w(z) \int_{\boldsymbol{\Sigma}>0} |\boldsymbol{\Sigma}|^{\frac{1}{2}(m-p-1-ns)}\, \mathrm{etr}\!\left[-\frac{1}{2z}\boldsymbol{\Sigma}\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right] d\boldsymbol{\Sigma}\, dz.
\]
From eq. 1.6.18, p. 39 of [12] and \(\int_{0}^{\infty} w(z)\, dz = 1\), the proof is complete.
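For p = 1, the Bessel-type integral invoked here via eq. 1.6.18 of [12] reduces to the classical result \(\int_{0}^{\infty} s^{\nu-1} e^{-\gamma s - \beta/s}\, ds = 2(\beta/\gamma)^{\nu/2} K_{\nu}(2\sqrt{\beta\gamma})\) (eq. 3.471.9 of [10]). The following is a numerical sanity check of that scalar reduction only; the SciPy usage and parameter values are ours, purely illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def bessel_integral(nu, beta, gamma):
    """Numerically evaluate int_0^inf s^{nu-1} exp(-gamma*s - beta/s) ds,
    computed on the log scale for stability near s = 0."""
    val, _ = quad(lambda s: np.exp((nu - 1) * np.log(s) - gamma * s - beta / s),
                  0, np.inf)
    return val

# A negative order nu mirrors the negative Bessel orders appearing above.
nu, beta, gamma = -1.5, 0.8, 1.3
lhs = bessel_integral(nu, beta, gamma)
rhs = 2 * (beta / gamma) ** (nu / 2) * kv(nu, 2 * np.sqrt(beta * gamma))
print(lhs, rhs)
```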


6.3. Proof of Theorem 13

Proof. From Theorem 3.2.17 of [17], the joint density of the eigenvalues of \(\boldsymbol{\Sigma}\) is given by
\[
g(\boldsymbol{\Lambda}) = \frac{\pi^{\frac{1}{2}p^{2}}}{\Gamma_{p}\!\left(\frac{p}{2}\right)} \prod_{i<j}(\lambda_{i}-\lambda_{j}) \int_{O(p)} q(\mathbf{H}\boldsymbol{\Lambda}\mathbf{H}'|\mathbf{X},\mathbf{V})\, d\mathbf{H}, \qquad (65)
\]
where the latter integral equals
\[
\begin{aligned}
&\frac{2^{\frac{p(-m+ns)}{2}}}{B_{\frac{-m+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\mathbf{Y}\right)}\, |\boldsymbol{\Phi}|^{\frac{1}{2}(-m+ns)}\, |\boldsymbol{\Lambda}|^{\frac{1}{2}(m-p-1-ns)} \\
&\times \int_{0}^{\infty} z^{-\frac{p(m-ns)}{2}}\, w(z) \int_{O(p)} \mathrm{etr}\!\left[-\frac{1}{2z}\mathbf{H}\boldsymbol{\Lambda}\mathbf{H}'\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\mathbf{H}\boldsymbol{\Lambda}^{-1}\mathbf{H}'\mathbf{Y}\right] d\mathbf{H}\, dz.
\end{aligned}
\]
We note from eq. 1.2, p. 465 of [6] that
\[
\int_{O(p)} \mathrm{etr}\!\left[-\frac{1}{2z}\mathbf{H}\boldsymbol{\Lambda}\mathbf{H}'\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\mathbf{H}\boldsymbol{\Lambda}^{-1}\mathbf{H}'\mathbf{Y}\right] d\mathbf{H} = \sum_{k=0}^{\infty}\sum_{\kappa}\sum_{l=0}^{\infty}\sum_{\lambda} \frac{1}{k!\,l!} \sum_{\phi\in\kappa\cdot\lambda} \frac{C_{\phi}^{\kappa,\lambda}\!\left(-\frac{1}{2z}\boldsymbol{\Lambda},-\frac{z}{2}\boldsymbol{\Lambda}^{-1}\right) C_{\phi}^{\kappa,\lambda}\!\left(\boldsymbol{\Phi}^{-1},\mathbf{Y}\right)}{C_{\phi}\left(\mathbf{I}_{p}\right)}
\]
and the proof is complete.
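The \(O(p)\) integral above can also be approximated by averaging the integrand over Haar-distributed orthogonal matrices, which `scipy.stats.ortho_group` samples directly. A minimal Monte Carlo sketch; the matrix choices and sizes below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import ortho_group

def haar_average(Lam, Phi_inv, Y, z, n_samples=2000, seed=1):
    """Monte Carlo estimate of
    int_{O(p)} etr[-(1/(2z)) H Lam H' Phi_inv] etr[-(z/2) H Lam^{-1} H' Y] dH
    by averaging the integrand over Haar-random orthogonal H."""
    rng = np.random.default_rng(seed)
    p = Lam.shape[0]
    Lam_inv = np.linalg.inv(Lam)
    total = 0.0
    for _ in range(n_samples):
        H = ortho_group.rvs(p, random_state=rng)   # Haar measure on O(p)
        t1 = np.trace(H @ Lam @ H.T @ Phi_inv) / (2 * z)
        t2 = z * np.trace(H @ Lam_inv @ H.T @ Y) / 2
        total += np.exp(-t1 - t2)
    return total / n_samples

# With Lam = I_p the integrand does not depend on H, so the estimate is exact.
p, z = 3, 0.7
Phi_inv = np.diag([1.0, 2.0, 3.0])
Y = 0.5 * np.eye(p)
est = haar_average(np.eye(p), Phi_inv, Y, z)
exact = np.exp(-np.trace(Phi_inv) / (2 * z) - z * np.trace(Y) / 2)
print(est, exact)
```

For a non-scalar \(\boldsymbol{\Lambda}\) the average converges at the usual \(O(n^{-1/2})\) Monte Carlo rate, which is what makes this a practical cross-check on truncations of the invariant-polynomial series.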

6.4. Proof of Theorem 14

Proof. Note that
\[
\begin{aligned}
P(\boldsymbol{\Sigma}<\mathbf{A}) ={}& \frac{2^{\frac{p(-m+ns)}{2}}}{B_{\frac{-m+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\mathbf{Y}\right)}\, |\boldsymbol{\Phi}|^{\frac{1}{2}(-m+ns)} \int_{0}^{\infty} z^{-\frac{p(m-ns)}{2}}\, w(z) \\
&\times \int_{0<\boldsymbol{\Sigma}<\mathbf{A}} |\boldsymbol{\Sigma}|^{\frac{1}{2}(m-p-1-ns)}\, \mathrm{etr}\!\left[-\frac{1}{2z}\boldsymbol{\Sigma}\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\boldsymbol{\Sigma}^{-1}\mathbf{Y}\right] d\boldsymbol{\Sigma}\, dz.
\end{aligned}
\]
Now make the transformation \(\boldsymbol{\Theta} = \mathbf{A}^{-\frac{1}{2}}\boldsymbol{\Sigma}\mathbf{A}^{-\frac{1}{2}}\) with Jacobian \(J(\boldsymbol{\Sigma}\to\boldsymbol{\Theta}) = |\mathbf{A}|^{\frac{p+1}{2}}\); then
\[
\begin{aligned}
P(\boldsymbol{\Sigma}<\mathbf{A}) ={}& \frac{2^{\frac{p(-m+ns)}{2}}}{B_{\frac{-m+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\mathbf{Y}\right)}\, |\boldsymbol{\Phi}|^{\frac{1}{2}(-m+ns)}\, |\mathbf{A}|^{\frac{1}{2}(m-ns)} \int_{0}^{\infty} z^{-\frac{p(m-ns)}{2}}\, w(z) \\
&\times \int_{0<\boldsymbol{\Theta}<\mathbf{I}_{p}} |\boldsymbol{\Theta}|^{\frac{1}{2}(m-p-1-ns)}\, \mathrm{etr}\!\left[-\frac{1}{2z}\mathbf{A}^{\frac{1}{2}}\boldsymbol{\Theta}\mathbf{A}^{\frac{1}{2}}\boldsymbol{\Phi}^{-1}\right] \mathrm{etr}\!\left[-\frac{z}{2}\mathbf{A}^{-\frac{1}{2}}\boldsymbol{\Theta}^{-1}\mathbf{A}^{-\frac{1}{2}}\mathbf{Y}\right] d\boldsymbol{\Theta}\, dz \\
={}& 2^{\frac{p(-m+ns)}{2}}\, \frac{B_{\frac{-m+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\mathbf{Y};\mathbf{I}_{p}\right)}{B_{\frac{-m+ns}{2}}\!\left(\tfrac{1}{4}\boldsymbol{\Phi}^{-1}\mathbf{Y}\right)}\, |\boldsymbol{\Phi}|^{\frac{1}{2}(-m+ns)}\, |\mathbf{A}|^{\frac{1}{2}(m-ns)} \int_{0}^{\infty} z^{-\frac{p(m-ns)}{2}}\, w(z)\, dz
\end{aligned}
\]
from (5).
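The Jacobian used above is an instance of the general rule that, for symmetric matrices, the congruence \(\boldsymbol{\Sigma} \mapsto \mathbf{B}\boldsymbol{\Sigma}\mathbf{B}'\) transforms the volume element by \(|\det\mathbf{B}|^{p+1}\) (a standard Jacobian result; see [17]); here \(\mathbf{B} = \mathbf{A}^{-\frac{1}{2}}\), giving \(d\boldsymbol{\Sigma} = |\mathbf{A}|^{\frac{p+1}{2}}\, d\boldsymbol{\Theta}\). A numerical check of that rule, computing the determinant of the induced linear map on the \(p(p+1)/2\) independent entries of a symmetric matrix; the helper name is ours:

```python
import numpy as np
from itertools import combinations_with_replacement

def sym_jacobian_det(B):
    """Determinant of S -> B S B' as a linear map on the p(p+1)/2
    independent (upper-triangular) entries of a symmetric matrix S."""
    p = B.shape[0]
    idx = list(combinations_with_replacement(range(p), 2))
    M = np.zeros((len(idx), len(idx)))
    for col, (i, j) in enumerate(idx):
        E = np.zeros((p, p))
        E[i, j] = E[j, i] = 1.0          # symmetric basis element
        F = B @ E @ B.T
        for row, (a, b) in enumerate(idx):
            M[row, col] = F[a, b]        # read off the image's coordinates
    return np.linalg.det(M)

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))          # p = 3, so expect det(B)^{p+1} = det(B)^4
print(sym_jacobian_det(B), np.linalg.det(B) ** 4)
```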


References

[1] Arashi, M., Iranmanesh, A., Norouzirad, M. and Salarzadeh Jenatabadi, H. (2014). Bayesian analysis in multivariate regression models with conjugate priors, Statistics, 48(6):1324-1334.

[2] Arashi, M., Saleh, A.K.Md.E. and Tabatabaey, S.M.M. (2013). Regression model with elliptically contoured errors, Statistics, 47(6):1266-1284.

[3] Bekker, A. and Roux, J.J.J. (1995). Bayesian multivariate normal analysis with a Wishart prior, Comm. Statist. Theo. Meth., 24(10):2485-2497.

[4] Chu, K.C. (1973). Estimation and decision for linear systems with elliptical random processes, IEEE Trans. Aut. Cont., AC-18:499-505.

[5] Das, S. and Dey, D.K. (2010). On Bayesian inference for multivariate generalized gamma distribution, Statistics and Probability Letters, 80(19-20):1492-1499.

[6] Davis, A.W. (1979). Invariant polynomials with two matrix arguments extending the zonal polynomials: Applications to multivariate distribution theory, Ann. Inst. Statist. Math., 31:465-485.

[7] Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G. (1953). Higher Transcendental Functions: Volume 1, McGraw-Hill, United States of America.

[8] Fang, K.T. and Li, R.Z. (1999). Bayesian statistical inference on elliptical matrix distributions, J. Mult. Anal., 70:66-85.

[9] Fang, K.T. and Zhang, Y. (1990). Generalized Multivariate Analysis, Springer-Verlag and Science Press, Berlin and Beijing.

[10] Gradshteyn, I.S. and Ryzhik, I.M. (2007). Table of Integrals, Series and Products, Academic Press, United States of America.

[11] Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B. (1995). Bayesian Data Analysis, Chapman and Hall, London.

[12] Gupta, A.K. and Nagar, D.K. (2000). Matrix Variate Distributions, Chapman & Hall, United States of America.

[13] Hastings, W.K. (1970). Monte Carlo sampling methods using Markov chains and their applications, Biometrika, 57(1):97-109.

[14] Herz, C.S. (1955). Bessel functions of matrix argument, Ann. Math., 61:474-523.

[15] James, A.T. (1961). Zonal polynomials of the real positive definite symmetric matrices, Ann. Math., 74(2):456-469.

[16] Jammalamadaka, S.R., Tiwari, R.C. and Chib, S. (1987). Bayes prediction in the linear model with spherically symmetric errors, Econ. Lett., 24(1):39-44.

[17] Muirhead, R.J. (1982). Aspects of Multivariate Statistical Theory, John Wiley & Sons, Hoboken, New Jersey.

[18] Ng, V.M. (2010). On Bayesian inference with conjugate priors for scale mixtures of normal distributions, J. Appl. Prob. Stat., 5:69-76.

[19] Sun, D. and Berger, J.O. (2006). Objective Bayesian analysis for the multivariate normal model, Proc. Valencia/ISBA 8th World Meeting on Bayesian Statistics, Alicante, Spain.

[20] Van Niekerk, J., Bekker, A., Roux, J.J.J. and Arashi, M. (2013). Subjective Bayesian analysis for the elliptical model, Comm. Statist. Theo. Meth., accepted.
