multivariate distributions, an overview
TRANSCRIPT
Arthur CHARPENTIER - Multivariate Distributions
Multivariate Distributions: A brief overview
(Spherical/Elliptical Distributions, Distributions on the Simplex & Copulas)
A. Charpentier (Université de Rennes 1 & UQàM)
Université de Rennes 1 Workshop, November 2015.
http://freakonometrics.hypotheses.org
@freakonometrics 1
Geometry in Rd and Statistics
The standard inner product is ⟨x, y⟩_ℓ2 = xᵀy = ∑_i x_i y_i.
Hence, x ⊥ y if ⟨x, y⟩_ℓ2 = 0.
The Euclidean norm is ‖x‖_ℓ2 = ⟨x, x⟩_ℓ2^{1/2} = (∑_{i=1}^n x_i²)^{1/2}.
The unit sphere of Rᵈ is S_d = {x ∈ Rᵈ : ‖x‖_ℓ2 = 1}.
If x = (x₁, ⋯, x_n), note that the empirical covariance is
Cov(x, y) = ⟨x − x̄, y − ȳ⟩_ℓ2
and Var(x) = ‖x − x̄‖²_ℓ2.
For the (multivariate) linear model, y_i = β₀ + β₁ᵀx_i + ε_i, or equivalently,
y_i = β₀ + ⟨β₁, x_i⟩_ℓ2 + ε_i.
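These operations can be sketched in a few lines of NumPy; the two sample vectors below are hypothetical values chosen only for illustration.

```python
import numpy as np

# two sample vectors (hypothetical data, for illustration only)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 0.0, 1.0])

inner = x @ y          # <x, y> = x'y = sum_i x_i y_i
norm = np.sqrt(x @ x)  # ||x|| = <x, x>^(1/2)

# empirical covariance as an inner product of centered vectors
# (without the usual 1/n factor, as on the slide)
cov = (x - x.mean()) @ (y - y.mean())

print(inner, norm, cov)
```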
The d dimensional Gaussian Random Vector
If Z ∼ N(0, I), then X = AZ + µ ∼ N(µ, Σ) where Σ = AAᵀ.
Conversely (Cholesky decomposition), if X ∼ N(µ, Σ), then X = LZ + µ for some lower triangular matrix L satisfying Σ = LLᵀ. Denote L = Σ^{1/2}.
With the Cholesky decomposition, we have the particular case (with a Gaussian distribution) of Rosenblatt (1952)'s chain,
f(x₁, x₂, ⋯, x_d) = f₁(x₁) · f_{2|1}(x₂|x₁) · f_{3|2,1}(x₃|x₂, x₁) ⋯ f_{d|d−1,⋯,2,1}(x_d|x_{d−1}, ⋯, x₂, x₁).
The density is
f(x; µ, Σ) = (2π)^{−d/2} |Σ|^{−1/2} exp(−½ (x − µ)ᵀΣ⁻¹(x − µ)), for all x ∈ Rᵈ,
where the quadratic form (x − µ)ᵀΣ⁻¹(x − µ) is denoted ‖x‖_{µ,Σ}.
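The Cholesky construction above can be sketched with NumPy; the mean and covariance below are our own toy values, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# lower triangular Cholesky factor L, with Sigma = L L'
L = np.linalg.cholesky(Sigma)

# X = L Z + mu with Z ~ N(0, I) yields X ~ N(mu, Sigma)
Z = rng.standard_normal((2, 100_000))
X = mu[:, None] + L @ Z

# empirical moments recover mu and Sigma
print(X.mean(axis=1))  # close to mu
print(np.cov(X))       # close to Sigma
```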
The d dimensional Gaussian Random Vector
Note that ‖x‖_{µ,Σ} = (x − µ)ᵀΣ⁻¹(x − µ) is the (squared) Mahalanobis distance.
Define the ellipsoid E_{µ,Σ} = {x ∈ Rᵈ : ‖x‖_{µ,Σ} = 1}.
Let
X = (X₁, X₂)ᵀ ∼ N( (µ₁, µ₂)ᵀ, [ Σ₁₁ Σ₁₂ ; Σ₂₁ Σ₂₂ ] ),
then
X₁ | X₂ = x₂ ∼ N( µ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − µ₂), Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁ ).
X₁ ⊥⊥ X₂ if and only if Σ₁₂ = 0.
Further, if X ∼ N (µ,Σ), then AX + b ∼ N (Aµ+ b,AΣAT).
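The conditional mean and covariance formulas can be checked numerically; the bivariate example below (standard margins, correlation 0.5) is our own toy case.

```python
import numpy as np

# toy bivariate example (our own numbers): both margins N(0, 1), correlation 0.5
mu1, mu2 = np.array([0.0]), np.array([0.0])
S11, S12 = np.array([[1.0]]), np.array([[0.5]])
S21, S22 = S12.T, np.array([[1.0]])
x2 = np.array([2.0])

# conditional distribution of X1 given X2 = x2
cond_mean = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
cond_cov = S11 - S12 @ np.linalg.solve(S22, S21)

print(cond_mean)  # equals 1.0 here
print(cond_cov)   # equals 0.75 here
```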
The Gaussian Distribution, as a Spherical Distribution
If X ∼ N(0, I), then X = R · U, where
R² = ‖X‖²_ℓ2 ∼ χ²(d)
and U = X/‖X‖_ℓ2 ∼ U(S_d),
with R ⊥⊥ U.
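A quick simulation illustrates the decomposition: the squared norm of a standard Gaussian vector has mean d (the mean of a χ²(d)), and the normalized vector lies on the unit sphere. Sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 50_000
X = rng.standard_normal((n, d))   # rows are draws from N(0, I)

R2 = (X ** 2).sum(axis=1)         # squared norms, chi-square with d df
U = X / np.sqrt(R2)[:, None]      # projections onto the unit sphere

print(R2.mean())                        # close to d (mean of chi2(d))
print(np.linalg.norm(U, axis=1).max())  # 1, up to rounding
```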
[Figure: 3d scatter plot of a simulated N(0, I) sample]
The Gaussian Distribution, as an Elliptical Distribution
If X ∼ N(µ, Σ), then X = µ + R · Σ^{1/2} · U, where
R² = (X − µ)ᵀΣ⁻¹(X − µ) ∼ χ²(d)
and U = Σ^{−1/2}(X − µ)/‖Σ^{−1/2}(X − µ)‖_ℓ2 ∼ U(S_d),
with R ⊥⊥ U.
[Figure: 3d scatter plot of a simulated N(µ, Σ) sample]
Spherical Distributions
Let M denote an orthogonal matrix, MᵀM = MMᵀ = I. X has a spherical distribution if X =ᴸ MX (equality in distribution).
E.g. in R², for a rotation matrix,
[ cos(θ) −sin(θ) ; sin(θ) cos(θ) ] (X₁, X₂)ᵀ =ᴸ (X₁, X₂)ᵀ.
For every a ∈ Rᵈ, aᵀX =ᴸ ‖a‖_ℓ2 · X_i, for any i ∈ {1, ⋯, d}.
Further, the characteristic function of X can be written
E[e^{itᵀX}] = ϕ(tᵀt) = ϕ(‖t‖²_ℓ2), ∀t ∈ Rᵈ,
for some ϕ : R₊ → R₊.
Uniform Distribution on the Sphere
Actually, more complex than it seems:
x₁ = ρ sin ϕ cos θ, x₂ = ρ sin ϕ sin θ, x₃ = ρ cos ϕ,
with ρ > 0, ϕ ∈ [0, π] and θ ∈ [0, 2π].
If Φ ∼ U([0, π]) and Θ ∼ U([0, 2π]), we do not have a uniform distribution on the sphere...
see https://en.wikibooks.org/wiki/Mathematica/Uniform_Spherical_Distribution, http://freakonometrics.hypotheses.org/10355
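A simulation makes the pitfall visible. For the uniform law on the sphere the z-coordinate is uniform on [−1, 1], so E|z| = 1/2; uniform angles instead give E|cos Φ| = 2/π ≈ 0.64, with mass piling up near the poles. Sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# naive scheme: uniform angles, which is NOT uniform on the sphere
theta = rng.uniform(0.0, 2.0 * np.pi, n)  # azimuth (kept for completeness)
phi = rng.uniform(0.0, np.pi, n)          # polar angle
z_naive = np.cos(phi)                     # third coordinate, concentrates near the poles

# correct scheme: normalize a standard Gaussian vector
G = rng.standard_normal((n, 3))
z_unif = (G / np.linalg.norm(G, axis=1, keepdims=True))[:, 2]

print(np.abs(z_unif).mean(), np.abs(z_naive).mean())  # ~0.5 vs ~0.64
```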
Spherical Distributions
A random vector X has a spherical distribution if
X = R · U
where R is a positive random variable and U is uniformly distributed on the unit sphere S_d of Rᵈ, with R ⊥⊥ U.
E.g. X ∼ N(0, I).
[Figure: two scatter plots of simulated spherical samples, with density contour levels 0.02–0.14]
Elliptical Distributions
A random vector X has an elliptical distribution if
X = µ + R · A · U
where A satisfies AAᵀ = Σ, U ∼ U(S_d), and R ⊥⊥ U. Denote Σ^{1/2} = A.
E.g. X ∼ N(µ, Σ).
[Figure: two scatter plots of simulated elliptical samples, with density contour levels 0.02–0.14]
Elliptical Distributions
X = µ + RΣ^{1/2}U where R is a positive random variable, U ∼ U(S_d), with U ⊥⊥ R. If R ∼ F_R, then X ∼ E(µ, Σ, F_R).
Remark. Instead of F_R it is more common to use ϕ such that
E[e^{itᵀX}] = e^{itᵀµ} ϕ(tᵀΣt), t ∈ Rᵈ.
E[X] = µ and Var[X] = −2ϕ′(0)Σ.
The density satisfies
f(x) ∝ |Σ|^{−1/2} g(√((x − µ)ᵀΣ⁻¹(x − µ)))
where g : R₊ → R₊ is called the radial density. Note that
dF_R(r) ∝ r^{d−1} g(r) 1(r > 0).
Elliptical Distributions
If X ∼ E(µ, Σ, F_R), then
AX + b ∼ E(Aµ + b, AΣAᵀ, F_R).
If
X = (X₁, X₂)ᵀ ∼ E( (µ₁, µ₂)ᵀ, [ Σ₁₁ Σ₁₂ ; Σ₂₁ Σ₂₂ ], F_R ),
then
X₁ | X₂ = x₂ ∼ E( µ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − µ₂), Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁, F_{1|2} ),
where F_{1|2} is the c.d.f. of (R² − ?)^{1/2} given X₂ = x₂.
Mixtures of Normal Distributions
Let Z ∼ N(0, I). Let W denote a positive random variable, Z ⊥⊥ W. Set
X = µ + √W Σ^{1/2} Z,
so that X | W = w ∼ N(µ, wΣ).
E[X] = µ and Var[X] = E[W]Σ.
E[e^{itᵀX}] = E[ e^{itᵀµ − ½ W tᵀΣt} ], t ∈ Rᵈ,
i.e. X ∼ E(µ, Σ, ϕ) where ϕ is the Laplace transform (generating function) of W, i.e. ϕ(t) = E[e^{−tW}].
If W has an inverse Gamma distribution, W ∼ IG(ν/2, ν/2), then X has a multivariate t distribution, with ν degrees of freedom.
Multivariate Student t
X ∼ t(µ, Σ, ν) if
X = µ + Σ^{1/2} Z/√(W/ν)
where Z ∼ N(0, I) and W ∼ χ²(ν), with Z ⊥⊥ W.
Note that Var[X] = ν/(ν − 2) Σ if ν > 2.
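The stochastic representation above can be simulated directly; with ν = 10 the empirical covariance should be close to ν/(ν − 2) Σ = 1.25 Σ. Parameters and seed below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(3)
nu, n = 10, 200_000
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
A = np.linalg.cholesky(Sigma)   # plays the role of Sigma^{1/2}

Z = rng.standard_normal((n, 2))
W = rng.chisquare(nu, n)

# X = mu + Sigma^{1/2} Z / sqrt(W / nu) ~ t(mu, Sigma, nu)
X = mu + (Z @ A.T) / np.sqrt(W / nu)[:, None]

# Var[X] = nu / (nu - 2) * Sigma = 1.25 * Sigma here
print(np.cov(X.T))
```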
Multivariate Student t
(r = 0.1, ν = 4), (r = 0.9, ν = 4), (r = 0.5, ν = 4) and (r = 0.5, ν = 10).
On Conditional Independence, de Finetti & Hewitt
Instead of X =ᴸ MX for any orthogonal matrix M, consider the equality for any permutation matrix M, i.e.
(X₁, ⋯, X_d) =ᴸ (X_{σ(1)}, ⋯, X_{σ(d)}) for any permutation σ of {1, ⋯, d}.
E.g. X ∼ N(0, Σ) with Σ_{i,i} = 1 and Σ_{i,j} = ρ when i ≠ j. Note that necessarily
ρ = Corr(X_i, X_j) ≥ −1/(d − 1).
From de Finetti (1931), X₁, ⋯, X_d, ⋯ are exchangeable {0, 1} variables if and only if there is a c.d.f. Π on [0, 1] such that
P[X = x] = ∫₀¹ θ^{xᵀ1} [1 − θ]^{n − xᵀ1} dΠ(θ),
i.e. X₁, ⋯, X_d, ⋯ are (conditionally) independent given Θ ∼ Π.
On Conditional Independence, de Finetti & Hewitt-Savage
More generally, from Hewitt & Savage (1955), random variables X₁, ⋯, X_d, ⋯ are exchangeable if and only if there is F such that X₁, ⋯, X_d, ⋯ are (conditionally) independent given F.
E.g. popular shared frailty models. Consider lifetimes T₁, ⋯, T_d, with Cox-type proportional hazard µ_i(t) = Θ · µ_{i,0}(t), so that
P[T_i > t | Θ = θ] = F̄_{i,0}(t)^θ.
Assume that lifetimes are (conditionally) independent given Θ.
The Simplex Sd ⊂ Rd
S_d = { x = (x₁, x₂, ⋯, x_d) ∈ Rᵈ : x_i > 0, i = 1, 2, ⋯, d; ∑_{i=1}^d x_i = 1 }.
Hence, the simplex here is the set of d-dimensional probability vectors. Note that S_d = {x ∈ Rᵈ₊ : ‖x‖_ℓ1 = 1}.
Remark. Sometimes the simplex is
S_{d−1} = { x = (x₁, x₂, ⋯, x_{d−1}) ∈ R^{d−1} : x_i > 0, i = 1, 2, ⋯, d − 1; ∑_{i=1}^{d−1} x_i ≤ 1 }.
Note that if x ∈ S_{d−1}, then (x, 1 − xᵀ1) ∈ S_d.
If h : Rᵈ₊ → R₊ is homogeneous of order 1, i.e. h(λx) = λ · h(x) for all λ > 0, then
h(x) = ‖x‖_ℓ1 · h(x/‖x‖_ℓ1), where x/‖x‖_ℓ1 ∈ S_d.
Compositional Data and Geometry of the Simplex
Following Aitchison (1986), given x ∈ Rᵈ₊ define the closure operator C:
C[x₁, x₂, ⋯, x_d] = [ x₁/∑_{i=1}^d x_i, x₂/∑_{i=1}^d x_i, ⋯, x_d/∑_{i=1}^d x_i ] ∈ S_d.
It is possible to define the (Aitchison) inner product on S_d,
⟨x, y⟩_a = (1/2d) ∑_{i,j} log(x_i/x_j) log(y_i/y_j) = ∑_i log(x_i/x̄) log(y_i/ȳ),
where x̄ denotes the geometric mean of x.
It is then possible to define a linear model with compositional covariates,
y_i = β₀ + ⟨β₁, x_i⟩_a + ε_i.
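The two expressions for the Aitchison inner product coincide (expanding the double sum shows it equals the centered form); a small numerical check, with hypothetical compositions, is:

```python
import numpy as np

# two compositions in the simplex (hypothetical values)
x = np.array([0.2, 0.3, 0.5])
y = np.array([0.1, 0.6, 0.3])
d = len(x)
lx, ly = np.log(x), np.log(y)

# double-sum form: (1 / 2d) sum_{i,j} log(x_i/x_j) log(y_i/y_j)
double_sum = sum(
    (lx[i] - lx[j]) * (ly[i] - ly[j]) for i in range(d) for j in range(d)
) / (2 * d)

# centered form: sum_i log(x_i / gm(x)) log(y_i / gm(y)),
# since log of the geometric mean is the mean of the log coordinates
centered = np.sum((lx - lx.mean()) * (ly - ly.mean()))

print(double_sum, centered)  # the two values coincide
```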
Dirichlet Distributions
Given α ∈ Rᵈ₊, and x ∈ S_d ⊂ Rᵈ,
f(x₁, ⋯, x_d; α) = (1/B(α)) ∏_{i=1}^d x_i^{α_i − 1},
where
B(α) = ∏_{i=1}^d Γ(α_i) / Γ(∑_{i=1}^d α_i).
Then
X_i ∼ Beta(α_i, (∑_{j=1}^d α_j) − α_i),
and E(X) = C(α).
[Figure: 3d scatter plot of a simulated Gamma sample, and the associated Dirichlet sample on the simplex]
Dirichlet Distributions
Stochastic Representation. Let Z = (Z₁, ⋯, Z_d) denote independent G(α_i, θ) random variables. Then S = Z₁ + ⋯ + Z_d = Zᵀ1 has a G(αᵀ1, θ) distribution, and
X = C(Z) = Z/S = ( Z₁/∑_{i=1}^d Z_i, ⋯, Z_d/∑_{i=1}^d Z_i )
has a Dirichlet distribution Dirichlet(α).
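This stochastic representation gives a direct sampler; with α = (2, 3, 5), the mean C(α) = (0.2, 0.3, 0.5) is recovered empirically (α, sample size, and seed are our own choices).

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = np.array([2.0, 3.0, 5.0])
n = 100_000

# independent Gamma(alpha_i, theta) draws; theta cancels in the ratio, take 1
Z = rng.gamma(alpha, 1.0, size=(n, 3))
X = Z / Z.sum(axis=1, keepdims=True)   # closure: X ~ Dirichlet(alpha)

# E[X] = C(alpha) = alpha / (alpha' 1) = (0.2, 0.3, 0.5)
print(X.mean(axis=0))
```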
[Figure: 3d scatter plot of a simulated Gamma sample, and the associated Dirichlet sample on the simplex]
Uniform Distribution on the Simplex
X ∼ D(1) is a random vector uniformly distributed on the simplex.
Consider d − 1 independent random variables U₁, ⋯, U_{d−1} with a U([0, 1]) distribution. Define the spacings
X_i = U_{i:d} − U_{(i−1):d}, where U_{1:d} ≤ ⋯ ≤ U_{(d−1):d} are the order statistics of U₁, ⋯, U_{d−1},
with conventions U_{0:d} = 0 and U_{d:d} = 1. Then
X = (X₁, ⋯, X_d) ∼ U(S_d).
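The spacings construction translates directly into code; for d = 3 the marginal means should all be 1/3 (dimension, sample size, and seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 3, 100_000

# sort d-1 uniforms, pad with the conventions U_{0:d} = 0 and U_{d:d} = 1
U = np.sort(rng.uniform(size=(n, d - 1)), axis=1)
U = np.hstack([np.zeros((n, 1)), U, np.ones((n, 1))])

# the spacings form a vector uniformly distributed on the simplex
X = np.diff(U, axis=1)

print(X.mean(axis=0))  # close to (1/3, 1/3, 1/3)
```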
‘Normal distribution on the Simplex’
(also called logistic-normal).
Let Y ∼ N(µ, Σ) in dimension d − 1. Set Z = (Y, 0) and
X = C(e^Z) = ( e^{Z₁}/(e^{Z₁} + ⋯ + e^{Z_d}), ⋯, e^{Z_d}/(e^{Z₁} + ⋯ + e^{Z_d}) ).
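A minimal sketch of this construction, with toy parameter values of our own, pads a bivariate Gaussian with a zero coordinate and applies the closure of the exponentials:

```python
import numpy as np

rng = np.random.default_rng(6)
# parameters in dimension d - 1 = 2 (our own toy values)
mu = np.array([0.5, -0.5])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

# Y ~ N(mu, Sigma), Z = (Y, 0), then close exp(Z)
Y = rng.multivariate_normal(mu, Sigma, size=5)
Z = np.hstack([Y, np.zeros((5, 1))])
E = np.exp(Z)
X = E / E.sum(axis=1, keepdims=True)   # logistic-normal sample on S_3

print(X)  # positive rows summing to 1
```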
Distribution on Rd or [0, 1]d
Technically, things are simpler when X = (X₁, ⋯, X_d) takes values in a product measurable space, e.g. R × ⋯ × R.
In that case, X has independent components if (and only if)
P[X ∈ A] = ∏_{i=1}^d P[X_i ∈ A_i], where A = A₁ × ⋯ × A_d.
E.g. if A_i = (−∞, x_i], then
F(x) = P[X ∈ (−∞, x]] = ∏_{i=1}^d P[X_i ∈ (−∞, x_i]] = ∏_{i=1}^d F_i(x_i).
If F is absolutely continuous,
f(x) = ∂ᵈF(x)/(∂x₁ ⋯ ∂x_d) = ∏_{i=1}^d f_i(x_i).
Fréchet classes
Given some (univariate) cumulative distribution functions F₁, ⋯, F_d : R → [0, 1], let F(F₁, ⋯, F_d) denote the set of multivariate cumulative distribution functions of random vectors X such that X_i ∼ F_i.
Note that for any F ∈ F(F₁, ⋯, F_d), ∀x ∈ Rᵈ,
F⁻(x) ≤ F(x) ≤ F⁺(x)
where F⁺(x) = min{F_i(x_i), i = 1, ⋯, d},
and F⁻(x) = max{0, F₁(x₁) + ⋯ + F_d(x_d) − (d − 1)}.
Note that F⁺ ∈ F(F₁, ⋯, F_d), while usually F⁻ ∉ F(F₁, ⋯, F_d).
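The two bounds are elementary to evaluate once the margins are computed; a small sketch, with function names of our own choosing:

```python
# Frechet-Hoeffding bounds evaluated at u_i = F_i(x_i)
def frechet_lower(u):
    """Lower bound max(0, sum u_i - (d - 1)); not a c.d.f. in general for d > 2."""
    return max(0.0, sum(u) - (len(u) - 1))

def frechet_upper(u):
    """Upper bound min_i u_i, which is always an attainable c.d.f. value."""
    return min(u)

u = [0.7, 0.8, 0.9]
print(frechet_lower(u), frechet_upper(u))  # 0.4 and 0.7 (up to rounding)
```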
Copulas in Dimension 2
A copula C : [0, 1]² → [0, 1] is a cumulative distribution function with uniform margins on [0, 1].
Equivalently, a copula C : [0, 1]² → [0, 1] is a function satisfying
• C(u₁, 0) = C(0, u₂) = 0 for any u₁, u₂ ∈ [0, 1],
• C(u₁, 1) = u₁ and C(1, u₂) = u₂ for any u₁, u₂ ∈ [0, 1],
• C is a 2-increasing function, i.e. for all 0 ≤ u_i ≤ v_i ≤ 1,
C(v₁, v₂) − C(v₁, u₂) − C(u₁, v₂) + C(u₁, u₂) ≥ 0.
Copulas in Dimension 2
Borders of the copula function
Border conditions, in dimension d = 2: C(u₁, 0) = C(0, u₂) = 0, C(u₁, 1) = u₁ and C(1, u₂) = u₂.
Copulas in Dimension 2
If C is the copula of the random vector (X₁, X₂), then C couples marginal distributions, in the sense that
P(X₁ ≤ x₁, X₂ ≤ x₂) = C(P(X₁ ≤ x₁), P(X₂ ≤ x₂)).
Note that it is also possible to couple survival distributions: there exists a copula C⋆ such that
P(X > x, Y > y) = C⋆(P(X > x), P(Y > y)).
The survival copula C⋆ associated with C is the copula defined by
C⋆(u₁, u₂) = u₁ + u₂ − 1 + C(1 − u₁, 1 − u₂).
Note that (1 − U₁, 1 − U₂) ∼ C⋆ if (U₁, U₂) ∼ C.
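As a quick sanity check of the survival-copula formula, take the independence copula C(u, v) = uv: plugging into u + v − 1 + (1 − u)(1 − v) gives back uv, i.e. independence is preserved.

```python
# independence copula C(u, v) = u v, as a simple test case
def C(u, v):
    return u * v

# survival copula C*(u, v) = u + v - 1 + C(1 - u, 1 - v)
def C_star(u, v):
    return u + v - 1 + C(1 - u, 1 - v)

# independence is preserved: C* is again the product copula
u, v = 0.3, 0.8
print(C_star(u, v), u * v)
```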
Copulas in Dimension 2
If X has distribution F ∈ F(F₁, F₂), with absolutely continuous margins, then its copula is
C(u₁, u₂) = F(F₁⁻¹(u₁), F₂⁻¹(u₂)), ∀u₁, u₂ ∈ [0, 1].
More generally, if h⁻¹ denotes the generalized inverse of some increasing function h : R → R, defined as h⁻¹(t) = inf{x : h(x) ≥ t}, then C(u₁, u₂) = F(F₁⁻¹(u₁), F₂⁻¹(u₂)) is one copula of X.
Note that copulas are continuous functions; actually they are Lipschitz: for all 0 ≤ u_i, v_i ≤ 1,
|C(u₁, u₂) − C(v₁, v₂)| ≤ |u₁ − v₁| + |u₂ − v₂|.
Copulas in Dimension d
The increasing property of the copula function is related to the property that
P(X ∈ [a, b]) = P(a₁ ≤ X₁ ≤ b₁, ⋯, a_d ≤ X_d ≤ b_d) ≥ 0
for X = (X₁, ⋯, X_d) ∼ F, for any a ≤ b (in the sense that a_i ≤ b_i).
A function h : Rᵈ → R is said to be d-increasing if for any [a, b] ⊂ Rᵈ, V_h([a, b]) ≥ 0, where
V_h([a, b]) = ∆ᵇₐ h(t) = ∆^{b_d}_{a_d} ∆^{b_{d−1}}_{a_{d−1}} ⋯ ∆^{b₂}_{a₂} ∆^{b₁}_{a₁} h(t)
for any t, where
∆^{b_i}_{a_i} h(t) = h(t₁, ⋯, t_{i−1}, b_i, t_{i+1}, ⋯, t_d) − h(t₁, ⋯, t_{i−1}, a_i, t_{i+1}, ⋯, t_d).
Copulas in Dimension d
[Figure: signs of the terms in V_h([a, b]): black dot, + sign; white dot, − sign.]
Copulas in Dimension d
A copula in dimension d is a cumulative distribution function on [0, 1]ᵈ with uniform margins on [0, 1].
Equivalently, copulas are functions C : [0, 1]ᵈ → [0, 1] such that for all 0 ≤ u_i ≤ 1, with i = 1, ⋯, d,
C(1, ⋯, 1, u_i, 1, ⋯, 1) = u_i,
C(u₁, ⋯, u_{i−1}, 0, u_{i+1}, ⋯, u_d) = 0,
and C is d-increasing.
The most important result is Sklar's theorem, from Sklar (1959).
Sklar’s Theorem
1. If C is a copula, and if F₁, ⋯, F_d are (univariate) distribution functions, then, for any (x₁, ⋯, x_d) ∈ Rᵈ,
F(x₁, ⋯, x_d) = C(F₁(x₁), ⋯, F_d(x_d))
is a cumulative distribution function of the Fréchet class F(F₁, ⋯, F_d).
2. Conversely, if F ∈ F(F₁, ⋯, F_d), there exists a copula C such that the equation above holds. This function is not unique, but it is if the margins F₁, ⋯, F_d are absolutely continuous, and then, for any (u₁, ⋯, u_d) ∈ [0, 1]ᵈ,
C(u₁, ⋯, u_d) = F(F₁⁻¹(u₁), ⋯, F_d⁻¹(u_d)),
where F₁⁻¹, ⋯, F_d⁻¹ are generalized quantiles.
@freakonometrics 33
Copulas in Dimension d
Let (X1, · · · , Xd) be a random vector with copula C. If φ1, · · · , φd, with φi : R → R, are continuous and strictly increasing functions, then C is also a copula of (φ1(X1), · · · , φd(Xd)).
If C is a copula, then the function

C⋆(u1, · · · , ud) = Σ_{k=0}^{d} (−1)^k Σ_{i_1 < · · · < i_k} C(1, · · · , 1, 1 − u_{i_1}, 1, · · · , 1, 1 − u_{i_k}, 1, · · · , 1),

for all (u1, · · · , ud) ∈ [0, 1]^d, is a copula, called the survival copula associated with C.
If (U1, · · · , Ud) ∼ C, then (1 − U1, · · · , 1 − Ud) ∼ C⋆. And if

P(X1 ≤ x1, · · · , Xd ≤ xd) = C(P(X1 ≤ x1), · · · , P(Xd ≤ xd)),

for all (x1, · · · , xd) ∈ R^d, then

P(X1 > x1, · · · , Xd > xd) = C⋆(P(X1 > x1), · · · , P(Xd > xd)).
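In dimension d = 2 the inclusion-exclusion formula reduces to C⋆(u1, u2) = u1 + u2 − 1 + C(1 − u1, 1 − u2). A minimal check (our own code) with the independence copula, which is radially symmetric so that C⋆ = C:

```python
def survival_copula_2d(C, u1, u2):
    # d = 2 case of the inclusion-exclusion formula for the survival copula
    return u1 + u2 - 1.0 + C(1.0 - u1, 1.0 - u2)

indep = lambda u1, u2: u1 * u2
# for independence: u + v - 1 + (1-u)(1-v) = uv, so C* coincides with C
vals = [(survival_copula_2d(indep, u, v), indep(u, v))
        for u in (0.2, 0.5, 0.8) for v in (0.3, 0.7)]
```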
@freakonometrics 34
On Quasi-Copulas
Function Q : [0, 1]d → [0, 1] is a quasi-copula if for any 0 ≤ ui ≤ 1, i = 1, · · · , d,
Q(1, · · · , 1, ui, 1, · · · , 1) = ui,
Q(u1, · · · , ui−1, 0, ui+1, · · · , ud) = 0,
s 7→ Q(u1, · · · , ui−1, s, ui+1, · · · , ud) is an increasing function for any i, and
|Q(u1, · · · , ud)−Q(v1, · · · , vd)| ≤ |u1 − v1|+ · · ·+ |ud − vd|.
For instance, C− is usually not a copula, but it is a quasi-copula.
Let C be a set of copula function and define C− and C+ as lower and upperbounds for C, in the sense that
C−(u) = inf{C(u), C ∈ C} and C+(u) = sup{C(u), C ∈ C}.
Then C− and C+ are quasi-copulas (note the connection with the definition of Choquet capacities as lower bounds of probability measures).
@freakonometrics 35
The Independent Copula C⊥, or Π

The independent copula C⊥ is the copula defined as

C⊥(u1, · · · , ud) = u1 · · · ud = ∏_{i=1}^{d} ui (= Π(u1, · · · , ud)).

Let X ∈ F(F1, · · · , Fd); then X⊥ ∈ F(F1, · · · , Fd) will denote a random vector with copula C⊥, called the 'independent version' of X.
@freakonometrics 36
Fréchet-Hoeffding bounds C− and C+, and Comonotonicity
Recall that the family of copula functions is bounded: for any copula C,
C−(u1, · · · , ud) ≤ C(u1, · · · , ud) ≤ C+(u1, · · · , ud),
for any (u1, · · · , ud) ∈ [0, 1]^d, where

C−(u1, · · · , ud) = max{0, u1 + · · · + ud − (d − 1)}

and

C+(u1, · · · , ud) = min{u1, · · · , ud}.

While C+ is always a copula, C− is a copula only in dimension d = 2. The upper bound C+ is called the comonotonic copula.
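The failure of C− beyond d = 2 can be seen numerically (our own sketch): C− assigns the box [1/2, 1]^3 a negative "volume", so it is not 3-increasing.

```python
from itertools import product

def lower_frechet(u):
    # C^-(u) = max(0, u1 + ... + ud - (d-1))
    return max(0.0, sum(u) - (len(u) - 1))

def box_volume(C, a, b):
    """Inclusion-exclusion over the 2^d corners of the box [a, b]."""
    d = len(a)
    return sum((-1) ** (d - sum(c)) * C([b[i] if c[i] else a[i] for i in range(d)])
               for c in product((0, 1), repeat=d))

v3 = box_volume(lower_frechet, [0.5] * 3, [1.0] * 3)   # = -0.5 < 0: not a copula
v2 = box_volume(lower_frechet, [0.5] * 2, [1.0] * 2)   # >= 0: fine when d = 2
```

The eight corner terms give 1 − 3 × 0.5 + 0 − 0 = −0.5, so no random vector on [0, 1]^3 can have C− as its c.d.f.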
@freakonometrics 37
Fréchet-Hoeffding bounds C− and C+, and Comonotonicity
Let X ∈ F(F1, · · · , Fd). Let X+ ∈ F(F1, · · · , Fd) denote a random vector with copula C+, called the comonotonic version of X. In dimension d = 2, let X− ∈ F(F1, F2) be a counter-comonotonic version of X.
@freakonometrics 38
Fréchet-Hoeffding bounds C− and C+
1. If d = 2, C− is the c.d.f. of (U, 1− U) where U ∼ U([0, 1]).
2. (X1, X2) has copula C− if and only if there exist φ strictly increasing and ψ strictly decreasing such that (X1, X2) = (φ(Z), ψ(Z)) for some random variable Z.

3. C+ is the c.d.f. of (U, · · · , U) where U ∼ U([0, 1]).

4. (X1, · · · , Xd) has copula C+ if and only if there exist strictly increasing functions φi such that (X1, · · · , Xd) = (φ1(Z), · · · , φd(Z)) for some random variable Z.
Those bounds can be used to bound other quantities: if h : R² → R is 2-increasing, then for any (X1, X2) ∈ F(F1, F2),

E(h(F_1^{−1}(U), F_2^{−1}(1 − U))) ≤ E(h(X1, X2)) ≤ E(h(F_1^{−1}(U), F_2^{−1}(U))),

where U ∼ U([0, 1]); see Tchen (1980) for more applications.
@freakonometrics 39
Elliptical Copulas
Let r ∈ (−1,+1), then the Gaussian copula with parameter r (in dimensiond = 2) is
C(u1, u2) = [1/(2π√(1 − r²))] ∫_{−∞}^{Φ^{−1}(u1)} ∫_{−∞}^{Φ^{−1}(u2)} exp(−(x² − 2rxy + y²)/(2(1 − r²))) dx dy,

where Φ is the c.d.f. of the N(0, 1) distribution,

Φ(x) = ∫_{−∞}^{x} (1/√(2π)) exp(−z²/2) dz.
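Sampling from this copula is straightforward (a sketch assuming numpy and scipy; the function name is ours): draw a correlated Gaussian pair via the Cholesky factor of the 2 × 2 correlation matrix, then push each coordinate through Φ. Kendall's tau of the Gaussian copula is (2/π) arcsin(r), which the simulation reproduces.

```python
import numpy as np
from scipy.stats import norm, kendalltau

rng = np.random.default_rng(0)

def rgaussian_copula(n, r):
    """(U1,U2) ~ Gaussian copula: Cholesky step z2 = r z1 + sqrt(1-r^2) z,
    then map the N(0,1) margins through Phi."""
    z1 = rng.standard_normal(n)
    z2 = r * z1 + np.sqrt(1.0 - r**2) * rng.standard_normal(n)
    return norm.cdf(z1), norm.cdf(z2)

u1, u2 = rgaussian_copula(20_000, r=0.7)
tau = kendalltau(u1, u2)[0]    # should be close to 2/pi * arcsin(0.7)
```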
@freakonometrics 40
Elliptical Copulas
Let r ∈ (−1, +1) and ν > 0; then the Student t copula with parameters r and ν is

C(u1, u2) = ∫_{−∞}^{T_ν^{−1}(u1)} ∫_{−∞}^{T_ν^{−1}(u2)} [Γ(ν/2 + 1)/(πν√(1 − r²) Γ(ν/2))] (1 + (x² − 2rxy + y²)/(ν(1 − r²)))^{−(ν/2 + 1)} dx dy,

where T_ν is the c.d.f. of the Student t distribution with ν degrees of freedom,

T_ν(x) = ∫_{−∞}^{x} [Γ((ν + 1)/2)/(√(νπ) Γ(ν/2))] (1 + z²/ν)^{−(ν + 1)/2} dz.
@freakonometrics 41
Archimedean Copulas, in dimension d = 2
Let φ denote a decreasing convex function (0, 1]→ [0,∞] such that φ(1) = 0 andφ(0) =∞. A (strict) Archimedean copula with generator φ is the copula definedas
C(u1, u2) = φ−1(φ(u1) + φ(u2)), for all u1, u2 ∈ [0, 1].
E.g. if φ(t) = t^{−α} − 1, with α > 0, this is the Clayton copula.

The generator of an Archimedean copula is not unique. Further, Archimedean copulas are symmetric, since C(u1, u2) = C(u2, u1).
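As a quick sketch (our own code), with the Clayton generator φ(t) = t^{−α} − 1 (α > 0) the inverse is φ^{−1}(s) = (1 + s)^{−1/α}, so the construction gives a closed form whose copula boundary condition C(u, 1) = u can be checked directly:

```python
def clayton(u1, u2, alpha=2.0):
    """phi(t) = t^{-alpha} - 1, phi^{-1}(s) = (1 + s)^{-1/alpha}, hence
    C(u1,u2) = phi^{-1}(phi(u1) + phi(u2)) = (u1^-alpha + u2^-alpha - 1)^(-1/alpha)."""
    return (u1 ** -alpha + u2 ** -alpha - 1.0) ** (-1.0 / alpha)

checks = [abs(clayton(u, 1.0) - u) < 1e-12 for u in (0.1, 0.4, 0.9)]  # C(u,1) = u
```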
@freakonometrics 42
Archimedean Copulas, in dimension d = 2
The only Archimedean copula that is radially symmetric, i.e. such that C(u1, u2) = C⋆(u1, u2), is the one with generator

φ(t) = − log((e^{−αt} − 1)/(e^{−α} − 1)). This is the Frank copula, from Frank (1979).
Some prefer a multiplicative version of Archimedean copulas,

C(u1, u2) = h^{−1}[h(u1) · h(u2)].

The link is h(t) = exp[φ(t)], or conversely φ(t) = log[h(t)].
@freakonometrics 43
Archimedean Copulas, in dimension d = 2
Remark: in dimension 1, if X ∼ F with F continuous, then P(F(X) ≤ t) = t, i.e. F(X) ∼ U([0, 1]).
Archimedean copulas can also be characterized by their Kendall function,

K(t) = P[C(U1, U2) ≤ t] = t − λ(t), where λ(t) = φ(t)/φ′(t)

and where (U1, U2) ∼ C. Conversely,

φ(t) = exp(∫_{t0}^{t} ds/λ(s)),

where t0 ∈ (0, 1) is some arbitrary constant.
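A numerical sanity check of the inverse relation (our own sketch, scipy assumed): since 1/λ(s) = φ′(s)/φ(s) = (log φ)′(s), the exponential of the integral gives back φ(t)/φ(t0), i.e. the generator up to the irrelevant scale factor 1/φ(t0). With the Clayton generator φ(t) = t^{−α} − 1:

```python
import numpy as np
from scipy.integrate import quad

alpha = 2.0
phi = lambda t: t ** -alpha - 1.0
dphi = lambda t: -alpha * t ** (-alpha - 1.0)
lam = lambda t: phi(t) / dphi(t)              # lambda(t) = phi(t) / phi'(t)

t0 = 0.5
def phi_from_lambda(t):
    # exp( int_{t0}^t ds / lambda(s) ) = phi(t) / phi(t0)
    integral, _ = quad(lambda s: 1.0 / lam(s), t0, t)
    return np.exp(integral)

err = max(abs(phi_from_lambda(t) - phi(t) / phi(t0)) for t in (0.2, 0.4, 0.7, 0.9))
```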
@freakonometrics 44
Archimedean Copulas, in dimension d = 2
Note that Archimedean copulas can also be defined when φ(0) ≤ ∞, i.e. when φ(0) is possibly finite.
Let φ denote a decreasing convex function (0, 1] → [0, ∞] such that φ(1) = 0. Define the pseudo-inverse of φ as

φ^{[−1]}(t) = φ^{−1}(t) for 0 ≤ t ≤ φ(0), and φ^{[−1]}(t) = 0 for φ(0) < t < ∞.

An Archimedean copula with generator φ is the copula defined as

C(u1, u2) = φ^{[−1]}(φ(u1) + φ(u2)), for all u1, u2 ∈ [0, 1].
Non-strict Archimedean copulas have a non-empty null set, {(u1, u2) : φ(u1) + φ(u2) > φ(0)}, such that

P((U1, U2) ∈ {(u1, u2) : φ(u1) + φ(u2) > φ(0)}) = 0.
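A small sketch (our own code) of the non-strict construction: with the generator φ(t) = 1 − t, which has φ(0) = 1 < ∞, the pseudo-inverse construction reproduces the lower Fréchet-Hoeffding bound max(0, u1 + u2 − 1):

```python
def archimedean_2d(phi, phi_inv, phi_at_0):
    """C(u1,u2) = phi^{[-1]}(phi(u1) + phi(u2)), where the pseudo-inverse
    equals phi^{-1} on [0, phi(0)] and 0 beyond."""
    inv = lambda t: phi_inv(t) if t <= phi_at_0 else 0.0
    return lambda u1, u2: inv(phi(u1) + phi(u2))

phi = lambda t: 1.0 - t          # non-strict generator: phi(0) = 1 finite
C = archimedean_2d(phi, lambda s: 1.0 - s, 1.0)

vals = [(C(u, v), max(0.0, u + v - 1.0))     # matches the lower Frechet bound
        for u in (0.1, 0.5, 0.9) for v in (0.2, 0.8)]
```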
@freakonometrics 45
Archimedean Copulas, in dimension d = 2
This set is bounded by the zero curve, {(u1, u2) : φ(u1) + φ(u2) = φ(0)}, with mass

P((U1, U2) ∈ {(u1, u2) : φ(u1) + φ(u2) = φ(0)}) = − φ(0)/φ′(0+),

which is strictly positive if −φ′(0+) < +∞.

E.g. if φ(t) = (t^{−α} − 1)/α, with α ∈ [−1, ∞), α ≠ 0, and limiting case φ(t) = − log(t) as α → 0: this is the Clayton copula.
@freakonometrics 46
Archimedean Copulas, in dimension d = 2

     ψ(t)                                range of θ          name
(1)  (t^{−θ} − 1)/θ                      [−1, 0) ∪ (0, ∞)    Clayton, Clayton (1978)
(2)  (1 − t)^θ                           [1, ∞)
(3)  log[(1 − θ(1 − t))/t]               [−1, 1)             Ali-Mikhail-Haq
(4)  (− log t)^θ                         [1, ∞)              Gumbel, Gumbel (1960), Hougaard (1986)
(5)  − log[(e^{−θt} − 1)/(e^{−θ} − 1)]   (−∞, 0) ∪ (0, ∞)    Frank, Frank (1979), Nelsen (1987)
(6)  − log{1 − (1 − t)^θ}                [1, ∞)              Joe, Frank (1981), Joe (1993)
(7)  − log{θt + (1 − θ)}                 (0, 1]
(8)  (1 − t)/(1 + (θ − 1)t)              [1, ∞)
(9)  log(1 − θ log t)                    (0, 1]              Barnett (1980), Gumbel (1960)
(10) log(2t^{−θ} − 1)                    (0, 1]
(11) log(2 − t^θ)                        (0, 1/2]
(12) (1/t − 1)^θ                         [1, ∞)
(13) (1 − log t)^θ − 1                   (0, ∞)
(14) (t^{−1/θ} − 1)^θ                    [1, ∞)
(15) (1 − t^{1/θ})^θ                     [1, ∞)              Genest & Ghoudi (1994)
(16) (θ/t + 1)(1 − t)                    [0, ∞)
@freakonometrics 47
Archimedean Copulas, in dimension d ≥ 2
Archimedean copulas are associative (see Schweizer & Sklar (1983)), i.e.

C(C(u1, u2), u3) = C(u1, C(u2, u3)), for all 0 ≤ u1, u2, u3 ≤ 1.

In dimension d > 2, assume that φ^{−1} is d-completely monotone (where ψ is d-completely monotone if it is continuous and (−1)^k d^k ψ(t)/dt^k ≥ 0 for all k = 0, 1, · · · , d).

An Archimedean copula in dimension d ≥ 2 is defined as

C(u1, · · · , ud) = φ^{−1}(φ(u1) + · · · + φ(ud)), for all u1, · · · , ud ∈ [0, 1].
@freakonometrics 48
Archimedean Copulas, in dimension d ≥ 2
Those copulas are obtained iteratively, starting with
C2(u1, u2) = φ−1(φ(u1) + φ(u2))
and then, for any n ≥ 2,
Cn+1(u1, · · · , un+1) = C2(Cn(u1, · · · , un), un+1).
Let ψ denote the Laplace transform of a positive random variable Θ; then (by Bernstein's theorem) ψ is completely monotone, and ψ(0) = 1. Hence φ = ψ^{−1} is an Archimedean generator in any dimension d ≥ 2. E.g. if Θ ∼ G(1/α, 1), then ψ(t) = (1 + t)^{−1/α}, and we obtain the Clayton copula.
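The frailty construction doubles as a sampling algorithm (a sketch with names and parameterization chosen by us: Θ ∼ Gamma with shape 1/α and scale 1 has Laplace transform ψ(t) = (1 + t)^{−1/α}): drawing U_i = ψ(E_i/Θ) with E_i i.i.d. standard exponential yields the Clayton copula, whose Kendall tau is α/(α + 2).

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(1)

def rclayton(n, alpha, d=2):
    """Frailty algorithm: Theta ~ Gamma(1/alpha, 1), E_i ~ Exp(1) i.i.d.,
    U_i = psi(E_i / Theta) with psi(t) = (1 + t)^{-1/alpha}."""
    theta = rng.gamma(1.0 / alpha, 1.0, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / theta) ** (-1.0 / alpha)

u = rclayton(20_000, alpha=2.0)
tau = kendalltau(u[:, 0], u[:, 1])[0]   # Kendall's tau of Clayton: alpha/(alpha+2)
```

With α = 2 the theoretical tau is 1/2, which the simulated sample should approximate.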
@freakonometrics 49
Archimedean Copulas, in dimension d ≥ 2
Let X = (X1, · · · , Xd) denote remaining lifetimes, with a joint survival distribution function that is Schur-constant, i.e. there is S : R+ → [0, 1] such that

P(X1 > x1, · · · , Xd > xd) = S(x1 + · · · + xd).

Then the margins Xi are also Schur-constant, and the survival copula of X is Archimedean with generator S^{−1}. Observe further that

P(Xi − xi > t | X > x) = P(Xj − xj > t | X > x),

for all t > 0 and x ∈ R+^d. Hence, if S is a power function, we obtain the Clayton copula; see Nelsen (2005).
@freakonometrics 50
Archimedean Copulas, in dimension d ≥ 2
Let (Cn) be a sequence of absolutely continuous Archimedean copulas, with generators (φn). The limit of Cn, as n → ∞, is Archimedean if either

• there is a generator φ such that, for all s, t ∈ [0, 1],

lim_{n→∞} φn(s)/φ′n(t) = φ(s)/φ′(t);

• there is a continuous function λ such that lim_{n→∞} λn(t) = λ(t);

• there is a continuous function K such that lim_{n→∞} Kn(t) = K(t);

• there is a sequence of positive constants (cn) such that lim_{n→∞} cn φn(t) = φ(t), for all t ∈ [0, 1].
@freakonometrics 51
Copulas, Optimal Transport and Matching
The Monge-Kantorovich problem is

min_{T : R → R} { ∫ ℓ(x1, T(x1)) dF1(x1) : T(X1) ∼ F2 when X1 ∼ F1 }

for some loss function ℓ, e.g. ℓ(x1, x2) = [x1 − x2]².

In the Gaussian case, if Xi ∼ N(0, σi²), then T⋆(x1) = (σ2/σ1) · x1.

Equivalently,

min_{F ∈ F(F1, F2)} { ∫ ℓ(x1, x2) dF(x1, x2) } = min_{F ∈ F(F1, F2)} { E_F[ℓ(X1, X2)] }.

If ℓ is quadratic, we want to maximize the correlation,

max_{F ∈ F(F1, F2)} { E_F[X1 · X2] }.
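Empirically, the optimal coupling for quadratic loss is the comonotone one (our own sketch, numpy assumed): sorting both samples implements the quantile matching x ↦ F_2^{−1}(F_1(x)), which in the Gaussian case approximates T⋆(x1) = (σ2/σ1) x1 with cost E[(X1 − T⋆(X1))²] = 1, whereas a random matching costs about σ1² + σ2² = 5.

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = np.sort(rng.normal(0.0, 1.0, 10_000))   # sample from F1 = N(0, 1)
x2 = np.sort(rng.normal(0.0, 2.0, 10_000))   # sample from F2 = N(0, 4)

# comonotone (quantile) matching: the coupling induced by C+
cost_comonotone = np.mean((x1 - x2) ** 2)    # ~ E[(X - 2X)^2] = 1
cost_random = np.mean((x1 - rng.permutation(x2)) ** 2)   # ~ 1 + 4 = 5
```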
@freakonometrics 52