STAT440/840: Statistical Computing
Paul Marriott
MC 6096
February 16, 2005
Chapter 4: Simulation
• In this chapter we examine how to simulate random numbers from a range of statistical distributions.

• Many computers have built-in routines for generating independent random numbers from the uniform distribution U[0, 1], so we shall focus on how these may be manipulated in order to obtain random numbers from other distributions.

• Computer-generated random numbers are not random at all; they are deterministic. However, provided the generator is well designed, computer-generated random numbers are statistically indistinguishable from truly random numbers.

• We will assume that we have some method of generating these, and ask how to generate from other distributions if we start from a uniform random number generator.
Set up, notation, etc.
• Suppose we wish to generate independent random variables

X1, X2, . . .

each with distribution function

F(x) = Pr(Xi ≤ x)

and density function

f(x) = dF(x)/dx

• We assume that we have access to an infinite supply of independent U[0, 1] random variables, which we denote by

U1, U2, . . .

where, by definition, Pr(Ui ≤ u) = u for all u ∈ [0, 1] and each i.
Examples
What is the distribution of

X = ∑_{i=1}^{12} Ui − 6?

(Each Ui has mean 1/2 and variance 1/12, so X has mean 0 and variance 1; by the central limit theorem its distribution is approximately N(0, 1).)
Code for simulation
sim.fn1 <- function(nreps = 1000) {
  # nreps is the number of repetitions (default = 1000).
  # Each repetition yields one value of X.
  X <- NULL
  for (i in 1:nreps) {
    X[i] <- sum(runif(12)) - 6
  }
  answer <- X
  answer
}
Example
• The Box-Muller algorithm. This is an exact method of transforming independent U[0, 1] random variables into N(0, 1) random variables.

• Here, we examine how the method works.

1. Generate U1. Set Θ = 2πU1.
2. Generate U2. Set E = −log U2 and R = √(2E). (Note: log = log_e.)
3. Then X = R cos Θ and Y = R sin Θ are independent N(0, 1).
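The three steps above translate directly into code. The following is a minimal Python sketch (the function name box_muller is illustrative; the course examples use S-Plus):

```python
import math
import random

def box_muller(u1, u2):
    """Box-Muller: transform two independent U[0,1] draws into two
    independent N(0,1) draws. Requires u2 > 0 so that log u2 exists."""
    theta = 2.0 * math.pi * u1      # step 1
    e = -math.log(u2)               # step 2: E = -log U2
    r = math.sqrt(2.0 * e)          #         R = sqrt(2E)
    return r * math.cos(theta), r * math.sin(theta)  # step 3

# One standard-normal pair; 1 - random() lies in (0, 1], avoiding log(0).
x, y = box_muller(random.random(), 1.0 - random.random())
```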
Inversion method
Method: Take U ∼ U[0, 1] and let

X = F⁻¹(U)

Then X has distribution function F(·).

Proof: Clearly,

Pr(X ≤ x) = Pr(F⁻¹(U) ≤ x) = Pr(U ≤ F(x)) = F(x)

since Pr(U ≤ u) = u.

Note: In the discrete case, the above mechanism works provided we define F⁻¹(·) by F⁻¹(u) = min{x : F(x) ≥ u}.
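In the discrete case the generalised inverse F⁻¹(u) = min{x : F(x) ≥ u} amounts to finding the first cumulative probability that reaches u. A Python sketch (function name illustrative):

```python
import bisect
import random

def discrete_inverse(u, values, probs):
    """Generalised-inverse sampling for a discrete distribution:
    returns min{x : F(x) >= u}, where F is built from the probs."""
    cum, total = [], 0.0
    for p in probs:
        total += p
        cum.append(total)
    # index of the first cumulative probability >= u
    return values[bisect.bisect_left(cum, u)]

# e.g. a three-point distribution on {1, 2, 3}
x = discrete_inverse(random.random(), [1, 2, 3], [0.2, 0.5, 0.3])
```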
Example: Exponential
• The exponential distribution:

F(x) = 1 − exp(−λx)

for x ∈ [0, ∞).

• Solving F(x) = u gives

x = F⁻¹(u) = −λ⁻¹ log(1 − u)

Thus X = −λ⁻¹ log(1 − U) is exponentially distributed.

• Note that if U ∼ U[0, 1] then 1 − U ∼ U[0, 1] also, which implies X = −λ⁻¹ log U is exponentially distributed too.
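The inversion formula for the exponential is a one-liner. A Python sketch (rexp_inv is an illustrative name):

```python
import math
import random

def rexp_inv(lam, u=None):
    """Exponential(rate = lam) draw by inversion: X = -log(1 - U)/lam.
    Since 1 - U is also U[0,1], X = -log(U)/lam works equally well."""
    if u is None:
        u = random.random()   # u in [0, 1), so 1 - u in (0, 1]
    return -math.log(1.0 - u) / lam

x = rexp_inv(2.0)  # one Exponential(rate = 2) draw
```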
Rejection Method
Suppose we wish to simulate random variables X1, X2, . . . from the density f, and have a method available for generating Y1, Y2, . . . from the density g.

1. Generate Y from g.

2. With probability h(Y), set X = Y; otherwise return to 1.
Choice of h(x)
Our analysis of the rejection method starts by noting that

Pr(Y ≤ x and Y is accepted) = ∫_{−∞}^{x} g(y)h(y) dy

so

Pr(Y is accepted) = ∫_{−∞}^{∞} g(y)h(y) dy.

Combining these as a conditional probability, we therefore have

Pr(Y ≤ x | Y is accepted) = ∫_{−∞}^{x} g(y)h(y) dy / ∫_{−∞}^{∞} g(y)h(y) dy.

This shows that the accepted values have density

g(x)h(x) / ∫_{−∞}^{∞} g(y)h(y) dy
Choice of h(x)
Now, if

f(x)/g(x) ≤ M < ∞

for each x with g(x) > 0 and some fixed M > 0, we may take

h(x) = f(x)/{g(x)M}

Under this choice of h(x), the accepted values have density

g(x)f(x)/{g(x)M} / ∫_{−∞}^{∞} g(y)f(y){g(y)M}⁻¹ dy = f(x)M⁻¹ / ∫_{−∞}^{∞} f(y)M⁻¹ dy = f(x),

which is the target density.
Choice of h(x)
Furthermore, we have that

Pr(Y is accepted) = ∫_{−∞}^{∞} g(y)f(y){g(y)M}⁻¹ dy = M⁻¹.

Hence the number of proposals until a Y is accepted is geometrically distributed with mean M.
General rejection sampling algorithm
To sample from f(x) where f(x) ≤ Mg(x) for all x:

1. Generate Y from the density g(y), and then X from U[0, Mg(Y)].

2. Accept Y if X ≤ f(Y).

3. Repeat.

Note that the event X ≤ f(Y) occurs with probability f(Y){g(Y)M}⁻¹, which is the acceptance probability given above.
Example: Beta distribution
• Suppose we wish to simulate from a Beta(2, 3) distribution.

• This has density function 12x(1 − x)², which has maximum value 16/9.

• Thus we may bound the Beta(2, 3) density with a rectangle of this height (or any height greater than 16/9).
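A Python sketch of this rejection scheme with the flat envelope at height M = 16/9 (the function name rbeta23 is illustrative):

```python
import random

def rbeta23():
    """Rejection sampling for Beta(2,3), density f(x) = 12x(1-x)^2
    on [0,1], using a flat envelope of height M = 16/9."""
    M = 16.0 / 9.0
    while True:
        y = random.random()           # proposal Y ~ U[0,1]
        x = random.uniform(0.0, M)    # uniform height under the envelope
        if x <= 12.0 * y * (1.0 - y) ** 2:
            return y                  # accept Y when X <= f(Y)

sample = [rbeta23() for _ in range(1000)]
```

On average 1/M = 9/16 of the proposals are accepted, in line with the efficiency result above.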
Example: Beta distribution
[Figure: two panels on [0, 1] (vertical axis 0 to 1.5) showing the Beta(2, 3) density under a flat envelope, with accepted and rejected points. Caption: Rejection sampling with Uniform envelope.]
Efficiency of the method
• The efficiency of the method is governed by how many points are rejected, which in turn is determined by how similar f(x) and the bounding function are.

• The general form of the algorithm allows the bounding function, usually called the envelope, to be of the form Mg(x) instead of flat, but the idea is the same in essence.

• An important feature of the rejection method is that we need to know f only up to a normalising constant.
The Beta(a, b) distribution
• This has density

f(x) = k_{ab} f1(x)

where k_{ab} is a constant and

f1(x) = x^{a−1}(1 − x)^{b−1}

on x ∈ [0, 1].

• The Beta(a, b) density is bounded if and only if

a ≥ 1 and b ≥ 1

When this is the case, we may bound f1(x) with a uniform envelope.
Example: Beta density bounded case
The maximum value of f1(x) is

M1 = (a − 1)^{a−1}(b − 1)^{b−1} / (a + b − 2)^{a+b−2}

so the algorithm is as follows:

1. Generate Y ∼ U[0, 1].

2. Generate X ∼ U[0, M1].

3. Accept Y if X ≤ f1(Y).
Example: Beta density unbounded case
• If a ∈ (0, 1) and b ≥ 1, for example, then we cannot get away with a flat envelope.

• Instead, we take Y = U^{1/a} where U ∼ U[0, 1], so that g(x) = a x^{a−1}.

• Then f1(x)/g(x) = (1 − x)^{b−1}/a, which is bounded by a⁻¹ for all x since b ≥ 1.
Example: Beta density unbounded case
1. Generate Y = U^{1/a}.

2. Generate X ∼ U[0, Y^{a−1}].

3. Accept Y if X ≤ f1(Y).
Splus code
sim.fn2 <- function(a = 0.5, b = 2, nvals = 1000) {
  # Rejection sampling for the Beta(0.5, 2) distribution.
  X.accepted <- NULL
  i <- 0
  repeat {
    Y <- runif(1)^(1/a)
    X <- runif(1, 0, Y^(a - 1))
    if (X <= Y^(a - 1) * (1 - Y)^(b - 1)) {
      i <- i + 1
      X.accepted[i] <- Y
    }
    if (i >= nvals) break
  }
  answer <- X.accepted
  answer
}
Figure

[Figure: histogram of simvals (relative frequency, on [0, 1]) and a QQ-plot of simvals against the true quantiles.]
The Polar Algorithm
The Polar Algorithm (for two independent N(0, 1) variables)

1. REPEAT: Generate independent V1 ∼ U[−1, 1] and V2 ∼ U[−1, 1] until W = V1² + V2² ≤ 1.

2. Let C = √(−2W⁻¹ log W).

3. Set X = CV1 and Y = CV2; then X and Y are independent N(0, 1).
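The steps above can be sketched in Python (the function name polar_pair is illustrative; the method avoids the trigonometric calls of Box-Muller):

```python
import math
import random

def polar_pair():
    """Polar method: two independent N(0,1) draws by rejection
    from the unit disc, followed by the scaling C = sqrt(-2 log W / W)."""
    while True:
        v1 = random.uniform(-1.0, 1.0)
        v2 = random.uniform(-1.0, 1.0)
        w = v1 * v1 + v2 * v2
        if 0.0 < w <= 1.0:           # accept only points inside the disc
            c = math.sqrt(-2.0 * math.log(w) / w)
            return c * v1, c * v2

x, y = polar_pair()
```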
Ratio of Uniforms: The Cauchy distribution
• This has density function proportional to

(1 + x²)⁻¹

on x ∈ (−∞, ∞).

• Let (U, V) be uniformly distributed on the unit disc.

• Then V/U has the same distribution as the ratio of two independent N(0, 1) variables (see the Polar algorithm, above), and this is the distribution of tan Θ where Θ is uniform on [0, 2π].

• Thus V/U is Cauchy distributed.
Ratio of Uniforms: algorithm
Thus a simple algorithm for generating Cauchy variables is

1. REPEAT: Generate independent U ∼ U[−1, 1] and V ∼ U[−1, 1] until U² + V² ≤ 1.

2. Set X = V/U.
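As a Python sketch (rcauchy is an illustrative name; the check u != 0 guards the division, an event of probability zero):

```python
import random

def rcauchy():
    """Standard Cauchy draw as V/U for (U, V) uniform on the unit disc."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        if u != 0.0 and u * u + v * v <= 1.0:
            return v / u

x = rcauchy()
```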
General ratio of uniforms
In principle, the ratio of uniforms method can be used to generate from an arbitrary density, and also when the density is known only up to a constant of proportionality. This is shown by the following theorem:

Let h(x) be a non-negative function with ∫ h(x) dx < ∞ and define the set

C_h = {(u, v) : 0 ≤ u ≤ √(h(v/u))}.

Then C_h has finite area, and if (U, V) is uniformly distributed over C_h then X = V/U has density

h(x) / ∫ h(x) dx.
Ratio of Uniforms
Note: This result is most useful when C_h is contained in some rectangle

[0, a] × [b₋, b₊]

as rejection sampling may then be used to sample (U, V) pairs uniformly from C_h.

The algorithm is then

1. REPEAT: Generate independent U ∼ U[0, a] and V ∼ U[b₋, b₊] until (U, V) ∈ C_h.

2. Let X = V/U.
Ratio of Uniforms
The following result may be useful for constructing such a rectangle.

Theorem Suppose h(x) and x²h(x) are bounded. Then

C_h ⊂ [0, a] × [b₋, b₊]

where

a = √(sup h),
b₊ = √(sup{x²h(x) : x ≥ 0}),
b₋ = −√(sup{x²h(x) : x ≤ 0}).
Examples
1. The exponential distribution with density

h(x) = e⁻ˣ

on (0, ∞). Then a = 1, b₋ = 0 and b₊ = 2/e, and (u, v) ∈ C_h is equivalent to

u² ≤ e^{−v/u}

or equivalently,

v ≤ −2u log u

2. The normal distribution with density proportional to

exp(−x²/2)

Here, a = 1, b₊² = b₋² = 2/e, and (u, v) ∈ C_h is equivalent to

v² ≤ −4u² log u
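The normal case above can be sketched in Python: sample (u, v) uniformly on the bounding rectangle [0, 1] × [−√(2/e), √(2/e)] and accept when v² ≤ −4u² log u (the function name rnorm_rou is illustrative):

```python
import math
import random

def rnorm_rou():
    """N(0,1) by ratio of uniforms, rejecting from the rectangle
    [0, 1] x [-b, b] with b = sqrt(2/e)."""
    b = math.sqrt(2.0 / math.e)
    while True:
        u = random.random()
        v = random.uniform(-b, b)
        # accept when (u, v) lies in C_h, i.e. v^2 <= -4 u^2 log u
        if u > 0.0 and v * v <= -4.0 * u * u * math.log(u):
            return v / u

x = rnorm_rou()
```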
Composition
A mixture distribution is one that has density

f(x) = ∑_{i=1}^{r} p_i f_i(x)

where, for each i, f_i(x) is a density function and {p1, . . . , pr} are a set of weights that satisfy

∑_{i=1}^{r} p_i = 1

The value of p_i may be interpreted as the probability that an arbitrary observation comes from the distribution with density f_i(x).
Example: Mixture of Two Normals
sim.fn3 <- function(p1 = 0.3, m1 = 0, m2 = 1,
                    v1 = 1, v2 = 1, nvals = 100) {
  X.vals <- NULL
  i <- 0
  repeat {
    i <- i + 1
    u <- runif(1)
    if (u <= p1) {
      dum <- rnorm(1, m1, sqrt(v1))
    } else {
      dum <- rnorm(1, m2, sqrt(v2))
    }
    X.vals[i] <- dum
    if (i >= nvals) break
  }
  answer <- X.vals
  answer
}
Example
[Figure: histogram (relative frequency) of the simulated values on roughly [−2, 8]. Caption: Non-symmetric mixture of two normals.]
Multivariate distributions
The multivariate normal distribution Let X be a p-dimensional multivariate normal random variable. Then the density of X is given by

{det(2πΣ)}^{−1/2} exp{−(1/2)(x − µ)′Σ⁻¹(x − µ)}

where µ ∈ R^p is the mean and Σ is the p × p positive-definite variance-covariance matrix.
The multivariate normal distribution
Method 1: This method is based on factorising the variance-covariance matrix as Σ = SS′ for some p × p matrix S. Provided we are able to do this factorisation, we can simulate X as follows:

• Take Z1, . . . , Zp independent univariate N(0, 1), and let Z′ = (Z1, . . . , Zp).

• Set X = µ + SZ. Then X ∼ N(µ, Σ).

It is always possible to express the variance-covariance matrix Σ as Σ = SS′, for example via the Cholesky decomposition.
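A Python sketch of Method 1 in the bivariate case, computing the lower-triangular Cholesky factor by hand (the names chol2 and rmvnorm2 are illustrative):

```python
import math
import random

def chol2(sigma):
    """Cholesky factor S (lower triangular) of a 2x2 positive-definite
    matrix sigma, so that sigma = S S'."""
    s11 = math.sqrt(sigma[0][0])
    s21 = sigma[1][0] / s11
    s22 = math.sqrt(sigma[1][1] - s21 * s21)
    return [[s11, 0.0], [s21, s22]]

def rmvnorm2(mu, sigma):
    """One draw from the bivariate N(mu, Sigma): X = mu + S Z."""
    s = chol2(sigma)
    z = [random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]
    return [mu[0] + s[0][0] * z[0],
            mu[1] + s[1][0] * z[0] + s[1][1] * z[1]]

x = rmvnorm2([0.0, 0.0], [[4.0, 2.0], [2.0, 3.0]])
```

In practice one would use a library routine (e.g. a general Cholesky decomposition) rather than the hand-coded 2 × 2 factorisation shown here.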
The multivariate normal distribution
Method 2: This method generates the p-dimensional variable via a sequence of p separate univariate simulations. We write

X = µ + Y

where

Y = (Y1, . . . , Yp)′ ∼ N(0, Σ)

and generate the value y1 from the univariate distribution of Y1; then we generate y2 from the distribution of

Y2 | Y1 = y1

then y3 from the distribution of

Y3 | {Y1 = y1, Y2 = y2}

etc. This conditioning approach may be used for general multivariate distributions. Note: A | B means A conditional on B.
The multivariate normal distribution
For the multivariate normal distribution, it can be shown that all of the conditional distributions are univariate normal. More precisely, letting A_k denote the upper k × k sub-matrix of Σ, and

a′ = (Σ_{1k}, . . . , Σ_{k−1,k})

the conditional distribution of Y_k given

W_k = (Y1, . . . , Y_{k−1})′

is univariate normal with mean

a′ A_{k−1}⁻¹ W_k

and variance

Σ_{kk} − a′ A_{k−1}⁻¹ a
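For p = 2 these formulas reduce to Y2 | Y1 = y1 ∼ N(Σ12 y1 / Σ11, Σ22 − Σ12²/Σ11). A Python sketch of Method 2 in this bivariate case (the function name rmvnorm2_cond is illustrative):

```python
import math
import random

def rmvnorm2_cond(mu, sigma):
    """Bivariate N(mu, Sigma) by conditioning: draw y1 from its
    marginal N(0, Sigma_11), then y2 from Y2 | Y1 = y1, using the
    conditional mean and variance formulas with k = 2."""
    y1 = random.gauss(0.0, math.sqrt(sigma[0][0]))
    cond_mean = sigma[0][1] / sigma[0][0] * y1
    cond_var = sigma[1][1] - sigma[0][1] ** 2 / sigma[0][0]
    y2 = random.gauss(cond_mean, math.sqrt(cond_var))
    return [mu[0] + y1, mu[1] + y2]

x = rmvnorm2_cond([0.0, 0.0], [[4.0, 2.0], [2.0, 3.0]])
```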
General multivariate distributions
To generate Y ∈ R^p with multivariate distribution

F(y) = Pr(Y1 ≤ y1, . . . , Yp ≤ yp)

• Generate y1 from the marginal distribution of Y1, i.e. from the distribution F(y1, ∞, . . . , ∞).

• Generate y2 from the conditional distribution of Y2 | Y1 = y1.

• Generate y3 from the conditional distribution of Y3 | {Y1 = y1, Y2 = y2}.

etc.
General multivariate distributions
The reason this works is that the density of Y may be factorised into a sequence of conditional densities as follows:

f(y1, . . . , yp) = f(y1) · f(y1, . . . , yp)/f(y1)
               = f(y1) f(y2, . . . , yp | y1)
               = f(y1) · [f(y1, y2)/f(y1)] · [f(y1, . . . , yp)/f(y1, y2)]
               = f(y1) f(y2 | y1) f(y3, . . . , yp | y1, y2)
               ⋮
               = f(y1) f(y2 | y1) f(y3 | y1, y2) · · · f(yp | y1, . . . , yp−1).