2014 Spring CRUNCH Seminar (SDE/Lévy/fractional/spectral methods)
Numerical methods for SPDEs with TαS processes
Mengdi Zheng, George Em Karniadakis
Brown University / Pizza Seminar
March 21, 2014
Contents
- Background on TαS processes
- Numerical simulations of TαS processes
  - Compound Poisson (CP) approximation
  - Series representation
- Simulation of reaction-diffusion equations with TαS white noises
  - MC/CP, MC/S, PCM/CP, PCM/S
- Simulation of overdamped Langevin equations with TαS white noises
  - Integral-type PDE (by CP approximation)
  - TFPDEs
- Future work
Section 1.1: Lévy processes
- Definition of a Lévy process $X_t$ (a continuous-time analogue of a random walk):
  - Independent increments: for $t_0 < t_1 < \dots < t_n$, the random variables (RVs) $X_{t_0}, X_{t_1} - X_{t_0}, \dots, X_{t_n} - X_{t_{n-1}}$ are independent;
  - Stationary increments: the distribution of $X_{t+h} - X_t$ does not depend on $t$;
  - RCLL: right continuous with left limits;
  - Stochastic continuity: $\forall \varepsilon > 0$, $\lim_{h \to 0} P(|X_{t+h} - X_t| \ge \varepsilon) = 0$;
  - $X_0 = 0$ $P$-a.s.
- Decomposition of a Lévy process $X_t = G_t + J_t + ct$: a Gaussian process ($G_t$), a pure jump process ($J_t$), and a drift ($ct$).
Section 1.2: Pure jump process $J_t$
- Definition of the jump: $\Delta J_t = J_t - J_{t^-}$.
- Definition of the Poisson random measure (an RV):

  $N(t, U) = \sum_{0 \le s \le t} I_{\Delta J_s \in U}, \quad U \in \mathcal{B}(R_0), \; U \subset R_0.$   (1)

Figure: A sample path of $J_t$ represented in the space of jump times and jump sizes; $N(t, U)$ is the number of dots in the box $U$.
Section 1.2: Pure jump process $J_t$ (continued)
- Definition of the Lévy measure $\nu$:

  $\nu(U) = E[N(1, U)], \quad U \in \mathcal{B}(R_0), \; U \subset R_0.$   (2)

- Definition of the compensated Poisson random measure:

  $\tilde{N}(dt, dz) = N(dt, dz) - \nu(dz)\,dt = N(dt, dz) - E[N(dt, dz)].$   (3)

- The pure jump process $J_t$ in integral form:

  $J_t = \int_0^t \int_{R_0} z \, N(d\tau, dz) = \int_0^t \int_{R_0} z \,\big(\tilde{N}(d\tau, dz) + \nu(dz)\,d\tau\big).$   (4)
Section 1.3: Tempered α-stable (TαS) processes $L_t$
- A pure jump process.
- Lévy measure:

  $\nu(x) = \dfrac{c\, e^{-\lambda |x|}}{|x|^{\alpha+1}}, \quad 0 < \alpha < 2.$   (5)

- $c$: alters the intensity of jumps of all sizes.
- $\alpha$: determines the importance of the smaller jumps.
- $\lambda$: tempers the bigger jumps.
- The probability density function (PDF) of $L_t$ is not known in closed form unless $\alpha = 1/2$.
- Limit behavior: over short times it looks like an α-stable process; over long times it looks like a Brownian motion.
Section 1.3: TαS processes $L_t$ (continued)
- The parameters $(c, \alpha, \lambda)$ completely characterize a TαS process.

Figure: Lévy measures of TαS processes for $(\alpha, c, \lambda) = (0.1, 0.01, 100)$, $(0.1, 0.01, 1)$, and $(0.2, 0.01, 100)$ (left); sample paths of $L_t$ with $\alpha = 1$, $c = 1$, series truncation $Q = 10000$, and $\lambda = 0.01, 1, 10$ (right).
Section 2: Numerical simulation of TαS processes
- Compound Poisson (CP) approximation
  - R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman & Hall/CRC Press, 2004.
- Series representation
  - J. Rosiński, Series representations of Lévy processes from the perspective of point processes, in: Lévy Processes - Theory and Applications, O. E. Barndorff-Nielsen, T. Mikosch and S. I. Resnick (Eds.), Birkhäuser, Boston, 2001, pp. 401-415.
  - J. Rosiński, On series representations of infinitely divisible random vectors, Ann. Probab., 18 (1990), pp. 405-430.
  - J. Rosiński, Series representations of infinitely divisible random vectors and a generalized shot noise in Banach spaces, University of North Carolina Center for Stochastic Processes, Technical Report No. 195, 1987.
  - J. Rosiński, Tempering stable processes, Stoch. Proc. Appl., 117 (2007), pp. 677-707.
Section 2.1: Simulation of TαS processes by CP approximation
- Main idea: we simulate the large jumps as a CP process and replace the small ones (size ≤ δ) with their expectation, as a drift term.
- The CP approximation $X^\delta_t$ for a TαS subordinator $X_t$ (with $\nu(x) = \frac{c\,e^{-\lambda x}}{x^{\alpha+1}} I_{x>0}$, positive jumps only) is:

  $X_t \approx X^\delta_t = \sum_{s \le t} \Delta X_s I_{\Delta X_s \ge \delta} + E\Big[\sum_{s \le t} \Delta X_s I_{\Delta X_s < \delta}\Big] \approx \sum_{i=1}^{\infty} J^\delta_i I_{T_i \le t} + b^\delta t.$   (6)

- Intensity of the CP process: $U(\delta) = c \int_\delta^{\infty} \frac{e^{-\lambda x}}{x^{\alpha+1}}\,dx$ (by numerical integration).
- Jump size distribution: $p^\delta(x) = \frac{1}{U(\delta)} \frac{c\,e^{-\lambda x}}{x^{\alpha+1}} I_{x \ge \delta}$ for $J^\delta_i$ (by the rejection sampling method).
- Drift: $b^\delta = c \int_0^{\delta} \frac{e^{-\lambda x}}{x^{\alpha}}\,dx$ (by numerical integration).
Section 2.1: Simulation of TαS processes by CP approximation - algorithm for CP processes
Here we describe how to simulate the trajectory of a CP process with intensity $U(\delta)$ and jump size distribution $\nu^\delta(x)/U(\delta)$, on the simulation time domain $[0, T]$, at time $t$.
- Simulate an RV $N$ from the Poisson distribution with parameter $U(\delta)\,T$, as the total number of jumps on the interval $[0, T]$.
- Simulate $N$ independent RVs $T_i$, uniformly distributed on the interval $[0, T]$, as the jump times.
- Simulate $N$ jump sizes $Y_i$ with distribution $\nu^\delta(x)/U(\delta)$.
Then the trajectory at time $t$ is given by $\sum_{i=1}^{N} I_{T_i \le t}\, Y_i$.
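The three steps above translate directly into code. A minimal Python sketch (an illustration, not the authors' implementation): the jump-size law is passed in as a callable, with a simple exponential stand-in used here for demonstration.

```python
import numpy as np

def cp_trajectory(t, T, intensity, sample_jump, rng):
    """Value at time t of a compound Poisson trajectory simulated on [0, T].

    intensity   -- the jump rate U(delta)
    sample_jump -- callable rng -> one jump size Y_i
    """
    n = rng.poisson(intensity * T)             # total number of jumps on [0, T]
    jump_times = rng.uniform(0.0, T, size=n)   # T_i ~ Uniform(0, T)
    jump_sizes = np.array([sample_jump(rng) for _ in range(n)])
    # sum the jumps that have occurred by time t
    return float(jump_sizes[jump_times <= t].sum()) if n > 0 else 0.0
```

For the full CP approximation $X^\delta_t$ of equation (6), one would add the drift $b^\delta t$ and plug in the rejection sampler of the next slide for `sample_jump`.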
Section 2.1: Simulation of TαS processes by CP approximation - algorithm of the rejection method
The distribution $p^\delta(x) = \frac{1}{U(\delta)} \frac{c\,e^{-\lambda x}}{x^{\alpha+1}} I_{x \ge \delta}$ can be bounded by:

  $p^\delta(x) \le \dfrac{c\,\delta^{-\alpha} e^{-\lambda\delta}}{\alpha\, U(\delta)}\, f^\delta(x),$   (7)

where $f^\delta(x) = \frac{\alpha\,\delta^{\alpha}}{x^{\alpha+1}} I_{x \ge \delta}$ is a Pareto density on $[\delta, \infty)$. The algorithm is:
REPEAT
  Generate RVs $W$ and $V$: independent and uniformly distributed on $[0, 1]$
  Set $X = \delta W^{-1/\alpha}$
  Set $T = \dfrac{c\,\delta^{-\alpha} e^{-\lambda\delta}\, f^\delta(X)}{\alpha\, U(\delta)\, p^\delta(X)}$
UNTIL $VT \le 1$
RETURN $X$.
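A Python sketch of this rejection sampler (illustrative, not the authors' code). Substituting the definitions of $p^\delta$ and $f^\delta$, the acceptance test $VT \le 1$ simplifies analytically to $V \le e^{-\lambda(X - \delta)}$, and the constants $c$ and $U(\delta)$ cancel, which is what the code checks:

```python
import numpy as np

def make_tas_jump_sampler(alpha, lam, delta):
    """Rejection sampler for the truncated TaS jump-size law
    p(x) proportional to exp(-lam*x) / x**(alpha+1) on [delta, inf),
    with Pareto proposal f(x) = alpha * delta**alpha / x**(alpha+1)."""
    def sample(rng):
        while True:
            w, v = rng.uniform(), rng.uniform()
            x = delta * w ** (-1.0 / alpha)      # inverse-CDF draw from the proposal
            if v <= np.exp(-lam * (x - delta)):  # V*T <= 1, simplified
                return x
    return sample
```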
Section 2.2: Simulation of TαS processes by series representation
- Let $\{\varepsilon_j\}$, $\{\eta_j\}$, and $\{\xi_j\}$ be sequences of i.i.d. RVs such that $P(\varepsilon_j = \pm 1) = 1/2$, $\eta_j \sim \mathrm{Exponential}(\lambda)$, and $\xi_j \sim \mathrm{Uniform}(0, 1)$. Let $\{\Gamma_j\}$ be the arrival times of a rate-one Poisson process, and let $\{U_j\}$ be i.i.d. uniform RVs on $[0, T]$.
- This representation converges almost surely, uniformly in $t$ (by J. Rosiński):

  $L_t = \sum_{j=1}^{+\infty} \varepsilon_j \Big[\Big(\tfrac{\alpha \Gamma_j}{2cT}\Big)^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha}\Big] I_{\{U_j \le t\}}, \quad 0 \le t \le T.$   (8)

- We will treat $\big[(\tfrac{\alpha \Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha}\big]$ as one RV to reduce the number of RVs.
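A truncated version of representation (8) vectorizes naturally. A Python sketch (a minimal illustration of the symmetric TαS case on the slide; `Q` is the truncation level, and note that NumPy's `exponential` takes a scale $1/\lambda$, not a rate):

```python
import numpy as np

def tas_series(ts, T, c, alpha, lam, Q, rng):
    """Truncated series representation (Q terms) of a symmetric TaS path,
    evaluated at the time points in `ts`."""
    eps = rng.choice([-1.0, 1.0], size=Q)          # epsilon_j = +-1 w.p. 1/2
    gam = np.cumsum(rng.exponential(1.0, size=Q))  # Gamma_j: rate-1 Poisson arrivals
    eta = rng.exponential(1.0 / lam, size=Q)       # eta_j ~ Exponential(lam)
    xi = rng.uniform(0.0, 1.0, size=Q)             # xi_j ~ Uniform(0, 1)
    u = rng.uniform(0.0, T, size=Q)                # U_j ~ Uniform(0, T)
    jumps = eps * np.minimum((alpha * gam / (2.0 * c * T)) ** (-1.0 / alpha),
                             eta * xi ** (1.0 / alpha))
    return np.array([jumps[u <= t].sum() for t in ts])
```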
Section 2.2: Simulation of TαS processes by series representation - simplifying the representation
- We calculated the PDF of the RV $A_j \wedge B_j$, where $A_j = (\tfrac{\alpha\Gamma_j}{2cT})^{-1/\alpha}$ and $B_j = \eta_j \xi_j^{1/\alpha}$. In both cases below the structure is $f_{A_j}(x)\, P(B_j > x) + f_{B_j}(x)\, P(A_j > x)$, with $\gamma_{\mathrm{inc}}$ the regularized lower incomplete gamma function.
- When $0 < \alpha < 1$:

  $f_{A_j \wedge B_j}(x) = \Big(\dfrac{\alpha}{x\,\Gamma(j)}\, e^{-t} t^{j}\Big|_{t = \frac{2cT}{\alpha x^{\alpha}}}\Big) \Big[\alpha \Gamma(1-\alpha) \lambda^{\alpha} \int_x^{+\infty} \big(1 - \gamma_{\mathrm{inc}}(\lambda z, 1-\alpha)\big) z^{\alpha-1}\,dz\Big] + \Big[\alpha \Gamma(1-\alpha) \lambda^{\alpha} \big(1 - \gamma_{\mathrm{inc}}(\lambda x, 1-\alpha)\big) x^{\alpha-1}\Big] \gamma_{\mathrm{inc}}\Big(\dfrac{2cT}{\alpha x^{\alpha}}, j\Big).$   (9)

- When $1 < \alpha < 2$:

  $f_{A_j \wedge B_j}(x) = \Big(\dfrac{\alpha}{x\,\Gamma(j)}\, e^{-t} t^{j}\Big|_{t = \frac{2cT}{\alpha x^{\alpha}}}\Big) \Big[\int_x^{+\infty} f_{\eta_j \xi_j^{1/\alpha}}(z)\,dz\Big] + f_{\eta_j \xi_j^{1/\alpha}}(x)\, \gamma_{\mathrm{inc}}\Big(\dfrac{2cT}{\alpha x^{\alpha}}, j\Big).$   (10)
Section 2.3.1: Inverse Gaussian (IG) process and K-S test
- An IG subordinator has Lévy measure ($\alpha = 1/2$):

  $\nu_{IG}(x) = \dfrac{c\,e^{-\lambda x}}{x^{3/2}} I_{x>0}.$   (11)

- PDF:

  $p_t(x) = \dfrac{ct}{x^{3/2}}\, e^{2ct\sqrt{\pi\lambda}}\, e^{-\lambda x - \pi c^2 t^2 / x}, \quad x > 0.$   (12)

- We perform the one-sample Kolmogorov-Smirnov test (K-S test) between the empirical cumulative distribution function (CDF) and the exact reference CDF:

  $KS = \sup_x |F_{em}(x) - F_{ex}(x)|, \quad x \in \mathrm{supp}(F).$   (13)
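The supremum in (13) is attained at the sample points, so the statistic can be computed directly from the sorted samples. A small generic Python sketch (not tied to the IG CDF, which the slides evaluate against numerically):

```python
import numpy as np

def ks_statistic(samples, cdf):
    """One-sample K-S statistic sup_x |F_em(x) - F_ex(x)|.
    `cdf` is the exact reference CDF, evaluated vectorized."""
    xs = np.sort(np.asarray(samples))
    n = len(xs)
    f = cdf(xs)
    # the empirical CDF jumps from (i-1)/n to i/n at the i-th order statistic
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)
```

With samples drawn from the reference distribution this returns a value of order $n^{-1/2}$, which matches the decay of the KS values reported in the next two slides.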
Section 2.3.2: histograms of IG process by CP
Figure: Empirical histograms of an IG subordinator (α = 1/2) simulated via the CP approximation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation contains s = 10^6 samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); the jump truncation sizes are δ = 0.1 (left, KS = 0.152843, CPU time 1450 s), δ = 0.02 (middle, KS = 0.009250, CPU time 5710 s), and δ = 0.005 (right, KS = 0.003414, CPU time 38531 s); the reference PDFs are plotted as red solid lines; the RelTol of the integrations in U(δ) and b^δ is 1e-8.
Section 2.3.3: histograms of IG process by Series rep
Figure: Empirical histograms of an IG subordinator (α = 1/2) simulated via the series representation at t = 0.5: the IG subordinator has c = 1, λ = 3; each simulation is done on the time domain [0, 0.5] and contains s = 10^6 samples (we zoom in and plot x ∈ [0, 1.8] to examine the approximation of the smaller jumps); the series truncation levels are Q_s = 10 (left, KS = 0.360572, CPU time 129 s), Q_s = 100 (middle, KS = 0.078583, CPU time 338 s), and Q_s = 1000 (right, KS = 0.040574, CPU time 2574 s); the reference PDFs are plotted as red solid lines.
Section 3.1: Reaction-diffusion equation with TαS white noise
- Problem:

  $du(t, x; \omega) = \Big(\dfrac{\partial^2 u}{\partial x^2} + \mu u\Big)\,dt + \varepsilon\, dL_t(\omega), \quad x \in [0, 2],$
  $u(t, 0) = u(t, 2)$ (periodic BC), $\quad u(0, x) = u_0(x) = \sin(\tfrac{\pi}{2} x)$ (initial condition),   (14)

  where $L_t(\omega)$ is a one-dimensional TαS process.
- Integral form:

  $u(t, x) = e^{\mu t - \frac{\pi^2}{4} t} \sin(\tfrac{\pi}{2} x) + \varepsilon\, e^{\mu t} \int_0^t e^{-\mu \tau}\, dL_\tau, \quad x \in [0, 2].$   (15)

- Simulations done by: MC/CP, MC/S, PCM/CP, PCM/S.
Section 3.1: Reaction-diffusion equation with TαS white noise (continued)
- If $\theta_t(z)$, $t \ge 0$, $z \in R_0$, is $\mathcal{F}$-adapted, we have the Itô isometry:

  $E\Big[\Big(\int_0^T \int_{R_0} \theta_t(z)\, \tilde{N}(dt, dz)\Big)^2\Big] = E\Big[\int_0^T \int_{R_0} \theta_t^2(z)\, \nu(dz)\, dt\Big].$   (16)

- The second moment is:

  $E_{ex}[u^2(t, x; \omega)] = e^{2\mu t - \frac{\pi^2}{2} t} \sin^2(\tfrac{\pi}{2} x) + \dfrac{c \varepsilon^2 e^{2\mu t}}{\mu \lambda^{2-\alpha}} (1 - e^{-2\mu t})\, \Gamma(2 - \alpha).$   (17)

- Define the error:

  $l2_{u^2}(t) = \dfrac{\|E_{ex}[u^2(x, t; \omega)] - E_{num}[u^2(x, t; \omega)]\|_{L^2([0,2])}}{\|E_{ex}[u^2(x, t; \omega)]\|_{L^2([0,2])}}.$   (18)
Section 3.2.1: First-order Euler scheme in MC
- We interpret $\int_0^T f(t)\, dL_t$ in the Itô sense, so the left end of each time interval is used:

  $\int_0^T f(t)\, dL_t = \lim_{n \to \infty} \sum_{i=1}^{n} f(t_{i-1}) (L_{t_i} - L_{t_{i-1}}).$   (19)

- By the first-order Euler method:

  $u^{n+1} - u^{n} = \Big(\dfrac{\partial^2 u^n}{\partial x^2} + \mu u^n\Big) \Delta t + \varepsilon\, (L_{t_{n+1}} - L_{t_n}).$   (20)
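The key point of scheme (20) is that the Lévy increments $L_{t_{n+1}} - L_{t_n}$ enter exactly like Brownian increments would. A scalar Python sketch with the diffusion term omitted (reaction part only, a simplification for illustration; the increment array `dL` can come from either the CP or the series simulation):

```python
import numpy as np

def euler_paths(u0, mu, eps, dL, dt):
    """First-order Euler stepping u^{n+1} = u^n + mu*u^n*dt + eps*dL_n
    (scalar sketch: the d^2u/dx^2 term of the full scheme is omitted).
    dL holds pregenerated increments L_{t_{n+1}} - L_{t_n}."""
    u = np.empty(len(dL) + 1)
    u[0] = u0
    for n, inc in enumerate(dL):
        u[n + 1] = u[n] + mu * u[n] * dt + eps * inc  # left endpoint: Ito sense
    return u
```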
Section 3.2.2: Comparing MC/CP and MC/S
Figure: l2_{u^2}(T) of the solution of equation (14) versus the number of samples s, obtained by MC with λ = 10 (left) and λ = 1 (right): T = 1, c = 0.1, α = 0.5, ε = 0.1, μ = 2 (left and right); the curves are MC/S with Q_s = 10, MC/CP with δ = 0.1 and δ = 0.01, and a C·s^{-1/2} reference line. Spatial discretization: N_x = 500 Fourier collocation points on [0, 2]; temporal discretization: first-order Euler scheme with time step Δt = 1e-5. In the CP approximation: RelTol = 1e-8 for the integration in U(δ).
Section 3.2.3: MC/CP vs. MC/S
- Cost: MC/CP costs much less CPU time than MC/S; e.g., MC/CP with δ = 0.01 costs half as much as MC/S with Q_s = 10 for the same s.
- Accuracy: with only half of the CPU time, MC/CP is more accurate than MC/S; e.g., compare the δ = 0.01 and Q_s = 10 lines.
- Convergence rate: in the left plot, MC/CP with δ = 0.01 is one order of magnitude more accurate than δ = 0.1 for smaller s; in the right plot, MC/CP with δ = 0.01 has almost the same accuracy as δ = 0.1 for smaller s, and then becomes more accurate for larger s. (Explain why.)
Section 3.3.1: PCM/CP and PCM/S
- PCM: suppose the solution is a function $v(Y^1, Y^2, \dots, Y^n)$ of a finite number of independent RVs $\{Y^1, Y^2, \dots, Y^n\}$; the $m$-th moment of the solution is evaluated by

  $E[v^m(Y^1, Y^2, \dots, Y^n)] = \sum_{i_1=1}^{d_1} \cdots \sum_{i_n=1}^{d_n} v^m(y^1_{i_1}, y^2_{i_2}, \dots, y^n_{i_n})\, w^1_{i_1} \cdots w^n_{i_n}.$   (21)

- Number of sample points:
  - In CP, $X_t \approx X^\delta_t \approx \sum_{i=1}^{Q_{cp}} J^\delta_i I_{T_i \le t} + b^\delta t$: $d^{2Q_{cp}}$ points.
  - In the series rep, $L_t = \sum_{j=1}^{+\infty} \varepsilon_j [(\tfrac{\alpha\Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha}] I_{\{U_j \le t\}}$: $d^{3Q_s}$ points.
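The tensor-product sum (21) can be sketched generically in Python (an illustration, not the authors' code; the 1-D rule below is a Gauss-Hermite rule for standard normal inputs, standing in for whichever distributions the collocation variables actually have):

```python
import itertools
import numpy as np

def pcm_moment(v, nodes, weights, m=1):
    """Approximate E[v(Y^1,...,Y^n)**m] by a full tensor-product
    quadrature; nodes/weights are lists of 1-D rules, one per RV."""
    total = 0.0
    for idx in itertools.product(*(range(len(r)) for r in nodes)):
        y = [nodes[k][i] for k, i in enumerate(idx)]
        w = np.prod([weights[k][i] for k, i in enumerate(idx)])
        total += v(*y) ** m * w
    return total

# 1-D Gauss-Hermite rule, normalized into a probability rule for N(0, 1)
y, w = np.polynomial.hermite_e.hermegauss(5)
w = w / w.sum()
```

With $d$ nodes per RV, the grid has $d^n$ points, which is exactly why reducing the number of RVs per jump (next slide) matters.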
Section 3.3.1: PCM/CP and PCM/S - fewer sampling points
- In CP, $d(Q_{cp} + 1)$ points instead of $d^{2Q_{cp}}$:

  $E[u^2(t, x; \omega)] \approx e^{2\mu t - \frac{1}{2}\pi^2 t} \sin^2(\tfrac{\pi}{2} x) + \varepsilon^2 e^{2\mu t}\, E[(J^\delta_1)^2] \sum_{i=1}^{Q_{cp}} E[e^{-2\mu T_i}].$   (22)

- In the series rep, $d\,Q_s$ points instead of $d^{3Q_s}$:

  $E[u^2(t, x; \omega)] \approx e^{2\mu t - \frac{1}{2}\pi^2 t} \sin^2(\tfrac{\pi}{2} x) + \varepsilon^2 e^{2\mu t}\, \dfrac{1}{2\mu T}(1 - e^{-2\mu T}) \sum_{j=1}^{Q_s} E\Big[\Big(\Big(\tfrac{\alpha\Gamma_j}{2cT}\Big)^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha}\Big)^2\Big].$   (23)

- Indeed, this reduction is possible whenever $E[F(X_1, \dots, X_d)] = G(E[f_1(X_1)], \dots, E[f_d(X_d)])$ for some functions $G$ and $f_i$.
Section 3.3.2: PCM/CP and PCM/S—-Results
Figure: l2_{u^2}(T) versus Q_cp (by PCM/CP, with δ = 0.1, 0.01, 0.001, 0.0001) or Q_s (by PCM/S), with λ = 1 (left) and λ = 0.01 (right): T = 1, c = 0.1, α = 0.5, ε = 0.1, μ = 0.1, N_x = 500 Fourier collocation points on [0, 2] (left and right). In PCM/CP: RelTol = 1e-10 for the integration in U(δ). In PCM/S: RelTol = 1e-8 for the integration of E[((αΓ_j/(2cT))^{-1/α} ∧ η_j ξ_j^{1/α})^2].
Section 3.3.3: PCM/CP and PCM/S - observations in three stages
- For smaller values of Q_s and Q_cp: PCM/S is more accurate and converges faster than PCM/CP. (Explain why.)
- For middle values of Q_s and Q_cp: the convergence rate of PCM/S slows down, but the convergence rate of PCM/CP goes up. (Explain why.)
- For larger values of Q_s and Q_cp: both PCM/CP and PCM/S stop converging, due to their own limitations.
  - Limitation of CP: when δ is smaller, the integration in $U(\delta) = c\int_\delta^{\infty} \frac{e^{-\lambda x}}{x^{\alpha+1}}\,dx$ becomes increasingly difficult to evaluate accurately.
  - Limitation of the series rep: when $j$ is larger, the density of $[(\tfrac{\alpha\Gamma_j}{2cT})^{-1/\alpha} \wedge \eta_j \xi_j^{1/\alpha}]$ spreads out over a large domain.
Section 3.4: comparing MC and PCM
Figure: l2_{u^2}(T) versus the number of samples s by: 1) MC/CP and PCM/CP with δ = 0.01, counting either s = d^{2Q_cp} or s = d(Q_cp + 1) points for d = 2, 3 (left); 2) MC/S with Q_s = 10 and PCM/S, counting either s = d^{3Q_s} or s = d·Q_s points for d = 2, 3 (right). T = 1, c = 0.1, α = 0.5, ε = 0.1, μ = 2. Spatial discretization: N_x = 500 Fourier collocation points on [0, 2]; temporal discretization: first-order Euler scheme (20) with time step Δt = 1e-5 (left and right). RelTol = 1e-8 in U(δ).
Section 4.1: Generalized Fokker-Planck (FP) equations for overdamped Langevin equations
- It is known that for any overdamped Langevin equation

  $dx(t) = f(x(t), t)\, dt + d\eta_t(\omega), \quad x(0) = x_0,$   (24)

- the PDF $P(x, t)$ of the solution satisfies the following generalized FP equation:

  $\dfrac{\partial}{\partial t} P(x, t) = -\dfrac{\partial}{\partial x}\big[f(x, t)\, P(x, t)\big] + \mathcal{F}^{-1}\{P_k(t) \ln S_k\},$   (25)

  where $S_k = E[e^{-ik\eta_1}]$, $P_k(t) = E[e^{-ikx(t)}]$, and $\mathcal{F}^{-1}$ denotes the inverse Fourier transform in $k$.
Section 4.2: FP eqns for overdamped Langevin equations
- We solve:

  $dx(t; \omega) = -\sigma x(t; \omega)\, dt + dL_t(\omega), \quad x(0) = x_0.$   (26)

- Method 1: we approximate $L_t$ by a CP process; the density then satisfies

  $\dfrac{\partial}{\partial t} P_{cp}(x, t) = [\sigma - 2U(\delta)]\, P_{cp}(x, t) + \sigma x \dfrac{\partial P_{cp}(x, t)}{\partial x} + \int_{-\infty}^{+\infty} P_{cp}(x - y, t)\, \dfrac{c\, e^{-\lambda |y|}}{|y|^{\alpha+1}}\, dy.$   (27)

  We will solve this by RK2 in time and Fourier collocation in space on a large domain $[-L, L]$.
Section 4.2: FP eqns for overdamped Langevin eqns
- Method 2: derive the tempered fractional PDEs (TFPDEs).
  - When $0 < \alpha < 1$, $S_k = \exp[-D\{(\lambda + ik)^\alpha - \lambda^\alpha\}]$, where $D = \frac{c}{\alpha}\,\Gamma(1 - \alpha)$ and $\Gamma(t) = \int_0^{+\infty} x^{t-1} e^{-x}\, dx$. The density $P_{ts}(x, t)$ satisfies:

  $\dfrac{\partial}{\partial t} P_{ts}(x, t) = \dfrac{\partial}{\partial x}\big(\sigma x P_{ts}(x, t)\big) - D(\alpha)\, \partial^{\alpha,\lambda}_x P_{ts}(x, t) - D(\alpha)\, \partial^{\alpha,\lambda}_{-x} P_{ts}(x, t), \quad 0 < \alpha < 1,$   (28)

  with the initial condition $P_{ts}(x, 0) = \delta(x - x_0)$, where

  $\partial^{\alpha,\lambda}_x f(x) = e^{-\lambda x} \dfrac{d^\alpha}{dx^\alpha}\big[e^{\lambda x} f(x)\big] - \lambda^\alpha f(x), \quad 0 < \alpha < 1,$   (29)

  $\partial^{\alpha,\lambda}_{-x} f(x) = e^{\lambda x} \dfrac{d^\alpha}{d(-x)^\alpha}\big[e^{-\lambda x} f(x)\big] - \lambda^\alpha f(x), \quad 0 < \alpha < 1.$   (30)

  We solve this by finite difference methods.
Section 4.2: FP eqns for overdamped Langevin eqns
- Method 2: derive the tempered fractional PDEs (TFPDEs), continued.
  - When $1 < \alpha < 2$, $S_k = \exp[D\{(\lambda + ik)^\alpha - \lambda^\alpha - ik\alpha\lambda^{\alpha-1}\}]$, where $D(\alpha) = \frac{c}{\alpha(\alpha - 1)}\,\Gamma(2 - \alpha)$. The density $P_{ts}(x, t)$ satisfies:

  $\dfrac{\partial}{\partial t} P_{ts}(x, t) = \dfrac{\partial}{\partial x}\big(\sigma x P_{ts}(x, t)\big) + D(\alpha)\, \partial^{\alpha,\lambda}_x P_{ts}(x, t) + D(\alpha)\, \partial^{\alpha,\lambda}_{-x} P_{ts}(x, t), \quad 1 < \alpha < 2,$   (31)

  with the initial condition $P_{ts}(x, 0) = \delta(x - x_0)$, where

  $\partial^{\alpha,\lambda}_x f(x) = e^{-\lambda x} \dfrac{d^\alpha}{dx^\alpha}\big[e^{\lambda x} f(x)\big] - \lambda^\alpha f(x) - \alpha\lambda^{\alpha-1} f'(x), \quad 1 < \alpha < 2,$   (32)

  $\partial^{\alpha,\lambda}_{-x} f(x) = e^{\lambda x} \dfrac{d^\alpha}{d(-x)^\alpha}\big[e^{-\lambda x} f(x)\big] - \lambda^\alpha f(x) + \alpha\lambda^{\alpha-1} f'(x), \quad 1 < \alpha < 2.$   (33)

  We solve this by finite difference methods.
Section 4.2: FP eqns for overdamped Langevin eqns (finite difference)
- The Grünwald-Letnikov finite difference:

  $\dfrac{d^\alpha}{dx^\alpha} f(x) = \lim_{h \to 0} \sum_{j=0}^{+\infty} \dfrac{1}{h^\alpha} W_j f(x - jh), \quad 0 < \alpha < 2,$   (34)

  and

  $\dfrac{d^\alpha}{d(-x)^\alpha} f(x) = \lim_{h \to 0} \sum_{j=0}^{+\infty} \dfrac{1}{h^\alpha} W_j f(x + jh), \quad 0 < \alpha < 2.$   (35)

- Note that $W_k = \binom{\alpha}{k}(-1)^k = \dfrac{\Gamma(k - \alpha)}{\Gamma(-\alpha)\,\Gamma(k + 1)}$ can be derived recursively via $W_0 = 1$, $W_1 = -\alpha$, $W_{k+1} = \dfrac{k - \alpha}{k + 1} W_k$.
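The weight recursion is one line of code. A Python sketch, checked against the closed form $\Gamma(k-\alpha)/(\Gamma(-\alpha)\Gamma(k+1))$:

```python
import math

def gl_weights(alpha, n):
    """First n Grunwald-Letnikov weights W_k = (-1)**k * binom(alpha, k),
    via the recursion W_0 = 1, W_{k+1} = (k - alpha)/(k + 1) * W_k."""
    w = [1.0]
    for k in range(n - 1):
        w.append((k - alpha) / (k + 1) * w[-1])
    return w
```

For α = 1 the weights reduce to the ordinary first-difference stencil [1, -1, 0, 0, ...].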
Section 4.2: TFPDEs for overdamped Langevin eqns (finite difference)
- When $0 < \alpha < 1$, fully implicit discretization scheme:

  $\dfrac{P^{n+1}_i - P^n_i}{\Delta t} = (\sigma + 2D(\alpha)\lambda^\alpha)\, P^{n+1}_i + \sigma x_i \dfrac{P^{n+1}_{i+1} - P^{n+1}_{i-1}}{2h} - \dfrac{D(\alpha)}{h^\alpha} \sum_{j=0}^{i} W_j e^{-\lambda j h} P^{n+1}_{i-j} - \dfrac{D(\alpha)}{h^\alpha} \sum_{j=0}^{N_x - i} W_j e^{-\lambda j h} P^{n+1}_{i+j}.$   (36)

- When $1 < \alpha < 2$, fully implicit discretization scheme:

  $\dfrac{P^{n+1}_i - P^n_i}{\Delta t} = (\sigma - 2D(\alpha)\lambda^\alpha)\, P^{n+1}_i + \sigma x_i \dfrac{P^{n+1}_{i+1} - P^{n+1}_{i-1}}{2h} + \dfrac{D(\alpha)}{h^\alpha} \sum_{j=0}^{i} W_j e^{-\lambda j h} P^{n+1}_{i-j} + \dfrac{D(\alpha)}{h^\alpha} \sum_{j=0}^{N_x - i} W_j e^{-\lambda j h} P^{n+1}_{i+j}.$   (37)
Section 4.2: dealing with the initial condition
In both the CP approximation and the series representation, we numerically approximate the initial condition by delta sequences, either with sinc functions

  $\delta^D_n = \dfrac{\sin(n\pi(x - x_0))}{\pi(x - x_0)}, \quad \lim_{n \to +\infty} \int_{-\infty}^{+\infty} \delta^D_n(x) f(x)\, dx = f(x_0),$   (38)

or with Gaussian functions

  $\delta^G_n = \sqrt{\dfrac{n}{\pi}}\, \exp(-n(x - x_0)^2), \quad \lim_{n \to +\infty} \int_{-\infty}^{+\infty} \delta^G_n(x) f(x)\, dx = f(x_0).$   (39)
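A quick numerical check of the Gaussian delta sequence in (39); a self-contained sketch in which the grid and the test function $f(x) = \cos x$ are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_delta(n, x0, x):
    """Normalized Gaussian delta sequence sqrt(n/pi) * exp(-n*(x - x0)^2)."""
    return np.sqrt(n / np.pi) * np.exp(-n * (x - x0) ** 2)

# integrate f against the sequence: the value should approach f(x0) as n grows
x = np.linspace(-12.0, 12.0, 20001)
dx = x[1] - x[0]
f = np.cos(x)
x0 = 1.0
vals = [float((gaussian_delta(n, x0, x) * f).sum() * dx) for n in (10, 100, 1000)]
```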
Section 4.3: evolution of density
Figure: Zoomed-in density plots of P_ts(x, t) for the solution of equation (26) at different times, obtained from solving equation (28) for α = 0.5 (left) and equation (31) for α = 1.5 (right): σ = 0.4, x_0 = 1, c = 1, λ = 10 (left); σ = 0.9, x_0 = 1, c = 0.005, λ = 0.01 (right). We have N_x = 2000 equidistant spatial points on [−12, 12] (left and right). The time step is Δt = 1e-4 (left and right). The initial conditions are approximated by δ^D_20 (left) and δ^G_40 (right).
Section 4.3: exact moments and definition of errors
- Solution in an integral form.
- The second moment of the exact solution of equation (26) is:

  $E[x^2(t)] = x_0^2\, e^{-2\sigma t} + \dfrac{c}{\sigma}(1 - e^{-2\sigma t})\, \dfrac{\Gamma(2 - \alpha)}{\lambda^{2-\alpha}}.$   (40)

- Let us define the errors of the first and the second moments to be

  $err_{1st}(t) = \dfrac{|E[x_{num}(t)] - E[x_{ex}(t)]|}{|E[x_{ex}(t)]|}, \quad err_{2nd}(t) = \dfrac{|E[x^2_{num}(t)] - E[x^2_{ex}(t)]|}{|E[x^2_{ex}(t)]|}.$   (41)
Section 4.3: density with CP approximation (int eqn)
Figure: err_1st and err_2nd of the solution of equation (26) versus time, obtained by solving the density equation (27) with the CP approximation of the TαS process L_t: c = 0.5, α = 0.95, λ = 10, σ = 0.01, x_0 = 1 (left); c = 0.01, α = 1.6, λ = 0.1, σ = 0.02, x_0 = 1 (right). We used RK2 in time with time step Δt = 2e-3, 1000 Fourier collocation points on [−12, 12] in space, δ = 0.012, RelTol = 1e-8 for U(δ), and the initial condition δ^D_20 (left and right).
Section 4.3: TFPDEs vs. PCM via moments

Figure: err_2nd versus time by: 1) TFPDEs; 2) PCM/CP. Problem: α = 0.5, c = 2, λ = 10, σ = 0.1, x_0 = 1 (left); α = 1.5, c = 0.01, λ = 0.01, σ = 0.1, x_0 = 1 (right). For PCM/CP: δ = 1e-5, Q_cp = 50, d = 2, RelTol = 1e-8 for U(δ) (left); δ = 1e-3, Q_cp = 30, d = 2, RelTol = 1e-8 for U(δ) (right). For the density approach: Δt = 2.5e-5, 2000 points on [−12, 12], δ = 0.012, IC given by δ^D_40 (left); Δt = 1e-5, 2000 points on [−20, 20], δ = 0.02, IC given by δ^G_40 (right).
Section 4.3: TFPDEs vs. MC via histograms

Figure: Zoomed-in plots of P_ts(x, T) by TFPDEs and MC/CP at T = 0.5 (left) and T = 1 (right): α = 0.5, c = 1, λ = 1, x_0 = 1, and σ = 0.01 (left and right). In MC/CP: s = 10^5, 316 bins, δ = 0.01, RelTol = 1e-8 for U(δ), Δt = 1e-3 (left and right). In the TFPDEs: Δt = 1e-5 and N_x = 2000 points on [−12, 12] in space (left and right).
Future work
- Derive the systems of TFPDEs corresponding to the solutions of SPDEs driven by TαS processes.