TRANSCRIPT
Machine learning and portfolio selections. II.

Laszlo (Laci) Gyorfi

Department of Computer Science and Information Theory
Budapest University of Technology and Economics
Budapest, Hungary

September 22, 2007

e-mail: [email protected]
www.szit.bme.hu/~oti/portfolio
Gyorfi Machine learning and portfolio selections. II.
Dynamic portfolio selection: general case

x_i = (x_i^(1), ..., x_i^(d)) is the return vector on day i

b = b_1 is the portfolio vector for the first day, S_0 the initial capital

S_1 = S_0 · ⟨b_1, x_1⟩

For the second day S_1 is the new initial capital and the portfolio vector is b_2 = b(x_1):

S_2 = S_0 · ⟨b_1, x_1⟩ · ⟨b(x_1), x_2⟩.

On the nth day a portfolio strategy is b_n = b(x_1, ..., x_{n-1}) = b(x_1^{n-1}), so

S_n = S_0 ∏_{i=1}^{n} ⟨b(x_1^{i-1}), x_i⟩ = S_0 e^{n·W_n(B)}

with the average growth rate

W_n(B) = (1/n) ∑_{i=1}^{n} ln ⟨b(x_1^{i-1}), x_i⟩.
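The wealth recursion above can be sketched in a few lines of Python. This is an illustration only, not part of the lecture: the uniform strategy and the return matrix are hypothetical stand-ins for b(·) and the data.

```python
import numpy as np

def wealth(returns, strategy, s0=1.0):
    """Compute S_n = S_0 * prod_i <b(x_1^{i-1}), x_i> and the
    average growth rate W_n(B) = (1/n) sum_i ln <b(x_1^{i-1}), x_i>."""
    log_factors = []
    for i in range(len(returns)):
        b = strategy(returns[:i])          # portfolio depends on past returns only
        log_factors.append(np.log(np.dot(b, returns[i])))
    w_n = np.mean(log_factors)
    s_n = s0 * np.exp(len(returns) * w_n)  # S_n = S_0 * e^{n W_n(B)}
    return s_n, w_n

# constant-rebalanced uniform portfolio as a placeholder for b(.)
uniform = lambda past: np.full(2, 0.5)
x = np.array([[1.02, 0.99], [0.98, 1.03], [1.01, 1.00]])  # hypothetical daily returns
s_n, w_n = wealth(x, uniform)
```

Note that the daily factor ⟨b, x_i⟩ is computed from the portfolio chosen *before* day i's returns are revealed, which is exactly the causality built into b_n = b(x_1^{n-1}).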
log-optimum portfolio

X_1, X_2, ... are drawn from a vector-valued stationary and ergodic process.

The log-optimum portfolio B* = {b*(·)} satisfies

E{ln ⟨b*(X_1^{n-1}), X_n⟩ | X_1^{n-1}} = max_{b(·)} E{ln ⟨b(X_1^{n-1}), X_n⟩ | X_1^{n-1}}

where X_1^{n-1} = X_1, ..., X_{n-1}.
Optimality

Algoet and Cover (1988): If S*_n = S_n(B*) denotes the capital after day n achieved by a log-optimum portfolio strategy B*, then for any portfolio strategy B with capital S_n = S_n(B) and for any process {X_n}_{-∞}^{∞},

lim sup_{n→∞} ((1/n) ln S_n − (1/n) ln S*_n) ≤ 0 almost surely,

and for a stationary ergodic process {X_n}_{-∞}^{∞},

lim_{n→∞} (1/n) ln S*_n = W* almost surely,

where

W* = E{max_{b(·)} E{ln ⟨b(X_{-∞}^{-1}), X_0⟩ | X_{-∞}^{-1}}}

is the maximal growth rate of any portfolio.
Martingale difference sequences

For the proof of optimality we use the concept of martingale differences:

Definition

Let {Z_n} and {X_n} be two sequences of random variables such that

Z_n is a function of X_1, ..., X_n,
E{Z_n | X_1, ..., X_{n-1}} = 0 almost surely.

Then {Z_n} is called a martingale difference sequence with respect to {X_n}.
A strong law of large numbers

Chow Theorem: If {Z_n} is a martingale difference sequence with respect to {X_n} and

∑_{n=1}^{∞} E{Z_n^2}/n^2 < ∞,

then

lim_{n→∞} (1/n) ∑_{i=1}^{n} Z_i = 0 a.s.
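A quick simulation (my own illustration, not in the slides) shows the Chow theorem at work. Take Z_i = X_i · sign(X_{i-1}) with X_i i.i.d. standard normal: then E{Z_i | X_1, ..., X_{i-1}} = sign(X_{i-1}) E{X_i} = 0 and E{Z_i^2} = 1, so ∑ E{Z_i^2}/i^2 < ∞ and the running averages must vanish.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(100_001)
z = x[1:] * np.sign(x[:-1])   # martingale differences: E{Z_i | past} = 0, E{Z_i^2} = 1

# Chow: sum_n E{Z_n^2}/n^2 = sum_n 1/n^2 < inf, so (1/n) sum_i Z_i -> 0 a.s.
running_avg = np.cumsum(z) / np.arange(1, len(z) + 1)
final = abs(running_avg[-1])  # close to 0 for large n
```

The Z_i here are dependent (each uses the previous X), which is the point: the theorem needs no independence, only the vanishing conditional mean.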
A weak law of large numbers

Lemma: If {Z_n} is a martingale difference sequence with respect to {X_n}, then the {Z_n} are uncorrelated.

Proof. Put i < j.

E{Z_i Z_j} = E{E{Z_i Z_j | X_1, ..., X_{j-1}}}
           = E{Z_i E{Z_j | X_1, ..., X_{j-1}}}
           = E{Z_i · 0} = 0

Corollary

E{((1/n) ∑_{i=1}^{n} Z_i)^2} = (1/n^2) ∑_{i=1}^{n} ∑_{j=1}^{n} E{Z_i Z_j}
                             = (1/n^2) ∑_{i=1}^{n} E{Z_i^2}
                             → 0

if, for example, E{Z_i^2} is a bounded sequence.
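The lemma and its corollary can be checked numerically; the following sketch (my construction, not from the slides) uses Z_i = X_i · X_{i-1} with X_i i.i.d. standard normal, a martingale difference sequence whose terms are dependent yet uncorrelated, with E{Z_i^2} = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_001)
z = x[1:] * x[:-1]            # Z_i = X_i * X_{i-1}: E{Z_i | X_1..X_{i-1}} = X_{i-1} E{X_i} = 0

# Lemma: dependent but uncorrelated -- empirical E{Z_i Z_{i+1}} is near 0
corr = np.mean(z[:-1] * z[1:])

# Corollary: E{(average of n terms)^2} = (1/n^2) sum_i E{Z_i^2} = 1/n here
n = 10_000
avgs = z.reshape(-1, n).mean(axis=1)  # 20 disjoint averages of length n
mean_sq = np.mean(avgs ** 2)          # close to 1/n = 1e-4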
Constructing martingale difference sequences

Let {Y_n} be an arbitrary sequence such that Y_n is a function of X_1, ..., X_n. Put

Z_n = Y_n − E{Y_n | X_1, ..., X_{n-1}}.

Then {Z_n} is a martingale difference sequence:

Z_n is a function of X_1, ..., X_n,
E{Z_n | X_1, ..., X_{n-1}} = E{Y_n − E{Y_n | X_1, ..., X_{n-1}} | X_1, ..., X_{n-1}} = 0

almost surely.
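A small worked example of this centering trick (hypothetical choices, not from the slides): take Y_n = X_n + X_{n-1} with X_i i.i.d. N(mu, 1), so E{Y_n | X_1, ..., X_{n-1}} = mu + X_{n-1} and the centered Z_n = X_n − mu is a martingale difference.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.5
x = rng.normal(mu, 1.0, size=10_001)

# hypothetical Y_n = X_n + X_{n-1}: E{Y_n | X_1..X_{n-1}} = mu + X_{n-1}
y = x[1:] + x[:-1]
z = y - (mu + x[:-1])    # Z_n = Y_n - E{Y_n | X_1,...,X_{n-1}} = X_n - mu

z_bar = z.mean()         # near 0: {Z_n} is a martingale difference sequence
y_bar = y.mean()         # near 2*mu: {Y_n} itself is not centered
```

This is the same decomposition used in the proof of optimality: Y_n is the daily log-return term and its conditional expectation is split off, leaving a martingale difference remainder.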
Optimality

log-optimum portfolio B* = {b*(·)}:

E{ln ⟨b*(X_1^{n-1}), X_n⟩ | X_1^{n-1}} = max_{b(·)} E{ln ⟨b(X_1^{n-1}), X_n⟩ | X_1^{n-1}}

If S*_n = S_n(B*) denotes the capital after day n achieved by a log-optimum portfolio strategy B*, then for any portfolio strategy B with capital S_n = S_n(B) and for any process {X_n}_{-∞}^{∞},

lim sup_{n→∞} ((1/n) ln S_n − (1/n) ln S*_n) ≤ 0 almost surely.
Proof of optimality

(1/n) ln S_n = (1/n) ∑_{i=1}^{n} ln ⟨b(X_1^{i-1}), X_i⟩
             = (1/n) ∑_{i=1}^{n} E{ln ⟨b(X_1^{i-1}), X_i⟩ | X_1^{i-1}}
             + (1/n) ∑_{i=1}^{n} (ln ⟨b(X_1^{i-1}), X_i⟩ − E{ln ⟨b(X_1^{i-1}), X_i⟩ | X_1^{i-1}})

and

(1/n) ln S*_n = (1/n) ∑_{i=1}^{n} E{ln ⟨b*(X_1^{i-1}), X_i⟩ | X_1^{i-1}}
              + (1/n) ∑_{i=1}^{n} (ln ⟨b*(X_1^{i-1}), X_i⟩ − E{ln ⟨b*(X_1^{i-1}), X_i⟩ | X_1^{i-1}})

In both decompositions the second sum is an average of martingale differences, so it tends to 0 almost surely by the Chow theorem, while by the definition of b* the first sum for B never exceeds the first sum for B*; the limit relation follows.
Universally consistent portfolio

These limit relations give rise to the following definition:

Definition

An empirical (data-driven) portfolio strategy B is called universally consistent with respect to a class C of stationary and ergodic processes {X_n}_{-∞}^{∞} if, for each process in the class,

lim_{n→∞} (1/n) ln S_n(B) = W* almost surely.
Empirical portfolio selection

E{ln ⟨b*(X_1^{n-1}), X_n⟩ | X_1^{n-1}} = max_{b(·)} E{ln ⟨b(X_1^{n-1}), X_n⟩ | X_1^{n-1}}

For a fixed integer k > 0,

E{ln ⟨b(X_1^{n-1}), X_n⟩ | X_1^{n-1}} ≈ E{ln ⟨b(X_{n-k}^{n-1}), X_n⟩ | X_{n-k}^{n-1}}

and

b*(X_1^{n-1}) ≈ b_k(X_{n-k}^{n-1}) = arg max_{b(·)} E{ln ⟨b(X_{n-k}^{n-1}), X_n⟩ | X_{n-k}^{n-1}}.

Because of stationarity,

b_k(x_1^k) = arg max_{b(·)} E{ln ⟨b(x_1^k), X_{k+1}⟩ | X_1^k = x_1^k}
           = arg max_b E{ln ⟨b, X_{k+1}⟩ | X_1^k = x_1^k},

which is the maximization of the regression function

m_b(x_1^k) = E{ln ⟨b, X_{k+1}⟩ | X_1^k = x_1^k}.
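A toy sketch of this empirical maximization for two assets (my own construction, only loosely following the slides): replace the conditional expectation by an average over the past days whose preceding k-day window is nearest to the current one, and maximize over a grid on the simplex. The nearest-neighbour matching rule, the window count n_match, and the grid size are arbitrary illustration choices.

```python
import numpy as np

def b_k_hat(past_returns, k, n_match=10, grid=51):
    """Empirical stand-in for b_k = arg max_b E{ln <b, X_{k+1}> | X_1^k = x_1^k}:
    average ln <b, x_{i+1}> over the n_match past days whose preceding
    k-window is nearest (Euclidean) to the current window."""
    x = np.asarray(past_returns)               # shape (t, 2): two assets
    cur = x[-k:].ravel()                       # current k-day window
    windows = np.array([x[i:i + k].ravel() for i in range(len(x) - k)])
    nxt = x[k:]                                # return vector following each window
    idx = np.argsort(np.linalg.norm(windows - cur, axis=1))[:n_match]
    best_b, best_val = None, -np.inf
    for w in np.linspace(0.0, 1.0, grid):      # grid over the 2-asset simplex
        b = np.array([w, 1.0 - w])
        val = np.mean(np.log(nxt[idx] @ b))    # empirical version of m_b
        if val > best_val:
            best_b, best_val = b, val
    return best_b

rng = np.random.default_rng(2)
x = rng.uniform(0.95, 1.05, size=(200, 2))     # hypothetical return history
b = b_k_hat(x, k=2)
```

The nonparametric strategies discussed next replace this crude grid-and-match scheme by proper regression estimates.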
Regression function

Y real valued, X observation vector.

Regression function:

m(x) = E{Y | X = x}

i.i.d. data: D_n = {(X_1, Y_1), ..., (X_n, Y_n)}

Regression function estimate:

m_n(x) = m_n(x, D_n)

Local averaging estimates:

m_n(x) = ∑_{i=1}^{n} W_{ni}(x; X_1, ..., X_n) Y_i

L. Gyorfi, M. Kohler, A. Krzyzak, H. Walk (2002). A Distribution-Free Theory of Nonparametric Regression. Springer-Verlag, New York.
Correspondence

X ∼ X_1^k
Y ∼ ln ⟨b, X_{k+1}⟩
m(x) = E{Y | X = x} ∼ m_b(x_1^k) = E{ln ⟨b, X_{k+1}⟩ | X_1^k = x_1^k}
![Page 44: Machine learning and portfolio selections. II.oti/portfolio/lectures/tgyorfi2.pdf · Dynamic portfolio selection: general case x i = (x (1) i,...x (d) i) the return vector on day](https://reader033.vdocument.in/reader033/viewer/2022050522/5fa55da75cc43815ae03b580/html5/thumbnails/44.jpg)
Partitioning regression estimate
Partition Pn = {An,1,An,2 . . . }
An(x) is the cell of the partition Pn into which x falls
mn(x) =
∑ni=1 Yi I[Xi∈An(x)]∑ni=1 I[Xi∈An(x)]
Let Gn be the quantizer corresponding to the partition Pn:Gn(x) = j if x ∈ An,j .the set of matches
In(x) = {i ≤ n : Gn(x) = Gn(Xi )}
Then
mn(x) =
∑i∈In(x) Yi
|In(x)|.
Partitioning-based portfolio selection
fix k, ℓ = 1, 2, . . .
P_ℓ = {A_{ℓ,j}, j = 1, 2, . . . , m_ℓ} finite partitions of R^d,
G_ℓ the corresponding quantizer: G_ℓ(x) = j if x ∈ A_{ℓ,j}.
G_ℓ(x_1^n) = G_ℓ(x_1), . . . , G_ℓ(x_n),
the set of matches:

J_n = {k < i < n : G_ℓ(x_{i−k}^{i−1}) = G_ℓ(x_{n−k}^{n−1})}

b^{(k,ℓ)}(x_1^{n−1}) = arg max_b Σ_{i ∈ J_n} ln ⟨b , x_i⟩

if the set J_n is non-void, and b_0 = (1/d, . . . , 1/d) otherwise.
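A toy sketch of this expert for d = 2 assets, assuming a crude quantizer that rounds each coordinate to one decimal and a grid search over b = (p, 1−p) in place of the convex program (both are our illustrative choices, not the lecture's implementation):

```python
import math

def quantize(x):
    """Crude quantizer G_l: round each coordinate to one decimal."""
    return tuple(round(c, 1) for c in x)

def partitioning_expert(returns, k):
    """b^(k,l) for d = 2: collect the match set J_n of quantized strings, then
    maximize sum_{i in J_n} ln <b, x_i> over b = (p, 1-p) by grid search."""
    n = len(returns) + 1                      # returns = x_1, ..., x_{n-1}
    pattern = [quantize(x) for x in returns[-k:]]
    J = [i for i in range(k + 1, n)           # k < i < n (1-based days)
         if [quantize(x) for x in returns[i - k - 1:i - 1]] == pattern]
    if not J:
        return (0.5, 0.5)                     # uniform b_0 when J_n is void
    best, best_val = (0.5, 0.5), -math.inf
    for j in range(101):
        p = j / 100
        val = sum(math.log(p * returns[i - 1][0] + (1 - p) * returns[i - 1][1])
                  for i in J)
        if val > best_val:
            best, best_val = (p, 1 - p), val
    return best

a, b = (1.1, 0.9), (0.9, 1.1)                 # two alternating return vectors
print(partitioning_expert([a, b, a, b, a], k=1))  # days after an 'a' always saw 'b'
```

With these alternating returns, every past day matching the current pattern was followed by the second asset winning, so the expert puts all weight there.
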
Elementary portfolios
for fixed k, ℓ = 1, 2, . . ., B^{(k,ℓ)} = {b^{(k,ℓ)}(·)} are called elementary portfolios.

That is, b_n^{(k,ℓ)} quantizes the sequence x_1^{n−1} according to the partition P_ℓ and browses through all past appearances of the last seen quantized string G_ℓ(x_{n−k}^{n−1}) of length k.

Then it designs a fixed portfolio vector according to the returns on the days following the occurrence of the string.
Combining elementary portfolios
How to choose k, ℓ?
small k or small ℓ: large bias
large k and large ℓ: few matches, large variance
Machine learning: combination of experts
N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games.Cambridge University Press, 2006.
Exponential weighing
combine the elementary portfolio strategies B^{(k,ℓ)} = {b_n^{(k,ℓ)}}
let {q_{k,ℓ}} be a probability distribution on the set of all pairs (k, ℓ) such that q_{k,ℓ} > 0 for all k, ℓ.
for η > 0 put

w_{n,k,ℓ} = q_{k,ℓ} e^{η ln S_{n−1}(B^{(k,ℓ)})}

for η = 1,

w_{n,k,ℓ} = q_{k,ℓ} e^{ln S_{n−1}(B^{(k,ℓ)})} = q_{k,ℓ} S_{n−1}(B^{(k,ℓ)})

and

v_{n,k,ℓ} = w_{n,k,ℓ} / Σ_{i,j} w_{n,i,j}.

the combined portfolio b:

b_n(x_1^{n−1}) = Σ_{k,ℓ} v_{n,k,ℓ} b_n^{(k,ℓ)}(x_1^{n−1}).
S_n(B) = ∏_{i=1}^n ⟨b_i(x_1^{i−1}) , x_i⟩

= ∏_{i=1}^n [ Σ_{k,ℓ} w_{i,k,ℓ} ⟨b_i^{(k,ℓ)}(x_1^{i−1}) , x_i⟩ / Σ_{k,ℓ} w_{i,k,ℓ} ]

= ∏_{i=1}^n [ Σ_{k,ℓ} q_{k,ℓ} S_{i−1}(B^{(k,ℓ)}) ⟨b_i^{(k,ℓ)}(x_1^{i−1}) , x_i⟩ / Σ_{k,ℓ} q_{k,ℓ} S_{i−1}(B^{(k,ℓ)}) ]

= ∏_{i=1}^n [ Σ_{k,ℓ} q_{k,ℓ} S_i(B^{(k,ℓ)}) / Σ_{k,ℓ} q_{k,ℓ} S_{i−1}(B^{(k,ℓ)}) ]

= Σ_{k,ℓ} q_{k,ℓ} S_n(B^{(k,ℓ)}),

where the last step telescopes, using S_0(B^{(k,ℓ)}) = 1 and Σ_{k,ℓ} q_{k,ℓ} = 1.
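The identity can be checked numerically. A sketch using two constant portfolios as stand-in experts and random returns (our toy setup, not the lecture's experts):

```python
import random

random.seed(0)
experts = [[1.0, 0.0], [0.3, 0.7]]       # two constant stand-in experts
q = [0.4, 0.6]                           # prior weights
returns = [[random.uniform(0.9, 1.1) for _ in range(2)] for _ in range(50)]

S = [1.0, 1.0]                           # S_{i-1}(B^{(k)}) per expert
S_comb = 1.0                             # S_{i-1}(B) for the combined strategy
for x in returns:
    w = [qk * Sk for qk, Sk in zip(q, S)]            # eta = 1 weights
    total = sum(w)
    b = [sum(w[k] * experts[k][j] for k in range(2)) / total for j in range(2)]
    S_comb *= sum(bj * xj for bj, xj in zip(b, x))   # combined daily growth
    S = [Sk * sum(bj * xj for bj, xj in zip(experts[k], x))
         for k, Sk in enumerate(S)]                  # experts' own wealths

mixture = sum(qk * Sk for qk, Sk in zip(q, S))
print(abs(S_comb - mixture) < 1e-9)      # the two wealths agree, as derived above
```
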
The strategy B = B_H then arises from weighing the elementary portfolio strategies B^{(k,ℓ)} = {b_n^{(k,ℓ)}} such that the investor's capital becomes

S_n(B) = Σ_{k,ℓ} q_{k,ℓ} S_n(B^{(k,ℓ)}).
Theorem
Assume that
(a) the sequence of partitions is nested, that is, any cell of P_{ℓ+1} is a subset of a cell of P_ℓ, ℓ = 1, 2, . . .;
(b) if diam(A) = sup_{x,y∈A} ‖x − y‖ denotes the diameter of a set, then for any sphere S centered at the origin

lim_{ℓ→∞} max_{j : A_{ℓ,j} ∩ S ≠ ∅} diam(A_{ℓ,j}) = 0.

Then the portfolio scheme B_H defined above is universally consistent with respect to the class of all ergodic processes such that E{| ln X^{(j)}|} < ∞ for j = 1, 2, . . . , d.
L. Gyorfi, D. Schafer (2003) "Nonparametric prediction", in Advances in Learning Theory: Methods, Models and Applications, J. A. K. Suykens, G. Horvath, S. Basu, C. Micchelli, J. Vandevalle (Eds.), IOS Press, NATO Science Series, pp. 341-356. www.szit.bme.hu/∼gyorfi/histog.ps
Proof
We have to prove that

lim inf_{n→∞} W_n(B) = lim inf_{n→∞} (1/n) ln S_n(B) ≥ W* a.s.

W.l.o.g. we may assume S_0 = 1, so that

W_n(B) = (1/n) ln S_n(B)
= (1/n) ln ( Σ_{k,ℓ} q_{k,ℓ} S_n(B^{(k,ℓ)}) )
≥ (1/n) ln ( sup_{k,ℓ} q_{k,ℓ} S_n(B^{(k,ℓ)}) )
= (1/n) sup_{k,ℓ} ( ln q_{k,ℓ} + ln S_n(B^{(k,ℓ)}) )
= sup_{k,ℓ} ( W_n(B^{(k,ℓ)}) + (ln q_{k,ℓ})/n ).
Thus

lim inf_{n→∞} W_n(B) ≥ lim inf_{n→∞} sup_{k,ℓ} ( W_n(B^{(k,ℓ)}) + (ln q_{k,ℓ})/n )
≥ sup_{k,ℓ} lim inf_{n→∞} ( W_n(B^{(k,ℓ)}) + (ln q_{k,ℓ})/n )
= sup_{k,ℓ} lim inf_{n→∞} W_n(B^{(k,ℓ)})
= sup_{k,ℓ} ε_{k,ℓ}

Since the partitions P_ℓ are nested, we have that

sup_{k,ℓ} ε_{k,ℓ} = lim_{k→∞} lim_{ℓ→∞} ε_{k,ℓ} = W*.
Kernel regression estimate
Kernel function K(x) ≥ 0
Bandwidth h > 0

m_n(x) = ( Σ_{i=1}^n Y_i K((x − X_i)/h) ) / ( Σ_{i=1}^n K((x − X_i)/h) )

Naive (window) kernel function K(x) = I_{‖x‖≤1}:

m_n(x) = ( Σ_{i=1}^n Y_i I_{‖x−X_i‖≤h} ) / ( Σ_{i=1}^n I_{‖x−X_i‖≤h} )
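The naive-kernel case reduces to a moving-window average. A minimal 1-D sketch with illustrative data:

```python
def window_estimate(x, data, h):
    """Naive-kernel m_n(x): average of the Y_i with |x - X_i| <= h."""
    near = [y for (xi, y) in data if abs(xi - x) <= h]
    return sum(near) / len(near) if near else 0.0  # 0 when the window is empty

data = [(0.0, 1.0), (0.4, 2.0), (2.0, 7.0)]
print(window_estimate(0.2, data, h=0.5))  # averages 1.0 and 2.0 -> 1.5
```
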
Kernel-based portfolio selection
choose the radius r_{k,ℓ} > 0 such that for any fixed k,

lim_{ℓ→∞} r_{k,ℓ} = 0.

for n > k + 1, define the expert b^{(k,ℓ)} by

b^{(k,ℓ)}(x_1^{n−1}) = arg max_b Σ_{ {k < i < n : ‖x_{i−k}^{i−1} − x_{n−k}^{n−1}‖ ≤ r_{k,ℓ}} } ln ⟨b , x_i⟩,

if the sum is non-void, and b_0 = (1/d, . . . , 1/d) otherwise.
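The only change from the partitioning expert is the match rule: exact quantized equality becomes a ball of radius r_{k,ℓ}. A sketch of that match set (illustrative names and data):

```python
import math

def kernel_matches(returns, k, r):
    """Indices i (k < i < n) whose preceding k-vector string lies within
    Euclidean distance r of the most recent string x_{n-k}^{n-1}."""
    n = len(returns) + 1                  # returns = x_1, ..., x_{n-1}
    tail = returns[-k:]
    def dist(seg):
        return math.sqrt(sum((u - v) ** 2
                             for xa, xb in zip(seg, tail)
                             for u, v in zip(xa, xb)))
    return [i for i in range(k + 1, n) if dist(returns[i - k - 1:i - 1]) <= r]

rets = [(1.0, 1.0), (1.1, 0.9), (1.0, 1.0), (1.2, 0.8)]
print(kernel_matches(rets, k=1, r=0.2))   # only x_2 is close to x_4 -> [3]
```
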
Theorem
The kernel-based portfolio scheme is universally consistent with respect to the class of all ergodic processes such that E{| ln X^{(j)}|} < ∞ for j = 1, 2, . . . , d.

L. Gyorfi, G. Lugosi, F. Udina (2006) "Nonparametric kernel-based sequential investment strategies", Mathematical Finance, 16, pp. 337-357. www.szit.bme.hu/∼gyorfi/kernel.pdf
k-nearest neighbor (NN) regression estimate
m_n(x) = Σ_{i=1}^n W_{ni}(x; X_1, . . . , X_n) Y_i.

W_{ni} is 1/k if X_i is one of the k nearest neighbors of x among X_1, . . . , X_n, and W_{ni} is 0 otherwise.
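A minimal 1-D sketch of this estimate, with illustrative data:

```python
def knn_estimate(x, data, k):
    """k-NN m_n(x): weight 1/k on the k sample points closest to x."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for (_, y) in nearest) / k

data = [(0.0, 1.0), (0.1, 3.0), (5.0, 100.0)]
print(knn_estimate(0.05, data, k=2))  # mean of 1.0 and 3.0 -> 2.0
```
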
Nearest-neighbor-based portfolio selection
choose p_ℓ ∈ (0, 1) such that lim_{ℓ→∞} p_ℓ = 0
for fixed positive integers k, ℓ (n > k + ℓ̂ + 1) introduce the set of the ℓ̂ = ⌊p_ℓ n⌋ nearest neighbor matches:

J_n^{(k,ℓ)} = {i : k + 1 ≤ i ≤ n such that x_{i−k}^{i−1} is among the ℓ̂ NNs of x_{n−k}^{n−1} in x_1^k, . . . , x_{n−k}^{n−1}}.

Define the portfolio vector by

b^{(k,ℓ)}(x_1^{n−1}) = arg max_b Σ_{i ∈ J_n^{(k,ℓ)}} ln ⟨b , x_i⟩

if the sum is non-void, and b_0 = (1/d, . . . , 1/d) otherwise.
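The match set here is a rank-based variant of the kernel one: instead of a fixed radius, keep the ℓ̂ closest strings. A sketch, assuming for simplicity that only days i ≤ n − 1 (whose x_i is already observed) enter the set; names and data are illustrative:

```python
import math

def nn_matches(returns, k, p):
    """J_n^{(k,l)} sketch: the l_hat = floor(p * n) indices i whose preceding
    k-string is closest to x_{n-k}^{n-1}, restricted to i <= n - 1."""
    n = len(returns) + 1                  # returns = x_1, ..., x_{n-1}
    l_hat = max(1, int(p * n))
    tail = returns[-k:]
    def dist(i):
        return math.sqrt(sum((u - v) ** 2
                             for xa, xb in zip(returns[i - k - 1:i - 1], tail)
                             for u, v in zip(xa, xb)))
    return sorted(sorted(range(k + 1, n), key=dist)[:l_hat])

rets = [(1.0, 1.0), (1.1, 0.9), (1.0, 1.0), (1.2, 0.8)]
print(nn_matches(rets, k=1, p=0.2))       # the single nearest neighbor: [3]
```

Because membership depends only on distance ranks, rescaling all returns by a constant leaves the match set unchanged, which is the "no scaling problem" robustness noted in the theorem below.
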
Theorem
If for any vector s = s_1^k the random variable ‖X_1^k − s‖ has a continuous distribution function, then the nearest-neighbor portfolio scheme is universally consistent with respect to the class of all ergodic processes such that E{| ln X^{(j)}|} < ∞ for j = 1, 2, . . . , d.

NN is robust: there is no scaling problem.

L. Gyorfi, F. Udina, H. Walk (2006) "Nonparametric nearest neighbor based empirical portfolio selection strategies", (submitted), www.szit.bme.hu/∼gyorfi/NN.pdf
Semi-log-optimal portfolio
empirical log-optimal:

b^{(k,ℓ)}(x_1^{n−1}) = arg max_b Σ_{i ∈ J_n} ln ⟨b , x_i⟩

Taylor expansion: ln z ≈ h(z) = z − 1 − (1/2)(z − 1)^2

empirical semi-log-optimal:

h^{(k,ℓ)}(x_1^{n−1}) = arg max_b Σ_{i ∈ J_n} h(⟨b , x_i⟩) = arg max_b { ⟨b , m⟩ − ⟨b , Cb⟩ }

smaller computational complexity: quadratic programming

L. Gyorfi, A. Urban, I. Vajda (2007) "Kernel-based semi-log-optimal portfolio selection strategies", International Journal of Theoretical and Applied Finance, 10, pp. 505-516. www.szit.bme.hu/∼gyorfi/semi.pdf
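A d = 2 sketch of the semi-log-optimal objective, maximizing Σ h(⟨b, x_i⟩) directly by grid search over b = (p, 1−p); the grid search stands in for the quadratic program and the data are illustrative:

```python
def h(z):
    """Second-order Taylor polynomial of ln z around z = 1."""
    return z - 1 - 0.5 * (z - 1) ** 2

def semi_log_expert(matched_returns):
    """Maximize sum_{i in J_n} h(<b, x_i>) over b = (p, 1-p) on a grid."""
    best, best_val = (0.5, 0.5), float("-inf")
    for j in range(101):
        p = j / 100
        val = sum(h(p * x[0] + (1 - p) * x[1]) for x in matched_returns)
        if val > best_val:
            best, best_val = (p, 1 - p), val
    return best

print(semi_log_expert([(1.1, 0.9), (1.1, 0.9)]))  # all weight on the winner
```

Since h is a quadratic in b, the same maximizer is found by a QP solver on the ⟨b, m⟩ − ⟨b, Cb⟩ form, which is the computational advantage the slide points to.
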
Gyorfi Machine learning and portfolio selections. II.
![Page 91: Machine learning and portfolio selections. II.oti/portfolio/lectures/tgyorfi2.pdf · Dynamic portfolio selection: general case x i = (x (1) i,...x (d) i) the return vector on day](https://reader033.vdocument.in/reader033/viewer/2022050522/5fa55da75cc43815ae03b580/html5/thumbnails/91.jpg)
Semi-log-optimal portfolio
empirical log-optimal:
h(k,`)(xn−11 ) = arg max
b
∑i∈Jn
ln 〈b , xi 〉
Taylor expansion: ln z ≈ h(z) = z − 1− 12(z − 1)2
empiricalsemi-log-optimal:
h(k,`)(xn−11 ) = arg max
b
∑i∈Jn
h(〈b , xi 〉) = arg maxb
{〈b , m〉−〈b , Cb〉}
smaller computational complexity: quadratic programming
L. Gyorfi, A. Urban, I. Vajda (2007) ”Kernel-basedsemi-log-optimal portfolio selection strategies”, InternationalJournal of Theoretical and Applied Finance, 10, pp. 505-516.www.szit.bme.hu/∼gyorfi/semi.pdf
Conditions of the model:
Assume that
the assets are arbitrarily divisible,
the assets are available in unbounded quantities at the current price at any given trading period,
there are no transaction costs,
the behavior of the market is not affected by the actions of the investor using the strategy under investigation.
NYSE data sets
At www.szit.bme.hu/~oti/portfolio there are two benchmark data sets from the NYSE:

The first data set consists of daily data on 36 stocks over 22 years.

The second data set contains 23 stocks and spans 44 years.

Our experiments use the second data set.
Experiments on average annual yields (AAY)
Kernel-based semi-log-optimal portfolio selection with $k = 1, \dots, 5$ and $\ell = 1, \dots, 10$

$$r_{k,\ell}^2 = 0.0001 \cdot d \cdot k \cdot \ell$$
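The matched-day set $J_n$ behind these kernel experts can be sketched as follows: a past day is kept if its preceding $k$-day return window lies within Euclidean distance $r_{k,\ell}$ of the most recent $k$-day window. This is a minimal sketch; the helper name and the 0-based indexing convention are my choices.

```python
import numpy as np

def matching_days(past_returns, k, r):
    """Indices j such that the k-day return window preceding day j is
    within Euclidean distance r of the most recent k-day window
    (hypothetical helper sketching the kernel expert's set J_n)."""
    X = np.asarray(past_returns)     # shape (n-1, d): x_1, ..., x_{n-1}
    recent = X[-k:].ravel()          # flattened window x_{n-k}, ..., x_{n-1}
    return [j for j in range(k, len(X))
            if np.linalg.norm(X[j - k:j].ravel() - recent) <= r]
```

The expert with parameters $(k, \ell)$ then runs the (semi-)log-optimal maximization over exactly the returns $x_i$ with $i \in J_n$.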
The AAY of the kernel-based semi-log-optimal portfolio is 128%, i.e., the capital more than doubles every year.
MORRIS had the best AAY of the individual stocks, 20%.
The BCRP had an AAY of 24%.
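For concreteness, the average annual yield is the constant yearly rate that turns an initial capital of 1 into the final wealth $S_n$ over the trading horizon. A minimal sketch, assuming 250 trading days per year (the exact day-count convention is not stated on the slide):

```python
def average_annual_yield(final_wealth, n_days, days_per_year=250):
    # The constant yearly growth rate that turns an initial capital of 1
    # into final_wealth over n_days of trading.
    years = n_days / days_per_year
    return final_wealth ** (1.0 / years) - 1.0
```

Under this definition an AAY of 100% corresponds exactly to doubling the capital every year, so the reported 128% means the capital more than doubles annually.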
The average annual yields of the individual experts.
| ℓ \ k | 1 | 2 | 3 | 4 | 5 |
|-------|------|------|------|------|------|
| 1 | 20% | 19% | 16% | 16% | 16% |
| 2 | 118% | 77% | 62% | 24% | 58% |
| 3 | 71% | 41% | 26% | 58% | 21% |
| 4 | 103% | 94% | 63% | 97% | 34% |
| 5 | 134% | 102% | 100% | 102% | 67% |
| 6 | 140% | 125% | 105% | 108% | 87% |
| 7 | 148% | 123% | 107% | 99% | 96% |
| 8 | 132% | 112% | 102% | 85% | 81% |
| 9 | 127% | 103% | 98% | 74% | 72% |
| 10 | 123% | 92% | 81% | 65% | 69% |