TRANSCRIPT
Correlation Through Bounded Memory Strategies
Ron Peretz (Tel Aviv University)
(joint work with Olivier Gossner and Penelope Hernandez)
November, 2011
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 1 / 14
Introductory example

Take a random ordering of the integers 0, 1, . . . , 7.
Write them in binary representation. We have a random sequence of 24 bits, x_1, . . . , x_24.
How random is this sequence? How well does it play repeated matching pennies?
The entropy method shows that if we take n·2^n such bits, the guaranteed value converges to zero.
What is the value if we are allowed to take a glimpse at the realization of x_1, . . . , x_{n·2^n}?
Bounded memory.

Let's play! 101 001 111 011 000 100 110 010
Let's play again! 101 001 111 01...
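The construction in the example can be sketched in code (an illustrative Python sketch, not part of the talk): shuffle 0 through 7 and concatenate their 3-bit binary representations. The resulting 24-bit string carries only log2(8!) ≈ 15.3 bits of entropy, well below 24, which is why an observer can exploit it in repeated matching pennies.

```python
import math
import random

def random_ordering_bits(n):
    """Shuffle 0, ..., 2^n - 1 and concatenate their n-bit binary forms."""
    values = list(range(2 ** n))
    random.shuffle(values)
    return "".join(format(v, "0{}b".format(n)) for v in values)

bits = random_ordering_bits(3)               # the 24-bit sequence of the example
entropy_bits = math.log2(math.factorial(8))  # about 15.3 bits, well below 24
```

Every 3-bit block appears exactly once, so later blocks become increasingly predictable from earlier ones.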
The model – bounded memory

We consider the (undiscounted) repeated version of a finite game in strategic form G = ⟨A = ∏_{i∈N} A_i, g : A → R^N⟩.
An m-memory strategy τ is one that satisfies
#{τ|_h : h is a finite history} ≤ m,
where τ|_h is the strategy that τ induces on the subgame originating at h.
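The definition can be pictured as a finite automaton: the continuation strategies τ|_h correspond to the automaton's states, so an automaton with at most m states is an m-memory strategy. A minimal sketch (the class and the matching-pennies example are illustrative, not from the talk):

```python
# An m-memory strategy as a finite automaton: `action` maps each state to an
# own action, `step` maps (state, opponent's last action) to the next state.
# The number of distinct continuation strategies is at most the state count.

class AutomatonStrategy:
    def __init__(self, action, step, start):
        self.action = action  # state -> own action
        self.step = step      # (state, opponent's last action) -> next state
        self.state = start

    def play(self, opponent_last=None):
        if opponent_last is not None:
            self.state = self.step[(self.state, opponent_last)]
        return self.action[self.state]

# a 2-memory "copy the opponent's last move" strategy for matching pennies
copycat = AutomatonStrategy(
    action={0: 0, 1: 1},
    step={(s, a): a for s in (0, 1) for a in (0, 1)},
    start=0,
)
```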
Motivation

Consider the (undiscounted, infinitely) repeated game G(m_1, . . . , m_N) in which each player i is restricted to m_i-memory strategies.
We would like to be able to say intelligible things about its "solution."
For example,
- min-max and max-min (one maximizer against N − 1 minimizers),
- mixed and pure,
- individually rational level (mixed min-max) and equilibrium payoffs (folk theorem).
Those are tough problems.
Matching Pennies

1 0
0 1

val G = 1/2, pure maxmin G = 0.

Ben-Porath (1988). lim inf val G(m, 2^{o(m)}) ≥ 1/2.
Neyman (1998). lim val G(m, 2^{m/o(1)}) = 0.
Neyman–Spencer (2007). lim val G(m, 2^{(1−o(1))m}) = 0.
Neyman (2008). lim inf val G(m, 2^{Θm}) ≥ H^{−1}(1 − Θ).
Does lim_{m→∞} val G(m, 2^{Θm}) exist?
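The Neyman (2008) bound involves H^{−1}, the inverse of the entropy function; for the two-action case this is the binary entropy restricted to [0, 1/2], where it is increasing. A sketch of evaluating the inverse numerically by bisection (illustrative, assuming H here is binary entropy in bits):

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1 - p) log2 (1 - p), the entropy of a biased coin."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def inverse_binary_entropy(h, tol=1e-12):
    """The p in [0, 1/2] with binary_entropy(p) = h, found by bisection
    (binary_entropy is increasing on [0, 1/2])."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if binary_entropy(mid) < h:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, at Θ = 1/2 the guaranteed value is at least H^{−1}(1/2) ≈ 0.11.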
Self advertisement

For N = 2, there are solutions to restricted models of bounded memory.
- Neyman (2008) and Peretz (forthcoming MOR) solve G(m oblivious, 2^{Θm}),
- Peretz (2011 GEB) solves G(k-recall, m-recall).
For N = 3,
- Peretz (forthcoming IJGT) suggests some bounds for the mixed min-max of G(Θm-recall, m-recall, m-recall).
- The present work facilitates bounds for the mixed max-min of G(2^{Θm log m}, m, m) and the pure min-max of G(Θm, m, m).
Plan

Key question: What kind of probability distributions on plays can result from probability distributions on pairs of m-memory strategies?

Let N = 2. The set of m-memory strategies of player i is denoted A_i(m).
A pair (σ, τ) ∈ A_1(m) × A_2(m) generates a play x_1, x_2, . . .
Not every play can be generated this way. For example, the play must enter a loop in the first m² periods.
A probability distribution µ ∈ ∆(A_1(m) × A_2(m)) induces a probability distribution on plays, µ̄ ∈ ∆(A^ℕ).
What is the range of µ̄?
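The looping claim can be checked concretely: a pair of deterministic automata with m_1 and m_2 states has at most m_1·m_2 joint states, so the joint state, and hence the play, must repeat within m_1·m_2 periods. A small simulation (the two automata below are illustrative, not from the talk):

```python
# Two deterministic bounded-memory strategies as finite automata.  The joint
# state lives in a set of size at most m1 * m2, so it must repeat, and the
# play enters a loop, within m1 * m2 periods.

def simulate(act1, step1, act2, step2, s1, s2, horizon):
    """Play the two automata against each other; return (period where the
    loop starts, period at which the joint state first repeats)."""
    seen = {}
    for t in range(horizon):
        if (s1, s2) in seen:
            return seen[(s1, s2)], t
        seen[(s1, s2)] = t
        a1, a2 = act1[s1], act2[s2]
        s1, s2 = step1[(s1, a2)], step2[(s2, a1)]
    return None

act1 = {0: 0, 1: 1}                                      # play the remembered action
step1 = {(s, a): a for s in (0, 1) for a in (0, 1)}      # remember opponent's last move
act2 = {0: 0, 1: 1}
step2 = {(s, a): 1 - s for s in (0, 1) for a in (0, 1)}  # alternate regardless
entry, closure = simulate(act1, step1, act2, step2, 0, 0, 10)
```

Here both automata have 2 states, so the joint state repeats no later than period 4.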
Approximation of i.i.d. random variables

Lemma. For every ϵ > 0 there exists δ > 0 such that for every Q ∈ ∆(A), n ∈ ℕ, and random variables x_1, . . . , x_n assuming values in A, if
(1) ∥E[emp(x_1, . . . , x_n)] − Q∥ < δ, and
(2) (1/n) H(x_1, . . . , x_n) > H(Q) − δ,
then
(1/n) Σ_{k=1}^{n} E∥Pr(x_k | x_1, . . . , x_{k−1}) − Q∥ < ϵ,
and vice versa.
Here H is Shannon's entropy function and emp is the empirical frequency: the number of times each letter appears in the sequence, divided by n.
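The two quantities in the lemma, emp and H, can be sketched as follows. For i.i.d. draws from Q both hypotheses hold, matching the "vice versa" direction (illustrative Python with a fixed seed; the distribution Q below is an arbitrary example):

```python
import math
import random
from collections import Counter

def emp(seq):
    """Empirical frequency: occurrences of each letter divided by n."""
    n = len(seq)
    return {a: c / n for a, c in Counter(seq).items()}

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

random.seed(0)
Q = {"a": 0.5, "b": 0.25, "c": 0.25}
sample = random.choices(list(Q), weights=list(Q.values()), k=20000)
freq = emp(sample)   # close to Q; the per-symbol entropy is close to H(Q)
```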
Implementation through bounded memory strategies

For a given Q ∈ ∆(A) we would like to find the largest n = n(m, Q, A) such that one can approximate n i.i.d. random variables with distribution Q through pairs of m-memory strategies.

Definition.
C(Q) = inf{ C > 0 : ∃ n_m ∈ ℕ, µ_m ∈ ∆(A_1(m) × A_2(m)) such that
C · n_m ≥ m log m, and the induced play x^m_1, . . . , x^m_{n_m} satisfies
(1) ∥E[emp(x^m_1, . . . , x^m_{n_m})] − Q∥ < o(1),
(2) (1/n_m) H(x^m_1, . . . , x^m_{n_m}) > H(Q) − o(1) }.
Main result

Theorem.
max_i ( H(Q_i) / (|A_{−i}| − 1) ) ≤ C(Q) ≤ max_i ( H(Q) / (|A_i| − 1) )
Proof: lower bound

Define
C_i(Q) = inf{ C : ∃ n_m ∈ ℕ, µ_m ∈ ∆(A_i(m) × A_{−i}^{n_m}) such that
C · n_m ≥ m log m, and the induced play x^m_1, . . . , x^m_{n_m} satisfies
(1) ∥E[emp(x^m_1, . . . , x^m_{n_m})] − Q∥ < o(1),
(2) (1/n_m) H(x^m_1, . . . , x^m_{n_m}) > H(Q) − o(1) }.
We show that
H(Q_i) / (|A_{−i}| − 1) ≤ C_i(Q).
Proof "H(Q_i) ≤ (|A_{−i}| − 1) C_i(Q)"

Let C > 0 be such that there exist n ≥ m log m / C and µ ∈ ∆(A_i(m) × A_{−i}^n) that induces a play satisfying
(1) ∥E[emp(x_1, y_1, . . . , x_n, y_n)] − Q∥ < o(1),
(2) (1/n) H(x_1, y_1, . . . , x_n, y_n) > H(Q) − o(1).
Let σ and Y be random variables distributed according to µ. We have
H(Q_i) + H_Q(−i | i) − o(1) = H(Q) − o(1) ≤ (1/n) H(x_1, . . . , y_n) ≤ (1/n) H(σ, Y) = (1/n) H(σ) + (1/n) H(Y | σ).
Explanation: the second inequality holds because the play is a function of σ and Y; the last equality is the chain rule.
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) + HQ(−i |i)− o(1) = H(Q)− o(1) ≤1
nH(x1, . . . , yn) ≤
1
nH(σ,Y ) =
1
nH(σ) +
1
nH(Y |σ)
.Explanation........ Suppressing the middle part...
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) + HQ(−i |i)− o(1) ≤ 1
nH(σ) +
1
nH(Y |σ)
≤
1
nH(σ) + HE[emp(x1,...,yn)](−i |i) ≤ C · log |Ai (m)|
m logm
.Explanation........
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) + HQ(−i |i)− o(1) ≤ 1
nH(σ) +
1
nH(Y |σ) ≤
1
nH(σ) + HE[emp(x1,...,yn)](−i |i)
≤ C · log |Ai (m)|m logm
.Explanation........ Concavity, Neyman-Okada (2009).
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) + HQ(−i |i)− o(1) ≤ 1
nH(σ) +
1
nH(Y |σ) ≤
1
nH(σ) + HE[emp(x1,...,yn)](−i |i)
≤ C · log |Ai (m)|m logm
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) − o(1) ≤ 1
nH(σ)
≤ C · log |Ai (m)|m logm
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) − o(1) ≤ 1
nH(σ) ≤ C · log |Ai (m)|
m logm
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
. . . . . .
Proof “H(Qi) ≤ (|A−i | − 1)Ci(Q)”
Let C > 0 such that there exists n ≥ m logm/C and µ ∈ ∆(Ai (m)× An−i )
that induces a play that satisfies...1 ∥E[emp(x1, y1, . . . , xn, yn)]− Q∥ < o(1),...2 1
nH(x1, y1, . . . , xn, yn) > H(Q)− o(1).
Let σ and Y be random variables that distribute according to µ. We have
H(Qi ) − o(1) ≤ 1
nH(σ) ≤ C · log |Ai (m)|
m logm
It remains to show that log |Ai (m)|/m logm ≤ |A−i | − 1 + o(1).
Ron Peretz ([email protected]) Bounded Memory Correlation November, 2011 12 / 14
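The two information-theoretic steps in the chain above (the play being a deterministic function of (σ, Y), and the chain rule for entropy) can be sanity-checked numerically. A sketch with a made-up joint distribution µ for (σ, Y) and a hypothetical deterministic play map (both are assumptions of the sketch, not objects from the talk):

```python
import math
from collections import Counter

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# toy joint distribution of (sigma, Y) -- assumed for illustration
mu = {('s1', 0): 0.3, ('s1', 1): 0.2, ('s2', 0): 0.1, ('s2', 1): 0.4}

H_joint = H(mu.values())
p_sigma = Counter()
for (s, _), p in mu.items():
    p_sigma[s] += p
H_sigma = H(p_sigma.values())
H_Y_given_sigma = H_joint - H_sigma  # chain rule: H(sigma,Y) = H(sigma) + H(Y|sigma)

# the play is a deterministic function of (sigma, Y), so H(play) <= H(sigma, Y)
play_of = lambda s, y: (s[-1], y)    # hypothetical play map
p_play = Counter()
for (s, y), p in mu.items():
    p_play[play_of(s, y)] += p

assert H(p_play.values()) <= H_joint + 1e-9
print(round(H_sigma, 4), round(H_Y_given_sigma, 4), round(H_joint, 4))
```

In the toy case the play map happens to be injective, so the function bound holds with equality; in general it can only lose entropy.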
Proof: "log |A_i(m)| / (m log m) ≤ |A_{−i}| − 1 + o(1)"

To estimate |A_i(m)| from above, we use a combinatorial structure that describes finite-memory strategies.

An m-memory strategy for player i can be executed by a finite automaton with m states, input alphabet A_{−i}, and output alphabet A_i.

Counting these automata and dividing by m! (for renaming of states) shows that log |A_i(m)| / (m log m) ≤ |A_{−i}| − 1 + o(1).
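The asymptotics of this count can be sketched numerically. Assuming the standard parametrization of such an automaton (a transition map δ: S × A_{−i} → S and an output map S → A_i; this parametrization is an assumption of the sketch), there are m^{m·|A_{−i}|} · |A_i|^m automata, and dividing by m! for renaming of states makes the ratio log |A_i(m)| / (m log m) decrease, slowly, toward |A_{−i}| − 1:

```python
import math

def log_automata_count(m, k_in, k_out):
    """log of an upper bound on the number of m-state automata with input
    alphabet of size k_in and output alphabet of size k_out, counted up to
    renaming of states (hence the subtraction of log m!)."""
    log_transitions = m * k_in * math.log(m)  # delta: S x A_in -> S
    log_outputs = m * math.log(k_out)         # output map: S -> A_out
    log_renaming = math.lgamma(m + 1)         # log(m!)
    return log_transitions + log_outputs - log_renaming

k_in, k_out = 2, 2  # e.g. binary action sets
for m in (10, 1000, 10**6):
    ratio = log_automata_count(m, k_in, k_out) / (m * math.log(m))
    print(m, round(ratio, 3))
# the ratio decreases toward k_in - 1 = 1 as m grows
```

The o(1) term decays like 1/log m (from Stirling's log m! ≈ m log m − m), which is why the convergence is slow.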
Lemma (Neyman-Okada 2009)

Let X and τ be random variables taking values
    X in A_1^n,
    τ in the set of pure strategies of player 2 in the n-fold repeated game.

The pair (X, τ) generates a play (x_1, y_1, . . . , x_n, y_n), where X = (x_1, . . . , x_n) and y_t = τ(x_1, y_1, . . . , x_{t−1}, y_{t−1}). Let a_1 and a_2 be random variables whose joint distribution is the expected empirical frequency of the induced play. We have

    H(a_1 | a_2) ≥ (1/n) H(X | τ).
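A toy numerical check of the lemma: take n = 4, X uniform on {0,1}^4, and a single fixed (hence deterministic) strategy τ that mirrors player 1's previous action. This particular τ is only an assumption of the sketch, chosen so that both sides can be computed exactly by enumeration:

```python
import itertools, math
from collections import Counter

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 4
def tau(past_x):
    """Hypothetical pure strategy of player 2: mirror player 1's
    last action, playing 0 in the first period."""
    return past_x[-1] if past_x else 0

# expected empirical frequency of the profile (a1, a2) along the induced play
joint = Counter()
for X in itertools.product((0, 1), repeat=n):  # X uniform on {0,1}^n
    for t in range(n):
        joint[(X[t], tau(X[:t]))] += 1 / (n * 2**n)

p_a2 = Counter()
for (_, a2), p in joint.items():
    p_a2[a2] += p
H_cond = H(joint.values()) - H(p_a2.values())  # H(a1 | a2)

# tau is deterministic and fixed, so H(X | tau) = H(X) = n bits
print(round(H_cond, 6), ">=", n / n)  # 1.0 >= 1.0
```

Here the lemma holds with equality: conditional on a_2, the action a_1 is uniform, so H(a_1 | a_2) = 1 bit, exactly (1/n) H(X | τ).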