Paper Talk: “Extracting Randomness: How and Why. A survey,” by Noam Nisan

Albert Boggess

Upload: glenda

Post on 14-Jan-2016



TRANSCRIPT

Page 1: Paper Talk: “Extracting Randomness: How and Why. A survey,” by Noam Nisan

Albert Boggess

Page 2: Randomized Algorithms

Can solve problems for which there are no known deterministic algorithms

Often simpler than deterministic equivalents

Page 3: Generating Randomness

Pseudo-random number generators are not sufficient

True randomness? A physical source can provide some true randomness.

Dispersers and extractors.

Page 4: Dispersers and Extractors

The goal is to “convert a somewhat random distribution into an almost random distribution” by adding a small number of truly random bits.

Can be represented as either graphs or functions

Page 5: Definitions

Probability distribution X over finite space A: X(a) ≥ 0 for all a in A, and ∑_a X(a) = 1.

Statistical distance between two probability distributions: d(X, Y) = (½)∑_a |X(a) − Y(a)|. X is e-close to Y if d(X, Y) ≤ e.

Min-Entropy of a distribution: H∞(X) = min_a {−log₂(X(a))}.
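These definitions are small enough to check numerically. A minimal sketch in Python (the helper names are my own, not from the paper), representing a distribution as a dict from outcomes to probabilities:

```python
import math

def stat_distance(X, Y):
    """d(X, Y) = (1/2) * sum over a of |X(a) - Y(a)|."""
    support = set(X) | set(Y)
    return 0.5 * sum(abs(X.get(a, 0.0) - Y.get(a, 0.0)) for a in support)

def min_entropy(X):
    """H_inf(X) = min over a of -log2(X(a)), over outcomes with X(a) > 0."""
    return min(-math.log2(p) for p in X.values() if p > 0)

# A biased two-bit source versus the uniform distribution on {0,1}^2.
X = {"00": 0.5, "01": 0.25, "10": 0.25}
U = {s: 0.25 for s in ("00", "01", "10", "11")}

print(stat_distance(X, U))  # 0.25, so X is 0.25-close to uniform
print(min_entropy(X))       # 1.0: the heaviest outcome has probability 1/2
```

Note that min-entropy, not Shannon entropy, is the measure used here: it is determined entirely by the single most likely outcome.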

Page 6: Extractors and Dispersers Graph

A type of bipartite graph where: the left set [N] contains N = 2^n vertices and the right set [M] contains M = 2^m vertices. Typically n > m.

Vertices are numbered by binary strings:

[N] = {1…N} = {0,1}^n

[M] = {1…M} = {0,1}^m

All vertices in the left set have the same degree D = 2^d.

Page 7: Extractors and Dispersers Graph

Given a graph G = ([N], [M], E), the neighbor set of a vertex a in [N] is defined as T(a) = {z in [M] | (a,z) is in E}.

For a probability distribution X, T(X) is the probability distribution induced on [M] by choosing an a in [N] according to X, and then choosing a random neighbor z in T(a).

Page 8: Dispersers and Extractors Graph

Disperser: G = ([N], [M], E) is a (k, e)-disperser if for every subset A of [N] with |A| ≥ K = 2^k, |T(A)| ≥ (1 − e)M.

Extractor: G = ([N], [M], E) is a (k, e)-extractor if for any distribution X with H∞(X) ≥ k, T(X) is e-close to uniform on [M].

Any (k, e)-extractor is a (k, e)-disperser.

Page 9: Dispersers and Extractors Function

Given integer sets [N], [M], and [D], the function is defined as G : [N] × [D] → [M].

For x in [N], T(x) = {z = G(x, y) | y is in [D]}.

T(X) is the distribution of G(x, y) induced by choosing x according to distribution X and y uniformly in [D].
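For small parameters the induced distribution T(X) can be computed exactly from this definition. A sketch with a toy, hypothetical G (chosen only to illustrate the definition, not a construction from the survey):

```python
from collections import defaultdict

def induced_distribution(G, X, D):
    """Exact law of G(x, y) when x ~ X and y is uniform on {0, ..., D-1}."""
    T = defaultdict(float)
    for x, px in X.items():
        for y in range(D):
            T[G(x, y)] += px / D
    return dict(T)

# Toy G : [4] x [2] -> [2]: XOR the low bit of x with the one-bit seed y.
def G(x, y):
    return (x ^ y) & 1

X = {0: 0.5, 1: 0.25, 2: 0.25}  # a weak source on [4]
print(induced_distribution(G, X, 2))  # {0: 0.5, 1: 0.5}
```

Here the toy G even makes the output exactly uniform, because XOR with a uniform seed bit is perfectly mixing; the interesting regime is when the seed is far shorter than the output.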

Page 10: Extractors and Dispersers Function

Disperser: G : [N] × [D] → [M] is called a (k, e)-disperser if for any subset A of [N] with |A| ≥ K = 2^k, |T(A)| ≥ (1 − e)M.

Extractor: G : [N] × [D] → [M] is called a (k, e)-extractor if for any distribution X on [N] where H∞(X) ≥ k, T(X) is e-close to uniform.
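For tiny parameters the disperser condition can be checked by brute force straight from this definition. A sketch (the example G is hypothetical):

```python
from itertools import combinations

def is_disperser(G, N, D, M, k, e):
    """Check: every A within [N] with |A| >= K = 2^k has |T(A)| >= (1 - e) * M."""
    K = 2 ** k
    # Subsets of size exactly K suffice: enlarging A only enlarges T(A).
    for A in combinations(range(N), K):
        TA = {G(x, y) for x in A for y in range(D)}
        if len(TA) < (1 - e) * M:
            return False
    return True

# Toy, hypothetical G : [8] x [2] -> [4], G(x, y) = (x + y) mod 4.
G = lambda x, y: (x + y) % 4
print(is_disperser(G, 8, 2, 4, 2, 0.25))  # True: every size-4 A covers >= 3 outputs
```

Checking the extractor condition this way is harder, since it quantifies over all high-min-entropy distributions rather than over finite sets.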

Page 11: Goals

Minimize k, the amount of required “randomness” of the distribution on [N].

Minimize e, the error.

Minimize d, the number of truly random bits required.

Page 12: Construction: Hashing

Hash functions can be used to construct extractors.

If H is a family of hash functions h : [N] → [L], then we say that H has collision error δ if, for any a ≠ b and h chosen uniformly from H, P(h(a) = h(b)) ≤ (1 + δ)/L.

Given a family of hash functions H where h : [N] → [L], the extractor defined by H is G(x, h) = h(x) ∘ h, the hash value concatenated with the name of the hash function. Therefore D = |H| and M = DL.

Page 13: Construction: Hashing

Given a family of hash functions H which map [N] to [L] and have collision error δ, the extractor defined from H is a (k, e)-extractor where K = 2^k = O(L/δ) and e = O(√δ).

Page 14: Construction: Hashing

Universal Hashing: H = {h : [N] → [L]} where, for any a ≠ b, P(h(a) = x AND h(b) = y) = 1/L².

Universal hash functions have 0 collision error.

For all 1 ≤ i ≤ n, there are universal hash function families of size |H| = poly(N).

d = O(n)

k = m − d + O(log(1/e))

d is much too high.
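As a toy instance of such a family (my example, not from the survey): over a prime field Z_p, the affine maps h_{a,b}(x) = (a·x + b) mod p satisfy exactly the pairwise condition above, with L = p. The exhaustive count below is a sanity check:

```python
p = 7  # small prime, so Z_p is a field; the family maps [p] to [p]

def h(a, b, x):
    return (a * x + b) % p

# For fixed x1 != x2 and any targets (t1, t2), the linear system
# a*x1 + b = t1, a*x2 + b = t2 has exactly one solution (a, b) mod p,
# so P(h(x1) = t1 AND h(x2) = t2) = 1/p^2 over uniform (a, b).
x1, x2, t1, t2 = 2, 5, 3, 6
hits = sum(1 for a in range(p) for b in range(p)
           if h(a, b, x1) == t1 and h(a, b, x2) == t2)
print(hits, "of", p * p)  # 1 of 49
```

The family has size p² = poly(N), so naming a hash function costs d = O(n) seed bits, which is exactly why the slide calls d much too high.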

Page 15: Construction: Hashing

Tiny families of hash functions: we don’t require 0 collision error, just small collision error.

For all 1 ≤ L ≤ N and e > 0, there exist families of hash functions that map [N] to [L], have collision error e, and have size |H| = poly(n, e, L).

This translates to (k, e)-extractors with D = poly(n, e, M) and k = m − d + O(log(1/e)).

Page 16: Composing Extractors

Extractors can be composed by using the output of one extractor as the seed of another: G1(x1, G2(x2, y)).

This only holds if X1 and X2 are independent. A weaker condition suffices: (X1, X2) is a block-wise source if:

H∞(X1) ≥ k1

H∞(X2 | X1 = x1) ≥ k2
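A minimal sketch of the composition shape only, with trivial XOR stand-ins where a real construction would use genuine extractors (purely illustrative):

```python
# Toy stand-ins: a real G1, G2 would be extractors; XOR just shows the shape.
def G2(x2, y):
    return x2 ^ y          # "extract" a seed for G1 from the second block

def G1(x1, y1):
    return x1 ^ y1

def composed(x1, x2, y):
    """G1(x1, G2(x2, y)): only y needs to be truly random."""
    return G1(x1, G2(x2, y))

print(composed(0b101, 0b011, 0b110))  # prints 0
```

The point of the block-wise condition is that X2 retains min-entropy even conditioned on X1, so G2’s output can safely serve as G1’s seed.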

Page 17: Reference

Noam Nisan, “Extracting Randomness: How and Why. A Survey,” Proceedings of the IEEE Conference on Computational Complexity, 1996.

Page 18: Known Results

k        d                        m
Ω(n)     O(log(n/e))              Ω(k)
Ω(n^c)   O(log n), e = ½          n^δ
Ω(n^c)   O(log(n/e) · logkn)      n^δ
any k    poly(log(n/e))           k