
Page 1:

Randomized Algorithms

Andreas Klappenecker

[using some slides by Prof. Welch]

Page 2:

Discrete Probability Distributions

A probability distribution is called discrete if and only if its image is countable.

For example, the outcomes of the experiments
• flipping two coins once
• flipping one coin infinitely often
can be modeled by discrete probability distributions.

Page 3:

Uniform Probability Distribution

Let X be a random variable that takes integral values in S = {1,2,…,n}.

The random variable X is said to be uniformly distributed if and only if

Pr[X=k] = 1/n

holds for all k in S.

Page 4:

Expected Value of a Random Variable

The most commonly used property of a random variable is its expectation value

E[X] = ∑_v v·Pr[X = v].

The expectation value averages over all possible values of the random variable, weighted by the probability.

Page 5:

Example

Let X be a uniformly distributed random variable with values in S = {1,2,…,n}. Then the expectation value of X is

E[X] = ∑_v v·Pr[X = v]
     = (1/n) ∑_{v∈S} v
     = (1/n)·n(n+1)/2
     = (n+1)/2.
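A quick numerical check of this derivation, not part of the original slides: the short Python sketch below compares the exact value (n+1)/2 with a Monte Carlo estimate for a uniform X on {1,…,n}; the choice of n and the sample size are arbitrary.

import random

n = 10
exact = (n + 1) / 2                      # the (n+1)/2 from the derivation above

samples = 100_000
estimate = sum(random.randint(1, n) for _ in range(samples)) / samples

print(exact, estimate)                   # 5.5 and a value close to 5.5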

Page 6:

Linearity of Expectation

Let X and Y be random variables, and a, b be real numbers.

Then

E[aX+bY] = aE[X] + bE[Y].

This is a very useful property when calculating expectation values.
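An optional illustration, again not from the original slides: the sketch below estimates E[aX + bY] for two independent rolls of a fair die and compares it with aE[X] + bE[Y]. Linearity holds even without independence; the dice are just a convenient example.

import random

a, b = 2.0, 3.0
trials = 100_000

def die():
    return random.randint(1, 6)          # fair six-sided die, expectation 3.5

lhs = sum(a * die() + b * die() for _ in range(trials)) / trials
rhs = a * 3.5 + b * 3.5                  # a·E[X] + b·E[Y]

print(lhs, rhs)                          # both close to 17.5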

Page 7:

Variance

The variance Var[X] of a random variable X is defined as

Var[X] = E[(X - E[X])^2] = E[X^2] - E[X]^2.

[Thus, the variance measures the squared deviation from the expectation value.]

Page 8:

Example

A uniformly distributed random variable X with values in {1,2,…,n} has the variance

Var[X] = (n^2 - 1)/12.

Indeed,

Var[X] = E[X^2] - E[X]^2
       = (1/n) ∑_{v=1}^{n} v^2 - ((n+1)/2)^2
       = (1/n)·n(n+1)(2n+1)/6 - ((n+1)/2)^2
       = (n^2 - 1)/12.
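As before, an optional check that is not part of the slides: computing E[X^2] - E[X]^2 directly over {1,…,n} reproduces (n^2 - 1)/12.

n = 10
values = range(1, n + 1)

mean = sum(values) / n                            # (n+1)/2 = 5.5
var = sum(v * v for v in values) / n - mean ** 2  # E[X^2] - E[X]^2

print(var, (n * n - 1) / 12)                      # both print 8.25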

Page 9:

Exercise

Calculate the expectation and variance of all probability distributions given in the lecture notes yourself.

The expectation is usually easy. For the variance, you might need some tricks. Read the lecture notes on probability theory!

Page 10:

Randomized Algorithms (Recap)

Page 11:

Randomized Algorithms

• Instead of relying on a (perhaps incorrect) assumption that inputs exhibit some distribution, make your own input distribution by, say, permuting the input randomly or taking some other random action

• On the same input, a randomized algorithm has multiple possible executions

• No one input elicits worst-case behavior
• Typically we analyze the average case behavior for the worst possible input

Page 12:

Probabilistic Analysis vs. Randomized Algorithm

• Probabilistic analysis of a deterministic algorithm:
  • assume some probability distribution on the inputs

• Randomized algorithm:
  • use random choices in the algorithm

Page 13:

Random Permutation

Page 14:

Randomly Permuting an Array

RandomPermute(A)
Input: array A[1..n]
for i := 1 to n do
    j := value in [i..n] chosen uniformly at random;
    swap(A[i], A[j]);
od;
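For reference, a runnable Python version of the pseudocode above (the function name is mine); this is the familiar Fisher–Yates style in-place shuffle, adjusted for 0-based indexing.

import random

def random_permute(a):
    """In-place version of RandomPermute(A) for a 0-indexed Python list."""
    n = len(a)
    for i in range(n):
        j = random.randint(i, n - 1)     # uniform over positions i..n-1 (A[i..n] in the slides)
        a[i], a[j] = a[j], a[i]          # swap(A[i], A[j])
    return a

print(random_permute(list(range(1, 6)))) # e.g. [3, 1, 5, 2, 4]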

Page 15:

Analysis

Our goal is to show that after the i-th iteration of the for loop:

A[1..i] equals each permutation of i elements from {1,…,n} with probability (n–i)!/n!.

• Basis: After the first iteration, A[1] contains each permutation of 1 element from {1,…,n} with probability (n–1)!/n! = 1/n

  • true since A[1] is swapped with an element drawn from the entire array uniformly at random

Page 16:

Induction Basis

Claim: After the first iteration, A[1] contains each permutation of one element from the set {1,…,n} with probability (n–1)!/n! = 1/n.

The claim is indeed true, since A[1] is swapped with an element drawn from the entire array uniformly at random.

Page 17:

Induction Step

Induction: Assume that after the (i–1)-st iteration of the for loop, A[1..i–1] equals each permutation of i–1 elements from {1,…,n} with probability (n–(i–1))!/n!.

The probability that A[1..i] contains the permutation (x_1, x_2, …, x_i) is equal to

• the probability that A[1..i–1] contains (x_1, x_2, …, x_{i–1}) after the (i–1)-st iteration

• and that the i-th iteration puts x_i in A[i].

Page 18:

Formalizing

• Let E1 be the event that A[1..i–1] contains (x_1, x_2, …, x_{i–1}) after the (i–1)-st iteration.

• Let E2 be the event that the i-th iteration puts x_i in A[i].

• We need to show that Pr[E1 ∩ E2] = (n–i)!/n!.

[The events E1 and E2 are not independent: if some element appears in A[1..i–1], then it is not available to appear in A[i].]

Page 19:

Conditional Probability

• Formalizes having partial knowledge about the outcome of an experiment

• Example: flip two fair coins.
  • Probability of two heads is 1/4
  • Probability of two heads when you already know that the first coin is a head is 1/2 (see the simulation sketch below)

• The conditional probability of A given that B occurs, written Pr[A|B], is defined to be

  Pr[A|B] = Pr[A ∩ B]/Pr[B].
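The two-coin example can be simulated directly; the sketch below is my own and only meant as an illustration of Pr[A|B] = Pr[A ∩ B]/Pr[B].

import random

trials = 100_000
flips = [(random.random() < 0.5, random.random() < 0.5) for _ in range(trials)]

both_heads = sum(1 for a, b in flips if a and b)
first_head = sum(1 for a, b in flips if a)

print(both_heads / trials)      # Pr[two heads]                  ≈ 0.25
print(both_heads / first_head)  # Pr[two heads | first is heads] ≈ 0.5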

Page 20:

Conditional Probabilities to the Rescue

Recall that Pr[A ∩ B] = Pr[A|B]·Pr[B].

Page 21:

Calculating the Probabilities

• Recall: E1 is the event that A[1..i–1] = (x_1, …, x_{i–1})
• Recall: E2 is the event that A[i] = x_i

• Pr[E1 ∩ E2] = Pr[E2|E1]·Pr[E1]
• Pr[E2|E1] = 1/(n–i+1) because
  • x_i is available in A[i..n] to be chosen, since E1 already occurred and did not include x_i
  • every element in A[i..n] is equally likely to be chosen
• Pr[E1] = (n–(i–1))!/n! by the inductive hypothesis
• So Pr[E1 ∩ E2] = [1/(n–i+1)]·[(n–(i–1))!/n!] = (n–i)!/n!

Page 22:

Conclusion

• After the last (n-th) iteration, the inductive hypothesis tells us that the array A[1..n] equals each permutation of n elements from {1,…,n} with probability (n–n)!/n! = 1/n!
• Thus the algorithm gives us a permutation of the array chosen uniformly at random (checked empirically in the sketch below).
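A final empirical sanity check, again my own sketch rather than part of the slides: for n = 3, each of the 3! = 6 permutations produced by the algorithm should appear with frequency close to 1/6 (random_permute is repeated here so the snippet is self-contained).

import random
from collections import Counter

def random_permute(a):
    for i in range(len(a)):
        j = random.randint(i, len(a) - 1)
        a[i], a[j] = a[j], a[i]
    return tuple(a)

trials = 60_000
counts = Counter(random_permute([1, 2, 3]) for _ in range(trials))
for perm, c in sorted(counts.items()):
    print(perm, c / trials)              # each of the 6 permutations ≈ 0.167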