
Lecture 15: Batch RL

Emma Brunskill

CS234 Reinforcement Learning

Winter 2019

Slides drawn from Philip Thomas with modifications


Class Structure

• Last time: Meta Reinforcement Learning
• This time: Batch RL
• Next time: Quiz


A Scientific Experiment


What Should We Do For a New Student?


Involves Counterfactual Reasoning


Involves Generalization


Batch Reinforcement Learning


Batch RL


The Problem

• If you apply an existing method, do you have confidence that it will work?


A property of many real applications

• Deploying "bad" policies can be costly or dangerous


Deploying bad policies can be costly


Deploying bad policies can be dangerous


What property should a safe batch reinforcement learning algorithm have?

• Given past experience from the current policy/policies, produce a new policy
• “Guarantee that with probability at least 1 − δ, will not change your policy to one that is worse than the current policy.”
• You get to choose δ
• Guarantee not contingent on the tuning of any hyperparameters


Table of Contents

1 Notation

2 Create a safe batch reinforcement learning algorithm
  • Off-policy policy evaluation (OPE)
  • High-confidence off-policy policy evaluation (HCOPE)
  • Safe policy improvement (SPI)


Notation

• Policy $\pi$: $\pi(a \mid s) = P(a_t = a \mid s_t = s)$
• Trajectory: $T = (s_1, a_1, r_1, s_2, a_2, r_2, \dots, s_L, a_L, r_L)$
• Historical data: $D = \{T_1, T_2, \dots, T_n\}$
• Historical data from behavior policy, $\pi_b$
• Objective:

$$V^\pi = \mathbb{E}\Big[\sum_{t=1}^{L} \gamma^t R_t \,\Big|\, \pi\Big]$$
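
For the code sketches later in this transcript, it helps to fix a concrete in-code representation of trajectories and the historical data $D$. This layout is an assumption made purely for illustration, not something from the slides:

```python
from typing import List, Tuple

# One step of a trajectory: (state, action, reward), with integer-indexed
# states and actions (an assumption for simplicity, not from the slides).
Step = Tuple[int, int, float]

# A trajectory T = (s_1, a_1, r_1, ..., s_L, a_L, r_L).
Trajectory = List[Step]

# Historical data D = {T_1, ..., T_n}, collected by the behavior policy pi_b.
D: List[Trajectory] = []
```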


Safe batch reinforcement learning algorithm

• Reinforcement learning algorithm, $A$
• Historical data, $D$, which is a random variable
• Policy produced by the algorithm, $A(D)$, which is a random variable
• A safe batch reinforcement learning algorithm, $A$, satisfies:

$$\Pr\big(V^{A(D)} \geq V^{\pi_b}\big) \geq 1 - \delta$$

or, in general,

$$\Pr\big(V^{A(D)} \geq V_{\min}\big) \geq 1 - \delta$$


Table of Contents

1 Notation

2 Create a safe batch reinforcement learning algorithm
  • Off-policy policy evaluation (OPE)
  • High-confidence off-policy policy evaluation (HCOPE)
  • Safe policy improvement (SPI)


Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy, $\pi_e$, convert historical data, $D$, into $n$ independent and unbiased estimates of $V^{\pi_e}$
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the $n$ independent and unbiased estimates of $V^{\pi_e}$ into a $1 - \delta$ confidence lower bound on $V^{\pi_e}$
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm


Off-policy policy evaluation (OPE)


Importance Sampling


Importance Sampling

$$\mathrm{IS}(D) = \frac{1}{n} \sum_{i=1}^{n} \left( \prod_{t=1}^{L} \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)} \right) \left( \sum_{t=1}^{L} \gamma^t R_t^i \right)$$

$$\mathbb{E}[\mathrm{IS}(D)] = V^{\pi_e}$$
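
To make the estimator concrete, here is a minimal NumPy sketch under the assumed data layout from the Notation section, with tabular policies (an illustrative assumption, not from the slides):

```python
import numpy as np

def is_estimates(D, pi_e, pi_b, gamma):
    """One importance-sampling estimate of V^{pi_e} per trajectory.

    D: list of trajectories, each a list of (state, action, reward) tuples.
    pi_e, pi_b: arrays of shape [num_states, num_actions] with pi[s, a] = P(a | s).
    Returns an array of n unbiased estimates; their mean is IS(D).
    """
    estimates = []
    for traj in D:
        weight = 1.0   # product of per-step likelihood ratios
        ret = 0.0      # discounted return of the trajectory
        for t, (s, a, r) in enumerate(traj, start=1):
            weight *= pi_e[s, a] / pi_b[s, a]
            ret += gamma ** t * r  # slides' convention: discount starts at gamma^1
        estimates.append(weight * ret)
    return np.array(estimates)
```

Averaging the returned array gives $\mathrm{IS}(D)$; keeping the per-trajectory estimates separate is exactly what the HCOPE stage needs later.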


Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy, $\pi_e$, convert historical data, $D$, into $n$ independent and unbiased estimates of $V^{\pi_e}$
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the $n$ independent and unbiased estimates of $V^{\pi_e}$ into a $1 - \delta$ confidence lower bound on $V^{\pi_e}$
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm


High-confidence off-policy policy evaluation (HCOPE)


Hoeffding’s inequality

• Let $X_1, \dots, X_n$ be $n$ independent identically distributed random variables such that $X_i \in [0, b]$
• Then with probability at least $1 - \delta$:

$$\mathbb{E}[X_i] \geq \frac{1}{n} \sum_{i=1}^{n} X_i - b \sqrt{\frac{\ln(1/\delta)}{2n}},$$

where, in our case, $X_i = w_i \sum_{t=1}^{L} \gamma^t R_t^i$ (the $i$-th importance-sampling estimate).
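
A direct transcription of this bound as a small sketch. Note that for importance-sampling estimates, $b$ must be the largest possible importance-weighted return, which can be enormous; this foreshadows the mountain car example later in the lecture:

```python
import numpy as np

def hoeffding_lower_bound(x, delta, b):
    """1 - delta confidence lower bound on E[X_i], for i.i.d. X_i in [0, b].

    x: array of n i.i.d. estimates, e.g. the per-trajectory IS estimates.
    b: an upper bound on the X_i; for IS this is the largest possible
       importance-weighted return.
    """
    x = np.asarray(x)
    return x.mean() - b * np.sqrt(np.log(1.0 / delta) / (2.0 * len(x)))
```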


Safe policy improvement (SPI)


Off-policy policy evaluation

• Importance sampling (IS):

$$\mathrm{IS}(D) = \frac{1}{n} \sum_{i=1}^{n} \left( \prod_{t=1}^{L} \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)} \right) \left( \sum_{t=1}^{L} \gamma^t R_t^i \right)$$

• Per-decision importance sampling (PDIS):

$$\mathrm{PDIS}(D) = \sum_{t=1}^{L} \gamma^t \frac{1}{n} \sum_{i=1}^{n} \left( \prod_{\tau=1}^{t} \frac{\pi_e(a_\tau \mid s_\tau)}{\pi_b(a_\tau \mid s_\tau)} \right) R_t^i$$
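
A sketch of PDIS in the same assumed layout as the IS sketch: each reward $R_t$ is weighted only by the likelihood ratios of the actions taken up to time $t$, which typically reduces variance relative to weighting the whole return by the full-trajectory ratio:

```python
import numpy as np

def pdis_estimates(D, pi_e, pi_b, gamma):
    """One per-decision importance-sampling estimate of V^{pi_e} per trajectory."""
    estimates = []
    for traj in D:
        weight = 1.0  # running product of ratios up to the current step
        est = 0.0
        for t, (s, a, r) in enumerate(traj, start=1):
            weight *= pi_e[s, a] / pi_b[s, a]
            est += gamma ** t * weight * r  # reward weighted only by ratios so far
        estimates.append(est)
    return np.array(estimates)
```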


Off-policy policy evaluation (revisited)

• Importance sampling (IS):

$$\mathrm{IS}(D) = \frac{1}{n} \sum_{i=1}^{n} w_i \left( \sum_{t=1}^{L} \gamma^t R_t^i \right)$$

• Weighted importance sampling (WIS):

$$\mathrm{WIS}(D) = \frac{1}{\sum_{i=1}^{n} w_i} \sum_{i=1}^{n} w_i \left( \sum_{t=1}^{L} \gamma^t R_t^i \right)$$
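
For contrast with IS, a sketch of WIS under the same assumed data layout; the only change is the normalizer. Unlike IS, this yields a single biased estimate rather than $n$ unbiased per-trajectory estimates:

```python
import numpy as np

def wis_estimate(D, pi_e, pi_b, gamma):
    """Weighted importance-sampling estimate of V^{pi_e} (a single number)."""
    weights, returns = [], []
    for traj in D:
        w, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj, start=1):
            w *= pi_e[s, a] / pi_b[s, a]
            ret += gamma ** t * r
        weights.append(w)
        returns.append(ret)
    weights = np.array(weights)
    # Normalize by the sum of weights instead of n: biased, but much lower
    # variance, and strongly consistent under the conditions on the next slide.
    return float(np.dot(weights, returns) / weights.sum())
```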


Off-policy policy evaluation (revisited)

• Weighted importance sampling (WIS):

$$\mathrm{WIS}(D) = \frac{1}{\sum_{i=1}^{n} w_i} \sum_{i=1}^{n} w_i \left( \sum_{t=1}^{L} \gamma^t R_t^i \right)$$

• Biased: when $n = 1$, $\mathbb{E}[\mathrm{WIS}(D)] = V^{\pi_b}$
• Strongly consistent estimator of $V^{\pi_e}$
  • i.e., $\Pr(\lim_{n \to \infty} \mathrm{WIS}(D) = V^{\pi_e}) = 1$
  • If:
    • Finite horizon
    • One behavior policy, or bounded rewards


Off-policy policy evaluation (revisited)

• Weighted per-decision importance sampling
  • Also called consistent weighted per-decision importance sampling
  • A fun exercise!


Control variates

• Given: $X$
• Estimate: $\mu = \mathbb{E}[X]$
  • $\hat{\mu} = X$
  • Unbiased: $\mathbb{E}[\hat{\mu}] = \mathbb{E}[X] = \mu$
  • Variance: $\mathrm{Var}(\hat{\mu}) = \mathrm{Var}(X)$


Control variates

• Given: $X$, $Y$, $\mathbb{E}[Y]$
• Estimate: $\mu = \mathbb{E}[X]$
  • $\hat{\mu} = X - Y + \mathbb{E}[Y]$
  • Unbiased: $\mathbb{E}[\hat{\mu}] = \mathbb{E}[X - Y + \mathbb{E}[Y]] = \mathbb{E}[X] - \mathbb{E}[Y] + \mathbb{E}[Y] = \mathbb{E}[X] = \mu$
  • Variance:

$$\mathrm{Var}(\hat{\mu}) = \mathrm{Var}(X - Y + \mathbb{E}[Y]) = \mathrm{Var}(X - Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) - 2\,\mathrm{Cov}(X, Y)$$

• Lower variance if $2\,\mathrm{Cov}(X, Y) > \mathrm{Var}(Y)$
• We call $Y$ a control variate
• We saw this idea before: the baseline term in policy gradient estimation
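
A tiny self-contained simulation of the variance identity above, with made-up distributions purely for illustration ($\mathbb{E}[Y] = 0$ is known by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative setup: Y is zero-mean noise that X shares, so Cov(X, Y) is large.
y = rng.normal(loc=0.0, scale=1.0, size=n)         # control variate, E[Y] = 0
x = 1.0 + 0.9 * y + rng.normal(scale=0.3, size=n)  # target, E[X] = 1

mu_hat_plain = x         # plain estimator: just X
mu_hat_cv = x - y + 0.0  # control-variate estimator: X - Y + E[Y]

print(np.var(mu_hat_plain))  # ~ 0.9^2 * 1 + 0.3^2 = 0.90
print(np.var(mu_hat_cv))     # ~ (0.9 - 1)^2 * 1 + 0.3^2 = 0.10
```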


Off-policy policy evaluation (revisited)

• Idea: add a control variate to importance sampling estimators
  • $X$ is the importance sampling estimator
  • $Y$ is a control variate built from an approximate model of the MDP
  • $\mathbb{E}[Y] = 0$ in this case
• $\mathrm{PDIS}^{\mathrm{CV}}(D) = \mathrm{PDIS}(D) - \mathrm{CV}(D)$
• Called the doubly robust estimator (Jiang and Li, 2015)
  • Robust to (1) a poor approximate model, and (2) error in estimates of $\pi_b$
  • If the model is poor, the estimates are still unbiased
  • If the sampling policy is unknown but the model is good, MSE will still be low
  • $\mathrm{DR}(D) = \mathrm{PDIS}^{\mathrm{CV}}(D)$
• Non-recursive and weighted forms, as well as the control variate view, provided by Thomas and Brunskill (ICML 2016)


Off-policy policy evaluation (revisited)

$$\mathrm{DR}(\pi_e \mid D) = \frac{1}{n} \sum_{i=1}^{n} \sum_{t=0}^{\infty} \gamma^t \left[ w_t^i \left( R_t^i - \hat{q}^{\pi_e}(S_t^i, A_t^i) \right) + w_{t-1}^i \, \hat{v}^{\pi_e}(S_t^i) \right],$$

where $w_t^i = \prod_{\tau=1}^{t} \frac{\pi_e(a_\tau \mid s_\tau)}{\pi_b(a_\tau \mid s_\tau)}$

• Recall: we want the control variate $Y$ to cancel with $X$:

$$R - q(S, A) + \gamma v(S')$$
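
A minimal sketch of this estimator in the same assumed tabular layout as the earlier sketches; `q_hat` and `v_hat` stand for the approximate model's value estimates (passed in here as plain arrays), and the empty product gives $w_{-1}^i = 1$:

```python
import numpy as np

def dr_estimate(D, pi_e, pi_b, gamma, q_hat, v_hat):
    """Doubly robust estimate of V^{pi_e}.

    q_hat[s, a] and v_hat[s] are value estimates from an approximate model.
    """
    total = 0.0
    for traj in D:
        w_prev = 1.0  # w_{t-1}, with the convention w_{-1} = 1
        for t, (s, a, r) in enumerate(traj):  # t = 0, 1, ...
            w = w_prev * pi_e[s, a] / pi_b[s, a]  # w_t
            total += gamma ** t * (w * (r - q_hat[s, a]) + w_prev * v_hat[s])
            w_prev = w
    return total / len(D)
```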


Empirical Results (Gridworld)


Off-policy policy evaluation (revisited): Blending

• Importance sampling is unbiased but high variance
• Model-based estimates are biased but low variance
• Doubly robust is one way to combine the two
• Can also trade off between importance sampling and the model-based estimate within a trajectory
  • MAGIC estimator (Thomas and Brunskill 2016)
  • Can be particularly useful when part of the world is non-Markovian in the given model, and other parts are Markov


Off-policy policy evaluation (revisited)

• What if $\mathrm{supp}(\pi_e) \subset \mathrm{supp}(\pi_b)$?
  • There is a state-action pair, $(s, a)$, such that $\pi_e(a \mid s) = 0$ but $\pi_b(a \mid s) \neq 0$.
• If we see a history where $(s, a)$ occurs, what weight should we give it?

$$\mathrm{IS}(D) = \frac{1}{n} \sum_{i=1}^{n} \left( \prod_{t=1}^{L} \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)} \right) \left( \sum_{t=1}^{L} \gamma^t R_t^i \right)$$


Off-policy policy evaluation (revisited)

• What if there are zero samples ($n = 0$)?
  • The importance sampling estimate is undefined
• What if no samples are in $\mathrm{supp}(\pi_e)$ (or $\mathrm{supp}(p)$ in general)?
  • Importance sampling says: the estimate is zero
  • Alternate approach: undefined
• The importance sampling estimator is unbiased if $n > 0$
• The alternate approach will be unbiased given that at least one sample is in the support of $p$
• Alternate approach detailed in Importance Sampling with Unequal Support (Thomas and Brunskill, AAAI 2017)


Can Need An Order of Magnitude Less Data


Off-policy policy evaluation (revisited)

• Thomas et al., Predictive Off-Policy Policy Evaluation for Nonstationary Decision Problems, with Applications to Digital Marketing (AAAI 2017)


Off-policy policy evaluation (revisited)


Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy, $\pi_e$, convert historical data, $D$, into $n$ independent and unbiased estimates of $V^{\pi_e}$
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the $n$ independent and unbiased estimates of $V^{\pi_e}$ into a $1 - \delta$ confidence lower bound on $V^{\pi_e}$
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm


High-confidence off-policy policy evaluation (revisited)

• Consider using IS + Hoeffding’s inequality for HCOPE on mountain car


High-confidence off-policy policy evaluation (revisited)

• Using 100,000 trajectories
• Evaluation policy's true performance is $0.19 \in [0, 1]$

• We get a 95% confidence lower bound of: −58,310,000


What went wrong

$$w_i = \prod_{t=1}^{L} \frac{\pi_e(a_t \mid s_t)}{\pi_b(a_t \mid s_t)}$$


High-confidence off-policy policy evaluation (revisited)

• Removing the upper tail only decreases the expected value.


High-confidence off-policy policy evaluation (revisited)

• Thomas et al., High confidence off-policy evaluation, AAAI 2015


High-confidence off-policy policy evaluation (revisited)


High-confidence off-policy policy evaluation (revisited)

• Use 20% of the data to optimize $c$ (cutoff)
• Use 80% to compute the lower bound with the optimized $c$
• Mountain car results:
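
A hedged sketch of the truncate-then-bound recipe in the bullets above, assuming nonnegative per-trajectory IS estimates: truncating each estimate at $c$ only decreases its expectation (per the earlier slide), so a Hoeffding bound on the truncated values, whose range is $[0, c]$, is still a valid lower bound on $V^{\pi_e}$. The 20/80 split keeps the choice of $c$ independent of the data used for the final bound. (The concentration inequality used in the actual paper differs from plain Hoeffding; this is only an illustration of the structure.)

```python
import numpy as np

def truncated_hoeffding_bound(x, c, delta):
    """Hoeffding lower bound on E[min(X, c)], which is <= E[X] for X >= 0."""
    x_trunc = np.minimum(x, c)
    n = len(x_trunc)
    return x_trunc.mean() - c * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def hcope_with_cutoff(estimates, delta, seed=0):
    """Optimize the cutoff c on 20% of the estimates, bound on the other 80%."""
    x = np.random.default_rng(seed).permutation(estimates)
    n_opt = len(x) // 5
    opt, rest = x[:n_opt], x[n_opt:]
    # Pick the cutoff that maximizes the bound on the small optimization split.
    candidates = np.quantile(opt, np.linspace(0.1, 1.0, 19))
    c = max(candidates, key=lambda ci: truncated_hoeffding_bound(opt, ci, delta))
    return truncated_hoeffding_bound(rest, float(c), delta)
```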


High-confidence off-policy policy evaluation (revisited)

Digital marketing:


High-confidence off-policy policy evaluation (revisited)

Cognitive dissonance:

$$\mathbb{E}[X_i] \geq \frac{1}{n} \sum_{i=1}^{n} X_i - b \sqrt{\frac{\ln(1/\delta)}{2n}}$$


High-confidence off-policy policy evaluation (revisited)

• Student's t-test
  • Assumes that $\mathrm{IS}(D)$ is normally distributed
  • By the central limit theorem, it is (as $n \to \infty$)

$$\Pr\left( \mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^{n} X_i\Big] \geq \frac{1}{n}\sum_{i=1}^{n} X_i - \frac{\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2}}{\sqrt{n}} \, t_{1-\delta,\, n-1} \right) \geq 1 - \delta$$

• Efron's bootstrap methods (e.g., BCa)
• Also, without importance sampling: Hanna, Stone, and Niekum, AAMAS 2017
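
A sketch of the corresponding lower bound using SciPy. The normality assumption is only approximate at finite $n$, which is exactly the worry with heavy-tailed importance-sampling estimates:

```python
import numpy as np
from scipy import stats

def t_test_lower_bound(x, delta):
    """1 - delta confidence lower bound on the mean, assuming approximate normality."""
    x = np.asarray(x)
    n = len(x)
    std_err = np.std(x, ddof=1) / np.sqrt(n)  # sample std with 1/(n-1) normalizer
    return np.mean(x) - std_err * stats.t.ppf(1.0 - delta, df=n - 1)
```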


High-confidence off-policy policy evaluation (revisited)


Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy, $\pi_e$, convert historical data, $D$, into $n$ independent and unbiased estimates of $V^{\pi_e}$
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the $n$ independent and unbiased estimates of $V^{\pi_e}$ into a $1 - \delta$ confidence lower bound on $V^{\pi_e}$
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm


Safe policy improvement (revisited)

Thomas et al., ICML 2015
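
As a stand-in for the figure, here is a hedged, self-contained sketch of the overall safe policy improvement loop built from the earlier pieces. The candidate search in Thomas et al. (ICML 2015) is more sophisticated, and a real implementation would also split the data between candidate selection and the final safety test:

```python
import numpy as np

def is_estimates(D, pi_e, pi_b, gamma):
    """Per-trajectory importance-sampling estimates of V^{pi_e} (as sketched earlier)."""
    out = []
    for traj in D:
        w, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj, start=1):
            w *= pi_e[s, a] / pi_b[s, a]
            ret += gamma ** t * r
        out.append(w * ret)
    return np.array(out)

def hoeffding_lower_bound(x, delta, b):
    return np.mean(x) - b * np.sqrt(np.log(1.0 / delta) / (2.0 * len(x)))

def safe_policy_improvement(D, pi_b, candidates, gamma, delta, v_min, b):
    """Return the best candidate whose 1 - delta lower bound clears v_min; else pi_b.

    Falling back to pi_b when no candidate passes the safety test is what
    makes the algorithm safe in the sense defined earlier in the lecture.
    """
    best, best_bound = None, v_min
    for pi_e in candidates:
        bound = hoeffding_lower_bound(is_estimates(D, pi_e, pi_b, gamma), delta, b)
        if bound >= best_bound:
            best, best_bound = pi_e, bound
    return best if best is not None else pi_b
```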


Empirical Results: Digital Marketing

[Figure: the standard agent-environment loop: the agent sends an action a to the environment, which returns a state s and a reward r.]

[Figure: expected normalized return vs. number of trajectories (n = 10,000, 30,000, 60,000, 100,000) for four variants: None+CUT, None+BCa, k-Fold+CUT, k-Fold+BCa; labeled values include 0.002715 and 0.003832.]


Example Results: Diabetes Treatment

[Figure: blood glucose regulation: eating carbohydrates raises blood glucose (sugar); releasing insulin lowers it.]

Other Relevant Work

• How to deal with long horizons? (Guo, Thomas, Brunskill NIPS 2017)
• How to deal with importance sampling being "unfair"? (Doroudi, Thomas and Brunskill, best paper UAI 2017)
• What to do when the behavior policy is not known? (Liu, Gottesman, Raghu, Komorowski, Faisal, Doshi-Velez, Brunskill NeurIPS 2018)
• What to do when the behavior policy is deterministic?
• What to do when we care about safe exploration?
• What to do when we care about performance on a single trajectory?
• For the last two, see great work by Marco Pavone's group, Pieter Abbeel's group, Shie Mannor's group and Claire Tomlin's group, amongst others


Off Policy Policy Evaluation and Selection

• Very important topic: healthcare, education, marketing, ...
• Insights are relevant to on-policy learning
• A big focus of my lab
• A number of others on campus are also working in this area (e.g. Stefan Wager, Susan Athey, ...)
• A very interesting area at the intersection of causality and control


What You Should Know: Off Policy Policy Evaluation and Selection

• Be able to define and apply importance sampling for off-policy policy evaluation
• Define some limitations of IS (variance)
• List a couple of alternatives (weighted IS, doubly robust)
• Define why we might want safe reinforcement learning
• Define the scope of the guarantees implied by safe policy improvement as defined in this lecture


Class Structure

• Last time: Meta Reinforcement Learning
• This time: Batch RL
• Next time: Quiz
