
Reinforcement Learning

Yishay Mansour, Tel-Aviv University

1

Reinforcement Learning: Course Information

• Classes: Wednesday
– Lecture 10-13: Yishay Mansour
– Recitations 14-15 / 15-16: Eliya Nachmani, Adam Polyak

• Course web site: rl-tau-2018.wikidot.com

• Resources:
– Markov Decision Processes – Puterman
– Reinforcement Learning – Sutton and Barto
– Neuro-Dynamic Programming – Bertsekas and Tsitsiklis

2

Reinforcement Learning: Course requirements

• Homework:
– Every two weeks: theory and programming

• Project:
– Near the end of the term
– Deep RL / Atari

• Final exam

• Grade:
– 60% final exam (have to pass)
– 20% HW
– 20% project

3


Playing Board Games

4

Gerald Tesauro, TD-Gammon, 1992

AlphaGo, DeepMind 2015-17


Other notable board games

5

Arthur Samuel, 1962

Deep Blue, 1996


Playing Atari Games

6


Controlling Robots

7

Today Outline: Overview

• Basics
– Goal of Reinforcement Learning
– Mathematical Model (MDP)

• Planning
– Value iteration
– Policy iteration

• Learning Algorithms
– Model based
– Model free

• Large state space
– Function approximation
– Policy gradient

8

Goal of Reinforcement Learning

Goal-oriented learning through interaction.

Control of large-scale stochastic environments with partial knowledge.

Contrast: supervised / unsupervised learning learns from labeled / unlabeled examples.

9


Mathematical Model - Motivation

Model of uncertainty:

Environment, actions, our knowledge.

Focus on decision making.

Maximize long term reward.

Markov Decision Process (MDP)

11


Contrast with Supervised Learning

The system has a “state”.

The algorithm influences the state distribution.

Inherent tradeoff: Exploration versus Exploitation.
* There is a cost to discovering information!

12

Mathematical Model - MDP

Markov Decision Processes

S - set of states

A - set of actions

δ - transition probability

R - reward function

Similar to a DFA!

13
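To make the four components concrete, here is a minimal sketch of how such a tabular MDP might be represented in code. The class name, field layout, and discount default are illustrative choices, not part of the lecture.

```python
import numpy as np

class MDP:
    """Minimal tabular MDP: states S, actions A, transitions delta, rewards R."""
    def __init__(self, n_states, n_actions, delta, reward, gamma=0.5):
        # delta[s, a, s'] = probability of moving to s' when taking action a in state s
        self.delta = np.asarray(delta)      # shape (n_states, n_actions, n_states)
        # reward[s, a] = expected immediate reward for taking action a in state s
        self.reward = np.asarray(reward)    # shape (n_states, n_actions)
        self.n_states, self.n_actions = n_states, n_actions
        self.gamma = gamma                  # discount factor

    def step(self, s, a, rng=np.random):
        """Sample one transition: next state drawn from delta(s, a, .), plus the reward."""
        s_next = rng.choice(self.n_states, p=self.delta[s, a])
        return s_next, self.reward[s, a]
```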

MDP model - states and actions

Environment = states; actions = transitions.

δ(s, a, s') = probability of moving to state s' when performing action a in state s.

[Figure: an action a from one state leads to two possible next states, with probabilities 0.7 and 0.3.]

14

MDP model - rewards

R(s,a) = reward at state s for doing action a (a random variable).

Example:
R(s,a) = -1 with probability 0.5
         +10 with probability 0.35
         +20 with probability 0.15

15
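For this example, the expected immediate reward is E[R(s,a)] = 0.5·(-1) + 0.35·10 + 0.15·20 = 6, even though the realized reward on any single step is random.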

MDP model - trajectories

trajectory:

s0, a0, r0, s1, a1, r1, s2, a2, r2, ...

16

MDP - Return function

Combining all the immediate rewards into a single value.

Modeling issues:

Are early rewards more valuable than later rewards?

Is the system "terminating" or continuous?

Usually the return is linear in the immediate rewards.

17

MDP model - return functions

Finite horizon (parameter H):

    return = Σ_{i=1}^{H} R(s_i, a_i)

Infinite horizon, discounted (parameter γ < 1):

    return = Σ_{i=0}^{∞} γ^i R(s_i, a_i)

Infinite horizon, undiscounted (average reward):

    return = (1/N) Σ_{i=0}^{N-1} R(s_i, a_i), as N → ∞

Terminating MDP

18
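As a quick illustration of the discounted return, here is a one-function sketch; the trajectory rewards in the example are made up for demonstration.

```python
def discounted_return(rewards, gamma=0.5):
    """Sum of gamma^i * r_i over a finite trajectory of immediate rewards."""
    return sum(gamma**i * r for i, r in enumerate(rewards))

# Example trajectory of immediate rewards r0, r1, r2:
print(discounted_return([1, 2, 3], gamma=0.5))  # 1 + 0.5*2 + 0.25*3 = 2.75
```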

MDP Example: Inventory

State: x_t = inventory level; action: a_t = amount ordered; demand: d_t.

    x_{t+1} = x_t + a_t − d_t
    y_t = min(x_t + a_t, d_t)        (amount sold)
    r_t = P·y_t − J(a_t) − C(x_{t+1})

P = price per item, J(·) = order cost, C(·) = inventory cost.

19

MDP model - action selection

Policy - mapping from states to actions.

Fully observable - can "see" the "exact" state.

AIM: Maximize the expected return. This talk: discounted return.

Optimal policy: optimal from any start state.

THEOREM: There exists a deterministic optimal policy.

20

MDP model - summary

s ∈ S - set of states, |S| = n.

a ∈ A - set of k actions, |A| = k.

δ(s1, a, s2) - transition function.

R(s,a) - immediate reward function.

π: S → A - policy.

Σ_{i=0}^{∞} γ^i r_i - discounted cumulative return.

21

Contrast with Supervised Learning

Supervised learning: fixed distribution on examples.

Reinforcement learning: the state distribution is policy dependent!!!

A small local change in the policy can make a huge global change in the return.

22

Simple setting: Multi-armed bandit

Single state s, with actions a1, a2, a3.

Goal: Maximize the sum of immediate rewards.

Difficulty: unknown rewards.

Given the model: play the greedy action.
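When the rewards are unknown, one simple way to trade off the exploration and exploitation mentioned earlier is ε-greedy: mostly play the empirically best arm, occasionally try a random one. This is a sketch of that idea (not a method the slide commits to), with made-up reward means.

```python
import numpy as np

def eps_greedy_bandit(true_means, steps=1000, eps=0.1, rng=np.random.default_rng(0)):
    """Simple exploration/exploitation strategy for a single-state bandit."""
    k = len(true_means)
    counts = np.zeros(k)       # pulls per arm
    estimates = np.zeros(k)    # empirical mean reward per arm
    total = 0.0
    for _ in range(steps):
        # explore with probability eps, otherwise play the empirically best arm
        a = rng.integers(k) if rng.random() < eps else int(np.argmax(estimates))
        r = rng.normal(true_means[a], 1.0)               # noisy reward
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]   # running mean
        total += r
    return total, estimates

print(eps_greedy_bandit([0.1, 0.5, 0.9]))
```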

23

Today Outline: Overview

• Basics
– Goal of Reinforcement Learning
– Mathematical Model (MDP)

• Planning
– Value iteration
– Policy iteration

• Learning Algorithms
– Model based
– Model free

• Large state space
– Function approximation
– Policy gradient

24

Planning - Basic Problems

Policy evaluation - given a policy π, evaluate its return.

Optimal control - find an optimal policy π* (maximizes the return from any start state).

Given a complete MDP model.

25

Planning - Value Functions

Vπ(s): the expected return starting at state s and following π.

Qπ(s,a): the expected return starting at state s with action a and then following π.

V*(s) and Q*(s,a) are defined using an optimal policy π*:

V*(s) = max_π Vπ(s)

26

Planning - Policy Evaluation

Discounted infinite horizon (Bellman equation):

Vπ(s) = E_{s' ~ δ(s, π(s), ·)} [ R(s, π(s)) + γ Vπ(s') ]

Rewriting the expectation as a sum over next states:

Vπ(s) = E[ R(s, π(s)) ] + γ Σ_{s'} δ(s, π(s), s') Vπ(s')

This is a linear system of equations in the values Vπ(s).

27

Algorithms - Policy Evaluation Example

A = {+1, -1}; γ = 1/2; δ(si, a) = s_{(i+a) mod 4}; π is the random policy.

[Figure: four states s0, s1, s2, s3 arranged in a cycle, labeled with their rewards 0, 1, 2, 3.]

For all a: R(si, a) = i.

Vπ(s0) = 0 + γ [ π(s0, +1) Vπ(s1) + π(s0, -1) Vπ(s3) ]

28

Algorithms - Policy Evaluation Example

Same setup: A = {+1, -1}; γ = 1/2; δ(si, a) = s_{(i+a) mod 4}; π random; for all a: R(si, a) = i.

Vπ(s0) = 0 + (Vπ(s1) + Vπ(s3)) / 4
Vπ(s1) = 1 + (Vπ(s0) + Vπ(s2)) / 4
Vπ(s2) = 2 + (Vπ(s1) + Vπ(s3)) / 4
Vπ(s3) = 3 + (Vπ(s0) + Vπ(s2)) / 4

Solution: Vπ(s0) = 5/3, Vπ(s1) = 7/3, Vπ(s2) = 11/3, Vπ(s3) = 13/3

29
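Since policy evaluation is just a linear system, the solution above can be checked numerically. A short sketch (numpy assumed available):

```python
import numpy as np

# Random policy on the 4-cycle, gamma = 1/2, R(si) = i.
# Solve (I - gamma * P_pi) V = R_pi directly.
gamma = 0.5
P_pi = np.zeros((4, 4))
for i in range(4):
    P_pi[i, (i + 1) % 4] = 0.5   # action +1 with probability 1/2
    P_pi[i, (i - 1) % 4] = 0.5   # action -1 with probability 1/2
R_pi = np.array([0.0, 1.0, 2.0, 3.0])
V = np.linalg.solve(np.eye(4) - gamma * P_pi, R_pi)
print(V)  # [1.667, 2.333, 3.667, 4.333] = 5/3, 7/3, 11/3, 13/3
```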

Algorithms - optimal control

State-action value function (for a deterministic policy π):

Qπ(s,a) = E[ R(s,a) ] + γ E_{s' ~ δ(s,a,·)} [ Vπ(s') ]

Note that Vπ(s) = Qπ(s, π(s)).

30

Algorithms - Optimal control Example

Same setup: A = {+1, -1}; γ = 1/2; δ(si, a) = s_{(i+a) mod 4}; π random; R(si, a) = i.

Qπ(s0, +1) = 0 + γ Vπ(s1) = 7/6
Qπ(s0, -1) = 0 + γ Vπ(s3) = 13/6

31

Algorithms - optimal control

CLAIM: A policy π is optimal if and only if at each state s:

Vπ(s) = max_a { Qπ(s,a) }   (Bellman equation)

PROOF (only if): Assume there is a state s and an action a such that Vπ(s) < Qπ(s,a). Then the strategy of performing a at state s (the first time) is better than π. This is true each time we visit s, so the policy that always performs action a at state s is better than π, contradicting the optimality of π.

32

Algorithms - optimal control Example

Same setup: A = {+1, -1}; γ = 1/2; δ(si, a) = s_{(i+a) mod 4}; π random; R(si, a) = i.

Since Qπ(s0, -1) = 13/6 > Qπ(s0, +1) = 7/6, the random policy can be improved at s0: change the policy using the state-action value function.

33

MDP - computing optimal policy

1. Linear Programming

2. Value Iteration method:

   V_{t+1}(s) ← max_a { R(s,a) + γ Σ_{s'} δ(s,a,s') V_t(s') }

3. Policy Iteration method:

   π_i(s) = argmax_a { Q^{π_{i-1}}(s,a) }

34
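A compact sketch of the value iteration update above, using the tabular delta[s,a,s'] and reward[s,a] arrays from the earlier MDP sketch; the function and variable names are illustrative.

```python
import numpy as np

def value_iteration(delta, reward, gamma=0.5, iters=100):
    """V_{t+1}(s) = max_a [ R(s,a) + gamma * sum_s' delta(s,a,s') V_t(s') ]."""
    n_states = delta.shape[0]
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = reward + gamma * delta @ V   # Q[s,a] = R(s,a) + gamma * E[V_t(s')]
        V = Q.max(axis=1)                # greedy backup
    policy = (reward + gamma * delta @ V).argmax(axis=1)  # greedy policy w.r.t. final V
    return V, policy
```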

Example: Grid world

[Figure: a grid with the four movement ACTIONS.]

Initial state = red, final state = blue, cost = number of steps to the target (blue).

35

Example: Grid world - initial values (t = 0): every cell is ∞ except the target cell, which is 0.

36

Example: Grid world - after one iteration (t = 1): cells one step from the target have value 1; all other cells are still ∞.

37

Example: Grid world - after two iterations (t = 2): cells within two steps of the target have values 1 and 2; all other cells are still ∞.

38

Example: Grid world

• Optimal Policy

Q(s, a) = walk in direction a and add the distance from the resulting cell to the target.

Optimal policy: derived from Q(s, ·).

39

Convergence

• Value Iteration
– Decreases the distance from optimal by a factor of 1-γ per iteration.

• Policy Iteration
– The policy improves at every iteration.
– Requires no more iterations than Value Iteration.

40

Today Outline: Overview

• Basics
– Goal of Reinforcement Learning
– Mathematical Model (MDP)

• Planning
– Value iteration
– Policy iteration

• Learning Algorithms
– Model based
– Model free

• Large state space
– Function approximation
– Policy gradient

41

Learning Algorithms

Given access only to performing actions:
1. Policy evaluation.
2. Control - find an optimal policy.

Two approaches:
1. Model based.
2. Model free.

42

Learning - Model Based

Estimate the model from the observations (both transition probabilities and rewards).

Use the estimated model as if it were the true model, and find a near-optimal policy for it.

If we have a "good" estimated model, we should get a "good" policy estimate.

43

Learning - Model Based: off policy

• Let the policy run for a "long" time.
o What is "long"?!
o Assuming some "exploration".

• Build an "observed model":
o Transition probabilities
o Rewards
§ Both estimates are independent!

• Use the "observed model" to learn an optimal policy.

44

Learning - Model Based: off-policy algorithm

• Observe a trajectory generated by a policy π
o Off-policy: no need to control the actions.

• For every s, s' ∈ S and a ∈ A:
o δ'(s, a, s') = #(s, a, s') / #(s, a, ·)
o R'(s, a) = average observed reward for (s, a)

• Find an optimal policy for the estimated MDP (S, A, δ', R').
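A minimal sketch of the counting step, from a recorded trajectory of (s, a, r, s') tuples; the function and variable names are illustrative.

```python
import numpy as np

def estimate_model(trajectory, n_states, n_actions):
    """Empirical transition probabilities and mean rewards from (s, a, r, s') samples."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sum = np.zeros((n_states, n_actions))
    for s, a, r, s_next in trajectory:
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r
    visits = counts.sum(axis=2)                              # #(s, a, .)
    delta_hat = counts / np.maximum(visits, 1)[:, :, None]   # delta'(s, a, s')
    r_hat = reward_sum / np.maximum(visits, 1)               # R'(s, a)
    return delta_hat, r_hat
```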

45

Learning - Model Based

• Claim: if the model is "accurate", we will compute a near-optimal policy.

• Hidden assumption:
o Each (s, a, ·) is sampled many times.
o This is the "responsibility" of the policy π
§ Off-policy

• Simple question: how many samples do we need for each (s, a, ·)?

46

Learning - Model Based: on policy

• The learner has control over the actions.
o The immediate goal is to learn a model.

• As before:
o Build an "observed model":
§ Transition probabilities and rewards
o Use the "observed model" to estimate the value of the policy.

• Accelerating the learning:
o How to reach "unexplored" states?!

47

Learning - Model Based: on policy

[Figure: the state space split into well-sampled states and relatively unknown states.]

48

Learning - Model Based: on policy

[Figure: the same split into well-sampled states and relatively unknown states, with the unknown states assigned a HIGH REWARD.]

Exploration → planning in a new MDP.

49

Learning: Policy improvement

• Assume that we can perform the following:
– Given a policy π,
– compute the Vπ and Qπ functions of π.

• Then we can run policy improvement:
– π ← Greedy(Qπ)

• The process converges if the estimates are accurate.

50


Model-Free learning

53

Q-Learning: off policy

Basic idea: learn the Q-function.

On a move (s, a) → s', update:

Q_{t+1}(s,a) = (1 − α_t(s,a)) Q_t(s,a) + α_t(s,a) [ R(s,a) + γ max_u Q_t(s', u) ]

Old estimate: Q_t(s,a). New estimate: R(s,a) + γ max_u Q_t(s', u).
Learning rate at (s,a): α_t(s,a) = 1/t^ω.

54

Q-Learning: update equation

Δ_t(s,a) = Q_t(s,a) − [ R(s,a) + γ max_u Q_t(s', u) ]   (old estimate minus new estimate)

Q_{t+1}(s,a) = Q_t(s,a) − α_t(s,a) Δ_t(s,a)   (update with learning rate α_t(s,a))

55
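A minimal tabular sketch of this update rule; a constant learning rate is used here for simplicity rather than the 1/t^ω schedule from the slide.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.5):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_u Q(s', u)."""
    target = r + gamma * Q[s_next].max()   # new estimate
    delta = Q[s, a] - target               # old estimate minus new estimate
    Q[s, a] -= alpha * delta               # Q_{t+1}(s,a) = Q_t(s,a) - alpha * delta
    return Q

# Usage: Q = np.zeros((n_states, n_actions)); update after each observed (s, a, r, s').
```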

Q-Learning: Intuition

• Updates are based on the difference:

  Δ_t(s,a) = Q_t(s,a) − [ R(s,a) + γ max_u Q_t(s', u) ]

• Assume we have the right Q-function.
• Good news: the expectation of Δ is then zero!
• Challenge: understand the dynamics
– a stochastic process

56

Learning - Model Free
Policy evaluation: TD(0)

An online view: at state st we performed action at, received reward rt, and moved to state st+1.

Our "estimation error" is Errt = rt + γV(st+1) − V(st).

The update: Vt+1(st) = Vt(st) + α Errt

Note that for the correct value function we have:

E[ r + γV(s') − V(s) ] = E[Errt] = 0

57
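For reference, a one-function sketch of the TD(0) update, with a constant step size alpha for simplicity.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.5):
    """TD(0): move V(s) toward r + gamma * V(s') by the estimation error."""
    err = r + gamma * V[s_next] - V[s]   # Err_t
    V[s] += alpha * err                  # V_{t+1}(s_t) = V_t(s_t) + alpha * Err_t
    return V
```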

Learning - Model Free
Policy evaluation: TD(λ)

Again: at state st we performed action at, received reward rt, and moved to state st+1. Our "estimation error" is Errt = rt + γV(st+1) − V(st).

Update every state s:

Vt+1(s) = Vt(s) + α Errt e(s)

Update of e(s) (the eligibility trace):
When visiting s: increment by 1: e(s) = e(s) + 1
For all other states s': decay by a γλ factor: e(s') = γλ e(s')

58

Different Model-Free Algorithms

• On-policy versus off-policy
• The function approximated (V vs. Q)
• Fairly similar general methodologies
• Challenges: controlling the stochastic process

59

Today Outline: Overview

• Basics
– Goal of Reinforcement Learning
– Mathematical Model (MDP)

• Planning
– Value iteration
– Policy iteration

• Learning Algorithms
– Model based
– Model free

• Large state space
– Function approximation
– Policy gradient

60

Large state MDP

Previous methods: tabular (small state space).

Large state spaces can be exponential in size.

Similar to the basic view of learning.

61


Large scale MDP

Approaches:

1. Restricted Value function.

2. Restricted policy class.

3. Restricted model of MDP.

4. Different MDP representation: Generative Model

62

Large MDP - Restricted Value Function

Applications: most of the recent work (AlphaGo, Atari, etc.).

Vague idea: reduce to a supervised (deep) learning problem.

(Value) Function Approximation: use a limited class of functions to estimate the value function.

Given a good approximation of the value function, we can estimate the optimal policy.

63

Large MDP: Restricted Policy

• Fix a policy class Π = {π: S → A}
– Given a policy π ∈ Π,
– approximate Vπ and Qπ.

• Run policy improvement:
– π ← Greedy(Qπ)

• Quality depends on the approximation.
– Convergence is not guaranteed.

64

Large MDP: Policy Gradient

• Improve the parameters of a parameterized policy
– by taking the gradient of the expected return.

• Challenge:
– The update impacts both:
• the action probabilities
• the distribution over states
– Use off-policy corrections:
• compute the gradient from the policy history.

65
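For intuition, here is a sketch of a REINFORCE-style gradient estimate, one common policy-gradient form; the lecture does not commit to a specific estimator, and the tabular softmax parameterization here is purely illustrative.

```python
import numpy as np

def softmax_policy(theta, s):
    """pi(a|s) for a tabular softmax parameterization theta[s, a]."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def reinforce_gradient(theta, episode, gamma=0.99):
    """REINFORCE-style estimate: sum_t G_t * grad log pi(a_t | s_t)."""
    grad = np.zeros_like(theta)
    G = 0.0
    for s, a, r in reversed(episode):   # episode: list of (s, a, r)
        G = r + gamma * G               # return from time t onward
        probs = softmax_policy(theta, s)
        grad_log = -probs               # d log pi(a|s) / d theta[s, :]
        grad_log[a] += 1.0
        grad[s] += G * grad_log
    return grad                         # ascend: theta += lr * grad
```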

Large MDP: Generative Model

Algorithm for estimating an optimal policy (for the discounted infinite horizon) in time independent of the number of states.

Generative model representation: given (s, a), the generator returns a sampled next state s' and reward r.

Clearly we cannot use a matrix representation.

66

Large state MDP: Generative Model

• Use the generative model for "look ahead".

• Sample a tree of shallow depth using the generative model.

• Compute an optimal policy on the tree.

• This results in an approximately optimal policy in the MDP.
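A sketch of the shallow-tree idea, in the spirit of sparse sampling; `generator(s, a, rng)` is the assumed generative-model interface returning a sampled (s', r), and the depth and width parameters are illustrative.

```python
import numpy as np

def lookahead_value(generator, s, actions, depth, width, gamma=0.5,
                    rng=np.random.default_rng(0)):
    """Depth-limited lookahead with a generative model: estimate max_a Q(s, a)."""
    if depth == 0:
        return 0.0
    best = -np.inf
    for a in actions:
        total = 0.0
        for _ in range(width):   # sample 'width' next states per action
            s_next, r = generator(s, a, rng)
            total += r + gamma * lookahead_value(generator, s_next, actions,
                                                 depth - 1, width, gamma, rng)
        best = max(best, total / width)
    return best
```

The greedy action at the root (the argmax over actions of the per-action averages) gives the choice at state s, and the running time grows with (|actions| * width)^depth rather than with the number of states, which matches the slide's claim.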

67

Today Outline: Overview

• Basics
– Goal of Reinforcement Learning
– Mathematical Model (MDP)

• Planning
– Value iteration
– Policy iteration

• Learning Algorithms
– Model based
– Model free

• Large state space
– Function approximation
– Policy gradient

68

Course (tentative) Outline

• Part 1: MDP basics and planning
• Part 2: MDP learning
– Model based and model free
• Part 3: Large state MDP
– Policy gradient, Deep Q-Network
• Part 4: Special MDPs
– Bandits and Partially Observable MDPs
• Part 5: Advanced topics

69