CS 182/Ling109/CogSci110 Spring 2008 Reinforcement Learning: Basics 3/20/2008 Srini Narayanan – ICSI and UC Berkeley


Page 1

CS 182/Ling109/CogSci110 Spring 2008

Reinforcement Learning: Basics

3/20/2008

Srini Narayanan – ICSI and UC Berkeley

Page 2

Lecture Outline

Introduction

Basic Concepts: Expectation, Utility, MEU

Neural correlates of reward-based learning

Utility theory from economics: Preferences, Utilities

Reinforcement Learning: the AI approach

Page 3

Models of Learning

Hebbian ~ coincidence

Recruitment ~ one trial

Supervised ~ correction (backprop)

Reinforcement ~ reward based (delayed reward)

Unsupervised ~ similarity

Page 4

Reinforcement Learning

Basic idea: receive feedback in the form of rewards (also called reward-based learning in psychology).
The agent's utility is defined by the reward function.
It must learn to act so as to maximize expected utility.
Change the rewards, change the behavior.

Examples:
Learning coordinated behavior/skills (x-schemas)
Playing a game, with a reward at the end for winning or losing
A vacuuming robot, with a reward for each piece of dirt picked up
An automated taxi, with a reward for each passenger delivered

Page 5

Coordination: Making Breakfast

Phil prepares his breakfast. Closely examined, even this apparently mundane activity reveals a complex web of conditional behavior and interlocking goal-subgoal relationships: walking to the cupboard, opening it, selecting a cereal box, then reaching for, grasping, and retrieving the box. Other complex, tuned, interactive sequences of behavior are required to obtain a bowl, spoon, and milk jug. Each step involves a series of eye movements to obtain information and to guide reaching and locomotion. Rapid judgments are continually made about how to carry the objects or whether it is better to ferry some of them to the dining table before obtaining others. Each step is guided by goals, such as grasping a spoon or getting to the refrigerator, and is in service of other goals, such as having the spoon to eat with once the cereal is prepared and ultimately obtaining nourishment. (Sutton and Barto, Section 1.1)

Page 6

Basic Features

Interaction between an agent and its environment.

The agent seeks to achieve a goal despite uncertainty in the environment: the effects of actions cannot be completely predicted, which requires monitoring the environment frequently.

The agent's actions change the future state of the environment (opportunities and future options are impacted).

The correct choice requires taking into account the indirect, delayed consequences of actions, and thus may require foresight or planning.

Page 7

Reinforcement Learning

Multiple fields contribute to the study of reinforcement learning:

Economics: utility theory and preferences, game theory

Artificial Intelligence: machine learning, action and state representation, inference

Psychology: reward-based prediction and control, conditioning

Neuroscience: reward-related circuits, timing of rewards, neuroeconomics

Page 8

Basic Ideas

Utility: preferences, Maximum Expected Utility (MEU)

Reward: immediate and delayed rewards, average reward, discounting

Learning and acting: prediction error, optimal policy

Page 9

Basic Idea: Maximum Expected Utility (MEU)

MEU: An agent should choose the action that maximizes its expected utility, given its knowledge.

This is a general principle for decision making, often taken as the definition of rationality.

Let's unpack this definition…

Page 10

Reminder: Expectations

Often a quantity of interest depends on a random variable. The expected value of a function is the average output, weighted by some distribution over inputs.

Example: How late will I be? Lateness is a function of traffic: L(T=none) = -10, L(T=light) = -5, L(T=heavy) = 15.

What is my expected lateness? We need to specify some belief over T to weight the outcomes, say P(T) = {none: 2/5, light: 2/5, heavy: 1/5}. The expected lateness is then

E[L] = (2/5)(-10) + (2/5)(-5) + (1/5)(15) = -4 - 2 + 3 = -3

i.e., on average I arrive a little early.
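A minimal Python version of this expectation calculation; the numbers are the ones from the slide, and the helper name expected_value is just illustrative.

```python
def expected_value(dist, f):
    """Expectation of f(x) under a distribution given as {outcome: probability}."""
    return sum(p * f(x) for x, p in dist.items())

# Lateness as a function of traffic, and a belief over traffic (from the slide).
lateness = {"none": -10, "light": -5, "heavy": 15}
traffic = {"none": 2/5, "light": 2/5, "heavy": 1/5}

print(expected_value(traffic, lambda t: lateness[t]))  # -3.0
```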

Page 11

Expectations

Real-valued functions of random variables: the expectation of a function of a random variable is E[f(X)] = Σ_x P(X = x) f(x).

Example: expected value of a fair die roll (worked out below the table).

X P f

1 1/6 1

2 1/6 2

3 1/6 3

4 1/6 4

5 1/6 5

6 1/6 6
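Writing the expectation out from the table (plain LaTeX, just the arithmetic the slide implies):

```latex
E[f(X)] = \sum_{x=1}^{6} \tfrac{1}{6}\, x = \tfrac{1+2+3+4+5+6}{6} = \tfrac{21}{6} = 3.5
```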

Page 12

Utilities

Utilities are functions from outcomes (states of the world) to real numbers that describe an agent’s preferences

Where do utilities come from?
In a game, they may be simple (+1/-1).
Utilities summarize the agent's goals.
Theorem: any set of preferences between outcomes can be summarized as a utility function (provided the preferences meet certain conditions).

In general, utilities are determined from rewards, and actions emerge to maximize expected utility.

Page 13

Lecture Outline

Introduction

Basic Concepts: Expectation, Utility, MEU

Neural correlates of reward-based learning

Utility theory from economics: Preferences, Utilities

Reinforcement Learning: the AI approach

Page 14

Multiple neurotransmitters are involved in reinforcement learning

Page 15

Dopamine-based neural correlates

[Figure: dopamine reward circuitry, with callouts for skill learning, natural rewards, reward pathway?, learning?, intracranial self-stimulation, drug addiction, Parkinson's Disease, and motor control + initiation?. Labeled regions: Ventral Tegmental Area, Substantia Nigra, Dorsal Striatum (Caudate, Putamen), Nucleus Accumbens (Ventral Striatum), Amygdala, Prefrontal Cortex]

Also involved in: working memory, novel situations, ADHD, schizophrenia, …

Page 16

Conditioning

Ivan Pavlov

[Figure: classical conditioning: a conditioned stimulus (CS) is paired with an unconditioned stimulus (UCS). Cartoon caption: "I rang the bell!"]

Page 17

Dopamine levels track prediction error.

[Figure: dopamine neuron recordings for an unpredicted reward (unlearned / no stimulus), a predicted reward (learned task), and an omitted reward (probe trial). (Montague et al. 1996); Wolfram Schultz Lab, 1990-1996]

Page 18

Basic concept: Prediction Error

Learning theory suggests that learning occurs when a reward fails to match the value predicted by conditioned stimuli. The difference between expected and actual reward is the prediction error.
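The verbal definition above corresponds to a simple error-driven update; here is a minimal Python sketch in the Rescorla-Wagner / temporal-difference spirit, where the learning rate alpha and all variable names are illustrative assumptions rather than anything specified in the lecture.

```python
def prediction_error_update(V, r, alpha=0.1):
    """One reward-prediction-error update: move the value estimate V
    toward the observed reward r, in proportion to the error."""
    delta = r - V           # prediction error: actual minus expected reward
    return V + alpha * delta

# Example: a cue that reliably predicts a reward of 1.0.
V = 0.0
for _ in range(20):
    V = prediction_error_update(V, r=1.0)
print(round(V, 3))  # value estimate approaches 1.0; once learned, delta shrinks toward 0
```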

Page 19

Ventral Striatum and amount of reward

Page 20

Areas that are probably directly involved in RL

Basal Ganglia: Striatum (ventral/dorsal), Putamen, Substantia Nigra
Midbrain (VTA)
Amygdala
Orbito-frontal Cortex
Cingulate circuit (ACC)
Cerebellum
PFC

Page 21

Current Hypothesis

The Ventral Striatum (Nucleus Accumbens) encodes anticipation of reward.
There are different (overlapping) circuits for reward and punishment (OFC involvement in punishment).
Phasic dopamine encodes a reward prediction error.

Evidence: monkey single-cell recordings, human fMRI studies.

Current research:
A better information-processing model
Other reward/punishment circuits, including the Amygdala (for visual perception)
The overall circuit (PFC-Basal Ganglia interaction)

More in future lectures! Preview Wolfram Schultz's article at http://www.scholarpedia.org/article/Reward_signals

Page 22

Lecture Outline

Introduction

Basic Concepts: Expectation, Utility, MEU

Neural correlates of reward-based learning

Utility theory from economics: Preferences, Utilities

Reinforcement Learning: the AI approach

Page 23

Economic Models of Utility

Preferences

Rational preferences: axioms for preferences

Human rationality?

Page 24

Preferences

An agent chooses among:
Prizes: A, B, etc.
Lotteries: situations with uncertain prizes, e.g. L = [p, A; (1 - p), B]

Notation: A > B means A is preferred to B; A ~ B means the agent is indifferent between A and B.

Page 25

Rational Preferences

We want some constraints on preferences before we call them rational.

For example, an agent with intransitive preferences can be induced to give away all its money:
If B > C, then an agent holding C would pay (say) 1 cent to get B.
If A > B, then an agent holding B would pay (say) 1 cent to get A.
If C > A, then an agent holding A would pay (say) 1 cent to get C.

Page 26

Rational Preferences

Preferences of a rational agent must obey constraints. These constraints (plus one more) are the axioms of rationality

Theorem: Rational preferences imply behavior describable as maximization of expected utility

Page 27

MEU Principle

Theorem [Ramsey, 1931; von Neumann & Morgenstern, 1944]: given any preferences satisfying these constraints, there exists a real-valued function U such that

U(A) ≥ U(B) if and only if A is preferred to B (or the agent is indifferent), and
U([p1, S1; … ; pn, Sn]) = p1 U(S1) + … + pn U(Sn)

Maximum expected utility (MEU) principle: choose the action that maximizes expected utility.

Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities. E.g., a lookup table for perfect tic-tac-toe, or a reflex vacuum cleaner.

Page 28

Human Utilities

Utilities map states to real numbers. Which numbers?

Standard approach to the assessment of human utilities: compare a state A to a standard lottery L_p between
the "best possible prize" u+ with probability p
the "worst possible catastrophe" u- with probability 1-p

Adjust the lottery probability p until A ~ L_p. The resulting p is a utility in [0, 1].
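The calibration step can be stated compactly; this just restates the slide's procedure, using the normalized utilities u+ = 1.0 and u- = 0.0 introduced on the next slide:

```latex
A \sim L_p = [\,p,\ u^{+};\ (1-p),\ u^{-}\,]
\quad\Longrightarrow\quad
U(A) = p \cdot U(u^{+}) + (1-p) \cdot U(u^{-}) = p
```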

Page 29

Utility Scales

Normalized utilities: u+ = 1.0, u- = 0.0

Micromorts: a one-millionth chance of death; useful for pricing willingness to pay to reduce product risks, etc.

QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk. One year in good health = 1 QALY.

Note: behavior is invariant under positive linear transformations of utility, U'(x) = a U(x) + b with a > 0.

With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes.

Page 30

Example: Insurance

Consider the lottery [0.5, $1000; 0.5, $0].
What is its expected monetary value? ($500)
What is its certainty equivalent, the monetary value acceptable in lieu of the lottery? About $400 for most people.
The difference of $100 is the insurance premium. There is an insurance industry because people will pay to reduce their risk.
If everyone were risk-prone, no insurance would be needed!
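To see how a certainty equivalent below $500 arises, here is a worked example with a hypothetical risk-averse utility U(x) = sqrt(x); the specific function is an illustration, not something from the lecture (a less sharply concave utility would land nearer the observed $400).

```latex
EU(\text{lottery}) = 0.5\sqrt{1000} + 0.5\sqrt{0} \approx 15.81,
\qquad
\sqrt{CE} \approx 15.81 \ \Rightarrow\ CE \approx \$250 < \$500
```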

Page 31

Example: Human Rationality?

Famous example of Allais (1953):

A: [0.8, $4k; 0.2, $0]    B: [1.0, $3k; 0.0, $0]
C: [0.2, $4k; 0.8, $0]    D: [0.25, $3k; 0.75, $0]

Most people prefer B > A and C > D. But if U($0) = 0, then
B > A implies U($3k) > 0.8 U($4k)
C > D implies 0.8 U($4k) > U($3k)
These two inequalities contradict each other, so no expected-utility maximizer can hold both preferences.

Page 32

The Ultimatum Game

Proposer: receives $x and offers a split of $k / $(x-k).
Accepter: either accepts (gets $k, and the proposer gets $(x-k)) or rejects (neither gets anything).

Nash equilibrium (MEU play)? Any strategy profile where the proposer offers $k and the accepter will accept $k or greater.

Issues:
Why do people tend to reject offers which are very unfair (e.g. $20 out of $100)?
Irrationality? Or is the utility of $20 exceeded by the utility of punishing the unfair proposer?
What about if x is very, very large?

fMRI experiments: dopamine pathways are implicated. Pleasure from punishment of others or of injustice?

More in coming lectures!

Page 33

Lecture Outline

Introduction

Basic Concepts: Expectation, Utility, MEU

Neural correlates of reward-based learning

Utility theory from economics: Preferences, Utilities

Reinforcement Learning: the AI approach
  The problem
  Computing total expected value with discounting
  Bellman's equation

Page 34

Reinforcement Learning

Basic idea: receive feedback in the form of rewards.
The agent's utility is defined by the reward function.
It must learn to act so as to maximize expected utility.
Change the rewards, change the behavior.

Examples:
Learning your way around, with a reward for reaching the destination
Playing a game, with a reward at the end for winning or losing
Vacuuming a house, with a reward for each piece of dirt picked up
An automated taxi, with a reward for each passenger delivered

DEMO

Page 35

Elements of RL

Transition model: how an action influences states
Reward R: the immediate value of a state-action transition
Policy π: maps states to actions

[Diagram: the agent-environment loop. The agent's policy selects an action given the current state; the environment returns a reward and the next state, producing the sequence s0, a0, r0, s1, a1, r1, s2, a2, r2, …]

Page 36

Markov Decision Processes

A Markov decision process (MDP) consists of:
A set of states s ∈ S
A transition model T(s, a, s') = P(s' | s, a), the probability that action a in state s leads to s'
A reward function R(s, a, s') (sometimes just R(s) for leaving a state, or R(s') for entering one)
A start state (or start distribution)
Maybe a terminal state

MDPs are the simplest case of reinforcement learning. In general reinforcement learning, we don't know the model or the reward function.
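A minimal sketch of how such an MDP might be written down in Python, using a tiny made-up two-state example (the states, actions, and numbers are illustrative assumptions, not the lecture's gridworld); the same objects are reused in the value-iteration and policy-iteration sketches later on.

```python
# Transition model T(s, a, s') = P(s' | s, a) as nested dictionaries,
# plus a reward function R(s, a, s'), for a made-up two-state MDP.
T = {
    ("cool", "work"): {"cool": 0.8, "hot": 0.2},
    ("cool", "rest"): {"cool": 1.0},
    ("hot",  "work"): {"hot": 0.9, "cool": 0.1},
    ("hot",  "rest"): {"cool": 0.6, "hot": 0.4},
}

def R(s, a, s_next):
    """Reward for a transition: working pays off in the cool state, hurts in the hot state."""
    if a == "work":
        return 2.0 if s == "cool" else -1.0
    return 0.0

states = ["cool", "hot"]
actions = lambda s: sorted({a for (st, a) in T if st == s})  # actions available in state s
```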

Page 37

MDP Solutions

In deterministic single-agent search, we want an optimal sequence of actions from the start to a goal. In an MDP we want an optimal policy π*(s):
A policy gives an action for each state.
The optimal policy maximizes expected utility (i.e., expected rewards) if followed.
It defines a reflex agent.

[Figure: the optimal policy when R(s, a, s') = -0.04 for all non-terminal states s]

Page 38

Example Optimal Policies

[Figure: optimal policies for R(s) = -2.0, R(s) = -0.4, R(s) = -0.03, and R(s) = -0.01]

Page 39

Stationarity

In order to formalize optimality of a policy, we need to understand utilities of reward sequences. Typically we consider stationary preferences: [r, r1, r2, …] > [r, r1', r2', …] if and only if [r1, r2, …] > [r1', r2', …].

Theorem: there are only two ways to define stationary utilities.
Additive utility: U([r0, r1, r2, …]) = r0 + r1 + r2 + …
Discounted utility: U([r0, r1, r2, …]) = r0 + γ r1 + γ² r2 + …

Page 40

Infinite Utilities?!

Problem: infinite state sequences can have infinite total rewards.

Solutions:
Finite horizon: terminate after a fixed T steps. This gives a nonstationary policy (π depends on the time left).
Absorbing state(s): guarantee that for every policy the agent will eventually "die" (like a "done" state).
Discounting: for 0 < γ < 1, use U([r0, r1, r2, …]) = Σ_t γ^t r_t. A smaller γ means a shorter effective horizon.
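Why discounting tames the infinite sum, in one line (assuming every reward satisfies |r_t| ≤ R_max):

```latex
\Bigl|\sum_{t=0}^{\infty} \gamma^{t} r_t\Bigr|
\;\le\; \sum_{t=0}^{\infty} \gamma^{t} R_{\max}
\;=\; \frac{R_{\max}}{1-\gamma},
\qquad 0 < \gamma < 1
```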

Page 41

Finding Optimal Policies

DEMO

Page 42

How (Not) to Solve an MDP

The inefficient way:
Enumerate policies.
For each one, calculate the expected utility (discounted rewards) from the start state, e.g. by simulating a bunch of runs.
Choose the best policy.

We'll return to a (better) idea like this later.

Page 43

Optimal Utilities

Goal: calculate the optimal utility of each state, V*(s) = the expected (discounted) reward obtained by acting optimally from s.

Why: given the optimal utilities, MEU tells us the optimal policy.
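Written out with the transition model T and discount γ defined earlier, the MEU step that turns optimal utilities into an optimal policy is the standard extraction below (the lecture states it only in words here):

```latex
\pi^{*}(s) \;=\; \arg\max_{a} \sum_{s'} T(s, a, s')\,\bigl[\,R(s, a, s') + \gamma\, V^{*}(s')\,\bigr]
```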

Page 44

Bellman's Equation for Selecting Actions

The definition of utility leads to a simple relationship among optimal utility values: the optimal value of a state is obtained by maximizing over the first action and then following the optimal policy.

Formally, Bellman's equation: V*(s) = max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V*(s') ]

That's my equation!

Page 45

Example: GridWorld

Page 46

Value Iteration

Idea: start with bad guesses at all utility values (e.g. V0(s) = 0), then update all values simultaneously using the Bellman equation (a value update or Bellman update):

V_{k+1}(s) ← max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V_k(s') ]

Repeat until convergence.

Theorem: the values will converge to the unique optimal values.
Basic idea: bad guesses get refined towards the optimal values.
The policy may converge long before the values do.
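A minimal value-iteration sketch in Python, reusing the illustrative T, R, states, and actions objects from the MDP sketch above (again a toy under stated assumptions, not the lecture's demo).

```python
def value_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    """Repeatedly apply the Bellman update
    V(s) <- max_a sum_s' T(s, a, s') * (R(s, a, s') + gamma * V(s'))."""
    V = {s: 0.0 for s in states}          # start with bad guesses: V0(s) = 0
    while True:
        V_new = {}
        for s in states:
            V_new[s] = max(
                sum(p * (R(s, a, s2) + gamma * V[s2])
                    for s2, p in T[(s, a)].items())
                for a in actions(s)
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

V_star = value_iteration(states, actions, T, R)
print(V_star)  # optimal (discounted) utility of each state in the toy MDP
```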

Page 47

Example: Bellman Updates

Page 48

Example: Value Iteration

Information propagates outward from terminal states and eventually all states have correct value estimates

[DEMO]

Page 49

Policy Iteration

Alternate approach:
Policy evaluation: calculate the utilities of a fixed policy π until convergence (remember the beginning of lecture), V^π(s) = Σ_s' T(s, π(s), s') [ R(s, π(s), s') + γ V^π(s') ].
Policy improvement: update the policy based on the resulting converged utilities.
Repeat until the policy converges.

This is policy iteration. It can converge faster under some conditions.
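A matching policy-iteration sketch, again built on the toy T, R, states, and actions objects defined above; the structure (evaluate, then improve greedily) follows the slide, while all names and numbers remain illustrative assumptions.

```python
def policy_iteration(states, actions, T, R, gamma=0.9, tol=1e-6):
    """Alternate policy evaluation and greedy (MEU) policy improvement until the policy is stable."""
    pi = {s: actions(s)[0] for s in states}   # arbitrary initial policy
    V = {s: 0.0 for s in states}
    while True:
        # Policy evaluation: iterate the fixed-policy Bellman equation to (approximate) convergence.
        while True:
            V_new = {s: sum(p * (R(s, pi[s], s2) + gamma * V[s2])
                            for s2, p in T[(s, pi[s])].items())
                     for s in states}
            diff = max(abs(V_new[s] - V[s]) for s in states)
            V = V_new
            if diff < tol:
                break
        # Policy improvement: act greedily with respect to the evaluated utilities.
        def q(s, a):
            return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T[(s, a)].items())
        new_pi = {s: max(actions(s), key=lambda a: q(s, a)) for s in states}
        if new_pi == pi:
            return pi, V
        pi = new_pi

pi_star, V_pi = policy_iteration(states, actions, T, R)
print(pi_star)  # the greedy policy for the toy MDP
```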