TRANSCRIPT
DARPA Mobile Autonomous Robot Software
Leslie Pack Kaelbling; January 2000
Adaptive Intelligent Mobile Robotics
Leslie Pack Kaelbling
Artificial Intelligence Laboratory
MIT
Progress to Date
• Erik the Red
• Video-game environment
• Optical flow implementation
• Fast bootstrapped reinforcement learning
Erik the Red
RWI B21 robot
• camera, sonars, laser range-finder, infrareds
• 3 Linux machines
• ported our framework for writing debuggable code
Erik the Red
Crystal Space
Public-domain video-game environment
• complex graphics
• other agents
• highly modifiable
Crystal Space
Optical Flow
Get range information visually by computing the optical flow field.
• nearer objects cause flow of higher magnitude
• an expansion pattern means you're going to hit
• the rate of expansion tells you when
• elegant control laws based on the center and rate of expansion (derived from human and fly behavior)
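The time-to-contact idea above can be sketched numerically: for pure approach toward a frontal surface, the flow field expands radially and its divergence is about 2/τ, where τ is the time to contact. A minimal sketch (function name and the use of `np.gradient` are my choices, not the talk's implementation):

```python
import numpy as np

def time_to_contact(flow_u, flow_v, spacing=1.0):
    """Estimate time to contact (in frames) from an optical flow field.

    flow_u, flow_v: 2-D arrays of horizontal/vertical flow (pixels/frame).
    For pure approach to a frontal surface, div(flow) ~ 2 / tau.
    """
    du_dx = np.gradient(flow_u, spacing, axis=1)   # d(u)/dx
    dv_dy = np.gradient(flow_v, spacing, axis=0)   # d(v)/dy
    div = np.mean(du_dx + dv_dy)                   # mean divergence of the field
    return 2.0 / div if div > 0 else np.inf        # no expansion -> no collision
```

A pure expansion field with flow (x/τ, y/τ) recovers τ exactly, which is the basis of the "rate of expansion tells you when" control law.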
Optical Flow in Crystal Space
Making RL Really Work
Typical RL methods require far too much data to be practical in an online setting. Address the problem by
• strong generalization techniques
• using human input to bootstrap
JAQL
Learning a value function in a continuous state and action space
• based on locally weighted regression (a fancy version of nearest neighbor)
• the algorithm knows what it knows
• uses that meta-knowledge to be conservative about dynamic-programming updates
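The "knows what it knows" idea can be sketched with kernel-weighted regression: predict a Q-value as a locally weighted average of stored samples, and expose the total kernel weight as a confidence score so backups are only made where data support is strong. This is an illustrative stand-in (class name, Gaussian kernel, and thresholds are assumptions, not JAQL's actual formulation):

```python
import numpy as np

class LWRQValue:
    """Locally weighted regression over stored (state-action, q) samples,
    with a confidence score so the learner can refuse to extrapolate."""

    def __init__(self, bandwidth=0.5, min_weight=1.0):
        self.xs, self.qs = [], []      # stored state-action points and q targets
        self.h = bandwidth             # kernel width (assumed hyperparameter)
        self.min_weight = min_weight   # support threshold for "knowing"

    def add(self, x, q):
        self.xs.append(np.asarray(x, float))
        self.qs.append(float(q))

    def predict(self, x):
        """Return (q_estimate, support). Support is the total kernel mass."""
        if not self.xs:
            return 0.0, 0.0
        X = np.stack(self.xs)
        d2 = np.sum((X - np.asarray(x, float)) ** 2, axis=1)
        w = np.exp(-d2 / (2 * self.h ** 2))    # Gaussian kernel weights
        total = w.sum()
        if total < 1e-12:                      # no nearby data at all
            return 0.0, 0.0
        return float(w @ np.array(self.qs) / total), float(total)

    def confident(self, x):
        # be conservative: only allow DP backups where support is strong
        return self.predict(x)[1] >= self.min_weight
```

Near stored data the estimate tracks the samples and `confident` is true; far away, support collapses and the learner declines to commit, which is what keeps dynamic-programming updates conservative.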
Incorporating Human Input
Humans can help a lot, even if they can’t perform the task very well.
• Provide some initial successful trajectories through the space
• Trajectories are not used for supervised learning, but to guide the reinforcement-learning methods through useful parts of the space
• Learn models of the dynamics of the world and of the reward structure
• Once the learned models are good, use them to update the value function and policy as well.
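The bootstrapping step above can be sketched in tabular form: replay the human demonstration transitions as value backups before any autonomous exploration, so the trajectories guide the learner through useful parts of the space rather than being imitated directly. A minimal sketch (tabular stand-in for the continuous learner; all names and constants here are illustrative):

```python
import numpy as np

def bootstrap_q(demo_transitions, n_states, n_actions,
                alpha=0.1, gamma=0.95, sweeps=20):
    """Seed a tabular Q-function by replaying demonstration transitions.

    demo_transitions: list of (s, a, r, s_next) tuples from human-driven
    trials. The demos are not used for supervised learning; they only
    concentrate Q-learning backups along successful trajectories.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(sweeps):
        for s, a, r, s_next in demo_transitions:
            target = r + gamma * Q[s_next].max()      # one-step backup
            Q[s, a] += alpha * (target - Q[s, a])     # Q-learning update
    return Q
```

After replay, value has propagated backward along the demonstrated path, so subsequent autonomous trials start from a usefully shaped value function instead of zeros.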
Simple Experiment
The “hill-car” problem in two continuous dimensions
• Regular RL methods take thousands of trials to learn a reasonable policy
• JAQL takes 11 inefficient but eventually successful trials generated by humans to get 80% performance
• 10 subsequent trials generate high-quality performance in the whole space
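For reference, the hill-car task has a two-dimensional continuous state (position, velocity): an underpowered car must rock back and forth to build enough energy to climb the hill. A sketch of the standard dynamics (these are the textbook Sutton and Barto constants, not necessarily the exact ones used in this experiment):

```python
import math

def hill_car_step(pos, vel, action, force=0.001, gravity=0.0025):
    """One step of the classic hill-car ("mountain car") dynamics.

    action in {-1, 0, +1}: push left, coast, push right. The engine force
    is weaker than gravity on the slope, so direct ascent fails.
    """
    vel += action * force - gravity * math.cos(3 * pos)  # thrust minus slope pull
    vel = max(-0.07, min(0.07, vel))                     # velocity bound
    pos = max(-1.2, min(0.6, pos + vel))                 # position bound; 0.6 is the goal
    if pos == -1.2:
        vel = 0.0                                        # inelastic left wall
    return pos, vel
```

The energy-pumping policy (always push in the direction of current motion) solves the task, but a learner has to discover that accelerating away from the goal is sometimes necessary, which is why naive RL needs so many trials.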
Success Percentage

[Figure: success percentage (0-100) over subsequent training runs, comparing JAQL against standard Q-learning (QL).]
Trial Length (200 max)

[Figure: trial length over subsequent training runs, comparing JAQL against standard Q-learning (QL); the 54-step optimum is marked.]
Next Steps
• Implement optical-flow control algorithms on robot
• Apply RL techniques to tune parameters in control algorithms on robot in real time
  • corridor following using sonar and laser
  • obstacle avoidance using optical flow
• Build highly complex simulated environment
• Integrate planning and learning in multi-layer system