Artificial Agents Without Ontological Access to Reality


TRANSCRIPT

Page 1: Artificial Agents Without Ontological Access to Reality

Artificial Agents Without Ontological Access to Reality

Olivier Georgeon
http://liris.cnrs.fr/ideal/mooc

Page 2: Artificial Agents Without Ontological Access to Reality

Definitions

• Ontology: "onto" (to be) + "logos" (discourse):
– A discourse on what "is".

• Agent without ontological access to reality:
– An agent that doesn't have access to what "is" in reality.
– An agent whose input data is NOT a representation of reality.

• We do not consider input data to be the agent's perception…
• … so what should input data be considered to be?


Page 3: Artificial Agents Without Ontological Access to Reality

Mainstream philosophy/epistemology

• Philosophy
– Kant: cognitive agents don't have access to reality "as such" (noumenal reality).
• Psychology
– Findlay & Gilchrist (2003), Active Vision.
• Cognitive science
– "Perception and action arise together, dialectically forming each other" (Clancey, 1992, p. 5).
• Constructivist epistemology
– Piaget: perception and action are inseparable (sensorimotor schemes).
• Even quantum physics?
– It predicts the results of experiments without assuming an objective state of reality (talking about the state of Schrödinger's cat makes no sense).


Page 4: Artificial Agents Without Ontological Access to Reality

Scope of this presentation

• Philosophical prerequisite:
– Cognitive agents have no access to reality "as such".
• Our claim:
– Most BICAs and machine learning algorithms have not yet acknowledged this philosophy!
• Content of the presentation:
– How can we implement this philosophy in BICAs?
– What will we gain and lose by doing so?


Page 5: Artificial Agents Without Ontological Access to Reality

The interaction cycle

[Figure: Agent ↔ Environment, coupled through input data and output data.]

The agent interacts with the environment by receiving input data and sending output data. When does the interaction cycle begin and end?
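To make the question concrete, here is a minimal runnable sketch of the coupling (Environment, Agent, emit, and absorb are illustrative names, not from the presentation). Note that the loop itself has no privileged starting point; where the cycle "begins" is a modeling choice.

```python
class Environment:
    """Toy environment: a counter that the agent's output increments."""
    def __init__(self):
        self.state = 0

    def emit(self):
        # Produce input data for the agent (here, the counter's parity).
        return self.state % 2

    def absorb(self, output_data):
        # Apply the agent's output data to the environment.
        self.state += output_data


class Agent:
    def decide(self, input_data):
        # Map input data to output data (a fixed toy policy).
        return 1 if input_data == 0 else 0


env, agent = Environment(), Agent()
for _ in range(10):                    # one interaction cycle per pass
    data_in = env.emit()               # the agent receives input data...
    env.absorb(agent.decide(data_in))  # ...and sends output data back
```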


Page 6: Artificial Agents Without Ontological Access to Reality

Symbolic modeling

[Figure: the Agent, equipped with semantic rules, receives a symbol from Reality and sends back an action.]

The agent receives a symbol that matches semantic rules. There is a predefined "discourse on what is" (the set of symbols and semantic rules), and the agent has access to it. The agent is a passive observer of reality. The cycle begins with the agent receiving input data and ends with the agent sending output data.
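A minimal sketch of this structure, with hypothetical symbols and rules; the point is only that the "discourse on what is" is fixed in advance and the agent reads it directly.

```python
# The "discourse on what is": a predefined symbol set and semantic
# rules, which the agent has direct access to.
SEMANTIC_RULES = {
    "wall_ahead": "turn",      # symbol -> action, fixed in advance
    "path_clear": "advance",
}

def symbolic_agent(symbol):
    # The input symbol is taken to represent reality; the agent is a
    # passive observer that looks the symbol up in predefined rules.
    return SEMANTIC_RULES[symbol]

print(symbolic_agent("wall_ahead"))   # -> "turn"
```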


Page 7: Artificial Agents Without Ontological Access to Reality

Reinforcement learning

[Figure: Reality is in state st ∈ S; the Agent sends an action at ∈ A and receives an observation ot = f(st) ∈ O and a reward rt = r(st) ∈ ℝ.]

There is a predefined "discourse on what is" (the set S). Most reinforcement learning algorithms assume that the observation represents the state of reality (partially and with noise). The agent is a passive observer of reality. The cycle begins with the agent receiving input data and ends with the agent sending output data.
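The same structure in code, on a toy two-state world (all names illustrative). The built-in assumption to notice: both the observation and the reward are functions of the state st alone.

```python
import random

S = [0, 1]                      # states of reality
A = ["stay", "switch"]          # actions

def f(s):                       # observation function: ot = f(st)
    return s                    # here, fully observed

def r(s):                       # reward function: rt = r(st)
    return 1.0 if s == 1 else 0.0

def step(s, a):                 # reality's transition
    return 1 - s if a == "switch" else s

s = random.choice(S)
for t in range(5):
    o, reward = f(s), r(s)                     # cycle begins with input
    a = "switch" if reward == 0.0 else "stay"  # a trivial policy
    s = step(s, a)                             # cycle ends with output
```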


Page 8: Artificial Agents Without Ontological Access to Reality

Experiment / Result cycle

[Figure: the Agent sends an experiment xt ∈ X to Reality and receives a result rt ∈ R.]

The cycle begins with the agent sending output data and ends with the agent receiving input data. The agent is an active observer of reality (the embodiment paradigm).

We can't assume that the input data represents the state of reality: it may not! Most BICAs and machine learning algorithms fail to generate interesting behaviors.

In a given state of reality, rt varies depending on xt.
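A toy sketch of this inversion (illustrative names): the agent commits to an experiment first, and the result it gets back depends on the experiment as well as on the hidden state.

```python
X = ["feel_left", "move_forward"]   # experiments
R = ["r1", "r2"]                    # results

def enact(state, x):
    # Reality answers the experiment: rt is a function of st AND xt.
    if x == "feel_left":
        return "r2" if state["wall_left"] else "r1"
    if x == "move_forward":
        return "r2" if state["wall_front"] else "r1"

state = {"wall_left": True, "wall_front": False}
print(enact(state, "feel_left"))     # r2: same state...
print(enact(state, "move_forward"))  # r1: ...different result
```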


Page 9: Artificial Agents Without Ontological Access to Reality

Comparison  


[Figure: a) Traditional model and b) Embodied model, each showing the Agent coupled with Reality; the two loops differ only in where the cycle is taken to begin.]

a) and b) are mathematically equivalent, but:
– a) highlights the common assumption that input data represents reality.
– b) highlights that this assumption may be wrong.


Page 10: Artificial Agents Without Ontological Access to Reality

Agents Without Ontological Access (AWOAs) are "indie" computer science

• Artificial intelligence (Russell & Norvig 2010, p. iv):
– "The problem of AI is to build agents that receive percepts from the environment and perform actions."
– Revised: the problem of AI is to build agents that receive data (that may not be percepts) from the environment and make decisions (that may not be actions).
• Reinforcement learning (Sutton & Barto 1998, p. 4):
– "Clearly, such an agent must be able to sense the state of the environment to some extent and must be able to take actions that affect the state. The agent also must have goals relating to the state of the environment."
– Revised: the agent must have preferences (drives) that may not relate to the state of the environment "as such".
• AWOAs relate to other "indie" approaches to AI:
– Enaction, embodied cognition, developmental learning, multi-agent systems, etc.
• AWOAs differ from traditional AI by design rather than by technique:
– All techniques can be used in both ways (rule-based systems, connectionist approaches, multi-agent systems, reinforcement learning techniques, etc.).


Page 11: Artificial Agents Without Ontological Access to Reality

Example

Set E of 6 experiments. Set R of 2 results: 0 or 1. Set I = E × R of 12 interactions (with valence).

[Figure: the twelve interactions with their valences: (−3), (−3), (−1), (−1), (5), (−10), and (−1) for the remaining interactions.]

The agent/environment coupling affords hierarchical regularities of interaction, e.g.:
– After [an interaction], experiment [x] results more likely in [one result] than in [the other].
– After [a pair of interactions], sequence [of interactions] can often be enacted.
– After [an interaction], sequence [of interactions] can often be enacted.


Page 12: Artificial Agents Without Ontological Access to Reality

The "Little loop problem"


[Figure: the little-loop grid world, with icons for the "bump" and "touch" results.]

Move forward (5) or bump (−10)
Turn left / right (−3)
Feel right / front / left (−1)
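Written out as data, the interaction set looks like this (the valences are from the slide; which result label counts as success for each experiment is an assumption of the sketch):

```python
E = ["move_forward", "turn_left", "turn_right",
     "feel_front", "feel_left", "feel_right"]      # 6 experiments
R = ["r1", "r2"]                                   # 2 results

VALENCE = {
    ("move_forward", "r1"): 5,    # stepped forward
    ("move_forward", "r2"): -10,  # bumped
}
for e in E[1:3]:                  # turning: -3 either way
    VALENCE[(e, "r1")] = VALENCE[(e, "r2")] = -3
for e in E[3:]:                   # feeling: -1 either way
    VALENCE[(e, "r1")] = VALENCE[(e, "r2")] = -1

I = {(e, r): VALENCE[(e, r)] for e in E for r in R}  # the 12 interactions
assert len(I) == 12
```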


Page 13: Artificial Agents Without Ontological Access to Reality

Hierarchical bottom-up sequence learning

[Figure: the decision cycle unfolding over time, with numbered steps including 3. Activate, 4. Afford, 5. Propose, 6. Choose (at decision time), and 7. Enact.]
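A drastically simplified sketch of one such cycle, in my own formulation rather than the presentation's algorithm: learned two-step sequences are activated by the last enacted interaction, propose the interaction they afford, and the best weighted proposal is chosen for enaction.

```python
import random

def choose(sequences, last_enacted, valence):
    proposals = {}
    for (pre, post), count in sequences.items():
        if pre == last_enacted:                  # activate
            score = count * valence[post]        # afford + propose
            proposals[post] = proposals.get(post, 0) + score
    if not proposals:                            # nothing activated:
        return random.choice(list(valence))      # explore
    return max(proposals, key=proposals.get)     # choose

# Usage: sequences counts pairs of consecutively enacted interactions.
valence = {"feel": -1, "move": 5, "bump": -10}
sequences = {("feel", "move"): 3, ("feel", "bump"): 1}
print(choose(sequences, "feel", valence))        # -> "move"
```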


Page 14: Artificial Agents Without Ontological Access to Reality

Interactional model

[Figure: c) Experiment/Result model: the Agent sends an experiment x ∈ X to Reality and receives a result r ∈ R. d) Interactional model: the Agent sends an intended interaction i = ⟨x,r⟩ ∈ X×R and receives an enacted interaction e = ⟨x,r′⟩ ∈ X×R.]

Embodied models: the agent must use the active capacity of its body to make experiments in order to learn about reality.
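The interactional vocabulary fits in a few lines (illustrative names): the agent intends ⟨x,r⟩, reality settles ⟨x,r′⟩, and the match or mismatch between the two is what drives learning.

```python
def try_to_enact(intended, reality):
    x, r = intended               # intended interaction i = <x, r>
    r_prime = reality(x)          # the body actively runs the experiment
    e = (x, r_prime)              # enacted interaction e = <x, r'>
    return e, r_prime == r        # True if the intention succeeded

# Usage: a reality in which feeling the front always returns "r2".
e, success = try_to_enact(("feel_front", "r1"), lambda x: "r2")
print(e, success)                 # ('feel_front', 'r2') False
```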


Page 15: Artificial Agents Without Ontological Access to Reality

Recursive learning and self-programming

[Figure: the agent's decisional mechanism exchanges intended composite interactions icd ∈ Cd and enacted composite interactions ecd ∈ Cd with the environment "known" at time td, while primitive intended interactions ipj ∈ I and enacted interactions epj ∈ I (ip1, ep1, …) are exchanged with the environment itself.]
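A minimal sketch of the recursive idea, in my own formulation: a sequence of two interactions is recorded as a composite that can later be intended as if it were primitive, so learned behavior becomes a new unit of decision.

```python
def compose(i1, i2, valence):
    composite = (i1, i2)                    # <pre, post> interaction
    valence[composite] = valence[i1] + valence[i2]
    return composite

valence = {"feel": -1, "move": 5}
feel_move = compose("feel", "move", valence)      # level 2, valence 4
routine = compose(feel_move, feel_move, valence)  # level 3, valence 8
print(valence[routine])                           # -> 8
```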


Page 16: Artificial Agents Without Ontological Access to Reality

Activity analysis

[Figure: activity traces over steps 1–400. The regularity "touch front – move forward" appears at step 74; "touch left – turn left – move forward" appears at step 186.]


Page 17: Artificial Agents Without Ontological Access to Reality

e-puck robot (it is robust to noise!)


Page 18: Artificial Agents Without Ontological Access to Reality

It allows training


Page 19: Artificial Agents Without Ontological Access to Reality

Rudimentary distal perception

[Figure: the visual sensing field, divided into areas A, B, and C.]

Detects the relative displacement of objects and their approximate direction within a 180° span (area A, B, or C). "Likes" objects getting closer. "Dislikes" objects disappearing.


Page 20: Artificial Agents Without Ontological Access to Reality

Self-programming


Page 21: Artificial Agents Without Ontological Access to Reality

No free lunch for machine learning

• It does not violate the "no free lunch" theorem:
– Wolpert, D.H., & Macready, W.G. (1997).
• What we lose:
– It does not learn to reach predefined goal states:
• e.g., win at chess.
• What we gain:
– It learns hierarchical, satisfying habits much faster.
– Practical applications whenever we need systems that learn habits:
• e.g., home automation, software adaptation, end-user programming…
– Robots that interact with the real world (without a predefined model).
– Theoretical applications:
• It opens the way to higher-level cognition (if we trust Kant, Piaget, etc.).


Page 22: Artificial Agents Without Ontological Access to Reality

AImergence  

http://www.oliviergeorgeon.com/aimergence


Page 23: Artificial Agents Without Ontological Access to Reality

Non-Markov Reinforcement Learning

[Figure: average valence rising from about −15 to 2.5 over steps 40–840, across Stage 1, Stage 2, and Stage 3.]

Sequences o1·a1·…·on·an·on+1, where o1, …, on+1 ∈ O and a1, …, an ∈ A.
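One standard way to handle such sequences (a generic technique, not necessarily the slide's algorithm) is to key value estimates on bounded histories rather than on the last observation alone:

```python
from collections import defaultdict

Q = defaultdict(float)                    # Q[(history, action)]

def update(history, action, reward, alpha=0.1):
    # history is a tuple (o1, a1, ..., o_n, a_n, o_{n+1})
    key = (tuple(history), action)
    Q[key] += alpha * (reward - Q[key])   # running average of reward

update(("o1", "a1", "o2"), "a2", reward=2.5)
print(Q[(("o1", "a1", "o2"), "a2")])      # -> 0.25
```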


Page 24: Artificial Agents Without Ontological Access to Reality

[Figure: phenomena constructed over time (steps 1–70): a blue phenomenon, a white phenomenon, a Level 3 structure, and segments still unknown (marked "?").]

Page 25: Artificial Agents Without Ontological Access to Reality

Experiential model

[Figure: d) Interactional model: the Agent sends an intended interaction i = ⟨x,r⟩ ∈ X×R to Reality and receives an enacted interaction e = ⟨x,r′⟩ ∈ X×R. e) Experiential model: the Agent exchanges intended experiences and enacted experiences, both drawn from a set E.]

It is a radical inversion of our viewpoint on artificial agents: we focus on the agent's stream of phenomenological experience.


Page 26: Artificial Agents Without Ontological Access to Reality

Spatial coupling

[Figure: the Agent sends intended experiences I ⊂ Σ to Reality and receives enacted experiences E ⊂ Σ together with a spatial displacement τ. Σ is the set of experiences with spatial attributes.]
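A sketch under strong assumptions (2-D egocentric positions, τ decomposed into a rotation and a translation; none of this detail is in the slide): the displacement returned by each enaction updates the positions that spatial memory keeps for past experiences.

```python
import math

def tau(position, theta, t):
    """Re-express a remembered point in the agent's new egocentric
    frame after the agent translated by t and then rotated by theta."""
    x, y = position[0] - t[0], position[1] - t[1]   # undo translation
    c, s = math.cos(-theta), math.sin(-theta)       # undo rotation
    return (c * x - s * y, s * x + c * y)

# Usage: after a 90-degree left turn, a point straight ahead is now
# on the agent's right.
print(tau((1.0, 0.0), math.pi / 2, (0.0, 0.0)))     # ~ (0.0, -1.0)
```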


Page 27: Artificial Agents Without Ontological Access to Reality

[Figure: the agent architecture (AGENT), with components Interaction Timeline, Egocentric Spatial Memory, Hierarchical Sequential System, Behavior Selection, and a constructed Ontology; the labeled processes are Learn / Track, Propose, Evoke, Construct, Intend, and Enact.]


Page 28: Artificial Agents Without Ontological Access to Reality

Spatial Little Loop Problem


Page 29: Artificial Agents Without Ontological Access to Reality

Dynamic environment


Page 30: Artificial Agents Without Ontological Access to Reality

Robotics research

[Figure: robot platform with a bumper tactile sensor, a panoramic camera, and a ground optic sensor.]

http://liris.cnrs.fr/simon.gay/index.php?page=eirl&lang=en

Page 31: Artificial Agents Without Ontological Access to Reality

Conclusion: a research approach

• The theory of artificial Agents Without Ontological Access to reality (AWOA) is under development.
• We design embodied models that focus on the agent's stream of phenomenological experience.
• We validate the agents through behavioral analysis rather than through performance measures.
• Create animal-level intelligence before human-level intelligence.
– An "animal-level Turing test" based on behavioral analysis?
• We (as a community) must define criteria of intelligent behavior.
• Incremental approach: imagine increasingly difficult experiments and design smarter agents in parallel.
– (the AImergence game).
