Introduction to Developmental Learning

11 March 2014, [email protected], http://www.oliviergeorgeon.com



Old dream of AI

Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.

Presumably, the child brain is something like a notebook [...]. Rather little mechanism, and lots of blank sheets. [...] Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.

Computing Machinery and Intelligence (Alan Turing, 1950, Mind, a philosophy journal).

Is it even possible?

No?
- Spiritualist vision of consciousness (it would require a soul).
- Causal openness of physical reality (quantum theory).
- Too complex.

Yes?
- Materialist theory of consciousness (Julien Offray de La Mettrie, 1709-1751).
- Consciousness as a computational process (Chalmers 1994): http://consc.net/papers/computation.html

Outline
- Example: demo of developmental learning.
- Theoretical bases: pose the problem; the question of self-programming.
- Exercise: implement your self-programming agent.

Example 1: 6 experiments, 2 results, 10 interactions (values): i1 (5), i2 (-10), i3 (-3), i4 (-3), i5 (-1), i6 (-1), i7 (-1), i8 (-1), i9 (-1), i10 (-1).

The coupling agent/environment offers hierarchical sequential regularities of interactions, for example:
- After i7, attempting i1 or i2 results more likely in i1 than in i2.
- After i9, i3, i1, i8, i4, i7, i1 can often be enacted.
- After i8, the sequence i9, i3, i1 can often be enacted.
- After i8, i8 can often be enacted again.
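First-order regularities of this kind ("after i7, i1 is more likely than i2") can be tracked by simple successor counting. A minimal sketch in Java (the RegularityTracker class and its method names are illustrative, not from the course material):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical tracker of first-order sequential regularities:
// counts how often each interaction follows each other interaction.
public class RegularityTracker {
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    // Record that interaction 'next' was enacted right after 'prev'.
    public void record(String prev, String next) {
        counts.computeIfAbsent(prev, k -> new HashMap<>())
              .merge(next, 1, Integer::sum);
    }

    // Most frequently observed successor of 'prev', or null if unseen.
    public String predict(String prev) {
        Map<String, Integer> successors = counts.get(prev);
        if (successors == null) return null;
        return successors.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }
}
```

After observing an activity trace, predict("i7") returns the interaction that most often followed i7; the hierarchical regularities mentioned above (over whole sequences) would require composing such predictions over sequences rather than single interactions.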

Example 1 (continued), as a robot's interactions:
- Move forward (5) or bump (-10).
- Turn left / right (-3).
- Feel right / front / left (-1).

Theoretical bases
- Philosophy of mind.
- Epistemology (theory of knowledge).
- Developmental psychology.
- Biology (autopoiesis, enaction).
- Neurosciences.

Philosophy: is it possible?
- John Locke (1632-1704): Tabula rasa.
- La Mettrie (1709-1751): Matter can think.
- David Chalmers: A Computational Foundation for the Study of Cognition (1994).
- Daniel Dennett: Consciousness Explained (1991).
- Free will, individual choice, self-motivation, determinism.

Key philosophical ideas for DAI
- Cognition as computation in the broad sense.
- Causal structure. Example: a neural net with chemistry (neurotransmitters, hormones, etc.).

- Determinism does not contradict free will. Do not mistake determinism for predictability. Hervé Zwirn (Les systèmes complexes, 2006).

Epistemology (what can I know?)
- Concept of ontology: the study of the nature of being. Aristotle (384-322 BC). Onto: being; logos: discourse. Discourse on the properties and categories of being.
- Reality as such is unknowable: Immanuel Kant (1724-1804).

Key epistemological ideas for DAI
- Implement a learning mechanism with no ontological assumptions. Agnostic agents (Georgeon 2012). The agent will never know its environment as we see it.
- But with interactional assumptions: predefine the possibilities of interaction between the agent and its environment.
- Leave the agent to construct its own ontology of the environment through its experience of interaction.

Developmental psychology (how can I know?)
- Developmental learning: Jean Piaget (1896-1980).
- Teleology / motivational principles: the individual self-finalizes recursively.
- Do not separate perception and action a priori: notion of the sensorimotor scheme.
- Constructivist epistemology: Jean-Louis Le Moigne (1931-), Ernst von Glasersfeld. Knowledge is an adaptation in the functional sense.

Indicative developmental stages
- Month 4: Bayesian prediction.
- Month 5: Models of hand movement.
- Month 6: Object and face recognition.
- Month 7: Persistence of objects.
- Month 8: Dynamic models of objects.
- Month 9: Tool use (bring a cup to the mouth).
- Month 10: Gesture imitation, crawling.
- Month 11: Walking with the help of an adult.
- Month 15: Walking alone.

Key psychological ideas for DAI
- Think in terms of interactions rather than separating perception and action a priori.
- Focus on an intermediary level of intelligence: semantic cognition (Manzotti & Chella 2012).

Low level: stimulus-response adaptation. Intermediary level: semantic cognition. High level: reasoning and language.

Biology (why know?)
- Autopoiesis (auto: self; poiesis: creation), Maturana (1972). Structural coupling agent/environment. Relational domain (the space of possibilities of interaction).
- Homeostasis: internal state regulation. Self-motivation.
- Theory of enaction: self-creation through interaction with the environment. Enactive Artificial Intelligence (Froese and Ziemke 2009).

Key ideas from biology for DAI
- Constitutive autonomy is necessary for sense-making.
- Evolution of the possibilities of interaction during the system's life. Individuation.

- Design systems capable of programming themselves. The data that is learned is not merely parameter values but executable data.

Neurosciences

- Many levels of analysis.
- A lot of plasticity AND a lot of pre-wiring.
- Connectome of C. elegans: 302 neurons.

- An entirely inborn connectome, rather than one acquired through experience.

Human connectome

http://www.humanconnectomeproject.org

Neurosciences

Examples of mammalian brains

No qualitative rupture: human cognitive functions (e.g., language, reasoning) rely on brain structures that exist in other mammalian brains. (This does not mean there are no innate differences!) The brain serves to organize behaviors in time and space.

Key neuroscience ideas for DAI
- Renounce the hope that it will be simple. Maybe begin at an intermediary level and go down if it does not work?
- Biology can be a source of inspiration: Biologically Inspired Cognitive Architectures.
- Importance of the capacity to internally simulate courses of behavior.

Key ideas of the key ideas
The objective is to learn (discover, organize, and exploit) regularities of interaction in time and space to satisfy innate criteria (survival, curiosity, etc.).

Without pre-encoded ontological knowledge, which allows a kind of constitutive autonomy (self-programming).

Teaser for next course.

Exercise
- Two possible experiences: E = {e1, e2}.
- Two possible results: R = {r1, r2}.
- Four possible interactions: E x R = {i11, i12, i21, i22}.

- Two environments:
  env1: e1 -> r1, e2 -> r2 (i12 and i21 are never enacted).
  env2: e1 -> r2, e2 -> r1 (i11 and i22 are never enacted).
- Motivational systems:
  mot1: v(i11) = v(i12) = 1, v(i21) = v(i22) = -1.
  mot2: v(i11) = v(i12) = -1, v(i21) = v(i22) = 1.
  mot3: v(i11) = v(i21) = 1, v(i12) = v(i22) = -1.
- Implement an agent that learns to enact positive interactions without knowing a priori its motivational system (mot1, mot2, or mot3) or its environment (env1 or env2).
- Write a report of behavioral analysis based on activity traces.

No hard-coded knowledge of the environment! Do not write an agent like this:

class Agent {
    public Experience chooseExperience() {
        if ((env == env1 && mot == mot1) || (env == env2 && mot == mot2))
            return e1;
        else
            return e2;
    }
}

Implementation (skeleton)

public static Experience e1 = new Experience();
public static Experience e2 = new Experience();
public static Result r1 = new Result();
public static Result r2 = new Result();
public static Interaction i11 = new Interaction(e1, r1, 1); // etc.

public static void main(String[] args) {
    Agent agent = new Agent();
    Environment env = new Env1(); // or new Env2();
    Result r = null;
    for (int i = 0; i < 10; i++) {
        Experience e = agent.chooseExperience(r);
        r = env.giveResult(e);
        System.out.println(e + " " + r); // trace: experience, result, value
    }
}

Classes:
- Agent: public Experience chooseExperience(Result r)
- Environment: public Result giveResult(Experience e)
- Env1, Env2 (extend Environment)
- Experience, Result
- Interaction(Experience e, Result r, int value): public int getValue()
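One possible learning strategy for the exercise, sketched below under assumptions of my own (the MinimalAgent class and its method names are illustrative, not the course's solution): try each experience once, record the value of the interaction it yields, then keep choosing the experience with the best recorded value.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal agent: learns which experience yields the
// positive interaction, with no a priori knowledge of its
// environment (env1/env2) or motivational system (mot1/mot2/mot3).
public class MinimalAgent {
    private final Map<String, Integer> learnedValue = new HashMap<>();
    private String lastExperience;

    // Try untried experiences first, then exploit the best-valued one.
    public String chooseExperience() {
        for (String e : new String[]{"e1", "e2"}) {
            if (!learnedValue.containsKey(e)) {
                lastExperience = e;
                return e;
            }
        }
        lastExperience =
            learnedValue.get("e1") >= learnedValue.get("e2") ? "e1" : "e2";
        return lastExperience;
    }

    // Record the value of the interaction that was actually enacted.
    public void receiveValue(int value) {
        learnedValue.put(lastExperience, value);
    }
}
```

The activity trace of such an agent shows one exploration step per experience followed by stable enaction of the positive interaction, which is the behavior the report should analyze.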
