


Modeling Cognitive Development with Robots
Henny Admoni ([email protected]), Yale University

Joint Attention: Robot gaze does not cue reflexive human attention

Intention from Motion: Ecologically validating psychophysics results on perception of chasing from motion

Theory of Mind: How can a robot identify beliefs, desires and intentions of others?

Background
● Previous studies show that humans reflexively shift their attention in the direction of another person's gaze, even when they are motivated to attend in the opposite direction
● Non-social directional symbols, like arrows, elicit weaker attention shifts: people can overcome the directional cue with conscious motivation to look elsewhere
● We asked: Will robots be treated like faces or arrows? Will a robot's gaze cause reflexive attention shifts or be subject to volitional control?

Results and Conclusions
● Human-robot and human-human interactions may appear similar on a gross behavioral level, but low-level differences exist
● Participants recognized the directional significance of all stimuli, but only responded to the cueing significance of non-robot stimuli

● RTs were significantly faster for predicted than for NPNC trials for all stimuli

● RTs were significantly faster for predicted than for cued trials only for robot stimuli

Stimulus image sequence for a single trial of the Keepon stimulus, predicted condition.

Three types of trials were presented: cued (probe and gaze are congruent), predicted (probe location is opposite to gaze direction) and not-predicted-not-cued or NPNC (probe is on a different axis than gaze). Percentages indicate probability of occurrence.
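The three trial types above can be sketched as a sampling procedure. The 75% figure for predicted trials comes from the counterpredictive design described in the Method; how the remaining 25% splits between cued and NPNC trials is an assumption for illustration, and all function and variable names here are hypothetical, not the study's actual code.

```python
import random

GAZE_DIRECTIONS = ["up", "down", "left", "right"]
OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}
ORTHOGONAL = {"up": ["left", "right"], "down": ["left", "right"],
              "left": ["up", "down"], "right": ["up", "down"]}

def sample_trial(rng=random):
    """Sample one counterpredictive-cueing trial: (gaze, probe, type).

    75% of trials are 'predicted' (probe opposite gaze); the even split
    of the remaining 25% between 'cued' and 'NPNC' is assumed here.
    """
    gaze = rng.choice(GAZE_DIRECTIONS)
    r = rng.random()
    if r < 0.75:                        # predicted: probe opposite gaze
        return gaze, OPPOSITE[gaze], "predicted"
    elif r < 0.875:                     # cued: probe where gaze points
        return gaze, gaze, "cued"
    else:                               # NPNC: probe on the other axis
        return gaze, rng.choice(ORTHOGONAL[gaze]), "NPNC"
```

Because the probe usually appears opposite the gaze, a participant who can exert volitional control over attention should come to expect (and respond fastest to) the predicted location.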

Mean response times in milliseconds for each trial type and stimulus condition. A single asterisk indicates significant differences (p < 0.05), a double asterisk indicates borderline significant differences (p < 0.10).

Background
● Humans easily perceive animacy of simple moving shapes, and will attribute intentionality to animated figures based on low-level cues such as position over time
● Perceived animacy is difficult to quantify
● Gao et al. (2009) quantitatively determined conditions under which people recognize chasing from motion information alone
● Such studies are based on computer displays and not real-world simulations

Method
● For each trial, participants view a 30-second scene in which four identical iRobot Create robots move in seemingly random trajectories
● In some trials, one robot is chasing a second robot
● Participants identify whether they saw chasing, and if so, which robot was the chaser ('wolf') and which was the chasee ('sheep')
● Chasing subtlety is varied systematically, from 0 to 120 degrees of offset from perfect heat-seeking

Reproduction of the stimulus from Heider and Simmel (1944), which demonstrated people's proclivity toward identifying animacy from motion (from Gao et al., 2009)

Expected Contributions
● Discover whether on-screen findings generalize to real-world situations; discover which features are required for the appearance of chasing
● Utilize robots as a programmable, repeatable, embodied experiment platform

Chasing subtlety is the range of heading angles a wolf can have with respect to a sheep at any given time step. For a chasing subtlety of 30 degrees, for example, the wolf has a randomly selected heading angle that is offset at most 30 degrees from a line drawn directly to the sheep (from Gao et al., 2009).
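The subtlety manipulation can be written down directly: at each time step the wolf's heading is drawn uniformly from a cone of the given half-angle around the straight line to the sheep. This is a minimal sketch of the idea, not the actual robot controller; the function name and interface are invented for illustration.

```python
import math
import random

def wolf_heading(wolf_xy, sheep_xy, subtlety_deg, rng=random):
    """Heading angle (radians) for the wolf at one time step.

    subtlety_deg is the maximum offset from a direct line to the sheep:
    0 gives perfect heat-seeking; larger values give subtler chasing.
    """
    dx = sheep_xy[0] - wolf_xy[0]
    dy = sheep_xy[1] - wolf_xy[1]
    direct = math.atan2(dy, dx)                  # straight at the sheep
    offset = math.radians(rng.uniform(-subtlety_deg, subtlety_deg))
    return direct + offset
```

With subtlety 0 the wolf always heads straight at the sheep; at large subtlety values the chase becomes statistically biased toward the sheep but hard to detect from any single movement.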

Background
● To have a theory of mind is to impute mental states to oneself and others; this mechanism has been proposed as a necessary component of empathetic social interactions
● Robots interacting with people may benefit from the ability to theorize about the mental states of others, including their beliefs, desires and intentions
● My research focuses on pointing and joint referencing as a social catalyst

Current Publications
● Henny Admoni, Caroline Bank, Joshua Tan, Mariya Toneva and Brian Scassellati. 2011. Robot gaze does not reflexively cue human attention. In Proc. of the 33rd Annual Conference of the Cognitive Science Society (CogSci 2011) (to appear).

Citations
● Simon Baron-Cohen, Alan Leslie and Uta Frith. 1985. Does the autistic child have a “theory of mind”? Cognition 21:37-46.
● Marek Doniec, Ganghua Sun and Brian Scassellati. 2006. Active learning of joint attention. In Proc. of Humanoid Robotics, pp. 34-39.
● Tao Gao, George Newman and Brian Scholl. 2009. The psychophysics of chasing: A case study in the perception of animacy. Cognitive Psychology 59:154-179.
● Yukie Nagai, Koh Hosoda, Akio Morita and Minoru Asada. 2003. A constructive model for the development of joint attention. Connection Science 15(4):211-229.

Expected Contributions
● Understand what processes and features are involved in recognizing others' mental states
● Design a computational system that can recognize goals and desires of other people, predict others' reactions to social events, and modify its own behavior accordingly
● Examine interactions between humans and robots that respond to their perceived beliefs, desires and intentions

The Sally and Anne test reveals whether someone is able to identify others' mental states as different from their own (from Baron-Cohen et al., 1985).
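The logic of the test can be captured in a few lines: an agent's belief about an object's location is updated only by events the agent witnesses, so it can diverge from the true world state. This is a toy sketch of false-belief tracking; the class and location names are invented for illustration and are not a model from the cited work.

```python
class Agent:
    """Tracks one agent's belief about where an object is."""
    def __init__(self, name):
        self.name = name
        self.belief = None   # where this agent thinks the object is

def move_object(world, new_location, witnesses):
    """Move the object; only witnessing agents update their belief."""
    world["object"] = new_location
    for agent in witnesses:
        agent.belief = new_location

# Sally puts the marble in the basket; both agents see it.
world = {}
sally, anne = Agent("Sally"), Agent("Anne")
move_object(world, "basket", [sally, anne])

# Sally leaves; Anne moves the marble to the box unseen.
move_object(world, "box", [anne])

# Sally will look where she believes the marble is: the basket.
assert sally.belief == "basket" and world["object"] == "box"
```

Passing the test corresponds to predicting Sally's search from `sally.belief` rather than from `world["object"]`; a system that only represents the world state fails it.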

Challenges
● The process of understanding others' mental states is not well known, though it likely involves the integration of numerous perceptual and cognitive tasks
● Some studies assert that robots are accepted by humans in social situations, but these results are not exhaustive

A robot learns to follow the gaze of a caregiver toward an object of reference that is initially outside the robot's field of view (from Nagai et al., 2003).

A robot directs a learning exercise in order to learn joint reference behaviors from a caregiver (from Doniec et al., 2006).

Inspiration

Method
● Stimulus image sequence: front-facing face (human, robot or arrow), followed by turned face (up, down, left or right), followed by probe letter
● Counterpredictive cueing task: probe letter appears opposite stimulus gaze in 75% of trials
● Measured response time (RT): participants pressed key of the probe letter quickly and accurately
● Response time correlates to attention: reflexive shifts indicated by RT differences between cued and NPNC trials; volitional control indicated by RT differences between cued and predicted trials
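The attentional interpretation in the last bullet reduces to two mean-RT contrasts, sketched below with hypothetical numbers (a real analysis would use the paper's statistical tests, not raw mean differences; the function name and data are illustrative only).

```python
from statistics import mean

def attention_contrasts(rts):
    """rts: dict mapping trial type -> list of response times (ms).

    Reflexive shift: cued RTs faster than NPNC RTs (positive value).
    Volitional control: predicted RTs faster than cued RTs (positive value).
    """
    reflexive = mean(rts["NPNC"]) - mean(rts["cued"])
    volitional = mean(rts["cued"]) - mean(rts["predicted"])
    return {"reflexive_shift_ms": reflexive,
            "volitional_control_ms": volitional}

# Hypothetical data: predicted faster than cued suggests volitional control.
example = {"cued": [520, 530], "predicted": [480, 490], "NPNC": [540, 550]}
contrasts = attention_contrasts(example)
```

A positive reflexive contrast with a near-zero volitional contrast would look face-like; the reverse pattern, as reported for the robot stimuli, looks arrow-like.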

Four agents are shown in each trial, both in the computer-based animation (left) and the real-world robotic simulation (right) (left image from Gao et al, 2009).

Funded by NSF grant for Social-Computational Systems (#0968538)
Co-PIs: Brian Scassellati and Brian Scholl

● We ask: Can we ecologically validate chasing recognition results, as a first step toward validating psychophysics literature on perceived animacy?