
Page 1: Jim Little, Laboratory for Computational Intelligence (web.cecs.pdx.edu/~mperkows/CLASS_479/Biometrics/...)

Interaction with an autonomous agent

Jim Little
Laboratory for Computational Intelligence
Computer Science
University of British Columbia
Vancouver BC Canada

Page 2:

Background

When we build systems that interact with the world, with humans, and with other agents, we rely upon all of the aspects of cognitive vision:
- knowledge representation
- descriptions of the scene and its constituent objects
- models of agents and their intentions
- learning
- adaptation to the world and other agents
- reasoning about events and about structures
- interpretation of other agents' and users' interactions
- recognition and categorization

Page 3:

Background (cont.)

I will review the ongoing Robot Partners project at UBC, which focuses on the design and implementation of visually guided collaborative agents, specifically interacting autonomous mobile robots. I will show how
- localization and mapping
- user modeling
- interpretation of gestures and actions
- interaction with human agents

are achieved in the context of the project.

Page 4:

Cognitive Vision for Agents

Robot Partners: Visually Guided Multi-agent Systems

Jim Little
Laboratory for Computational Intelligence
Computer Science
University of British Columbia
Vancouver BC Canada

Page 5:

Personnel

- Jim Little (Project Leader), CS UBC
- James J. Clark, ECE McGill University
- Nando de Freitas, CS UBC
- David G. Lowe, CS UBC
- Alan K. Mackworth, CS UBC
- Dinesh K. Pai, CS UBC
- Stephen Se, MD Robotics

Page 6:

Research Focus

To develop techniques for the specification, design and implementation of collaborative robotic systems, based on rich interaction between humans and sensor-based robotic systems.

New tools:
- stochastic constraint-based design
- real-time systems for vision
- control with event-based interaction with perception
- stereo-based shape and appearance models
- robot-human communication based on gestures, sounds, and spoken commands
- precise visually guided localization for mobile agents

Page 7:

Benefits

- Theories and tools for modelling, developing and verifying constraint-based controllers for stochastic agent environments
- Multi-agent algorithms for collaboration and competition in e-games with mixed initiative (human/robot) control
- Location recognition capability for mobile agents
- Theories and implementations of human/robot communication through gestures and speech
- Mobile robot collaboration on mapping and model building
- Distributed surveillance and monitoring

Page 8:

Applications

Autonomous and semi-autonomous systems that assist and partner with humans:
- warehouse and inventory control systems
- construction systems: teams of vehicles for surveying, collecting, and retrieval
- office assistants
- mapping and modeling; surveillance and monitoring
- remote agents for telepresence in meetings, tours and lectures

Our methods and technologies, such as pose estimation, tracking, object/world modeling, localization, and model learning, have direct application in all embedded systems, as well as space applications, including monitoring activity and autonomous and semi-autonomous exploration.

Page 9:

José: Autonomous service

Robot Control Architecture:
- Localization
- Navigation: avoiding dynamic obstacles
- People Finding
- Location Decision: where to serve next?
- Food Tray Monitoring
- Face Modeling
- Eric's persona
- Speech Recognition and Synthesis
- Interaction

Page 10:

Control Architecture

Page 11:

SIFT-based localization

Page 12:

3D Map

Red: Features on the Walls

Blue: Features on the Floor

Green: Features on the Ceiling

Page 13:

Global Localization

- Kidnapped robot problem: recognize the robot pose relative to a pre-built map
- Hough Transform matching approach: vote bins for the pose implied by each potential match
- Peak: the pose with the most matches

Measured pose: (70, 300, -40°); estimated pose: (75.8, 295.9, -41.1°)
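The voting scheme can be sketched as follows; the bin sizes, landmark layout, and the `hough_localize` helper are illustrative choices, not the project's actual implementation.

```python
import numpy as np

def hough_localize(map_pts, obs_pts, matches, xy_res=5.0, th_res=np.deg2rad(5)):
    """Vote over (x, y, theta) bins; each candidate landmark match votes for
    every robot pose consistent with it, and the fullest bin wins.
    map_pts: (N, 2) landmarks in the world frame; obs_pts: (M, 2) observed
    landmarks in the robot frame; matches: (map_idx, obs_idx) pairs."""
    votes = {}
    for mi, oi in matches:
        mx, my = map_pts[mi]
        ox, oy = obs_pts[oi]
        for th in np.arange(-np.pi, np.pi, th_res):
            # world = R(th) @ obs + t, so the match implies a translation t
            tx = mx - (np.cos(th) * ox - np.sin(th) * oy)
            ty = my - (np.sin(th) * ox + np.cos(th) * oy)
            key = (round(tx / xy_res), round(ty / xy_res), round(th / th_res))
            votes[key] = votes.get(key, 0) + 1
    bx, by, bth = max(votes, key=votes.get)
    return bx * xy_res, by * xy_res, bth * th_res

# Toy example: three landmarks seen from the slide's pose (70, 300, -40 deg).
true_pose = (70.0, 300.0, np.deg2rad(-40))
world = np.array([[150.0, 320.0], [20.0, 380.0], [90.0, 230.0]])
c, s = np.cos(true_pose[2]), np.sin(true_pose[2])
# transform world landmarks into the robot frame: obs = R^T (world - t)
obs = (world - np.array(true_pose[:2])) @ np.array([[c, -s], [s, c]])
x, y, th = hough_localize(world, obs, [(i, i) for i in range(3)])
```

With correct matches, the bin containing the true pose collects one vote per landmark and dominates the accidental bins.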

Page 14:

Face

[Images of expressions: Angry, Neutral, Surprised, Sad]

Page 15:

Finding People

- Construct an occupancy grid probability map of where people are standing
- Use the map to decide where to serve next
- Detect people using skin color segmentation
- Use stereo data to compute the 3D position of people
- Project locations to the floor plane
- Decrease the probabilities over time because people move around
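The steps above can be sketched as a small simulation; the grid size, decay factor, and detection increment are invented parameters, not the robot's actual values.

```python
import numpy as np

class PersonMap:
    """Floor-plane occupancy grid of person probabilities with decay."""

    def __init__(self, shape=(50, 50), decay=0.95, hit=0.3):
        self.p = np.zeros(shape)   # probability a person occupies each cell
        self.decay = decay         # per-step forgetting factor
        self.hit = hit             # evidence added per detection

    def step(self, detections):
        """detections: list of (row, col) floor cells with detected people."""
        self.p *= self.decay                     # forget stale evidence
        for r, c in detections:
            self.p[r, c] = min(1.0, self.p[r, c] + self.hit)

    def where_to_serve(self):
        """Serve at the most probable person location."""
        return np.unravel_index(np.argmax(self.p), self.p.shape)

m = PersonMap()
for _ in range(5):
    m.step([(10, 20)])   # a person repeatedly detected at one spot
m.step([])               # then a step with no detections: evidence decays
best = m.where_to_serve()
```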

Page 16:

Finding People

[Images: color image → skin regions → probability map]

- Use the occupancy grid probability map to decide where to serve
- Detect people using skin color
- Use stereo data to compute the 3D position of people
- Decrease the probabilities over time because people move

Page 17:

Homer: Human Oriented Messenger Robot

Page 18:

Homer: Human Oriented Messenger Robot

A stereo-vision guided mobile robot for performing human-interactive tasks:
- navigation, localization, map building and obstacle avoidance
- human interaction capacities: person recognition; speech, facial expression and gesture recognition; human dynamics modeling

Its capabilities are:
- modular and independent
- integrated in a consistent and scalable fashion
- controlled by a decision-theoretic planner to model the uncertain effects of the robot's actions

The planner uses factored Markov decision processes, allowing for simple specification of tasks, goals and state spaces.

Task: message delivery.
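A factored-MDP planner is beyond a slide, but the decision-theoretic idea can be illustrated with value iteration on a tiny flat message-delivery MDP; the states, transition probabilities, and rewards below are invented for the example and are not HOMER's actual model.

```python
gamma = 0.9
states = ["no_msg", "has_msg", "delivered"]
actions = ["pick_up", "deliver"]

# P[a][s] -> list of (next_state, probability); invented, slightly noisy dynamics.
P = {
    "pick_up": {"no_msg": [("has_msg", 0.9), ("no_msg", 0.1)],
                "has_msg": [("has_msg", 1.0)],
                "delivered": [("delivered", 1.0)]},
    "deliver": {"no_msg": [("no_msg", 1.0)],
                "has_msg": [("delivered", 0.8), ("has_msg", 0.2)],
                "delivered": [("delivered", 1.0)]},
}

def reward(s, a, s2):
    # reward only for completing the delivery
    return 10.0 if s != "delivered" and s2 == "delivered" else 0.0

def q(V, s, a):
    """Expected discounted return of taking action a in state s."""
    return sum(p * (reward(s, a, s2) + gamma * V[s2]) for s2, p in P[a][s])

V = {s: 0.0 for s in states}
for _ in range(100):  # value iteration to (near) convergence
    V = {s: max(q(V, s, a) for a in actions) for s in states}
policy = {s: max(actions, key=lambda a: q(V, s, a)) for s in states}
```

The resulting policy picks up the message first and then delivers it, because the value of holding the message exceeds the value of not having it.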

Page 19:

Control Architecture

[Block diagram] Major components and their modules:
- Planning: domain specification, utility
- Modelling and task execution: localization, mapping, navigation
- MOBILITY / HRI: speech synthesis, facial expressions, face recognition, people finding
- Supervising: Manager
- Perception and motor control: Robot Server, Image Server

Modules communicate via shared memory and sockets.

Page 20:

Task Organization

[Diagram] PLANNING takes the domain and utility specification plus the current system state (recognized person, location, people dynamics) and produces the optimal action. The MANAGER integrates information and delegates actions; modules (speech, face recognition, people finding, navigation) control the actuators on the robot.

Page 21:

From stereo to maps

[Plot: scaled disparity vs. image column]

Page 22:

Face recognition

[Diagram: a template database holds exemplars for three persons; input images and their skin regions are linked to the most likely exemplar, with match scores, the reported person (or "other"), and the time each person was found.]

Page 23:

HOMER test run

Page 24:

Constraint Nets: theory, tools, applications

- Robert St-Aubin & Mackworth are designing and building Probabilistic Constraint Nets (PCN) for representing uncertainty in robotic systems.
- Song & Mackworth have designed and implemented CNJ, a visual programming environment for Constraint Nets, implemented in Java.
- The system includes a specification and implementation of CNML, an XML environment for Constraint Nets.
- CNJ is now being used as a tool by Pinar Muyan to develop a constraint-based controller for a simple robot soccer player.

Page 25:

CNJ: Robot in pursuit of ball

Page 26:

CNJ: Controlling Rotation/Pan/Tilt

Page 27:

CNJ: Animation of controller

Page 28:

Scale Invariant Feature Transform (SIFT)

Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters.

[Image: SIFT features]
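The scale invariance comes from detecting features as extrema in a difference-of-Gaussians (DoG) scale space. A minimal sketch of that first stage only (not Lowe's full SIFT pipeline; the blur levels and threshold are illustrative choices):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a sampled, normalized kernel."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def dog_keypoints(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.03):
    """Candidate keypoints: local extrema of the DoG stack over space and scale."""
    blurred = [gaussian_blur(img, s) for s in sigmas]
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    kps = []
    for s in range(1, dogs.shape[0] - 1):          # interior scales only
        for i in range(1, img.shape[0] - 1):
            for j in range(1, img.shape[1] - 1):
                patch = dogs[s-1:s+2, i-1:i+2, j-1:j+2]
                v = dogs[s, i, j]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    kps.append((i, j, sigmas[s]))
    return kps

# A single Gaussian blob: its centre should surface as a DoG extremum.
yy, xx = np.mgrid[0:40, 0:40]
img = np.exp(-((xx - 20.0)**2 + (yy - 20.0)**2) / (2 * 2.0**2))
kps = dog_keypoints(img)
```

The full method also localizes keypoints to sub-pixel accuracy, assigns orientations, and builds gradient-histogram descriptors.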

Page 29:

Patchlet Surface Representation

Goal: to properly interpret the uncertainty of stereo measurements in surface reconstruction. The sensor elements considered are local patches in the stereo image that create patchlets. These patchlets are fit to a plane, and the uncertainty of the plane in orientation and position is determined from the stereo 3D points.
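The plane-fitting step can be sketched with a least-squares fit; the real patchlet model also propagates per-point stereo uncertainty into the plane parameters, which this toy version summarizes only as an RMS out-of-plane residual.

```python
import numpy as np

def fit_patchlet(points):
    """Fit a plane to (N, 3) stereo 3D points.
    Returns (centroid, unit normal, RMS out-of-plane residual)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # the plane normal is the right singular vector of least variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    rms = np.sqrt(np.mean((centered @ normal) ** 2))  # out-of-plane spread
    return centroid, normal, rms

# Noisy points near the plane z = 0.5x + 1 (a made-up example).
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] + 1.0 + rng.normal(0, 0.01, size=200)
pts = np.column_stack([xy, z])
c, n, rms = fit_patchlet(pts)
```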

Page 30:

Brightness and Depth

Page 31:

Depth Pixels and Scale

Page 32:

Page 33:

Patchlet Uncertainty

Page 34:

Patchlets

Page 35:

Near

Page 36:

Far

Page 37:

Representation and Recognition of Complex Human Motion

Applications:
- psychological research
- video coding and search
- human-computer interaction

Articulated, non-rigid motion occurs at many scales. The goal is a general representation for any type of motion at any scale; we show that Zernike polynomials are ideal for this purpose.

Page 38:

Optical Flow

[Image sequence: t=0, t=1, t=2]

Page 39:

Example Flows

[Flow fields: n=1 m=1 (affine); n=2 m=2; n=4 m=0 (radial only); n=4 m=2]

Page 40:

Reconstructed Flows

[Flow fields: original flow field; affine flow; first 7 ZPs; first 49 ZPs]
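A sketch of how such reconstructions work, assuming a standard Zernike basis over the unit disk (illustrative, not the paper's exact formulation): each flow component is projected onto low-order Zernike polynomials, and an affine component is exactly spanned by the n ≤ 1 terms.

```python
import numpy as np
from math import factorial

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m on the unit disk; m >= 0 uses cos, m < 0 sin."""
    am = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - am) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + am) // 2 - k)
                * factorial((n - am) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    return R * (np.cos(am * theta) if m >= 0 else np.sin(am * theta))

# Sample the unit disk on a grid.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
disk = rho <= 1.0

u = 0.3 + 0.5 * x - 0.2 * y            # one component of an affine flow
basis = [zernike(n, m, rho, theta)     # Z_0^0, Z_1^-1, Z_1^1: 1, y, x
         for n in range(2) for m in range(-n, n + 1, 2)]

# Least-squares projection onto the basis over the disk, then reconstruct.
A = np.stack([b[disk] for b in basis], axis=1)
coeffs, *_ = np.linalg.lstsq(A, u[disk], rcond=None)
recon = A @ coeffs
err = np.max(np.abs(recon - u[disk]))
```

Higher-order flows (the n=2, n=4 examples above) simply extend the basis to larger n before projecting.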

Page 41:

Facial expression
- no rigid head motion
- 72 subjects, 6 expressions*

*Cohn-Kanade Facial Expression Database

Recognition accuracy:
- Affine (2 ZPs): 71%
- First 7 ZPs: 90%
(267 sequences, 4604 frames)

Page 42:

Lip-Reading
- Tulips1 database
- 12 subjects, 4 words

Recognition accuracy:
- Affine (2 ZPs): 66%
- First 7 ZPs: 76%
- ZPs 2, 4, 8, 9, 10, 14, 22: 79%
(96 sequences, 835 frames)

Page 43:

Multiple Camera Area Surveillance Techniques

Goal: to distinguish "normal" people, objects, and activities from anomalous ones, and alert a security agent in the case of anomalous conditions.

Our current projects are:
- Fast people detection in corridor images, to be used as input for activity recognition.
- Similarity filter development, for knowing when a given scene has been seen before.
- Context-based object and scene recognition algorithms.

Page 44:

People Detection Algorithm

- Uses JPEG-encoded images from a network camera. A large set of image features is computed without the need for complete JPEG decompression.
- A support-vector machine (SVM) classifies image regions into people or non-people regions.
- A related project is developing an FPGA hardware implementation.
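The classification step can be sketched with a linear SVM trained by Pegasos-style subgradient descent; the two-dimensional features and labels below are synthetic stand-ins for the JPEG-domain region features, not the actual system's data or model.

```python
import numpy as np

def train_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Pegasos: stochastic subgradient descent on hinge loss + L2 penalty.
    X: (N, d) feature vectors; y: labels in {-1, +1}. Returns weights w."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)        # Pegasos step-size schedule
            w *= 1.0 - eta * lam         # shrink from the regularizer
            if y[i] * (X[i] @ w) < 1.0:  # inside the margin: hinge step
                w += eta * y[i] * X[i]
    return w

# Synthetic stand-ins for "people" vs. "non-people" region features.
rng = np.random.default_rng(1)
people = rng.normal([2.0, 2.0], 0.4, size=(50, 2))
background = rng.normal([-1.0, -1.0], 0.4, size=(50, 2))
X = np.vstack([people, background])
y = np.array([1] * 50 + [-1] * 50)
w = train_svm(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
```

A production detector would use a margin on richer features (and typically a bias term or kernel); the point here is only the people/non-people decision boundary.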

Page 45:

Statistical Translation for Object Recognition

- A statistical model for learning the probability that a word is associated with an object in a scene.
- These relationships are learned without access to the correct associations between objects and words.
- A Bayesian scheme for automatic weighting of features (e.g., colour, texture, position) improves accuracy by preventing overfitting on irrelevant features.
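In the spirit of machine-translation models for annotation (and not the authors' exact model), EM can learn word-object associations from co-occurrence alone, as in IBM Model 1: each word in an image is softly assigned among the objects present, and the translation table is re-estimated from those soft counts.

```python
import numpy as np

def em_translate(images, n_words, n_objs, iters=50):
    """images: list of (object_list, word_list) pairs with unknown
    correspondence. Returns t with t[o, w] = p(word w | object o)."""
    t = np.full((n_objs, n_words), 1.0 / n_words)
    for _ in range(iters):
        counts = np.zeros_like(t)
        for objs, words in images:
            for w in words:
                # E-step: soft-assign word w among the objects in this image
                p = t[objs, w]
                counts[objs, w] += p / p.sum()
        # M-step: renormalize soft counts into conditional probabilities
        t = counts / counts.sum(axis=1, keepdims=True)
    return t

# Toy data: object 0 co-occurs with word 0, object 1 with word 1, and
# images containing both objects contain both words (order unknown).
data = [([0], [0]), ([1], [1]), ([0, 1], [0, 1]), ([0, 1], [1, 0]), ([0], [0])]
t = em_translate(data, n_words=2, n_objs=2)
```

Even though no image labels the correspondence directly, the singleton images disambiguate the joint ones and the table converges to the correct pairing.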

Page 46:

Contextual Translation

- Poor assumption: all the objects in a scene are independent.
- Our more expressive model takes context into account.
- We use loopy belief propagation to learn the model parameters.
- On our Corel data set (www.cs.ubc.ca/~pcarbo), we achieve almost 50% precision.

Page 47:

Object Recognition for Robots

- Our scheme is not real-time because of the expensive segmentation step.
- BUT: our contextual translation model plus a fast, crude segmentation gives equal or better precision!
- Moreover, object recognition is more precise because the segments tend to be smaller.

Page 48:

FIN