

Corresponding author: Kun Qian E-mail: [email protected]

Journal of Bionic Engineering 7 (2010) 150–160

Robotic Etiquette: Socially Acceptable Navigation of Service Robots with Human Motion Pattern Learning and Prediction

Kun Qian, Xudong Ma, Xianzhong Dai, Fang Fang

Key Lab of Measurement and Control of Complex Systems of Engineering (Ministry of Education, China), Southeast University, Nanjing 210096, P. R. China

Abstract

Nonverbal and noncontact behaviors play a significant role in allowing service robots to structure their interactions with humans. In this paper, a novel human-mimic mechanism of robot's navigational skills was proposed for developing socially acceptable robotic etiquette. Based on the sociological and physiological concerns of interpersonal interactions in movement, several criteria in navigation were represented by constraints and incorporated into a unified probabilistic cost grid for safe motion planning and control, followed by an emphasis on the prediction of the human's movement for adjusting the robot's pre-collision navigational strategy. The human motion prediction utilizes a clustering-based algorithm for modeling humans' indoor motion patterns as well as the combination of long-term and short-term tendency prediction that takes into account the uncertainties of both velocity and heading direction. Both simulation and real-world experiments verified the effectiveness and reliability of the method to ensure humans' safety and comfort in navigation. A statistical user trials study was also given to validate the users' favorable views of the human-friendly navigational behavior.

Keywords: robotic etiquette, navigation, human motion prediction, human-robot interaction, service robot

Copyright © 2010, Jilin University. Published by Elsevier Limited and Science Press. All rights reserved.
doi: 10.1016/S1672-6529(09)60199-2

1 Introduction

As service robots are being designed to provide interactive tasks in domestic and office environments, it is vitally important that they behave in a human-friendly manner. Although a broad range of work on humanoid robots[1,2] has aimed to mimic humans in shape and imitate their actions[3], the main emphasis of this paper is to exploit human-like skills of mobile service robots that are able to provide safe and polite navigational behaviors which humans find comfortable.

Studies in human-robot interaction provide evidence that humans respond to certain social characteristics, features or behaviors exhibited by robots[4–6]. In particular, humans prefer approach distances towards robots[7] which are comparable in some ways to those found for human-human social distances[8], but they are likely to mind very close approaches by a robot, because such behaviors are perceived as over-familiar or threatening[9]. One explanation is that most users of service robots come from non-technological fields and tend to be introverted when facing robots.

Typical navigational tasks that account for social conventions have been developed to imitate humans' natural behaviors in movement, including passing people[10], standing in line[11], approaching people to join their conversation[12] and people-following[13]. In the context of robot navigation, traditional obstacle-avoidance algorithms can hardly secure both physical safety and mental comfort with respect to humans. In contrast, human-aware motion planners treat people as social entities and aim to endow robots with human-like navigational behaviors. Sisbot et al.[14] addressed "user friendliness" through experimental studies that determined how people prefer to be approached by a robot, in terms of both distance and direction. However, humans were taken as static rather than moving entities in their paper, which limits the prediction of human motion, and how the social factors are quantitatively computed was not explicitly presented.

The human avoidance problem cannot be treated in a strictly local and reactive manner if robots are to move efficiently in a human-like way. Predicting human motion is an effective means of achieving compliant robot navigation. Foka and Trahanias[15] proposed combining the prediction of the one-step-ahead position and the final destination of a moving obstacle. However, their method is based on pure simulation and assumes that the moving obstacles and the robot move with the same constant velocity, which does not properly reflect real-world situations. There has been some research on modeling object motion routes, but little of it supports behavior prediction or has been applied to robotics. Johnson and Hogg[16] proposed a competitive neural network based approach to model the probability density function of the flow vectors extracted from video tracking results. Makris and Ellis[17] developed a method for learning entry/exit zones and routes from trajectory samples, but only spatial information was used for trajectory clustering and anomaly detection. Junejo et al.[18] applied graph cuts to cluster trajectories using the Hausdorff distance to compare different trajectories, but the method omits sequential information, so it does not distinguish between objects following the same route but heading in opposite directions. Learning motion patterns requires exploiting the spatial-temporal nature of motion. Hu et al.[19] proposed a system that clusters motion patterns with a chain of Gaussian distributions and detects anomalies with a statistical method. However, their approach is only applied to traffic surveillance image sequences and no real-time prediction application is given. Bennewitz et al.[20] proposed learning human motion patterns from tracking data and predicting people's motion tendencies so that robots can avoid possible conflicts by detouring. The limitation of that method is that the human's motion uncertainties along the motion pattern are not considered, which is an important factor since humans' variations in velocity and orientation affect the estimated motion.

In this paper, a mechanism of polite robotic behavior in navigational tasks was proposed. Several social conventions derived from human experience in interpersonal motion activities were computed as etiquette constraints and integrated into a unified traversability cost grid at the navigational level of the robot for motion planning and control. The most important etiquette was adjusting the robot's navigational behavior in advance to prevent possible conflicts with people. A novel method named "predictive navigation" was proposed, which differs from most previous work in the following ways. Firstly, the modeling of humans' typical indoor motion patterns exploits both spatial and temporal information, so that sequential probability estimation is guaranteed. Secondly, the long-term prediction of the motion pattern is combined with the short-term prediction of uncertainties in velocity and heading direction, which provides a prediction of the human's possible positions at some future time. Thirdly, the prediction is integrated into the quantized etiquette criteria to ensure the robotic politeness of giving priority to humans in navigation.

2 Robotic etiquette in navigation

Humans coordinate their movements with each other and observe some social conventions when they walk together. Similarly, service robots are required to perceive people in the environment and navigate in a manner that is not only safe but also human-friendly. In order to ensure human safety and comfort, studies on human-robot socially collaborative tasks introduced several major criteria at the motion planning stage, namely "proxemics", "visibility" and "side-tendency"[14]. However, the movements of people are not adequately considered by these three criteria. As robots navigate with people in complex indoor environments, it is crucial that they develop natural skills to avoid interfering with the person. In this paper, a vitally important criterion named "human-priority" was proposed and embedded as travel constraints at the robot's navigational level.

(1) Proxemics
The robot keeps a certain personal distance, or proxemics, from people in order to satisfy human safety and feelings about privacy. Without loss of generality, the proxemics is modeled by an asymmetrically shaped probabilistic cost distribution Cost_proxemics, namely the "safety grid". The distribution is centered on the person's principal axis; the size of the proxemics differs across cultures and familiarity groups and changes with a variety of factors such as walking speed, awareness of the movement of the obstacle and other mental tasks being performed while walking. A sketch of how such a grid could be rasterized is given after these criteria.

(2) Visibility
The robot stays within the person's Field-of-View (FOV) in order to avoid suddenly appearing from occlusion, which may surprise people or cause serious mental distress. The invisibility of the robot generally arises from two causes, the FOV limitation of people and occlusion by other obstacles, which are merged into a visibility grid denoted by Cost_visibility.

(3) Side-tendency
The ability of robots to follow the right-hand or left-hand traffic convention is favorable for making their behaviors comprehensible and predictable to humans. Tending to one side of a hallway is represented as a constraint that values free space on one side more highly than on the other. The resulting side-tendency cost grid, denoted by Cost_side, similarly computes a cost distribution that discounts the value of the unoccupied cells in the hallway.

(4) Human-priority
The movement of the robot must not interfere with people during navigation. As an example, the situations illustrated in Fig. 1 indicate that in narrow passages or doorways the robot should be able to wait until the person has passed through, or to detour, in order to minimize the risk of conflict. The behavior of giving priority to humans not only shows politeness and respect to people, but also improves safety in both the physical and the mental aspect.
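To make criterion (1) concrete, the sketch below shows one way the asymmetric "safety grid" could be rasterized over the cost grid. The sigma values, the front/back anisotropy and the function name are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def proxemics_cost(grid_w, grid_h, cell, person_xy, person_theta,
                   sigma_front=1.2, sigma_back=0.6, sigma_side=0.8):
    """Minimal sketch of an asymmetric 'safety grid' with values in [0, 1].
    The Gaussian is elongated along the person's heading; all sigma values
    (in metres) are illustrative assumptions."""
    xs = (np.arange(grid_w) + 0.5) * cell
    ys = (np.arange(grid_h) + 0.5) * cell
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    # express cell centres in the person-centred frame
    dx, dy = gx - person_xy[0], gy - person_xy[1]
    c, s = np.cos(person_theta), np.sin(person_theta)
    lon = c * dx + s * dy        # distance along the heading (principal axis)
    lat = -s * dx + c * dy       # lateral distance
    sigma_lon = np.where(lon >= 0.0, sigma_front, sigma_back)
    return np.exp(-0.5 * ((lon / sigma_lon) ** 2 + (lat / sigma_side) ** 2))
```

Because sigma_front is larger than sigma_back in this sketch, cells ahead of a walking person are penalized more strongly than cells behind, reflecting the intuition that crossing in front of someone is less acceptable than passing behind.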

A predictive navigation approach was proposed for endowing robots with the ability to give priority to humans in navigation. The prediction of human motion is motivated by the observation that people typically do not move randomly when they walk through their environments. Instead, they usually engage in certain motion patterns, related to typical activities or specific locations they are interested in approaching. Spatial and temporal knowledge about people's motion is the key to adjusting the robot's behavior in advance. A learned model of people's motion patterns allows robots to anticipate the human path and destination, as will be presented in the following section.


Fig. 1 Situations in which a robot interferes with a person.

3 Human motion prediction

3.1 Robot localization and people-tracking

In a complex indoor environment with human-robot coexistence, multisensor readings are utilized for a joint estimation task named Simultaneous robot Localization And People-tracking (SLAP), given an occupancy grid map of the environment.

In our previous work[21], a Rao-Blackwellised Particle Filter (RBPF) based algorithm was proposed. It makes use of the collaborative observations of several CCD cameras and a laser range finder for the joint Bayesian estimation of the robot's pose r_t and the person's ground-plane position x_t = (x_t, y_t, θ_t) in the global coordinate frame. The dynamics of people's motion are modeled by a Gauss-Markov process, in which a linear Gaussian state-space model describes the smoothness of people's positions on the ground plane. The RBPF algorithm marginalizes the state components of the joint distribution that correspond to people's locations and exactly computes their Gaussian posterior density conditioned on each robot sample. As a result, the algorithm combines the representational power of a Particle Filter (PF) with the efficiency and accuracy of a Kalman Filter (KF), which yields more accurate robot localization and people-tracking with fewer particles as well as high computational efficiency.
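As a rough illustration of the filter structure described above (not the exact implementation of [21]), the following sketch carries one Kalman filter per robot-pose particle; the motion, likelihood and detection functions, as well as the matrices F, Q, H, R, are placeholders that a real system would supply.

```python
import numpy as np

def rbpf_slap_step(particles, propagate_robot, laser_likelihood,
                   detect_person, F, Q, H, R):
    """Hypothetical single update of a Rao-Blackwellised filter: a particle
    set over robot poses, each carrying a Kalman filter (mean mu, covariance P)
    over the person's ground-plane state."""
    for p in particles:
        # sample a new robot pose from the (noisy) motion model -- particle part
        p["pose"] = propagate_robot(p["pose"])
        # re-weight the pose hypothesis against the laser scan and the map
        p["w"] *= laser_likelihood(p["pose"])
        # Kalman prediction for the person (linear Gaussian motion model)
        p["mu"] = F @ p["mu"]
        p["P"] = F @ p["P"] @ F.T + Q
        # Kalman correction with the camera/laser person detection,
        # expressed relative to this particle's robot pose
        z = detect_person(p["pose"])
        S = H @ p["P"] @ H.T + R
        K = p["P"] @ H.T @ np.linalg.inv(S)
        p["mu"] = p["mu"] + K @ (z - H @ p["mu"])
        p["P"] = (np.eye(p["P"].shape[0]) - K @ H) @ p["P"]
    total = sum(p["w"] for p in particles)
    for p in particles:
        p["w"] /= total                 # normalise; resampling omitted for brevity
    return particles
```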

3.2 Long-term modeling of motion pattern

Based on the collection of tracking trajectories, a set of motion patterns of people is clustered hierarchically using a fuzzy K-means algorithm. This algorithm computes Θ, a set of M different motion patterns Θ = {θ_1, ..., θ_M}, with each θ_m (m = 1, 2, ..., M) approximated by a mixture of K Gaussian distributions θ_m^1, ..., θ_m^K, θ_m^k ~ N(μ_m^k, σ_m^k) (k = 1, 2, ..., K). Fig. 2a illustrates extracted trajectories of people's movement in a typical office environment and Fig. 2b shows four of the learned motion patterns, in which each ellipse represents a multivariate Gaussian distribution. These motion patterns reflect people's movements between interesting places, which are driven by salient activities within the spatial context.

The spatial probability of the person being located at x_t, given step k of the motion pattern θ_m, is computed according to the Gaussian distribution and denoted as p(x_t | θ_m^k):

p(x_t \mid \theta_m^k) = \frac{1}{\sqrt{2\pi}\,\sigma_m^k} \exp\left( -\frac{\|x_t - \mu_m^k\|^2}{2 (\sigma_m^k)^2} \right).   (1)

Without considering the uncertainties of the velocity and heading orientation during the human's motion, the robot's belief about the person being engaged in a motion pattern θ_m is computed from the human's position x_t at time t, given the history of his motion z_{1:t}:

p_{pattern}(x_t \mid \theta_m, z_{1:t}) = \sum_{k=1}^{K} \sum_{k'=1}^{K} p(x_t \mid \theta_m, k, k', z_{1:t}) \, p(k, k', \theta_m \mid z_{1:t}).   (2)

The probability p(x_t | θ_m, k, k', z_{1:t}) evaluates the probability that the person covers the point x_t at time t, given a sequence of observations z_{1:t} and given that z_{1:t} starts at θ_m^k and ends at θ_m^{k'}. The probability p(k, k', θ_m | z_{1:t}) can be decomposed according to Bayes' rule:

p(k, k', \theta_m \mid z_{1:t}) = \eta \, p(z_{1:t} \mid k, k', \theta_m) \, p(\theta_m) \, p(k, k' \mid \theta_m),   (3)

where η is a normalizer, p(z_{1:t} | k, k', θ_m) is the observation likelihood of z_{1:t}, and p(θ_m) and p(k, k' | θ_m) are two prior probabilities, as explained in another of our previous works[22].
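For concreteness, the following simplified sketch scores an observed track against each learned pattern using only the Gaussian term of Eq. (1) and a flat prior; the segment indices (k, k') and the observation likelihood of Eq. (3) are deliberately dropped, so this should be read as an approximation of the belief update rather than the full model of [22].

```python
import numpy as np

def gaussian_step_prob(x, mu, sigma):
    """Eq. (1): likelihood of position x under one Gaussian step of a pattern."""
    d2 = np.sum((np.asarray(x) - np.asarray(mu)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def pattern_belief(track, patterns):
    """Simplified long-term belief: each pattern is a list of (mu, sigma)
    Gaussians; the trajectory is scored by the best-matching step at each
    observation and the result is normalised over all M patterns."""
    scores = []
    for steps in patterns:
        log_score = 0.0
        for x in track:
            best = max(gaussian_step_prob(x, mu, sigma) for mu, sigma in steps)
            log_score += np.log(best + 1e-12)   # avoid log(0)
        scores.append(log_score)
    w = np.exp(np.asarray(scores) - np.max(scores))  # numerically stable
    return w / np.sum(w)
```

pattern_belief(track, patterns) returns a normalized vector over the M patterns, playing the role of the pattern probability that is later combined with the short-term model in Eq. (10).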

3.3 Short-term motion prediction

To account for the short-term uncertainty of the movement along the path of θ_m, the variations in the velocity and heading orientation of the human are modeled in a probabilistic framework. The modeling of a person's short-term motion uncertainties is based on several assumptions: the velocity and orientation change at every time step T; the velocity and orientation within each time step are constant, and change randomly within [v_min, v_max] and [φ_min, φ_max], respectively; the sequence of velocities (orientations) over time is a list of independently distributed random variables. These assumptions describe a common indoor motion style that changes smoothly in velocity and heading direction.

Firstly, the orientation variance is modeled by a fan-shaped area named the field of view, as shown in Fig. 3. The field of view defines a coordinate system originated at the current position of the person, h_0 = (x_0, y_0, ψ_0), and takes the Person's Instantaneous Orientation (PIO) as its symmetry axis. The maximum angular distance from the PIO is φ = φ_max and is defined by the size of the field of view. Within the field-of-view area, a point h = (r, φ) has a probability of

p_{orien}(h \mid h_0) = \exp(-\varphi^2)   (4)

of being headed to at the next time step. Eq. (4) indicates that the larger φ is, the less likely the person is to be heading in that direction.

Fig. 2 Humans' trajectories and four of the learned motion patterns.

Fig. 3 Uncertainty of the instantaneous heading direction.

Fig. 4 Uncertainty of the motion velocity.

Secondly, the velocity variance is modeled by a distribution p_vel(h_t; t) that calculates the probability of reaching a point h_t along a straight-line path, as demonstrated in Fig. 4. Let ψ_0 and σ_0^2 be the current heading direction and the positional variance of the person, respectively. Then the position h_t = (x_t, y_t) after t time steps is given by

x_t = x_0 + \sum_{i=1}^{t} v_i T \cos\psi_0,
y_t = y_0 + \sum_{i=1}^{t} v_i T \sin\psi_0.   (5)

Since each v_i (i = 0, ..., t) follows the same but independent uniform distribution v_i ~ U(v_min, v_max), the variance σ_step^2 of the movement (i.e., the velocity variance) added by one time step is calculated as

\sigma_{step}^2 = \int_{v_{min}}^{v_{max}} (v_i - Ev_i)^2 f(v_i)\, dv_i
= \frac{1}{v_{max} - v_{min}} \int_{v_{min}}^{v_{max}} \left( v_i - \frac{v_{max} + v_{min}}{2} \right)^2 dv_i
= \frac{1}{12} (v_{max} - v_{min})^2.   (6)

Let μ_i = Ev_i and σ_i^2 = Dv_i be the mathematical expectation and variance of v_i, respectively. According to the central limit theorem, \sum_{i=1}^{t} v_i approximately follows N\left( \sum_{i=1}^{t} \mu_i, \sum_{i=1}^{t} \sigma_i^2 \right), and so does x_t. Thus the probability p_vel(h_t; t) is computed as

p_{vel}(h_t; t) = \frac{1}{2\pi \sigma_{x,t} \sigma_{y,t}} \exp\left( -\frac{1}{2} \left[ \frac{(x_t - \bar{x}_t)^2}{\sigma_{x,t}^2} + \frac{(y_t - \bar{y}_t)^2}{\sigma_{y,t}^2} \right] \right),   (7)

where

\bar{x}_t = x_0 + \sum_{i=1}^{t} \mu_i T \cos\psi_0,
\bar{y}_t = y_0 + \sum_{i=1}^{t} \mu_i T \sin\psi_0,
\sigma_{x,t}^2 = \sigma_{y,t}^2 = \sigma_0^2 + t \sigma_{step}^2.   (8)

Moreover, according to the assumption that v_i (i = 0, ..., t) follows the same but independent uniform distribution, the variables v_0, ..., v_t have the same mean and variance. This indicates that

\mu_i = Ev_i = \frac{v_{max} + v_{min}}{2} = \mu_v, \quad \sigma_i^2 = Dv_i = \sigma_{step}^2, \quad \sum_{i=1}^{t} Ev_i = t\mu_v.

As a result, Eq. (8) can be rewritten as

\bar{x}_t = x_0 + t \mu_v T \cos\psi_0,
\bar{y}_t = y_0 + t \mu_v T \sin\psi_0,
\sigma_{x,t}^2 = \sigma_{y,t}^2 = \sigma_0^2 + t \sigma_{step}^2, \quad \mu_v = \frac{v_{max} + v_{min}}{2}.   (9)

To combine the long-term and short-term predictions, the heading orientation probability p_orien(h | h_0) is used as an exponent to discount the velocity probability, and the probability of the motion pattern that the human is engaged in at the current position h_0 is normalized by a factor η over all M motion patterns. Finally, the probability of reaching h_t at time t is computed as

p_{predict}(h_t; t) = \eta \, p_{pattern}(h_t \mid \theta_m) \, p_{vel}(h_t; t)^{\,p_{orien}(h_t \mid h_0)}.   (10)
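Putting Eqs. (4) and (6)-(10) together for a single candidate point gives the sketch below. The default speed bounds, the initial positional variance and the handling of the normalizer η (left to the caller, e.g. by normalizing over all candidate cells) are assumptions for illustration.

```python
import numpy as np

def p_orien(phi):
    """Eq. (4): heading-direction weight for angular offset phi from the PIO."""
    return np.exp(-phi ** 2)

def p_vel(target, h0, psi0, mu_v, T, t, sigma0_sq, v_min, v_max):
    """Eqs. (6)-(9): Gaussian probability of reaching `target` after t steps,
    assuming per-step speeds drawn independently from U(v_min, v_max)."""
    sigma_step_sq = (v_max - v_min) ** 2 / 12.0          # Eq. (6)
    xb = h0[0] + t * mu_v * T * np.cos(psi0)             # Eq. (9): mean position
    yb = h0[1] + t * mu_v * T * np.sin(psi0)
    var = sigma0_sq + t * sigma_step_sq                  # equal x/y variance
    dx, dy = target[0] - xb, target[1] - yb
    return np.exp(-0.5 * (dx * dx + dy * dy) / var) / (2.0 * np.pi * var)

def p_predict(target, h0, psi0, pattern_prob, T, t,
              sigma0_sq=0.05, v_min=0.4, v_max=1.2):
    """Eq. (10): combine the long-term pattern probability with the short-term
    velocity model, using the heading term as an exponent discount factor.
    The numeric defaults are illustrative, not values from the paper."""
    mu_v = 0.5 * (v_min + v_max)
    phi = np.arctan2(target[1] - h0[1], target[0] - h0[0]) - psi0
    phi = (phi + np.pi) % (2.0 * np.pi) - np.pi          # wrap to [-pi, pi]
    return pattern_prob * p_vel(target, h0, psi0, mu_v, T, t,
                                sigma0_sq, v_min, v_max) ** p_orien(phi)
```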

4 Predictive navigation

The cost of traversing a grid cell of the environment is evaluated by combining the obstacle occupancy grid map with the socially acceptable safety cost grids introduced above: the proxemics grid, the visibility grid, the side-tendency grid and the predicted occupancy grid. The merged cost of a point h_t at the future time t, due to human factors, is evaluated as

Cost_{merged}(h_t; t) = w_{proxemics} Cost_{proxemics}(h_t; t) + w_{visibility} Cost_{visibility}(h_t; t) + w_{side} Cost_{side}(h_t) + w_{predict} \, p_{predict}(h_t; t).   (11)

Cost_merged(h_t; t) is a dynamic cost grid that changes with the movement of the robot and the person over time. Another cost is the prior knowledge derived from the environmental obstacle description, denoted by Occ(i, j), i = 1, ..., W, j = 1, ..., H, where W and H are the width and the height of the grid map, respectively. To merge the two major sources of cost, the static occupancy cost replaces the dynamic cost for cells that are already occupied by known obstacles. Therefore the final overall value of the cell (i, j) is computed as

Cost_{final}^{(i,j)}(h_t; t) = \max\left( Occ^{(i,j)}, \, Cost_{merged}^{(i,j)}(h_t; t) \right).   (12)

The final cell values of the traversability cost grid are normalized to [0, 1] before being fed into the motion planner. All weight values are tuned according to the properties of the navigation task. Additionally, since people with different personalities may respond differently to the robot's behavior, the weights can also be learned from training data to adapt to the user's preference, as suggested by the study on the user-personality adaptation problem[23]. It is also feasible to set these weights explicitly as empirical parameters in experiments.
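A minimal sketch of the grid fusion in Eqs. (11) and (12) is given below, assuming the individual cost grids have already been rasterized as numpy arrays of the same shape and scaled to [0, 1]; the equal default weights mirror the hallway setting reported in Section 5.2.

```python
import numpy as np

def fuse_cost_grids(occ, cost_prox, cost_vis, cost_side, p_predict,
                    w_prox=0.25, w_vis=0.25, w_side=0.25, w_pred=0.25):
    """Eq. (11): weighted sum of the etiquette grids, then Eq. (12): cell-wise
    maximum with the static occupancy grid so that known obstacles always
    dominate. All inputs are arrays of shape (W, H) with values in [0, 1]."""
    merged = (w_prox * cost_prox + w_vis * cost_vis +
              w_side * cost_side + w_pred * p_predict)
    final = np.maximum(occ, merged)
    # normalise to [0, 1] before handing the grid to the motion planner
    return final / max(final.max(), 1e-9)
```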

The navigation system, as shown in Fig. 5, feeds the etiquette-integrated traversability cost grid to a motion controller for producing safe and polite behaviors. As the core of the system, the motion controller is a tiered architecture consisting of a planner and a reactor.

The planner module employs the wavefront algorithm, which constructs a navigation function over the traversability cost grid. The wavefront algorithm involves a breadth-first search of the graph beginning at the destination point until it reaches the start point, thereby extracting the free space connected to the destination in the configuration space. The planning module computes in real time a coarse reachable path that avoids trap situations and cyclical motions, and the direction generated at each point is then used to advise the motion reactor module.
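The wavefront construction can be sketched as a breadth-first expansion from the goal cell over the traversability cost grid; following strictly decreasing values of the resulting navigation function from any start cell then leads to the goal. The traversability threshold used to declare a cell blocked is an illustrative assumption.

```python
from collections import deque
import numpy as np

def wavefront(cost_grid, goal, blocked=0.95):
    """Breadth-first wavefront over the traversability cost grid.
    Returns an array of step distances to `goal`; cells with cost above
    `blocked` (illustrative threshold) are treated as untraversable."""
    W, H = cost_grid.shape
    dist = np.full((W, H), np.inf)
    dist[goal] = 0
    queue = deque([goal])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < W and 0 <= nj < H and np.isinf(dist[ni, nj])
                    and cost_grid[ni, nj] < blocked):
                dist[ni, nj] = dist[i, j] + 1
                queue.append((ni, nj))
    return dist  # follow decreasing values from the start cell to reach the goal
```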

The reactor module computes stable and collision-free motion towards the final location using a smoothed Nearness Diagram (ND) method[24]. The ND method employs a local perception window over the laser data, which is constructed according to the traversability cost grid so as to account for all the integrated safety strategies. The smoothed ND method improves the smoothness of the velocity and acceleration and generates oscillation-free motion in terms of a motion command (v, w), where v is the translational velocity and w is the rotational velocity that aligns the robot to the guiding direction obtained from the planning module.

5 Experiments and results

The proposed approach was validated in an OpenGL-based 3D robotic simulation system as well as in a real-world office environment. The real office was designed as a test-bed environment for performing home-care services for the elderly and disabled. The room is 12 m × 7 m in size and the corresponding occupancy grid map is shown in Fig. 2. A mechanical-looking robot, an ActivMedia Peoplebot (shown in Fig. 6), was used in the experiments to validate the proposed human-like navigational behaviors. The subject pool consisted of 12 participants (8 male and 4 female), ranging in age from 21 to 34. 33% of them were from non-technological fields, while 67% worked in technology-related areas. We assume that participants in the experiment walked at a smooth speed and intended to follow certain motion patterns in the room.

Fig. 6 Service robot employed in the experiment.

5.1 Motion prediction

The effectiveness and accuracy of the proposed motion prediction algorithm were first tested in the real office scene. The sensory system for robot localization and people-tracking consists of five stationary CCD cameras (Panasonic WV-CP240) mounted on each side of the room above head level, together with the Peoplebot's on-board laser range finder.

Fig. 5 Diagram of the navigation system (SLAP-based robot localization and people-tracking, human motion prediction, the robotic etiquette cost grids, and the planner/reactor motion controller).


Accurate positional estimation of the robot and the human, with errors of less than 10 cm, is achieved, as described in our previous work[21]. Based on the collection of tracking trajectories, typical indoor motion patterns of humans are learned, some of which are shown in Fig. 2.

Fig. 7 illustrates an example of motion prediction in the office scene, in which a person started at place G in the map and intended to walk towards place B. The left column (video images) shows the person-tracking sequence captured by one of the cameras, and the right column (grid maps) shows the three most probable motion patterns which the person may follow. The probability plotted beside a trajectory represents the likelihood with which the person is expected to move along that trajectory. In Fig. 7b, the motion pattern heading towards place F is eliminated because its probability falls sharply to zero; in Fig. 7c, the probability of moving towards place B is 0.693 and this motion pattern becomes the most prominent one as the person continues to walk ahead. Fig. 8 plots the evolution of the probabilities corresponding to the scene in Fig. 7. Fig. 9 shows the 1-second-ahead predicted path according to the recognized motion pattern, using the short-term prediction method.

Table 1 shows the prediction accuracy with different numbers of motion patterns in the same scene and with different levels of disturbance. AveCorr1 is the average accumulated correctness of the predicted probability when the human has completed 40% of the route, and AveCorr2 is the corresponding value at 60%. The results show that the tracking error introduced by other moving objects is one major cause of the decrease in accuracy. Moreover, the motion prediction becomes unreliable with as many as 20 motion patterns. This is because too many motion patterns in the small experimental room produce similar trajectories sharing a large proportion of the space, which confuses the prediction algorithm and leads to erroneous prediction results.

Table 1 Motion pattern prediction accuracy

Num. patterns   Disturbance   AveCorr1   AveCorr2
6               none          84.1%      98.1%
6               people        81.2%      89.3%
10              none          79.3%      93.4%
10              people        64.6%      80.7%
20              none          56.9%      71.9%
20              people        41.9%      52.5%

5.2 Hallway crossing

The effectiveness and reliability of combining long-term and short-term motion prediction were verified in two navigation scenarios, in which several etiquette criteria were integrated to ensure safety and comfort.

Fig. 7 Example of motion prediction results.

Fig. 8 Likelihood evolution of the predicted long-term motion patterns.

Fig. 9 Comparison of the real path and the predicted path.


During the navigation experiments, the maximum velocity of the robot was limited to 60 cm·s⁻¹. In the predictive navigation, the traversability cost grid 5 seconds into the future is computed according to Eq. (11). Moreover, in the hallway experiment the four weights in Eq. (11) are set equally to 0.25; in the next subsection, w_side is set to zero and the other three weights are set to 0.33, since the side-tendency rule only applies in hallway scenarios.

The first navigation experiment involved the robot and a person encountering each other in a hallway. The robot followed a path through the middle of the hallway, which carried a high risk of a head-on collision with the person. A regular motion planner without prediction of the human's motion repelled the robot from the person only when they were already close, as shown in Fig. 10a. This was too late for adequate avoidance given the fast movement of the person, and the robot's obstacle-avoidance behavior was erratic, which confused the person and even worsened the conflict situation. In contrast, a friendly behavior emerged when the social conventions were considered. As shown in Figs. 10b and 10c, the proposed navigation system drove the robot to the right side of the hallway once the person's possible future position was predicted to be in the vicinity. Moreover, after having passed the person, the robot stayed at a certain distance from the person instead of retaking its previous lane immediately. The corresponding 3D simulation result is shown in Fig. 11.

Fig. 10 Comparison result of navigation in a hallway scenario.

Fig. 11 Simulation result of the hallway navigation etiquette.

5.3 Narrow passage navigation

In the second navigation experiment, the robot was initially located at place D in Fig. 12a and planned to navigate through a narrow passage (100 cm wide) to place E in Fig. 12c. In the meantime, a person intended to walk from place A to place F via the same passage. Without motion prediction, the robot would block the movement of the human at the entrance of the passage.

Fig. 12 Result of polite navigation in a passage scenario.


By predicting the human’s involved motion pattern, the robot computed the probability of human-robot conflict at the entrance of the passage (the place C in Fig. 12) in the future. More specifically, the module of human motion prediction estimated the tendency of the person’s temporary motion being engaged in a motion pattern that ends at the place C, with a probability as high as 0.975. Consequently, prediction costs were added to the cells around the exit of the passage. On the other hand, the wall beside the doorway that hided the person put additional visibility costs to the correspond-ing cells by the side of the entrance. Using the fused traversability cost grid, the robot drove to a free space outside the entrance of the passage and pause. During the waiting, the robot faced the entrance of the passage with an appropriate angle to remain visible to the person. After the person had passed through the passage, the robot proceeded to cross the doorway and continued its route. The 3D simulation of the passage navigation scenario is shown in Fig. 13 and a snapshot of the real-world experiment is shown in Fig. 14, respectively. The paths of the robot and the person, the translational and the rotational velocity change during the navigation are demonstrated in Fig. 15, which shows the robot’s motion adaptation to the person’s movement.

The resulting behavior of the robot is highly efficient, safe and human-friendly. Firstly, an unnecessary detour by way of another passage is avoided; such a detour may seem practical but is more expensive for the robot to travel. Secondly, oscillation of the robot's movement, which might be caused by a traditional planning and re-planning method, is avoided. Most importantly, the robot behaves in a human-like, polite manner, which fully respects the human's feelings about safety and comfort.

Fig. 13 3D simulation result of the passage navigation scenario.

Fig. 14 Snapshot of the real-world passage navigation test.

Fig. 15 Paths and velocity evolution of the passage navigation: (a) paths of the person and the robot; (b) change of the translational velocity; (c) change of the rotational velocity.


5.4 Statistical evaluation of politeness

Besides the quantitative evaluation of physical safety in the above experiments, a subjective evaluation was also performed to examine how human observers interpreted the robot's politeness.

A user trials study was performed to explore whether people subjectively prefer the socially acceptable compliant navigation over regular navigation methods that simply treat humans as obstacles. Each participant observed the robot while walking and answered a short questionnaire about each of the two types of method. To handle the subjective nature of this study, the survey responses were analyzed using paired t-tests across trials. Table 2 gives the average and the standard deviation of the responses and the t-value for each question. In each question, participants were asked to rate the robot's behavior from 1 (not at all) to 7 (very much) according to how it met their expectations. As shown in Table 2, the polite navigation behavior is regarded as more natural and human-like, which reflects that people are more comfortable with this type of behavior.

Table 2 Subjective evaluation of human-friendliness

Question                        Regular, mean (SD)   Polite, mean (SD)   Paired t-test
Feel respected                  3.22 (0.88)          5.04 (1.15)         1.97
Understandable & predictable    2.91 (1.29)          5.23 (1.94)         3.86
Keep distance                   5.45 (1.71)          5.91 (1.20)         0.43
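For reference, the paired comparison reported in Table 2 can be computed from raw per-participant ratings with a standard paired t-test, e.g. using scipy; the rating arrays below are placeholders, not the data collected in the study.

```python
import numpy as np
from scipy import stats

# Placeholder ratings (1-7 Likert scale), one entry per participant;
# these are NOT the values collected in the study.
regular = np.array([3, 4, 2, 3, 4, 3, 2, 4, 3, 3, 4, 2])
polite  = np.array([5, 6, 4, 5, 6, 5, 4, 6, 5, 5, 6, 4])

t_stat, p_value = stats.ttest_rel(polite, regular)   # paired t-test across trials
print(f"mean regular = {regular.mean():.2f} (sd {regular.std(ddof=1):.2f})")
print(f"mean polite  = {polite.mean():.2f} (sd {polite.std(ddof=1):.2f})")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```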

Furthermore, the personality of people is another important factor that influences their feelings of safety and comfort. Although the quantification of personality is controversial, in this experiment we used the widely accepted Eysenck model of personality (PEN)[25] to classify the participants into introverted and extroverted types, according to the biologically based description of the extroversion-introversion dimension. Fig. 16 summarizes the average scores with which the participants rated how satisfied they felt during the navigation. The result indicates that the polite navigation behavior better met the users' expectations, because the behavior is more polite and appealing to humans. Meanwhile, compared with the extroverted participants, the introverted participants felt a more notable difference between the compliant and the regular navigation behavior. The human-friendly features of service robots are therefore especially desirable in countries where, for cultural reasons, people are more introverted in interpersonal communication and interaction.

6 Conclusions

In this paper, we have presented a novel concept of robotic etiquette and a mechanism for producing socially acceptable and human-like behaviors in indoor navigation. Using the proposed human-friendly strategies, the robot reduces the risk of collision as well as respecting humans' feelings by giving space and priority to them, maintaining visibility and keeping a mentally comfortable distance from them. The core of the human-compliance is the prediction of human motion, which is achieved by motion pattern modeling and motion tendency prediction. The feasibility of the proposed methodology is validated by navigation experiments as well as a user trials study, in which both the physical safety and the mental comfort of humans are secured. Enhanced by such robotic etiquette, the robot's navigational behavior is interpreted by humans as safe, legible, predictable and polite. In our future work, we will consider several other open questions in building a human-compliant navigation system, including personality learning, human plan recognition and so on. Furthermore, this work is part of a larger effort to develop a home-care service robotic system for assisting the elderly and the disabled. All favorable features will be augmented and integrated into the system to support friendly and highly human-compliant robotic intelligence.

Acknowledgement

This work is supported by the National High Technology Research and Development Program (863 Program) of China (Grant No. 2006AA040202 and No. 2007AA041703) and the National Natural Science Foundation of China (Grant No. 60805032).

Fig. 16 Score of users' satisfaction with the regular and polite navigation behaviors, for extroverted and introverted users.


References

[1] Wang X, Zhang Y, Fu X, Xiang G. Design and kinematic analysis of a novel humanoid robot eye using pneumatic artificial muscles. Journal of Bionic Engineering, 2008, 5, 264–270.

[2] Hirai K, Hirose M, Haikawa Y, Takenaka T. The development of Honda humanoid robot. Proceedings of IEEE International Conference on Robotics and Automation, Leuven, Belgium, 1998, 1321–1326.

[3] Alissandrakis A, Nehaniv C L, Dautenhahn K. Action, state and effect metrics for robot imitation. Proceedings of IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 2006, 232–237.

[4] Breazeal C L. Designing Sociable Robots, The MIT Press, Cambridge, MA, USA, 2002.

[5] Kanda T, Hirano T, Eaton D, Ishiguro H. Interactive robots as social partners and peer tutors for children: A field trial. Human-Computer Interaction, 2004, 19, 61–84.

[6] Okuno H G, Nakadai K, Kitano H. Realizing audio-visually triggered ELIZA-like non-verbal behaviors. Lecture Notes in Artificial Intelligence, 2002, 2417, 31–45.

[7] Hall E T. Proxemics. In: Weitz S (ed), Nonverbal Communication: Readings with Commentary, Oxford University Press, New York, USA, 1974, 205–227.

[8] Walters M L, Dautenhahn K, Te Boekhorst R, Koay K L, Kaouri C, Woods S, Nehaniv C L, Lee D, Werry I. The influence of subjects' personality traits on personal spatial zones in a human-robot interaction experiment. Proceedings of IEEE International Workshop on Robot and Human Communication, Nashville, TN, USA, 2005, 347–352.

[9] Dryer D C. Getting personal with computers: How to design personalities for agents. Applied Artificial Intelligence, 1999, 13, 273–295.

[10] Olivera V M, Simmons R. Implementing human-acceptable navigational behavior and a fuzzy controller for an autonomous robot. Proceedings of the 3rd Workshop on Physical Agents, Murcia, Spain, 2002, 113–120.

[11] Nakauchi Y, Simmons R. A social robot that stands in line. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Takamatsu, Japan, 2000, 357–364.

[12] Althaus P, Ishiguro H, Kanda T, Miyashita T, Christensen H I. Navigation for human-robot interaction tasks. Proceedings of IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 2004, 1894–1900.

[13] Gockley R, Forlizzi J, Simmons R. Natural person-following behavior for social robots. Proceedings of Human-Robot Interaction, Arlington, Virginia, USA, 2007, 17–24.

[14] Sisbot E A, Marin-Urias L F, Alami R, Simeon T. A human aware mobile robot motion planner. IEEE Transactions on Robotics, 2007, 23, 874–883.

[15] Foka A F, Trahanias P E. Predictive control of robot velocity to avoid obstacles in dynamic environments. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 2003, 370–375.

[16] Johnson N, Hogg D. Learning the distribution of object trajectories for event recognition. Image and Vision Computing, 1996, 14, 609–615.

[17] Makris D, Ellis T. Learning semantic scene models from observing activity in visual surveillance. IEEE Transactions on Systems, Man, and Cybernetics-Part B, 2005, 35, 397–408.

[18] Junejo I N, Javed O, Shah M. Multi feature path modeling for video surveillance. Proceedings of International Conference on Pattern Recognition, Cambridge, UK, 2004, 716–719.

[19] Hu W, Xiao X, Fu Z, Xie D, Tan T, Maybank S. A system for learning statistical motion patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28, 1450–1464.

[20] Bennewitz M, Burgard W, Cielniak G, Thrun S. Learning motion patterns of people for compliant robot motion. The International Journal of Robotics Research, 2005, 24, 31–48.

[21] Qian K, Ma X D, Dai X Z. Simultaneous robot localization and person tracking using Rao-Blackwellised particle filters with multi-modal sensors. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 2008, 3452–3457.

[22] Liang Z W, Ma X D, Dai X Z. Compliant navigation mechanisms utilizing probabilistic motion patterns of humans in a camera network. Advanced Robotics, 2008, 22, 929–948.

[23] Tapus A, Tapus C, Mataric M J. User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intelligent Service Robotics, 2008, 1, 169–183.

[24] Minguez J, Montano L. Sensor-based robot motion generation in unknown, dynamic and troublesome scenarios. Robotics and Autonomous Systems, 2005, 52, 290–311.

[25] Eysenck H J. Dimensions of personality: 16, 5 or 3? Criteria for a taxonomic paradigm. Personality and Individual Differences, 1991, 12, 773–790.