

Humanoid Robots Imitation of Human Motion Using Off-line and Real-time Adaptation Techniques

Toufik Bentaleb*
Department of Information Engineering and Mathematics (DIISM), University of Siena, Via Roma 56, 53100, Siena (SI), Italy
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Luay Shahin

Palestine Polytechnic University, P.O. Box 144, Hebron, Palestine
E-mail: [email protected]

Abstract: This paper describes off-line and real-time motion conversion approaches for imitating the motion of a human being with a humanoid robot. In both cases, the human motion is measured with marker points captured by the Vicon 3D motion capture system. A similar physical morphology between the shape of the robot and the human is assumed. The paper also shows how the input measurements of the human motion are kept as close as possible to the real motion. In the off-line approach we use the iCub humanoid robot, while for the real-time approach the James robot is used, with some modification of existing techniques for imitating and limiting human motion. Throughout this work, we paid particular attention to obtaining a "natural" appearance of the motion. The obtained results show that the proposed strategies are capable of accomplishing both off-line and real-time responses.

Keywords: human motion; humanoid robot; imitation.

Reference to this paper should be made as follows: Bentaleb, T. and Shahin, L. (xxxx) 'Humanoid Robots Imitation of Human Motion Using Off-line and Real-time Adaptation Techniques', Int. J. Mechanisms and Robotic Systems, Vol. x, No. x, pp.xxx–xxx.

Biographical notes: Toufik Bentaleb is a Ph.D. researcher with the Department of Information Engineering and Mathematics at the University of Siena, Italy. He previously worked as a researcher at IIT, Genova, Italy. He received the Master's degree in Robotics, Automatics and Industrial Informatics "Option: Robotic Mechanical Systems" from the Military Polytechnic School of Algiers, Algeria, in 2006, and the BSc degree in Mechanical Engineering from the University of Biskra in 2004. His research interests include control systems and robotics.

Copyright © 200x Inderscience Enterprises Ltd.


Luay Shahin received the Master's degree in Electrical Engineering from the University of Detroit Mercy in Detroit, USA. Currently he is a Ph.D. researcher at the University of Siena, Italy. His main research interests are the development of measurement systems for gas sensors, and mobile robotics and olfaction.

1 Introduction

The robotic imitation of human motions has attracted great interest in recent years. The challenge consists in allowing an artificial reproduction of the human body to realize complex movements under severe constraints, despite the numerous kinematic and dynamic differences between the natural and the artificial system. Pollard et al. (2002) proposed a simple method for fitting human motion to the control of a humanoid robot, exploring a set of techniques for limiting human upper-body gestures to those achievable by a Sarcos humanoid robot. Using a similar experimental environment, Safonova et al. (2003) used pre-recorded human motion to generate optimal motion of the upper body of the Sarcos humanoid robot; the function to be minimized is the difference between the recorded motion and the motion executed by the robot. Naksuk et al. (2004, 2005) used motion snapshots to generate an initial movement for a humanoid robot, and then applied an iterative approach to produce balanced and natural movements based on minimal angular momentum at the Center of Mass (CoM). Zuher et al. (2012) treated the teleoperation of a humanoid robot through the recognition of human motions using a natural interface mechanism. Filiatrault et al. (2014) developed a system to control the arm movement of a robot by mimicking the gestures of an actor captured by a markerless vision sensor. Koenemann et al. (2014) presented a system that enables the Nao humanoid robot to imitate complex whole-body motions of humans in real time; they used a compact human model that takes the positions of the end-effectors, as well as the center of mass, as the most important aspects to imitate. Okamoto et al. (2014) presented a method to extract person-specific styles in motions and a framework to imitate them using physical humanoid robots; their approach is an extension of the concept of a task model and focuses on such styles in the domain of task representations.

Dasgupta and Nakamura (1999) used captured data of a human center-of-pressure trajectory to compute the trajectory of human walking, considering only the balance criterion, while Nakaoka et al. (2003a, 2003b, 2004) and Naksuk et al. (2005) also took the balance criterion into account. Nakazawa et al. (2002) implemented a method for importing human dance motion into humanoid robots through visual observation; the whole motion sequence is divided into motion elements and clustered into groups according to the correlation of the end-effectors' trajectories. Nakaoka et al. (2004) proposed a method for composing a whole-body motion of a humanoid robot from primitive motions, where the motion primitives are defined only for lower-body motions. Nakaoka et al. (2005) proposed a task model of lower-body motion consisting of task primitives and skill parameters: based on this model, a sequence of task primitives and their skill parameters is detected from the human motion, and the robot motion is regenerated from the detected result under the constraints of the robot. Ruchanurucks et al. (2006) suggested a method to optimize the upper-body motion of a humanoid robot in order to imitate a recorded human motion; their objective function preserves the main characteristics of the original motion while respecting the physical constraints of the humanoid


robot. Suleiman et al. (2008) considered the imitation problem as an optimization problem: the physical limits of the humanoid robot are transformed into constraints of the optimization problem, and the objective function to be minimized is the difference between the angular values of the humanoid robot's joints and those of the virtual actor.

Figure 1 The humanoid robot iCub (left) and James (right).

Other researchers have proposed alternative methods based on Jacobian inverse kinematics (IK). Molet et al. (1996) gave a real-time conversion of magnetic sensor measurements into human anatomical rotations, with an application to animation design. Bandera et al. (2004) carried out real-time motion analysis based on hierarchical tracking; their system can estimate upper-body human postures from limited perceptual cues such as the positions of the hands and head. In imitation learning, or programming by demonstration (PbD), researchers have described a prototype system for real-time, interactive imitation between a human teacher and a humanoid robot. The system learns to cluster human motion patterns into discrete actions and builds predictive models of how motor actions affect the objects in the environment; it then uses its models to compute the next action given the state of the environment. This approach is an example of a learning-from-demonstration framework used to perform force-based manipulation tasks (see Argall et al. (2010)). Lopes et al. (2004) produced a system resembling the human arm-hand kinematics as closely as possible while keeping it simple and relatively low-cost, and they supported their results with real-time imitation of a human demonstrator. In previous works, the term real-time refers to different aspects of the imitation loop; most consider the real-time control of the robot based on pre-recorded human trajectories.

In this paper, we use an IK-based algorithm to propose a real-time conversion of the human trajectories, meaning that the robot performs the imitation at the same time as the actor. In such conditions, the IK-based approach of Pollard et al. (2002), for example, cannot be applied because their scaling algorithm runs backward in time. Another problem is the quality of the trajectories extracted from the human motion, which suffer from occlusions. Our work addresses these limitations with simple but efficient techniques, without programming the robot with a set of basic behaviors (without primitives). First, the markers' trajectories are extracted in real time using the Vicon Real-time Engine software. These markers are then mapped in order to match the Degrees of Freedom (DoF) of the robot. The captured data are then treated using IK on each limb, and occlusions are dealt with in real time. These basic trajectories (and the matching velocities) are then scaled to match the robot


kinematic constraints. The main goal of the off-line approach is to generate human-like motion while building more natural-looking machine behaviors. As a step toward the robotic imitation of natural systems, we present our work on generating a 7-DoF arm gesture imitation using the iCub humanoid robot. Our approach consists in mixing the filtering approach (Pollard et al. (2002)) and the inverse kinematics approach (Nakaoka (2003)). For generalization purposes, we have validated this method experimentally with several trials and different kinds of movements.

Excluding the introduction and conclusion sections, this paper consists of three parts. In the first part we describe the experimental setup, which includes the two robots used (iCub and James) in addition to the hardware and software used to capture human motion. The second part contains the detailed algorithms applied to treat the captured data. The generation of the arm motion is described in the third part, where we consider the constraints of the real-time and off-line cases.

2 Experimental environment

2.1 Humanoid robots (iCub and James)

The iCub is a medium-size whole-body humanoid robot resembling the body of a 3.5-year-old child (105 cm, 20.3 kg), see Figure 1 (left). The robot design is a joint project between several European universities under the umbrella of the RobotCub Consortium. The main goal of this platform is to study cognition through the implementation of biologically motivated algorithms. This is an open project in many different ways: the platform is distributed openly and its software is developed open-source iCub (2012); RobotCub (2012); iCub Website (2012). The iCub humanoid robot is advanced enough to be capable of crawling, using basic visual processing primitives, low-level oculomotor control, and visual, inertial and proprioceptive sensors, and of realizing dexterous manipulation. Its design and functions allow defining more complex and realistic human-like abilities, including gesture and motion. The robot has a total of 53 DoF distributed as follows: 7 DoF per arm, 9 DoF per hand, 6 DoF per leg, 6 DoF for the head and 3 DoF for the torso Tsagarakis et al. (2007); Metta et al. (2010). The robot is controlled through Yet Another Robot Platform (YARP) Metta et al. (2006), which consists of a set of libraries, protocols and tools that keep the modules and devices cleanly decoupled. The experimental platform, in addition to the dedicated dynamics simulation environment (the iCub Humanoid Robot Simulator), is managed by YARP in different modes Tikhanoff et al. (2008).
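As an illustration of this control path (not the authors' actual implementation, which was written in C++), the following minimal sketch commands one arm joint through YARP's Python bindings; the port names /icubSim/left_arm and /imitation/left_arm and the joint index are illustrative assumptions.

```python
# Minimal sketch: commanding one iCub arm joint through YARP's Python bindings.
# Assumes the iCub simulator is running and exposes the /icubSim/left_arm ports.
import yarp

yarp.Network.init()

options = yarp.Property()
options.put("device", "remote_controlboard")  # client side of the motor interface
options.put("remote", "/icubSim/left_arm")    # ports exposed by the robot/simulator
options.put("local", "/imitation/left_arm")   # local port prefix for this client

driver = yarp.PolyDriver(options)
if driver.isValid():
    ipos = driver.viewIPositionControl()      # position-control view of the device
    ipos.positionMove(0, 30.0)                # move shoulder pitch (axis 0) to 30 deg
    driver.close()

yarp.Network.fini()
```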

The left arm of iCub and its three-dimensional Computer-Aided Design (3D CAD) model are shown in Figure 2. This module contains 7 DoF, not counting the hand. They are detailed in Table 1, including the ranges of the joint angular positions and velocities.

The James robot, shown in Figure 1 (right), is a predecessor of iCub with 22 DoF, representing only the upper torso of about a ten-year-old boy. It was originally developed at the Laboratory for Integrated Advanced Robotics of the University of Genoa Jamone et al. (2006).

2.2 Human motion capture

We recorded the human movements using the 3D optical motion capture system Vicon 8V Vicon Website (2012) at a frequency of 100 Hz. The Vicon 8V system can be used to measure, or give real-time feedback on, the movements of the whole human body or any of


Figure 2 iCub left arm and its CAD model Tsagarakis et al. (2007).

| Joint | $a_{i-1}$ (mm) | $d_i$ (mm) | $\alpha_{i-1}$ | $\theta_i$ | Joint position limits (deg) | Joint velocity limits (rad/s) |
|---|---|---|---|---|---|---|
| Shoulder flex/extension | 0.0 | 107.74 | $-\pi/2$ | $\theta_0 + \pi/2$ | [−95.5, 5] | [−0.35, 0.35] |
| Shoulder ab/adduction | 0.0 | 0.0 | $\pi/2$ | $\theta_1 - \pi/2$ | [0, 160.5] | [−0.35, 0.35] |
| Shoulder rotation | 0.0 | 152.28 | $-\pi/2$ | $\theta_2 + \pi/2 - 15\pi/180$ | [−30, 90] | [−0.35, 0.35] |
| Elbow flex/extension | −15.0 | 0.0 | $\pi/2$ | $\theta_3$ | [5.5, 106] | [−0.53, 0.53] |
| Wrist flex/extension | 0.0 | 137.3 | $\pi/2$ | $\theta_4 - \pi/2$ | [−90, 90] | [−0.53, 0.53] |
| Wrist ab/adduction | 2.0 | 0.0 | $\pi/2$ | $\theta_5 + \pi/2$ | [−90, 0] | [−0.53, 0.53] |
| Wrist rotation | 62.5 | −16 | 0.0 | $\theta_6$ | [−20, 40] | [−0.53, 0.53] |

Table 1 Denavit-Hartenberg parameters of the iCub left arm and ranges of the joint positions and velocities.

its desired parts, including the hands, face, feet and spine. However, in this particular study we used pre-recorded data of the movements. The system consists of eight optical cameras and 28 retro-reflective markers of 12 mm, which are placed on the actor's body. We mainly used the records of the 8 markers of the left arm for the validation of the off-line approach, while in the real-time approach the number of markers is 14. The positions of the markers on the human body (see Figure 3) play an important role: they represent the different rigid bodies that constitute the humanoid robot, assuming that the human joints behave in a similar way to those of the robot. This means that some very complex joint movements, like the motion of the human shoulder, are completely neglected in the problem of motion conversion.

The proposed algorithm is divided into four major steps, as shown in the diagram of Figure 4. The first step initializes the models by checking for missing data and handling marker occlusions in the input. The missing-data check is divided into two parts: first, filling the missing data using the rigid parts of the body, and second, filling the rest using the continuity of the displacements.

The individual blocks are explained below:


Figure 3 Layout of the markers used (left, right) and the skeleton model of the human (middle).

• Model validity checks for missing data and handles missing markers or occlusions.

• Body model rigidity checks whether the markers represent a rigid part.

• Handling missing data (Block 1) fills the missing data using the continuity of the marker trajectory.

• Handling missing data (Block 2) fills the missing data using the symmetry of the shape.

• Scaling Algorithms computes the joint angles through IK and applies the methods for scaling the joint angle positions and velocities.

• PD servo is the Proportional-Derivative controller that takes the joint angle positions from the Scaling Algorithms block and the encoder data as inputs, and sends commands to the robot as output. The PD servo runs at a period of 1 ms, different from the 10 ms period of the general program (see the sketch after this list).
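For illustration, one 1 ms cycle of the PD servo block might look like the sketch below; the gains kp and kd are placeholders (the paper does not give the controller's parameters), and the torque interface is a hypothetical simplification of the robot's I/O board.

```python
# Hypothetical sketch of the "PD servo" block: desired joint angles (DJA) from
# the scaling algorithms and encoder readings in, motor commands out.
import numpy as np

def pd_servo(theta_des, theta_enc, theta_enc_prev, dt=0.001, kp=10.0, kd=0.5):
    """One 1 ms PD cycle: command = kp * position error + kd * error rate."""
    error = theta_des - theta_enc
    # Desired angle is held constant over the 1 ms cycle, so the error rate is
    # just the negated encoder velocity estimated by finite differences.
    error_rate = -(theta_enc - theta_enc_prev) / dt
    return kp * error + kd * error_rate
```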

Another piece of embedded software, the Vicon Real-time Engine, was used for the real-time acquisition of the data. We created a template model that allows the Vicon 8V system software (Nexus) to build a model of the observed subjects in real time, completed with specific functions for the real-time reconstruction of a virtual, probabilistic model. The 10 cameras were positioned so as to minimize the occlusion problem for the desired movement. However, some occlusions in the captured motion still occurred: in such cases the Vicon Real-time Engine automatically sets the missing data to zero, while our algorithm, presented in Section 3, rebuilds the missing data automatically.

2.3 Additional material

In addition to the two robots, the setup includes three standard PCs. The first computer runs the whole system and is connected to the other PCs via a standard Local Area Network (LAN). The second PC receives the captured motion from the Vicon system and sends it to the first PC via the Vicon Real-time Engine. The third PC receives the joint angle estimates from the first PC and sends them to the iCub and James robots via YARP. We implemented the algorithm in C++ and tested it on various data sets, both off-line and in real time.

Figure 4 Diagram of data flow in our system: the input marker data pass through the model validity and body-rigidity checks and the two missing-data handling blocks (Blocks 1 and 2); the scaling algorithms output the desired joint angles (DJA) to the PD servo (1 ms, 1000 Hz), which drives the robot hardware (motors, encoders) through the I/O board, with encoder readings fed back.

The actor with markers stands in the center of a room, each marker representing a different rigid part. The markers Sternum (STRN), Clavicle (CLAV), 7th cervical vertebra (C7) and 10th thoracic vertebra (T10) represent the chest. The markers Right Forward Head (RFHD), Right Backward Head (RBHD), Left Forward Head (LFHD) and Left Backward Head (LBHD) represent the head. The shoulder marker (LSHO), the elbow marker (LELB), the wrist markers LWRA (on the thumb side) and LWRB (on the little-finger side), and the finger marker (LFIN), attached to the skin between the knuckles of the index and middle fingers, represent the left arm. The resulting work is a complete closed-loop system with a humanoid robot capable of imitating a human actor.
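For reference, the marker-to-rigid-body assignment just described can be written down as a small lookup structure; the sketch below simply restates the layout from the text.

```python
# Marker-to-rigid-body layout described above (Vicon marker naming).
MARKER_SETS = {
    "chest": ["STRN", "CLAV", "C7", "T10"],
    "head":  ["RFHD", "RBHD", "LFHD", "LBHD"],
    # LWRA lies on the thumb side of the wrist, LWRB on the little-finger side;
    # LFIN sits between the knuckles of the index and middle fingers.
    "left_arm": ["LSHO", "LELB", "LWRA", "LWRB", "LFIN"],
}
```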

3 Treatment of motion capture data

Dealing with the discontinuity of the marker data was particularly interesting for us in real time, as the classical approaches cannot be used. We need to recover good approximations in order to extract, in real time, continuous motions closely resembling the original movements. We identified several cases of missing data. In the first case, which applies to all markers, an autonomous real-time rebuilding algorithm fills the missing data. In the other cases, when the markers form a rigid body, e.g., the head and waist markers, the algorithm described by equation (3) fills the missing data using the symmetry of the shape the markers construct, which is more robust than the first approach. For example, in Figure 5 (left), the missing data of the LBHD marker on the head are rebuilt as follows.

The head central point can be calculated as

$$v_{HD} = \frac{v_{RBHD} + v_{LFHD}}{2} \qquad (1)$$

where $v$ represents the position vector.


Figure 5 Missing data: mapping the data to a skeleton model in Vicon during one frame with two markers missing (left). Left arm model with one marker missing, where "r" is the radius of rotation of the finger marker around the middle of the wrist "o" (right).

Then the position of the missing marker LBHD is determined as

$$v_{LBHD} = 2\,v_{HD} - v_{RFHD} \qquad (2)$$

where $v_{HD}$ is the center point position among the markers RFHD, RBHD, LFHD and LBHD.

$$v_{LFIN} = \begin{cases} o + (r_0 + \varepsilon)\,\mathrm{norm}(v_{LFIN} - o) & \text{if } r \ge r_0 + \varepsilon \\ o + (r_0 - \varepsilon)\,\mathrm{norm}(v_{LFIN} - o) & \text{if } r \le r_0 - \varepsilon \\ v_{LFIN} & \text{if } r_0 - \varepsilon < r < r_0 + \varepsilon \end{cases} \qquad (3)$$

where $o = (v_{LWRA} + v_{LWRB})/2$, $r = \|o - v_{LFIN}\|$, $\mathrm{norm}(\cdot)$ denotes the normalized vector, $r_0$ is measured before the experiment, and $\varepsilon = 3$ mm.

Equation (3) is just one example of the treatment of missing data for one marker; the remaining points of the head are treated similarly. The resulting finger trajectory is shown only along the Y axis (see Figure 6), because the other two axes behave similarly; the skipped parts between the vertical blue lines represent the missing data, and the green curve represents the filled-in approximation of the data.
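A minimal sketch of these missing-data rules, assuming numpy 3-vectors for the marker positions; fill_lbhd and constrain_lfin are hypothetical helper names for equations (1)-(2) and (3) respectively.

```python
import numpy as np

def fill_lbhd(v_rfhd, v_rbhd, v_lfhd):
    """Rebuild an occluded LBHD head marker by symmetry, cf. equations (1)-(2)."""
    v_hd = (v_rbhd + v_lfhd) / 2.0      # head central point, eq. (1)
    return 2.0 * v_hd - v_rfhd          # mirror RFHD through the centre, eq. (2)

def constrain_lfin(v_lfin, v_lwra, v_lwrb, r0, eps=3.0):
    """Keep LFIN within eps of radius r0 around the wrist midpoint, cf. eq. (3)."""
    o = (v_lwra + v_lwrb) / 2.0         # wrist midpoint "o"
    d = v_lfin - o
    r = np.linalg.norm(d)
    u = d / r                           # unit direction from wrist to finger
    if r >= r0 + eps:
        return o + (r0 + eps) * u
    if r <= r0 - eps:
        return o + (r0 - eps) * u
    return v_lfin
```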


Figure 6 Filled-in data representing the coordinate of the marker LBHD along the Y-axis.


Figure 7 Skeleton model in Vicon, with orientation of each rigid body.

4 Generation of the arm motion

To generate the arm motions, the initial values of the joint angle positions are calculated directly by inverse kinematics from the captured marker positions. Then these values are modified to adapt to the constraints of our robot by joint angle filtering. This process keeps the key poses and their timing as much as possible.

4.1 Inverse Kinematics from Captured Markers

From the measured position of each part of the human body, we calculate the joint angles of the robot using inverse kinematics. The inverse kinematics we use differs somewhat from the generic formulation, because we have plentiful information about positions, acquired as marker positions; the joint angles can almost directly be calculated from the position correlations among connected markers. Let us first define the body frames common to both the human and the humanoid structures from the captured positions of the markers. The frame $F_i$ attached to the rigid body $i$ is defined using a coordinate frame matrix (see Figure 7)

$$\xi_x = \frac{v_{LSHO} - v_{RSHO}}{\|v_{LSHO} - v_{RSHO}\|} \qquad (4)$$

$$\xi_z = \frac{\xi_x \times (v_{T10} - v_{STRN})}{\|v_{T10} - v_{STRN}\|} \qquad (5)$$

$$\xi_y = \xi_z \times \xi_x \qquad (6)$$

$$F_0 = (\xi_x, \xi_y, \xi_z) \qquad (7)$$

where $v_n$ is the position vector of marker $n$, and $F_i$ is the coordinate frame matrix of joint $i$ counted from the arm origin 0.

The following calculation process is an iteration in which the angles related to the current frame are calculated and the frame is rotated by the acquired angles. In other words, the joint angles are calculated by forward kinematics of the frames:

$$v_0 = F_0^{-1}(v_{LELB} - v_{LSHO}) \qquad (8)$$

$$\theta_0 = \arctan2(-v_{0z}, -v_{0y}) \qquad (9)$$

$$F_1 = R(F_{0x}, \theta_0)\,F_0 \qquad (10)$$

$$\theta_1 = \arctan\!\left(\frac{v_{0x}}{|v_0|}\right) \qquad (11)$$

$$F_2 = R(F_{1x}, \theta_1)\,F_1 \qquad (12)$$

$$v_2 = F_2^{-1}\left(\frac{v_{LWRA} + v_{LWRB}}{2} - v_{LELB}\right) \qquad (13)$$

$$\theta_2 = \arctan2(-v_{2x}, -v_{2z}) \qquad (14)$$

$$F_3 = R(F_{2y}, -\theta_2)\,F_2 \qquad (15)$$

$$\theta_3 = -\arctan\!\left(-\frac{v_{2y}}{|v_2|}\right) \qquad (16)$$

$$F_4 = R(F_{3x}, \theta_3)\,F_3 \qquad (17)$$

$$v_4 = v_{LWRA} - v_{LWRB} \qquad (18)$$

$$\theta_4 = \arctan2(-v_4 \cdot F_{4z}, -v_4 \cdot F_{4x}) \qquad (19)$$

$$F_5 = R(F_{4y}, -\theta_4)\,F_4 \qquad (20)$$

$$v_5 = F_5^{-1}\left(v_{LFIN} - \frac{v_{LWRA} + v_{LWRB}}{2}\right) \qquad (21)$$

$$\theta_5 = \arctan2(-v_{5z}, -v_{5y}) \qquad (22)$$

$$\theta_6 = \arctan\!\left(\frac{v_{5x}}{|v_5|}\right) \qquad (23)$$

where $R(a, \theta)$ denotes the rotation by angle $\theta$ about axis $a$, and $F_{ix}$, $F_{iy}$, $F_{iz}$ are the axes of frame $F_i$.
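One way this chain can be coded, under the assumption that each frame matrix stores its x, y, z axes as columns and that $R(a,\theta)$ is Rodrigues' rotation formula; the function below is an illustrative condensation, not the authors' C++ implementation.

```python
import numpy as np

def R(axis, angle):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def arm_joint_angles(m):
    """m: dict of marker positions LSHO, RSHO, STRN, T10, LELB, LWRA, LWRB, LFIN."""
    # Torso frame F0, equations (4)-(7): columns are the xi_x, xi_y, xi_z axes.
    xi_x = m["LSHO"] - m["RSHO"]; xi_x /= np.linalg.norm(xi_x)
    xi_z = np.cross(xi_x, m["T10"] - m["STRN"]) / np.linalg.norm(m["T10"] - m["STRN"])
    xi_y = np.cross(xi_z, xi_x)
    F0 = np.column_stack((xi_x, xi_y, xi_z))

    # Shoulder, equations (8)-(12)
    v0 = np.linalg.inv(F0) @ (m["LELB"] - m["LSHO"])
    th0 = np.arctan2(-v0[2], -v0[1])
    F1 = R(F0[:, 0], th0) @ F0
    th1 = np.arctan(v0[0] / np.linalg.norm(v0))
    F2 = R(F1[:, 0], th1) @ F1

    # Elbow, equations (13)-(17): forearm vector from elbow to wrist midpoint.
    wrist_mid = (m["LWRA"] + m["LWRB"]) / 2.0
    v2 = np.linalg.inv(F2) @ (wrist_mid - m["LELB"])
    th2 = np.arctan2(-v2[0], -v2[2])
    F3 = R(F2[:, 1], -th2) @ F2
    th3 = -np.arctan(-v2[1] / np.linalg.norm(v2))
    F4 = R(F3[:, 0], th3) @ F3

    # Wrist, equations (18)-(23)
    v4 = m["LWRA"] - m["LWRB"]
    th4 = np.arctan2(-(v4 @ F4[:, 2]), -(v4 @ F4[:, 0]))
    F5 = R(F4[:, 1], -th4) @ F4
    v5 = np.linalg.inv(F5) @ (m["LFIN"] - wrist_mid)
    th5 = np.arctan2(-v5[2], -v5[1])
    th6 = np.arctan(v5[0] / np.linalg.norm(v5))
    return np.array([th0, th1, th2, th3, th4, th5, th6])
```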

In our approach, we observed the need to first filter the captured trajectories because of the experimental noise. The filter algorithm we have implemented also limits the joint velocities within a specified range. Moreover, it controls the joint accelerations to smooth the motion and give it the desired "natural" appearance.


4.2 Motion filtering

The quality of the extracted data depends largely on the level of noise in the data. Many processes in the extraction refer to velocities or accelerations derived from the original positions, and the noise level is exaggerated in the derived data, which affects the validity of the results. Therefore, the noise must be eliminated by a smoothing filter. We employ the Direct Form II Transposed filter, in which the position of each frame is determined from the original positions around it as follows:

$$a(1)\,y(i) = b(1)\,x(i) + b(2)\,x(i-1) + \ldots + b(n_b+1)\,x(i-n_b) - a(2)\,y(i-1) - \ldots - a(n_a+1)\,y(i-n_a) \qquad (24)$$

where $x(i)$ is the original position of frame $i$, $y(i)$ is the new position of frame $i$, $n_a$ and $n_b$ are the filter orders, and we take the coefficients $a(1) = 1$ and $b(i) = 1/4$.
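As a sketch, SciPy's lfilter implements exactly this Direct Form II Transposed structure; with $a = [1]$ and four $b$ coefficients of $1/4$, equation (24) reduces to a four-sample moving average. The signal below is placeholder data, not the paper's recordings.

```python
import numpy as np
from scipy.signal import lfilter

b = np.full(4, 0.25)            # b(1..4) = 1/4
a = np.array([1.0])             # a(1) = 1, no feedback terms
t = np.linspace(0.0, 2.5, 250)  # 2.5 s at 100 Hz, as in the capture setup
x = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)  # noisy position trace
y = lfilter(b, a, x)            # smoothed trajectory, eq. (24)
```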

4.3 Constraints of off-line case

In a joint angle sequence acquired by the above process, angles or angular velocities may take values that violate the constraints of the robot. In addition to the problem of each joint trajectory, the motion may contain poses in the neighborhood of singular points. At poses near a singular point, the valid moving patterns are limited and the movement may become locked. Such motion data cannot be performed by the actual robot. This problem can be treated as a problem of angular velocity, because a non-continuous velocity curve around a locked frame appears as a high-gradient curve in a discrete system. Therefore, first of all, the values must be restricted within the possible range by applying filters. We employ the filter proposed by Pollard et al. (2002), which limits the angular velocity within a specified range and can also control the angular acceleration. The basic idea of this method is that a new angle trajectory is created by following the original trajectory under the limit of angular velocity, in a process similar to PD control. First, a new trajectory is created by the following equations:

$$\dot{\theta}_i = \theta_i - \theta_{i-1} \qquad (25)$$

$$\ddot{\theta}_{F,i+1} = 2\sqrt{K_s}\,(\dot{\theta}_i - \dot{\theta}_{F,i-1}) + K_s(\theta_i - \theta_{F,i}) \qquad (26)$$

$$\dot{\theta}_{F,i+1} = \max(\dot{\theta}_L, \min(\dot{\theta}_U, \dot{\theta}_{F,i} + \ddot{\theta}_{F,i+1})) \qquad (27)$$

$$\theta_{F,i+1} = \theta_{F,i} + \dot{\theta}_{F,i+1} \qquad (28)$$

where $\theta_i$ is the original joint angle (see curve (a) in Figure 8), $\theta_{F,i}$ is the new joint angle, $i$ is the frame index, $\dot{\theta}_L$ and $\dot{\theta}_U$ are the lower and upper velocity limits, and $K_s$ is the parameter which controls the stiffness of the motion. The result is a trajectory similar to the original one within the limits. However, it lags behind the original one throughout the motion, because the trajectory cannot follow the original motion in the areas over the limit (see curve (b) in Figure 8). Then another trajectory is created by the inverse process of the above equations, from end to start, as follows:

$$\dot{\theta}_i = \theta_i - \theta_{i-1} \qquad (29)$$


Figure 8 Joint angle trajectory examples, each panel showing the (a) original, (b) forward, (c) backward and (d) averaged curves. Panels (a, b, c) show joint angle trajectories of the iCub arm (shoulder pitch, shoulder roll and wrist yaw, respectively), and panel (d) shows a trajectory locked around a singular part, where the method replaces the singular point with a smooth curve.

$$\ddot{\theta}_{B,i-1} = 2\sqrt{K_s}\,(\dot{\theta}_i - \dot{\theta}_{B,i}) + K_s(\theta_i - \theta_{B,i}) \qquad (30)$$

$$\dot{\theta}_{B,i-1} = \max(\dot{\theta}_L, \min(\dot{\theta}_U, \dot{\theta}_{B,i} + \ddot{\theta}_{B,i-1})) \qquad (31)$$

$$\theta_{B,i-1} = \theta_{B,i} + \dot{\theta}_{B,i-1} \qquad (32)$$

The result is a trajectory that leads the original one (see curve (c) in Figure 8). Finally, both trajectories are averaged to obtain a trajectory whose shape overlaps the original one:

$$\theta_{V,i} = \frac{\theta_{F,i} + \theta_{B,i}}{2} \qquad (33)$$

In this final trajectory, the rhythm of the curve approximately matches the original one and, at the same time, the values remain within the limits. The curves (d) in Figure 8 show the final trajectories of the shoulder pitch, shoulder roll, shoulder yaw and elbow, respectively, with the physical limits of each joint taken into account. The locked parts around singular points in the initial motion are also changed into a smooth curve: Figure 8 (d) shows an initial motion that would lock on the actual robot, and the motion after the filter, which can be performed on the actual robot. Across numerous examples, simulation of the motion resulting from this system has confirmed that the model is able to imitate an observed human motion with a similar "natural" appearance. The effectiveness of the method was first tested on the left arm of the iCub humanoid robot, and then extended to whole-body motion using the iCub humanoid robot and a simulated iCub with the physical parameters of the real robot.
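A compact sketch of the whole forward/backward/average procedure of equations (25)-(33), with velocities expressed in per-frame units (the time step is absorbed into the limits); the parameter values shown are placeholders, not the authors' settings.

```python
import numpy as np

def forward_limit(theta, vel_lo, vel_hi, ks):
    """Follow theta under velocity limits, cf. equations (25)-(28)."""
    out = np.empty_like(theta)
    out[0], v = theta[0], 0.0
    for i in range(1, len(theta)):
        dtheta = theta[i] - theta[i - 1]                                # eq. (25)
        acc = 2.0 * np.sqrt(ks) * (dtheta - v) + ks * (theta[i] - out[i - 1])  # eq. (26)
        v = np.clip(v + acc, vel_lo, vel_hi)                            # eq. (27)
        out[i] = out[i - 1] + v                                         # eq. (28)
    return out

def limit_trajectory(theta, vel_lo, vel_hi, ks=0.1):
    """Forward pass (lags), backward pass (leads), then their average, eq. (33)."""
    forward = forward_limit(theta, vel_lo, vel_hi, ks)
    # Reversing time negates velocities, so the limits are negated and swapped.
    backward = forward_limit(theta[::-1], -vel_hi, -vel_lo, ks)[::-1]
    return (forward + backward) / 2.0
```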


Figure 9 Effects of the scaling function $f(\theta)$: the horizontal axis represents the original values; the vertical axis represents the scaled values.

4.4 Constraints of real-time case

Unlike the off-line case, the constraints applied to each joint angle are based on the following proposed function, which scales down the values of the joint trajectories:

$$f(\theta) = u - \frac{(u - \ell)\left(1 - \cos\left(\pi\,\frac{U - \theta}{U - L}\right)\right)}{2} \qquad (34)$$

where $u$ and $\ell$ are the upper and lower bounds of the joint angle range of the humanoid robot, and $U$ and $L$ are the original upper and lower limits of the joint angle $\theta$.

Figure 9 illustrates the scaling function $f(\theta)$. The horizontal axis represents the original values, in this case the estimated joint angle ranges of the human being; the vertical axis represents the scaled values, which take the joint angle limits of the robot into consideration.

Looking at each joint angle $i$ ($i = 1 \ldots N$) independently, the respective robot ranges are denoted by $[\theta^i_{Min}, \theta^i_{Max}]$, and the matching human joint angle ranges are denoted by $[\beta^i_{Min}, \beta^i_{Max}]$. Then the limit of each joint angle is determined as follows:

$$\theta'_i = \theta^i_{Max} - \frac{(\theta^i_{Max} - \theta^i_{Min})\left(1 - \cos\left(\pi\,\frac{\beta^i_{Max} - \theta_i}{\beta^i_{Max} - \beta^i_{Min}}\right)\right)}{2} \qquad (35)$$

where $\theta_i$ is the original angle position of frame $i$ and $\theta'_i$ is the new angle position of frame $i$.
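Equations (34)-(35) can be sketched as a single scaling routine: a cosine easing that maps the human range onto the robot range and saturates smoothly near the bounds. The numerical ranges in the usage example are invented for illustration.

```python
import numpy as np

def scale_joint(theta, beta_min, beta_max, theta_min, theta_max):
    """Map a human joint sample from [beta_min, beta_max] into the robot range
    [theta_min, theta_max] with cosine easing, cf. equations (34)-(35)."""
    phase = np.pi * (beta_max - theta) / (beta_max - beta_min)
    return theta_max - (theta_max - theta_min) * (1.0 - np.cos(phase)) / 2.0

# Example: a human shoulder sample of 120 deg, observed in a [-10, 150] deg human
# range, squeezed into a [-30, 90] deg robot joint (about 79.9 deg).
print(scale_joint(120.0, -10.0, 150.0, -30.0, 90.0))
```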

Figure 10 Snapshots of the conducted motion. Actor #01, Trial #01, sampled every one hundred and fifty frames (1.5 s).

The above process allowed the robot to realize the input trajectories, but we could still observe joint velocities exceeding their limits. In order to ensure that the joint velocities remain within the motor limits, they were also scaled. Moreover, the scaling filter uses the values of the joint accelerations to smooth the motion and give it the desired "natural" appearance. This approach corresponds to the forward scaling calculations proposed by Pollard et al. (2002):

$$\dot{\theta}_i = \theta_i - \theta_{i-1} \qquad (36)$$

$$\ddot{\theta}'_{i+1} = 2\sqrt{K_s}\,(\dot{\theta}_i - \dot{\theta}'_{i-1}) + K_s(\theta_i - \theta'_i) \qquad (37)$$

$$\dot{\theta}'_{i+1} = \max(\dot{\theta}_L, \min(\dot{\theta}_U, \dot{\theta}'_i + \ddot{\theta}'_{i+1})) \qquad (38)$$

$$\theta'_{i+1} = \theta'_i + \dot{\theta}'_{i+1} \qquad (39)$$

where $\theta_i$ is the original joint angle, $\theta'_i$ is the new joint angle, $\dot{\theta}_L$ and $\dot{\theta}_U$ are the lower and upper velocity limits, and $K_s$ is a gain which controls the stiffness of the motion.

5 Results and discussion

The goal of our experiments was to test the feasibility of the proposed method, which mixes the filtering and inverse kinematics approaches described in Section 4: the inverse kinematics approach produces a motion similar to the human one without any consideration of the robot's limits, while the filtering methods complement the imitation with smooth motion and respect the joint angle and velocity limits. Figure 8 shows the smoothing effect on the joint trajectories after applying the method, a result of the process that computes the forward and backward trajectories. Furthermore, the forward/backward process yields a smooth bending of the curve when the velocity decreases/increases away from the singular points. To demonstrate the method, we found no better test than a waving motion: Figure 10 shows the iCub humanoid robot imitating a waving motion of the actor's right hand with its left hand. Further experiments on the robot imitated several other human arm motions, which the robot recognized and imitated successfully. Figure 11 shows the experimental results with the actual humanoid robot James; we conducted different arm movements to test the robustness of our approach, and the results were comparable to the real-time response. Figure 12 shows the results of the experiments inside the Vicon room. In all of these experiments, the Vicon system observes the demonstrator's gestures through the markers, and a simulated robot recognizes and replicates those gestures successfully. The motion experiment consisted in "writing the number eight", which shows the evolution of the method throughout the trial.

Figure 11 Snapshot sequences of the physical robot imitating, in real time, a human motion captured through the markers.

Figure 12 Snapshot sequences of the simulated iCub imitating, in real time, a human motion captured through the markers.


6 Conclusion and future work

In this paper, we have developed a software environment enabling off-line and real-time imitation of a captured human motion by a humanoid robot. The imitation focused on kinematic conversion, respecting the joint angle position and velocity limits of the robot. We have presented an off-line method to generate a human-like arm motion as a first step toward imitating the whole-body motion of a human, while the application of the real-time imitation method is not limited to arm motion: it can be extended to the whole upper body. As it is an IK-based approach, we cannot apply it to dynamic motions. Another limit to dynamic imitation is the real-time part of our experiments: most locomotion processes rely on knowledge about the future situation. With such considerations, our approach could be very effective coupled with the use of primitives: real-time motion imitation may be used until a primitive is identified. Our future developments will focus on this aspect of real-time locomotion imitation, coupling known and unknown gesture situations.

References

Zuher, F. and Romero, R. (2012) 'Recognition of Human Motions for Imitation and Control of a Humanoid Robot', 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium (SBR-LARS), pp.190–195.

Koenemann, J., Burget, F. and Bennewitz, M. (2014) 'Real-time imitation of human whole-body motions by humanoids', 2014 IEEE International Conference on Robotics and Automation (ICRA), pp.2806–2812.

Filiatrault, S. and Cretu, A.-M. (2014) 'Human arm motion imitation by a humanoid robot', 2014 IEEE International Symposium on Robotic and Sensors Environments (ROSE), pp.31–36.

Okamoto, T., Shiratori, T., Glisson, M., Yamane, K., Kudoh, S. and Ikeuchi, K. (2014) 'Extraction of person-specific motion style based on a task model and imitation by humanoid robot', 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), pp.1347–1354.

Pollard, N.S., Hodgins, J.K., Riley, M.J. and Atkeson, C.G. (2002) 'Adapting human motion for the control of a humanoid robot', IEEE International Conference on Robotics and Automation, pp.1390–1397.

Safonova, A., Pollard, N. and Hodgins, J.K. (2003) 'Optimizing Human Motion for the Control of a Humanoid Robot', 2nd International Symposium on Adaptive Motion of Animals and Machines.

Jamone, L., Metta, G., Nori, F. and Sandini, G. (2006) 'James: A Humanoid Robot Acting over an Unstructured World', 6th IEEE-RAS International Conference on Humanoid Robots.

Naksuk, N., Yongguo Mei and Lee, C.S.G. (2004) 'Humanoid trajectory generation: an iterative approach based on movement and angular momentum criteria', 4th IEEE/RAS International Conference on Humanoid Robots, Vol. 2, pp.576–591.

Naksuk, N., Lee, C.S.G. and Rietdyk, S. (2005) 'Whole-body human-to-humanoid motion transfer', 5th IEEE-RAS International Conference on Humanoid Robots, pp.104–109.

Dasgupta, A. and Nakamura, Y. (1999) 'Making feasible walking motion of humanoid robots from human motion capture data', Proc. IEEE International Conference on Robotics and Automation, Vol. 2, pp.1044–1049.

Nakaoka, S., Nakazawa, A., Yokoi, K. and Ikeuchi, K. (2003) 'Leg Motion Primitives for a Humanoid Robot to Imitate Human Dances', 6th International Conference on Humans and Computers (HC2003).

Nakaoka, S., Nakazawa, A., Yokoi, K., Hirukawa, H. and Ikeuchi, K. (2003) 'Generating whole body motions for a biped humanoid robot from captured human dances', IEEE International Conference on Robotics and Automation (ICRA) 2003, Vol. 3, pp.3905–3910.

Nakaoka, S., Nakazawa, A., Yokoi, K. and Ikeuchi, K. (2004) 'Leg motion primitives for a dancing humanoid robot', Proceedings of the IEEE International Conference on Robotics and Automation, pp.610–615.

Nakazawa, A., Nakaoka, S., Ikeuchi, K. and Yokoi, K. (2002) 'Imitating human dance motions through motion structure analysis', IEEE/RSJ International Conference on Intelligent Robots and Systems 2002, Vol. 3, pp.2539–2544.

Nakaoka, S., Nakazawa, A., Kanehiro, F., Kaneko, K., Morisawa, M. and Ikeuchi, K. (2005) 'Task model of lower body motion for a biped humanoid robot to imitate human dances', Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pp.3157–3162.

Ruchanurucks, M., Nakaoka, S., Kudoh, S. and Ikeuchi, K. (2006) 'Humanoid robot motion generation with sequential physical constraints', Proceedings 2006 IEEE International Conference on Robotics and Automation (ICRA), pp.2649–2654.

Suleiman, W., Yoshida, E., Kanehiro, F., Laumond, J.-P. and Monin, A. (2008) 'On Human Motion Imitation by Humanoid Robot', 2008 IEEE International Conference on Robotics and Automation.

Nakaoka, S. (2003) 'Generating Whole Body Motions For A Biped Humanoid Robot From Captured Human Dances', Master's thesis, University of Tokyo.

Molet, T., Boulic, R. and Thalmann, D. (1996) 'A Real Time Anatomical Converter For Human Motion Capture', Proc. 7th Eurographics Int. Workshop on Animation and Simulation, Springer-Verlag.

Bandera, J.P., Molina-Tanco, L., Marfil, R. and Sandoval, F. (2004) 'A model-based humanoid perception system for real-time human motion imitation', Proc. IEEE Conference on Robotics, Automation and Mechatronics, Vol. 1, pp.324–329.

Argall, B.D., Sauser, E.L. and Billard, A.G. (2010) 'Tactile Guidance for Policy Refinement and Reuse', IEEE 9th International Conference on Development and Learning, Vol. 30, No. 3, pp.256–264.

Lopes, M., Beira, R., Praça, M. and Santos-Victor, J. (2004) 'An anthropomorphic robot torso for imitation: design and experiments', IEEE/RSJ International Conference on Intelligent Robots and Systems.

Tikhanoff, V., Fitzpatrick, P., Metta, G., Natale, L., Nori, F. and Cangelosi, A. (2008) 'An Open-Source Simulator for Cognitive Robotics Research: The Prototype of the iCub Humanoid Robot Simulator', Workshop on Performance Metrics for Intelligent Systems, National Institute of Standards and Technology.

Tsagarakis, N.G., Metta, G., Sandini, G., Vernon, D., Beira, R., Becchi, F., Righetti, L., Santos-Victor, J., Ijspeert, A.J., Carrozza, M.Ch. and Caldwell, D.G. (2007) 'iCub: the design and realization of an open humanoid platform for cognitive and neuroscience research', Advanced Robotics, Vol. 21, No. 10, pp.1151–1175.

Metta, G., Fitzpatrick, P. and Natale, L. (2006) 'YARP: Yet Another Robot Platform', International Journal of Advanced Robotic Systems, Vol. 3, No. 1, pp.043–048.

Metta, G., Natale, L., Nori, F., Sandini, G., Vernon, D., Fadiga, L., von Hofsten, C., Rosander, K., Lopes, M., Santos-Victor, J., Bernardino, A. and Montesano, L. (2010) 'The iCub humanoid robot: An open-systems platform for research in cognitive development', Neural Networks, Vol. 23, Nos. 8–9, pp.1125–1134.

iCub [online] http://en.wikipedia.org/wiki/ICub (Accessed 02 September 2012).

RobotCub [online] http://www.robotcub.org/ (Accessed 02 September 2012).

iCub Website [online] http://eris.liralab.it/wiki/Main_Page (Accessed 02 September 2012).

Vicon Website [online] http://www.vicon.com (Accessed 02 September 2012).