

Online Dynamical Retouch of Motion Patterns Towards Animatronic Humanoid Robots

Tomomichi Sugihara, Wataru Takano, Katsu Yamane, Kou Yamamoto and Yoshihiko Nakamura
Department of Mechano-Informatics, University of Tokyo

7–3–1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
{sugihara, takano, yamane, kou}@ynl.t.u-tokyo.ac.jp

Abstract— This paper presents an online dynamical retouch method of referential motion patterns for humanoid robots. It converts transition sequences of desired posture and supporting state, which are rough in terms of both time and space, into a set of smooth piecewise functions applicable to actual robots, with dynamical constraints taken into account. It is based on the Boundary Condition Relaxation method and has the advantage of working completely online, although it has a latency of hundreds of milliseconds due to a batch operation. A virtual fight between a real human and a robot is demonstrated as an application. It forms a completely autonomous humanoid system consisting of a decision-making mechanism and a real robot controller, featuring the Mimetic Communication model.

Index Terms— Humanoid robot, Animatronics, Online motion retouch, Mimetic communication

I. INTRODUCTION

“Animatronics” is one of the key terms for the deployment of humanoid robotics; it originally refers to a technology for realizing lively actions of robots, particularly in the field of movies and entertainment. Having passed through a preliminary study phase for each basic motion such as standing, walking, crawling, rising up and so on, the field is now focusing on how to synthesize animated behaviors of robots from a variety of motion primitives. Aiming at imitating not only the apparent animations but also the internal mechanisms of humans, this problem is approached from both sides: information processing and motor control. The main interests of the former are in the management of humanoid actions, namely, their description, acquisition, memorization and invocation, while those of the latter are in the reproduction of the invoked actions on the real robot body. Human-like natural behavior of a robot results from those technologies, which are still challenging even as individual issues; since humanoids are large-scale nonlinear articulated-body systems without mechanical connection to the inertial frame, the above problems do not have closed solutions.

Recent growth of computational theories such as brain-like information processing frameworks [1] [2] [3] has been encouraging for handling large-scale humanoid motions. Motion capture systems have also become tools to measure and analyze complicated human behavior [4]. However, their outputs are so rough in terms of both time and space that they should be retouched and interpolated with severe dynamical

This work is partially supported by Category “S” # 15100002 of Grant-in-Aid for Scientific Research, Japan Society for the Promotion of Science, and by the Project for the Practical Application of Next-Generation Robots in The 21st Century Robot Challenge Program, New Energy and Industrial Technology Development Organization.

constraints taken into account. Although several techniques have been proposed to modify a physically inconsistent trajectory so that it can be applied to real robots [5] [6] [7] [8] [9], they all work offline, and in addition, some of them do not deal with the underactuation of humanoid systems.

This paper shows a method to convert commanded motion patterns into dynamically feasible referential trajectories. It first segments a sequence of the desired posture and supporting state, which the command module outputs with a period of tens of milliseconds, so that the desired supporting state is invariant in each segment. It then generates a set of piecewise continuous functions which smoothly connect to the current robot state. Though it has a latency of a few hundred milliseconds due to this batch operation, it works completely online, using the Boundary Condition Relaxation method [10]. Since the trajectories are output in analytical forms, one can arbitrarily choose the length of the quantized control period.

A virtual fight between a real human and a robot is presented as an application. As our first attempt, it forms a complete humanoid system consisting of a human behavior observer, an autonomous decision-making mechanism, a physically consistent motion planner and a robot working in the real environment. Unlike conventional logic-based systems for interaction with humans [11][12][13], quick responsiveness of the robot from recognition to action is required, and thus the efficacy of the proposed method is demonstrated.

II. BOUNDARY CONDITION RELAXATION METHOD [10]

Path planning for manipulators is basically a problem of finding a trajectory from the initial state to the desired one. The difficulty of the same problem for legged robots mainly lies in the fact that the corresponding pattern of external force must satisfy a severe dynamical constraint due to the lack of fixation to the inertial frame. The Boundary Condition Relaxation method [10], summarized in this section, handles this issue by accepting a certain error of the state at the goal and discontinuity of the external force at each end of the segments. Its advantage over other conventional methods [14] [6] [15] [16] [17] is that it enables completely online legged-motion planning within a single desired step.

Suppose that the center of gravity (COG) dominates the dynamics of the legged system so strongly that the effect of the moment around the COG is negligible; the equations of motion of such an approximate model as Fig.1(B) are as follows.

m\ddot{p}_G = f + mg   (1)

(p_G - p_Z) \times f = 0   (2)

Fig. 1. Mass distributed model versus Mass concentrated model

where m is the total mass, p_G = [x_G\ y_G\ z_G]^T is the COG, g = [0\ 0\ -g]^T is the acceleration of gravity, f = [f_x\ f_y\ f_z]^T is the total linear force acting on the robot, and p_Z = [x_Z\ y_Z\ z_Z]^T is the ZMP [18]. Note that z_Z is the ground level and is known. From Eqs.(1)(2), we get:

\ddot{x}_G = \omega^2 (x_G - x_Z)   (3)

\ddot{y}_G = \omega^2 (y_G - y_Z)   (4)

\ddot{z}_G = f_z / m - g   (5)

where

\omega^2 = \frac{\ddot{z}_G + g}{z_G - z_Z}   (6)

In this paper, we do not deal with motions that include an aerial phase, such as jumping and running. Then, the following constraints are imposed on the external force.

p_Z \in S(t)   (7)

f_z \geq 0   (8)

where S(t) is a certain convex region which the contact points between the robot and the environment define on the horizontal plane z = z_Z (typically, the convex hull of the points if all the contact points are on that plane).

From the symmetry of (3) and (4), we consider only the motion along the x-axis hereinafter. The vertical motion (5) is independent of the horizontal motion. Now, we assume that its effect on the horizontal motion is small enough to be ignored, namely, that \omega is constant. The analytical solutions are obtained when x_Z(t) is given as a certain function of t.

Let us design the referential ZMP by the following function.

x_Z(t) = x_Z(T) + (x_Z(0) - x_Z(T))\, e^{-\rho\omega t}   (9)

where \rho > 0,\ \rho \neq 1   (10)

The choice of this form rather easily creates a dynamic kicking motion at x_Z(0) and a pivoting motion around x_Z(T). \rho determines the magnitude of the kick; the larger \rho, the faster the ZMP travels. Then, the general solution of (3) is as follows.

x_G(t) - x_Z(t) = A e^{\omega t} + B e^{-\omega t} + \frac{\rho^2}{1-\rho^2}\,(x_Z(0) - x_Z(T))\, e^{-\rho\omega t}   (11)

where A and B are unknown coefficients. Differentiating the above, we get:

\dot{x}_G(t) - \dot{x}_Z(t) = \omega\left(A e^{\omega t} - B e^{-\omega t}\right) - \frac{\rho^3\omega}{1-\rho^2}\,(x_Z(0) - x_Z(T))\, e^{-\rho\omega t}   (12)
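One plausible reading of the referential ZMP here is a first-order exponential profile, x_Z(t) = x_Z(T) + (x_Z(0) − x_Z(T))e^{−ρωt}; under that assumption, the corresponding general solution of ẍ_G = ω²(x_G − x_Z) can be checked term by term with finite differences. A sketch with illustrative constants:

```python
import math

def x_zmp(t, xz0, xzT, rho, omega):
    """Assumed exponential referential ZMP profile (an assumption,
    reconstructed from context rather than taken verbatim)."""
    return xzT + (xz0 - xzT) * math.exp(-rho * omega * t)

def x_cog(t, A, B, xz0, xzT, rho, omega):
    """Candidate general solution of x'' = omega^2 (x - x_zmp(t)):
    homogeneous part A e^{wt} + B e^{-wt} plus a particular term."""
    d = xz0 - xzT
    return (A * math.exp(omega * t)
            + B * math.exp(-omega * t)
            + xzT
            + d / (1.0 - rho**2) * math.exp(-rho * omega * t))

# Finite-difference check that the ODE holds at an arbitrary time.
A, B, xz0, xzT, rho, omega, t, h = 0.01, -0.02, 0.0, 0.05, 0.5, 5.0, 0.2, 1e-4
xdd = (x_cog(t + h, A, B, xz0, xzT, rho, omega)
       - 2 * x_cog(t, A, B, xz0, xzT, rho, omega)
       + x_cog(t - h, A, B, xz0, xzT, rho, omega)) / h**2
residual = xdd - omega**2 * (x_cog(t, A, B, xz0, xzT, rho, omega)
                             - x_zmp(t, xz0, xzT, rho, omega))
assert abs(residual) < 1e-4   # the candidate solution satisfies the ODE
```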

From these equations, the boundary conditions at t = 0 and t = T are gathered into the following matrix form.

\begin{bmatrix} x_G(0) \\ \dot{x}_G(0) \\ x_G(T) \\ \dot{x}_G(T) \end{bmatrix} = C \begin{bmatrix} x_Z(0) \\ x_Z(T) \\ A \\ B \end{bmatrix}   (13)

where, with \beta = 1/(1-\rho^2) and E = e^{-\rho\omega T},

C = \begin{bmatrix} \beta & 1-\beta & 1 & 1 \\ -\rho\omega\beta & \rho\omega\beta & \omega & -\omega \\ \beta E & 1-\beta E & e^{\omega T} & e^{-\omega T} \\ -\rho\omega\beta E & \rho\omega\beta E & \omega e^{\omega T} & -\omega e^{-\omega T} \end{bmatrix}   (14)

Since C is a regular matrix, a set of boundary conditions x_G(0), \dot{x}_G(0), x_G(T) and \dot{x}_G(T) uniquely determines the trajectories of both the ZMP and the COG, while (7) is not necessarily satisfied.

Let us reconsider the meaning of each parameter in (13). x_G(0) and \dot{x}_G(0) must be satisfied for the continuity of motion, while the desired states x_G(T) and \dot{x}_G(T) can accept a certain error. x_Z(0) and x_Z(T) are required to satisfy (7), while A and B are coefficients without any constraints. Then, we replace (13) with the following quadratic program.

\|W(q - q^d)\|^2 \rightarrow \min. subject to (13) with x_G(0), \dot{x}_G(0) fixed   (*)

where

q = [x_Z(0)\ x_Z(T)\ x_G(T)\ \dot{x}_G(T)]^T

q^d = [x_Z^d(0)\ x_Z^d(T)\ x_G^d(T)\ \dot{x}_G^d(T)]^T : the desired q

W = \mathrm{diag}\{w_1, w_2, w_3, w_4\}

Since the initial state is fixed, (13) makes q an affine function of \xi = [x_G(T)\ \dot{x}_G(T)]^T:

q = H\xi + h, \quad H = \begin{bmatrix} c_{13} & c_{14} \\ c_{23} & c_{24} \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad h = \begin{bmatrix} c_{11}x_G(0) + c_{12}\dot{x}_G(0) \\ c_{21}x_G(0) + c_{22}\dot{x}_G(0) \\ 0 \\ 0 \end{bmatrix}

And c_{ij}\ (i, j = 1, \ldots, 4) are the components of C^{-1}:

C^{-1} = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \\ c_{41} & c_{42} & c_{43} & c_{44} \end{bmatrix}   (15)

Solving the problem (*), we get:

\xi = (H^T W^2 H)^{-1} H^T W^2 (q^d - h)   (16)
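The relaxation step amounts to a weighted least-squares fit of a few boundary values to their desired values while the initial state is held exactly. A hedged numpy sketch of such a fit (the matrix, offset and weights below are invented for illustration, not taken from the paper):

```python
import numpy as np

def relax_terminal_state(H, h, q_d, w):
    """Weighted least squares: minimize || W (H xi + h - q_d) ||^2
    over the relaxed terminal state xi, with W = diag(w)."""
    W2 = np.diag(np.asarray(w) ** 2)
    G = H.T @ W2 @ H                       # normal-equation matrix
    return np.linalg.solve(G, H.T @ W2 @ (q_d - h))

# Illustrative 4x2 problem: two ZMP placements and two terminal COG
# values are pulled toward desired values with different weights.
H = np.array([[0.3, 0.1],
              [0.5, 0.2],
              [1.0, 0.0],
              [0.0, 1.0]])
h = np.array([0.01, 0.02, 0.0, 0.0])
q_d = np.array([0.0, 0.05, 0.10, 0.0])
w = [10.0, 10.0, 1.0, 1.0]                 # ZMP placement weighted strongly
xi = relax_terminal_state(H, h, q_d, w)

# Optimality check: the weighted residual is orthogonal to range(H).
r = np.diag(np.asarray(w) ** 2) @ (H @ xi + h - q_d)
assert np.allclose(H.T @ r, 0.0, atol=1e-9)
```

Strong weights on the ZMP components keep the ZMP near its safely placed targets, so the error is pushed into the terminal COG state, which is exactly the "relaxation" of the boundary condition.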

A and B are then determined by the following.

\begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} c_{31} & c_{32} & c_{33} & c_{34} \\ c_{41} & c_{42} & c_{43} & c_{44} \end{bmatrix} \begin{bmatrix} x_G(0) \\ \dot{x}_G(0) \\ x_G(T) \\ \dot{x}_G(T) \end{bmatrix}   (17)

Fig. 2. Segmentation of input rough posture sequence

Locating x_Z(0) and x_Z(T) in the supporting region at t = 0 and t = T, respectively, with a sufficient margin, (7) is satisfied, since Eq.(9) is a monotonic function.

III. ONLINE DYNAMICAL MOTION RETOUCH

The Boundary Condition Relaxation method is applied to rapid motion retouch in this section. The necessary conditions for designing referential motion trajectories of humanoid robots are itemized as follows.

1) Keep the joint angles within their motion ranges
2) Avoid self-collision
3) Keep the trajectory apparently continuous and smooth
4) Do not require attracting forces at the contact points with the environment
5) Follow the planned supporting state transition
6) Carry the ZMP onto the sole of the supporting foot before detaching the swing foot from the ground
7) Keep the joint angular velocities within their limits
8) Keep the joint torques within their limits

These requirements, involving dynamics conditions, indicate that not only the desired postural sequence but also a supporting state transition command is needed to describe humanoid motion with respect to the environment.

Responding to the above, suppose that a motion command is given as a transition sequence of the desired posture and supporting state with a period of tens of milliseconds, without taking dynamics into account; namely, constraints 4) and 6) are not necessarily satisfied. For simplicity, however, we assume that it already satisfies 1), 2), 7) and 8) with enough margin, so that the trajectory modification in the following subsection does not violate those constraints. Under these assumptions, one can segment the input sequence according to the transitions of the desired supporting state, as illustrated in Fig.2. Once a changeover of the desired supporting state is recognized, a contact-state-invariant subsequence is clipped, and the following online trajectory modification is applied to each. Note that this in principle causes a latency between the time when the beginning of a segment is given and the time when it is referred to by the real robot, since the method must wait for the end of the segment.
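The segmentation described above can be sketched as a small utility that clips contact-state-invariant subsequences from the incoming (posture, supporting state) stream; the function name and sample data are illustrative assumptions, not the authors' interface:

```python
def segment_by_support(commands):
    """Split a sequence of (posture, support_state) samples into
    maximal runs over which the desired support state is invariant."""
    segments = []
    current = []
    for posture, support in commands:
        if current and support != current[-1][1]:
            segments.append(current)   # support state changed: close segment
            current = []
        current.append((posture, support))
    if current:
        segments.append(current)
    return segments

# Toy stream sampled every 30 ms: 'D' = double support, 'R' = right-foot support.
stream = [("q0", "D"), ("q1", "D"), ("q2", "R"), ("q3", "R"),
          ("q4", "R"), ("q5", "D")]
segs = segment_by_support(stream)
assert [s[0][1] for s in segs] == ["D", "R", "D"]   # one segment per support phase
assert [len(s) for s in segs] == [2, 3, 1]
```

The latency noted above falls out of this structure: a segment can only be handed to the trajectory modifier once its closing support changeover has arrived.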

Suppose a segmented subsequence is given, and let T be its time length. The online dynamical trajectory modification is processed as follows.

I) Create cubic spline curves p_d(t) which connect the given desired via-points of the extremities and the trunk attitude. The actual robot state at the end of a segment does not necessarily match the desired one due to model error, computational error in inverse kinematics and so forth. Then, as shown in Fig.3(A), \Delta p(t) defined by Eq.(19) is added to the original p_d(t) (chained line in the figure) in order to reduce the gap between the desired previous goal, which is the desired initial value of the new segment, and the actual initial value to zero, taking time T. The resultant referential trajectory p(t), concatenated to the actual initial state, is formed as follows.

p(t) = p_d(t) + \Delta p(t)   (18)

\Delta p(t) = (p(0) - p_d(0))\,\frac{T - t}{T}   (19)

II) Compute the referential trajectory of the COG p_G(t) from the current state and the desired goal state after T, applying the Boundary Condition Relaxation method as in Fig.3(B). The ZMP trajectory p_Z(t) is created simultaneously.

III) In the above stage, the time of foot detachment T_D is found from the planned p_Z(t) and the desired supporting region S; T_D is the minimum time at which p_Z(t) comes within S. To keep the ZMP trajectory and the foot placement consistent, the foot to be moved has to keep contact with the ground until T_D. Then, the foot trajectory p(t), originally defined on [0, T], is timescaled to p'(t) on [T_D, T] as follows (Fig.3(C)).

p'(t) = \begin{cases} p(0) & (0 \leq t < T_D) \\ p\!\left(\dfrac{T(t - T_D)}{T - T_D}\right) & (T_D \leq t \leq T) \end{cases}   (20)

As a result, modified trajectories of the COG, the trunk attitude and the extremities are generated. Note that all those trajectories are designed as continuous functions, so that the method is invariant to the quantized control period. By sampling the set of referential values from them and applying whole-body inverse kinematics, the referential joint angle values are obtained.
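Step III can be sketched as a time warp on the analytic foot trajectory: hold the foot until the detachment time T_D found from the planned ZMP, then replay the original trajectory at a compressed timescale. The piecewise form below is an assumed reading of the timescaling; the cubic swing profile is only a stand-in.

```python
def timescale(p, T, TD):
    """Delay foot detachment until TD, then replay the original
    trajectory p (defined on [0, T]) compressed into [TD, T]."""
    def p_prime(t):
        if t < TD:
            return p(0.0)                       # keep contact until detachment
        return p(T * (t - TD) / (T - TD))       # linear time rescaling
    return p_prime

# Stand-in swing-foot height profile on [0, T]: starts at 0, ends at 0.05 m.
T, TD = 0.4, 0.1
p = lambda t: 0.05 * (t / T) ** 2 * (3 - 2 * t / T)   # smoothstep-like cubic
pp = timescale(p, T, TD)

assert pp(0.0) == p(0.0)            # foot stays put before TD
assert pp(TD) == p(0.0)             # detachment starts exactly at TD
assert abs(pp(T) - p(T)) < 1e-12    # same landing position at T
```

Because p'(t) is still an analytic composition, it can be sampled at any control period, consistent with the quantized-period invariance noted above.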

IV. VIRTUAL FIGHTING BETWEEN REAL HUMAN AND HUMANOID ROBOT

A. Concept

This section shows a virtual fight between a real human and a robot as a first attempt to build an autonomous humanoid system which interacts with humans through observation and action. Since fighting is a quick type of communication, it is beyond the performance of conventional logic-based AI and offline motion planning. The proposed online motion retouch bridges an upper-level decision-making module and a lower-level motion controller within a short cycle to meet the requirement of much higher system responsiveness. We also adopted the Mimesis Loop paradigm [1], extended to a realtime implementation, for the brain part.

B. System construction

Fig.4 shows the system construction and Fig.5 the actual scene of the fighting. It consists of three parts, namely, an observer-tactician part with an optical motion capture system and the Mimetic Communication model [19], a robot motion controller part based on the online dynamical motion retouch, and an image synthesizer part which recomposes the virtual fight on a screen.

Fig. 3. Interpolation toolbox for physically consistent motion trajectory: (A) segment concatenation, (B) boundary condition relaxation, (C) detachment-delaying & timescaling

Fig. 4. Diagram of virtual fighting system

The player’s motion is captured by the MAC3D System (Motion Analysis Corporation) and is converted, with inverse kinematics [20], into the motion of a human figure which has the same structure as the robot. Then, the Mimetic Communication model recognizes it, selects the corresponding motion tactics and sends sequences of the desired posture and supporting state with a period of 30[ms] to the robot. It runs on a PC with two Xeon 3.60GHz CPUs and 2GB RAM. The robot controller receives them via a TCP/IP socket connection and interpolates them into a dynamically consistent reference every 3[ms]. It runs on a PC with a Pentium 4 3.00GHz CPU and 512MB RAM, and communicates with a processor on the robot via wired TCP/IP. The robot status is also monitored every 3[ms] and sent to the above Mimetic Communication system. The human-figure motion and the monitored robot motion are synthesized on another PC with a Xeon 3.60GHz CPU, 2GB RAM and an nVIDIA FX 4400, and the image of the virtual ring and the two fighters is displayed.
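Because the retouched references are analytic functions of time, the controller can sample them at its own 3[ms] period independently of the 30[ms] command period. A schematic of that two-rate decoupling (the linear reference is a stand-in):

```python
def sample_reference(traj, period, horizon):
    """Sample a continuous-time reference at an arbitrary control period."""
    n = round(horizon / period)
    return [traj(i * period) for i in range(n + 1)]

# The same analytic segment served at the command and control rates.
traj = lambda t: 0.02 * t                       # stand-in COG reference [m]
coarse = sample_reference(traj, 0.030, 0.30)    # 30 ms command period
fine = sample_reference(traj, 0.003, 0.30)      # 3 ms control period

assert len(coarse) == 11 and len(fine) == 101
# Every coarse sample coincides with every 10th fine sample.
assert all(abs(c - fine[10 * i]) < 1e-12 for i, c in enumerate(coarse))
```

Changing the control period only changes where the analytic trajectory is sampled, not the trajectory itself.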

This system does not feature a bilateral force feedbackmechanism, and collision detection is done only by geometric

Fig. 5. Actor, robot and virtual fighting field

Fig. 6. Mimetic communication with hierarchical HMM

computation between the models. The player fights with the imaginary robot, watching the synthesized image on the display. In the sense that the information fed back to the player is restricted to visual sensation, it lacks reality as a virtual-type application. However, the initial objective, to build a completely autonomous humanoid robot system, is achieved.

C. Mimetic communication[19]

Mimesis Loop [1] is a framework in which the intelligence to acquire and invoke protosymbols is successively cultivated through the process of imitating others’ behaviors, namely,

Fig. 8. Outer view of UT-µ2:magnum

recognizing them, remapping them onto the robot’s body and estimating the likelihood between the original motion observed and that replayed by the robot, using Hidden Markov Models (HMMs). Based on this Mimesis criterion, Takano et al. [19] proposed a primitive communication model which consists of hierarchical HMMs. In the learning phase, the lower layer memorizes the motion patterns of two persons independently, outputs the identifier of the HMM storing each pattern and gives it to the upper layer. The upper layer learns the interaction patterns, namely, the combinations of the motion identifiers of the two individuals. Eventually, the model acquires the correspondence of those behaviors, which is thought to result from the communication between them.

The most original property of the model is that it also functions as an online behavioral decision-making module of the robot. The upper layer finds the HMM which stores the pattern most likely to match the history of interaction between the robot and the other, with one additional latest action of the other recognized, and chooses the motion identifier which corresponds to the missing part, namely, the robot’s next motion to appear. Fig.6 illustrates the mechanism. It is utilized in our system as the brain part: picking up the “will” of the player whose action is measured by the optical motion capture system, determining abstract fighting tactics such as “punching”, “avoiding” and so forth, choosing the representative sequence of the desired posture and supporting condition with a period of 30[ms], and sending them to the robot as the motion command. The posture includes all the joint angle values and the attitude of the trunk link with respect to the inertial frame.
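The decision mechanism can be caricatured as a lookup over learned interaction patterns: given the (opponent action, robot action) pairs seen in training, choose the robot action most frequently paired with the opponent's latest recognized action, i.e. the "missing part". This toy frequency model is only a stand-in for the hierarchical HMM, and the action labels are invented.

```python
from collections import Counter

def learn_interactions(episodes):
    """Count observed (opponent action, robot action) pairs."""
    return Counter(pair for episode in episodes for pair in episode)

def next_robot_action(model, opponent_action):
    """Choose the robot action most frequently paired with the
    opponent's latest recognized action (the 'missing part')."""
    candidates = {robot: n for (opp, robot), n in model.items()
                  if opp == opponent_action}
    if not candidates:
        return "idle"
    return max(candidates, key=candidates.get)

# Invented fighting vocabulary and training episodes.
episodes = [
    [("punch", "avoid"), ("step_in", "punch")],
    [("punch", "avoid"), ("punch", "block")],
    [("step_in", "punch")],
]
model = learn_interactions(episodes)
assert next_robot_action(model, "punch") == "avoid"     # seen twice vs once
assert next_robot_action(model, "step_in") == "punch"
assert next_robot_action(model, "kick") == "idle"       # unseen action
```

The chosen identifier would then be expanded into the 30[ms] posture and supporting-state sequence that feeds the motion retouch described in Section III.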

D. Demonstration

Fig.7 shows a scene of the demonstration. We used the humanoid robot UT-µ2 [21] (Fig.8). It is about 55[cm] tall and weighs about 7.5[kg], including the battery and the processor on the body. The total number of actuated joints is 20. Fig.9 shows a part of the commanded state-transition sequence representing a one-step-forward motion, and the retouched referential trajectories of the COG and the feet. The square, circle and triangle marks represent the via-points in (A)(B)(C), and the quantized referential values in (D)(E)(F). One can see that the referential COG trajectory largely differs from the commanded pattern. It proves that the proposed method works to modify such a

physically ill-conditioned output from the brain part so that it can be applied to the real robot. The effect of timescaling on the left foot trajectory is also verified.

In this operation, the time length of the segmented sequence, which was in turn the latency between the commanded time and the appearance time of the motion, ranged from about 100[ms] to 500[ms] and was about 300[ms] on average.

V. CONCLUSION

A motion retouch method, which converts a state transition sequence that is rough in terms of time and space into one that satisfies the physical laws online, has been presented. The key technology is the use of Boundary Condition Relaxation, which enables rapid legged motion planning, dealing with the dynamical complexity of the system with a rather simple computation. It is expected to bridge the gap between new paradigms of artificial intelligence, such as brain-like information processing or captured motion databases, and the real robot. It could also contribute to the promotion of animatronics, helping to break the last barrier of the tunnel which connects artificial intelligence and robot body dynamics.

ACKNOWLEDGMENT

The authors greatly appreciate NAC Image Technology, Inc. and Motion Analysis Corporation for letting us use the MAC3D System. We also thank Mr. Tatsuhito Aono as a test player of the virtual system demonstrated in Section IV.

REFERENCES

[1] Tetsunari Inamura, Yoshihiko Nakamura, and Iwaki Toshima. Embodied Symbol Emergence based on Mimesis Theory. The International Journal of Robotics Research, 23(4):363–377, 2004.

[2] Masafumi Okada, Koji Tatani, and Yoshihiko Nakamura. Polynomial Design of the Nonlinear Dynamics for the Brain-like Information Processing of Whole Body Motion. In Proceedings of the 2002 IEEE International Conference on Robotics & Automation, pages 1410–1415, 2002.

[3] Auke Jan Ijspeert, Jun Nakanishi, and Stefan Schaal. Movement Imitation with Nonlinear Dynamical Systems in Humanoid Robots. In Proceedings of the 2002 IEEE International Conference on Robotics & Automation, pages 1398–1403, 2002.

[4] Kazutaka Kurihara, Shin’ichiro Hoshino, Katsu Yamane, and Yoshihiko Nakamura. Optical Motion Capture System with Pan-Tilt Camera Tracking and Realtime Data Processing. In Proceedings of the 2002 IEEE International Conference on Robotics & Automation, pages 1241–1248, 2002.

[5] Anirvan DasGupta and Yoshihiko Nakamura. Making Feasible Walking Motion of Humanoid Robots From Human Motion Capture Data. In Proceedings of the 1999 IEEE International Conference on Robotics & Automation, pages 1044–1049, 1999.

[6] Ken’ichiro Nagasaka. The Whole Body Motion Generation of Humanoid Robot Using Dynamics Filter (in Japanese). PhD thesis, University of Tokyo, 2000.

[7] Katsu Yamane and Yoshihiko Nakamura. Dynamics Filter — Concept and Implementation of On-Line Motion Generator for Human Figures. In Proceedings of the 2000 IEEE International Conference on Robotics & Automation, pages 688–695, 2000.

[8] Nancy S. Pollard, Jessica K. Hodgins, Marcia J. Riley, and Christopher G. Atkeson. Adapting Human Motion for the Control of a Humanoid Robot. In Proceedings of the 2002 IEEE International Conference on Robotics & Automation, pages 1390–1397, 2002.

[9] Shin’ichiro Nakaoka, Atsushi Nakazawa, Kazuhito Yokoi, Hirohisa Hirukawa, and Katsushi Ikeuchi. Generating Whole Body Motions for a Biped Humanoid Robot from Captured Human Dances. In Proceedings of the 2003 IEEE International Conference on Robotics & Automation, 2003.

Fig. 7. Showdown between digital fighters: (A) fight of digitally reconstructed actor and robot on the virtual ring; (B) responsive fighting actions of the robot against the actor’s captured motions

Fig. 9. Commanded state-transition sequence and retouched trajectories of the COG and both feet: (A)(B)(C) commanded patterns along the x-, y- and z-axes; (D)(E)(F) retouched patterns along the x-, y- and z-axes

[10] Tomomichi Sugihara and Yoshihiko Nakamura. A Fast Online Gait Planning with Boundary Condition Relaxation for Humanoid Robots. In Proceedings of the 2005 IEEE International Conference on Robotics & Automation, 2005.

[11] Yoshiaki Sakagami, Ryujin Watanabe, Chiaki Aoyama, Shinichi Matsunaga, Nobuo Higaki, and Kikuo Fujimura. The intelligent ASIMO: System overview and integration. In Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2478–2483, 2002.

[12] Koichi Nishiwaki, Satoshi Kagami, James J. Kuffner, Masayuki Inaba, and Hirochika Inoue. Online Humanoid Walking Control System and a Moving Goal Tracking Experiment. In Proceedings of the 2003 IEEE International Conference on Robotics & Automation, pages 911–916, 2003.

[13] Masahiro Fujita, Yoshihiro Kuroki, Tatsuzo Ishida, and Toshi T. Doi. Autonomous Behavior Control Architecture of Entertainment Humanoid Robot SDR-4X. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 960–967, 2003.

[14] Jin’ichi Yamaguchi, Eiji Soga, Sadatoshi Inoue, and Atsuo Takanishi. Development of a Bipedal Humanoid Robot – Control Method of Whole Body Cooperative Dynamic Biped Walking –. In Proceedings of the 1999 IEEE International Conference on Robotics & Automation, pages 368–374, 1999.

[15] Shuuji Kajita, Fumio Kanehiro, Kenji Kaneko, Kiyoshi Fujiwara, Kensuke Harada, Kazuhito Yokoi, and Hirohisa Hirukawa. Biped Walking Pattern Generation by using Preview Control of Zero-Moment Point. In Proceedings of the 2003 IEEE International Conference on Robotics & Automation, pages 1620–1626, 2003.

[16] Ken’ichiro Nagasaka, Yoshihiro Kuroki, Shin’ya Suzuki, Yoshihiro Itoh, and Jin’ichi Yamaguchi. Integrated Motion Control for Walking, Jumping and Running on a Small Bipedal Entertainment Robot. In Proceedings of the 2004 IEEE International Conference on Robotics and Automation, pages 3189–3194, 2004.

[17] Kensuke Harada, Shuuji Kajita, Kenji Kaneko, and Hirohisa Hirukawa. An Analytical Method on Real-time Gait Planning for a Humanoid Robot. In Proceedings of the 2004 IEEE-RAS/RSJ International Conference on Humanoid Robots, 2004.

[18] M. Vukobratovic and J. Stepanenko. On the Stability of Anthropomorphic Systems. Mathematical Biosciences, 15(1):1–37, 1972.

[19] Wataru Takano, Katsu Yamane, and Yoshihiko Nakamura. Hierarchical Model for Primitive Communication Based on Motion Recognition and Generation through Proto Symbol Space. In Proceedings of the 2005 IEEE-RAS/RSJ International Conference on Humanoid Robots, 2005.

[20] Katsu Yamane and Yoshihiko Nakamura. Natural Motion Animation through Constraining and Deconstraining at Will. IEEE Transactions on Visualization and Computer Graphics, 9(3):352–360, 2003.

[21] Tomomichi Sugihara, Kou Yamamoto, and Yoshihiko Nakamura. Architectural Design of Miniature Anthropomorphic Robots Towards High-Mobility. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005.