

In Proc. IEEE Int. Conf. Robotics and Automation, San Francisco, California, pp. 2223-2228, April 2000.

Planning of Joint Trajectories for Humanoid Robots Using B-Spline Wavelets

Aleš Ude, Christopher G. Atkeson, Marcia Riley

ERATO Kawato Dynamic Brain Project, Japan Science and Technology Corporation, 2-2 Hikaridai Seika-cho, Soraku-gun, Kyoto, Japan

College of Computing, Georgia Tech, 801 Atlantic Drive, GA 30332-0280, USA

ATR Human Information Processing Research Laboratories, 2-2 Hikaridai Seika-cho, Soraku-gun, Kyoto, Japan

Abstract

The formulation and optimization of joint trajectories for humanoid robots is quite different from the same task for standard robots because of the complexity of the humanoid robots' kinematics. In this paper we exploit the similarity between the movements of a humanoid robot and human movements to generate joint trajectories for such robots. In particular, we show how to transform human motion information captured by an optical tracking device into a high-dimensional trajectory of a humanoid robot. We utilize B-spline wavelets to efficiently represent the joint trajectories and to automatically select the density of the basis functions on the time axis. We applied our method to the task of teaching a humanoid robot how to make a dance movement.

1 Introduction

Movements of most current robot manipulators can be described by a single open kinematic chain. A standard approach to the specification of movement tasks for such robots is to define a trajectory for the motion of the robot tip. Further constraints are necessary in the case of redundant mechanisms. The specification of a movement task involving the full-body motion of a humanoid robot (Fig. 1) is much more complicated because the underlying kinematic structure of the humanoid robot is more complex. Indeed, a humanoid robot possesses 5 tips: the head, the two hands, and the two feet. The motions of its tips are not independent, but are coupled through the robot linkage. Therefore we cannot directly specify a full-body motion by independently specifying the trajectories of these tips. In addition, taken as a whole, the humanoid robot's kinematic structure is redundant.

Our humanoid robot DB consists of 15 rigid parts divided into 6 interconnected kinematic chains: the head; the upper arms, lower arms, and hands; the lower and the upper torso; and the upper legs, lower legs, and feet.

Figure 1: The humanoid robot DB in our laboratory

The trajectories of the joints connecting these body parts define the complete motion of the robot, as the robot pelvis is fixed in space. DB has 26 degrees of freedom (plus four degrees of freedom for the eyes' movements, which are not considered in this paper). The dependencies between the trajectories of different body parts are defined by DB's geometric structure. It is thus more appropriate to represent full-body motion trajectories in terms of independent variables, e.g. joint angles, because joint-space trajectories automatically conform to the geometric constraints.

There is not much hope of finding a closed-form trajectory for full-body motions such as dancing. Therefore we exploit the similarities between humanoid robots and humans to generate appropriate trajectories. We are also taking advantage of the nature of the tasks humanoid robots are asked to perform, which typically involve making human-like motion. We started our investigation with motion capture techniques from the entertainment industry and computer graphics [1], and also borrowed techniques from robot teleoperation, robot programming by direct teaching or showing [5], and virtual reality.


We found that the requirements of actually controlling a physical device required modifications of the techniques from virtual and entertainment applications, and that we could take advantage of the offline nature of our task to more effectively process the teaching data and to handle less cumbersome motion capture devices. In this paper we capture full-body motions of a human performer using an optical tracking device, which provides the 3-D locations of the identified active markers that are currently in view. We have also experimented with goniometer devices strapped to the performer, and with magnetic systems that provide marker orientation as well as location. The techniques presented in this paper can easily be extended to all of these different types of motion capture systems. We are also currently extending our techniques to vision systems where there are no markers, but individual pixels must be matched or accounted for [10].

To relate human motion to robot motion, we model the kinematics of the performer using a kinematic model similar to the robot's, but scaled to the performer. The performer's kinematic model should contain at least those degrees of freedom that are relevant for the desired movement task. Sometimes it is advantageous to augment the performer's kinematic model with additional degrees of freedom to facilitate the measurement process. This approach can of course work only for a humanoid robot whose kinematic structure is similar to a human's.

2 Setting Up the Criterion Function

The placement of a human body in Cartesian space is determined by the position and orientation of a global body coordinate system rigidly attached to one of the body parts and by the values of the joint angles. We use twist coordinates [6] to model the performer's and DB's kinematics. If the coordinates of a marker in the local coordinate system of the rigid body part to which it is attached are given by $\mathbf{p}_j$, then its 3-D position at body configuration $\mathbf{q} = (\theta_1, \ldots, \theta_n, \boldsymbol{\omega}, \mathbf{r})$ can be calculated as follows:

$$
\mathbf{p}_j(\mathbf{q}) = H(\boldsymbol{\omega}, \mathbf{r}) \, e^{\hat{\xi}_{1}\theta_{1}} \cdots e^{\hat{\xi}_{k}\theta_{k}} \, G_j \, \mathbf{p}_j .
\tag{1}
$$

Here $e^{\hat{\xi}\theta}$ is the function mapping a twist $\xi_i$ representing the body kinematics into a rigid body transformation, $G_j$ is the homogeneous matrix combining the position and orientation of the local body-part coordinate system to which the marker is attached with respect to the global body coordinate system at zero configuration, $\boldsymbol{\omega}$ and $\mathbf{r}$ are the orientation (represented by a rotation vector) and position of the global body coordinate system with respect to the world coordinate system, and $H(\boldsymbol{\omega}, \mathbf{r})$ denotes the homogeneous matrix corresponding to $\boldsymbol{\omega}$ and $\mathbf{r}$. Note that the set of twists affecting the motion of a body point varies according to the identity of the body part to which the marker is attached.
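To make the product-of-exponentials evaluation in (1) concrete, the sketch below computes one marker position with NumPy. This is an illustration under our own conventions, not the implementation used in the paper: the twist layout (linear velocity part first, unit rotation axis second), the restriction to revolute twists, and all function names are assumptions.

```python
# Sketch (not the authors' code): evaluating Eq. (1) for one marker.
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(xi, theta):
    """exp(theta * xi^) for a revolute twist xi = (v, w) with |w| = 1 (cf. [6])."""
    v, w = xi[:3], xi[3:]
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)  # Rodrigues
    p = (np.eye(3) - R) @ (W @ v) + np.outer(w, w) @ v * theta
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def global_pose(omega, r):
    """H(omega, r): rotation-vector orientation and position of the body frame."""
    angle = np.linalg.norm(omega)
    W = hat(omega / angle) if angle > 1e-12 else np.zeros((3, 3))
    R = np.eye(3) + np.sin(angle) * W + (1.0 - np.cos(angle)) * (W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, r
    return T

def marker_position(omega, r, twists, thetas, G_j, p_local):
    """Eq. (1): chain only the twists that affect the marker's body part."""
    T = global_pose(omega, r)
    for xi, th in zip(twists, thetas):
        T = T @ exp_twist(xi, th)
    return (T @ G_j @ np.append(p_local, 1.0))[:3]
```

In practice the performer's (or DB's) kinematic chains would supply the twists $\xi_i$ and the matrices $G_j$ for each body part.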

Our trajectory planning method should generate motions which are perceptually similar to the motion of the performer. This can be achieved by recovering joint angles that result in marker positions close to the measurements. Thus we should favor configurations minimizing

$$
\sum_{j} \big\| \mathbf{p}_j\big(\mathbf{q}(t_k)\big) - \hat{\mathbf{m}}_j(t_k) \big\|^2 ,
\tag{2}
$$

where $\hat{\mathbf{m}}_j(t_k)$ denotes the measured position of marker $j$ at time $t_k$.
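With the previous helper, a per-frame evaluation of criterion (2) simply sums the squared marker residuals over the markers currently in view. The bookkeeping dictionary below is a hypothetical convention of this sketch, not something prescribed by the paper.

```python
# Sketch of Eq. (2) at one measurement time t_k. `markers` maps a marker index to
# (twists, joint_indices, G_j, p_local) for the chain affecting that marker;
# `measured` maps a marker index to its measured 3-D position (occluded markers omitted).
import numpy as np

def frame_criterion(omega, r, thetas, markers, measured):
    total = 0.0
    for j, (twists, joint_idx, G_j, p_local) in markers.items():
        if j not in measured:          # occluded marker: no contribution
            continue
        pred = marker_position(omega, r, twists, thetas[joint_idx], G_j, p_local)
        total += np.sum((pred - measured[j]) ** 2)
    return total
```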

A straightforward approach to the generation of motion trajectories is to sequentially minimize criterion (2) at each measurement time. A continuous trajectory can be generated afterwards by interpolating the recovered joint angles. This approach has, however, several deficiencies. First of all, optical motion capture systems often cannot recover the positions of all markers due to occlusions. This can result in underconstrained linear systems, causing the optimization process to break down. Moreover, experiments showed that even when the positions of all markers can be recovered, the optimization process can still break down because of singularities in the kinematic model. Finally, just moving like the human is not the only criterion relevant for the robot motion.

The optimization process can be made more reliable by recovering the complete trajectory instead of single configurations and by regularization of the objective function (2). Writing the full-body trajectory as $\Theta(t) = \big(\theta_1(t), \ldots, \theta_n(t), \boldsymbol{\omega}(t), \mathbf{r}(t)\big)$, we look for a function that minimizes

$$
F(\Theta) = \frac{1}{2} \sum_{k} \sum_{j} \big\| \mathbf{p}_j\big(\Theta(t_k)\big) - \hat{\mathbf{m}}_j(t_k) \big\|^2
\tag{3}
$$

over all possible trajectories. The regularization can be achieved by minimizing the amplitude of physical quantities such as the acceleration (2nd derivative) or the jerk (3rd derivative):

$$
S(\Theta) = \frac{1}{2} \int_0^1 \big\| \Theta^{(\nu)}(t) \big\|^2 \, dt, \qquad \nu = 2, 3 .
\tag{4}
$$

We always normalize the time of our trajectory to 1. In general we could penalize any weighted combination of kinematic variables such as acceleration, jerk, and violation of joint-space or Cartesian soft limits.


Because we have the full trajectory available and can analytically compute velocities and accelerations (for spline-based trajectory representations), we could use inverse dynamics to compute torques or actuator commands from a physically feasible trajectory, and use the computed values to penalize any function of torque or actuator commands, such as squared torque magnitude or actuator energy dissipation.

The trajectory planning problem thus becomes

$$
\min_{\Theta} \; F(\Theta) + \lambda S(\Theta),
\tag{5}
$$

where $\lambda$ is the parameter governing the tradeoff between the two objective functions. Apart from the above-mentioned computational issues, it is advantageous to generate smooth trajectories also in order to reduce the wear and tear of the mechanical system and to avoid exciting higher-order dynamics, and because real actuators often have limits on their torque output or on its rate of change. Furthermore, jerky motions do not look natural, which is distracting for movement tasks like dancing.
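As a small illustration of the smoothness term (4), the sketch below evaluates it for a single joint trajectory represented by endpoint-interpolating cubic B-splines, using the 2nd derivative (acceleration). SciPy's BSpline class, the particular knot layout, and the single global quadrature rule are our choices for this example, whereas the paper evaluates the corresponding integrals exactly.

```python
# Sketch: S(Theta) from Eq. (4) with the 2nd derivative, for one joint on [0, 1].
import numpy as np
from scipy.interpolate import BSpline

degree = 3
interior = np.arange(1, 8) / 8.0                      # dyadic interior knots (level 3)
knots = np.concatenate(([0.0] * (degree + 1), interior, [1.0] * (degree + 1)))
rng = np.random.default_rng(0)
coeffs = rng.normal(size=len(knots) - degree - 1)     # 2**3 + 3 = 11 basis functions
theta = BSpline(knots, coeffs, degree)

# integral_0^1 theta''(t)^2 dt via Gauss-Legendre quadrature (approximate: one global
# rule is used here for brevity instead of integrating per polynomial piece)
nodes, weights = np.polynomial.legendre.leggauss(32)
t = 0.5 * (nodes + 1.0)                               # map [-1, 1] -> [0, 1]
acc = theta.derivative(2)(t)
integral = 0.5 * np.sum(weights * acc ** 2)           # 0.5 is the Jacobian of the map
S = 0.5 * integral                                    # the 1/2 factor in Eq. (4)
print("smoothness penalty S =", S)
```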

3 Trajectory Generation

Since parametric forms of complex body movements are normally not known, we applied the finite element method to represent the trajectory. In particular, we could use B-splines, which were used before for the generation of joint trajectories of industrial robots [9], as basis functions. Unfortunately, an optimal set of fixed B-splines often cannot be determined in advance. If there are not enough basis functions, the generated motion may be far from the desired motion. If there are too many basis functions, the computational complexity is increased unnecessarily due to the larger number of variables as well as the resulting ill-conditioning of the linear subproblems that arise in the optimization process. A solution is to use a wavelet basis, in which the trajectory is represented hierarchically [4, 8], instead of a B-spline basis.

3.1 B-Spline Wavelets

Let $\mathcal{B}_j(d)$ be the set of $2^j + d$ endpoint-interpolating B-splines of degree $d$ constructed from the knot sequence

$$
\underbrace{0, \ldots, 0}_{d+1}, \; \frac{1}{2^j}, \; \frac{2}{2^j}, \; \ldots, \; \frac{2^j - 1}{2^j}, \; \underbrace{1, \ldots, 1}_{d+1} .
\tag{6}
$$

The linear spaces spanned by these splines are nested, i.e.

$$
\mathcal{S}\big(\mathcal{B}_0(d)\big) \subset \mathcal{S}\big(\mathcal{B}_1(d)\big) \subset \mathcal{S}\big(\mathcal{B}_2(d)\big) \subset \cdots,
\tag{7}
$$

where $\mathcal{S}(\mathcal{B})$ denotes the linear space spanned by the members of $\mathcal{B}$. Each orthogonal complement of $\mathcal{S}\big(\mathcal{B}_j(d)\big)$ in $\mathcal{S}\big(\mathcal{B}_{j+1}(d)\big)$ is called a wavelet space and its basis is denoted by $\Psi_j(d)$.


Figure 2: A typical B-spline and a typical wavelet

The members of $\Psi_j(d)$ are called semiorthogonal wavelets because they are orthogonal to the B-splines but not to each other. By definition $\Psi_j(d) \subset \mathcal{S}\big(\mathcal{B}_{j+1}(d)\big)$, thus the wavelets are piecewise polynomials of the same degree as the underlying B-splines. A desirable property for wavelets is to have small support. Let $B_{j,l}$ and $\psi_{j,l}$ be members of $\mathcal{B}_j(d)$ and $\Psi_j(d)$, respectively. Since $\mathcal{S}\big(\mathcal{B}_j(d)\big)$ and $\mathcal{S}\big(\Psi_j(d)\big)$ are subsets of $\mathcal{S}\big(\mathcal{B}_{j+1}(d)\big)$, we can write

$$
B_{j,l} = \sum_{k} p_{j,k,l} \, B_{j+1,k}, \qquad
\psi_{j,l} = \sum_{k} q_{j,k,l} \, B_{j+1,k} .
\tag{8}
$$

A construction for semiorthogonal wavelets with the smallest number of consecutive nonzero $q_{j,k,l}$ is given in [8]. Note that the same property is true for B-splines, hence the matrices $P_j = [p_{j,k,l}]$ and $Q_j = [q_{j,k,l}]$ are banded. We refer the reader to [8] for the explicit formulae for $Q_j$ and $P_j$. A typical B-spline and a typical wavelet are shown in Fig. 2.

The multiresolution finite element approach assumes that the optimal trajectory can be written as a linear combination of B-splines and B-spline wavelets:

$$
\Theta = \sum_{l} c_l \, B_{j_0,l} + \sum_{j \geq j_0} \sum_{l} d_{j,l} \, \psi_{j,l} .
\tag{9}
$$

Here the B-splines $B_{j_0,l}$ are fixed at the lowest resolution $j_0$, while the optimal set of wavelets $\psi_{j,l}$, $j_0 \leq j \leq j_{\max}$, should be determined automatically by the optimization procedure. By definition the B-spline wavelets $\psi_{j,l}$ have unit norm. This forces their amplitude to become higher and higher to retain the unit norm as they become narrower and narrower. Since this is counterproductive for optimization [4], we replaced $\psi_{j,l}$ with a rescaled version whose amplitude remains comparable to that of the level-$j_0$ B-splines, to obtain proper scaling.
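A minimal sketch of the knot construction (6) and the hierarchical expansion (9) is given below, assuming SciPy for the B-splines. The semiorthogonal wavelet construction of [8] is not reproduced here; `wavelet_bank` is a placeholder that would hold the (rescaled) wavelets at each level, expressed for example through the matrices $Q_j$.

```python
# Sketch: endpoint-interpolating B-spline basis per level (Eq. (6)) and the
# hierarchical trajectory evaluation of Eq. (9). Names are illustrative.
import numpy as np
from scipy.interpolate import BSpline

def knots(level, degree):
    """Knot sequence of Eq. (6): repeated endpoints plus dyadic interior knots."""
    interior = np.arange(1, 2 ** level) / 2 ** level
    return np.concatenate(([0.0] * (degree + 1), interior, [1.0] * (degree + 1)))

def basis(level, degree):
    """The 2**level + degree endpoint-interpolating B-splines at one resolution level."""
    kn = knots(level, degree)
    n = 2 ** level + degree
    return [BSpline(kn, np.eye(n)[i], degree) for i in range(n)]

def evaluate_trajectory(t, c, d_by_level, wavelet_bank, j0=3, degree=3):
    """Eq. (9): coarse B-spline part plus wavelet detail at the active finer levels."""
    value = sum(ci * b(t) for ci, b in zip(c, basis(j0, degree)))
    for j, coeffs in d_by_level.items():
        value = value + sum(dj * psi(t) for dj, psi in zip(coeffs, wavelet_bank[j]))
    return value
```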

3.2 Large-Scale Optimization

By replacing $\Theta$ in the optimization problem (5) with the above linear combination of B-splines and wavelets, which are fixed in this section, we obtain a classic unconstrained optimization problem. Instead of minimizing over all functions from some function space, we can minimize over the parameters $\{c_l, d_{j,l}\}$. Note, however, that the number of unknown parameters is very high.


The full-body motion of DB involves 26 joints plus 6 degrees of freedom for the displacement in space. If we fix the highest-resolution space of piecewise polynomials to $\mathcal{S}\big(\mathcal{B}_{j_{\max}}(d)\big)$, there are altogether $2^{j_{\max}} + d$ basis functions per degree of freedom. Thus in this case the highest possible number of variables is $32 \times (2^{j_{\max}} + d)$, which clearly makes our optimization problem large. The highest resolution used in our experiments was eight, which results in over 4000 unknown variables.

Trust-region methods are suitable for solving large-scale optimization problems [2]. Let $H$ and $\mathbf{g}$ respectively be the Hessian and the gradient of the objective function (5) at the current estimate of the unknown variables $\mathbf{x} = \{c_l, d_{j,l}\}$. The main idea of the trust-region approach is to approximate the criterion function with a quadratic function in a neighborhood around the current estimate. The next approximation is thus computed by minimizing

$$
\min_{\mathbf{s}} \; \tfrac{1}{2}\, \mathbf{s}^{\mathsf{T}} H \mathbf{s} + \mathbf{g}^{\mathsf{T}} \mathbf{s}
\quad \text{subject to} \quad \| D \mathbf{s} \| \leq \Delta,
\tag{10}
$$

where $D$ is a diagonal scaling matrix and $\Delta$ is a positive scalar setting the size of the neighborhood.

The trust-region approach assumes that the gradient and the Hessian of the criterion function can be calculated. Let us define

$$
\mathbf{r}(\mathbf{x}) =
\begin{bmatrix}
\mathbf{p}\big(\Theta(t_1)\big) - \hat{\mathbf{m}}(t_1) \\
\vdots \\
\mathbf{p}\big(\Theta(t_N)\big) - \hat{\mathbf{m}}(t_N)
\end{bmatrix},
\tag{11}
$$

where $\mathbf{p}\big(\Theta(t_k)\big)$ stacks the model marker positions and $\hat{\mathbf{m}}(t_k)$ the measured marker positions at time $t_k$, and $\Theta$ is the expansion (9) with coefficient vector $\mathbf{x}$. Comparing (11) with the objective function (3), we note that $F(\Theta) = \frac{1}{2}\|\mathbf{r}(\mathbf{x})\|^2$. Let $J$ be the Jacobian of $\mathbf{r}$ at the current estimate. It is easy to verify that the gradient of $F$ is equal to $J^{\mathsf{T}} \mathbf{r}(\mathbf{x})$, while the Hessian of $F$ is given by $J^{\mathsf{T}} J$ plus second-order terms. It is common practice in nonlinear least-squares problems to neglect the second-order terms, which are expensive to calculate, and to approximate the Hessian by $J^{\mathsf{T}} J$.

The calculation of the gradient and the Hessian of the criterion function (4) involves the calculation of inner products of derivatives of the basis functions,

$$
\int_0^1 B_{j,k}^{(\nu)} B_{j,l}^{(\nu)} \, dt, \qquad
\int_0^1 B_{j,k}^{(\nu)} \psi_{j',l}^{(\nu)} \, dt, \qquad
\int_0^1 \psi_{j,k}^{(\nu)} \psi_{j',l}^{(\nu)} \, dt .
$$

Let $K$ be the matrix of these inner products. It is easy to see that (4) can be rewritten as $S(\Theta) = \frac{1}{2}\mathbf{x}^{\mathsf{T}} K \mathbf{x}$. Thus the gradient of the objective function $S$ is given by $K\mathbf{x}$ and its Hessian is simply equal to $K$. Note that $K$ is independent of $\mathbf{x}$, thus it needs to be calculated only once at the beginning of the optimization process. We employed Gaussian quadrature formulae, which are exact for polynomials up to a certain degree, to evaluate these integrals exactly.
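The precomputation of the inner-product matrix $K$ can be sketched as follows, reusing the `basis()` helper from the earlier sketch. For brevity a single high-order Gauss-Legendre rule over [0, 1] is used, whereas exact integration would apply the rule per polynomial piece as described above.

```python
# Sketch: the matrix K of inner products between derivatives of basis functions,
# so that S(Theta) = 0.5 * x^T K x, grad S = K x and Hess S = K. `funcs` is any list
# of scipy BSpline objects (B-splines and/or wavelets expressed in a B-spline basis).
import numpy as np

def smoothness_matrix(funcs, deriv=2, n_nodes=64):
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    t = 0.5 * (nodes + 1.0)                   # map [-1, 1] -> [0, 1]
    D = np.array([f.derivative(deriv)(t) for f in funcs])
    return 0.5 * (D * weights) @ D.T          # K_ab ~= integral of f_a^(nu) f_b^(nu) dt

# Example with the level-3 cubic B-splines from the earlier sketch:
# K = smoothness_matrix(basis(3, 3))
```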

Figure 3: Sparsity pattern of the kinematics Jacobian (nz = 546)

Figure 4: Sparsity pattern of the Hessian $J^{\mathsf{T}} J + \lambda K$ (nz = 289168)

The row dimension of $J$ is even larger than its column dimension, hence its calculation should be designed with care. The bulk of the computing time for the calculation of $J$ is spent on the calculation of the Jacobian of the kinematic model for the human body motion at all measurement times. Due to the structure of our model, the kinematics Jacobian is sparse (see Fig. 3). Moreover, due to the minimal-support property of B-splines and wavelets, $J$, $J^{\mathsf{T}} J$, $K$ and the resulting combined Hessian $J^{\mathsf{T}} J + \lambda K$ are also sparse (see Fig. 4). In the example in Fig. 4, all B-splines and all wavelets at a particular resolution level were included in the calculation. The sparsity pattern is a bit more complicated when only some of the wavelets from different resolution levels are used (each resolution level contributes one large block to the Hessian), but the resulting matrices are nevertheless sparse.

We have shown that the gradient and the Hessian of the combined criterion function (5) can be estimated as

$$
\mathbf{g} = J^{\mathsf{T}} \mathbf{r}(\mathbf{x}) + \lambda K \mathbf{x}, \qquad
H \approx J^{\mathsf{T}} J + \lambda K .
\tag{12}
$$

Thus the next estimate for our trajectory can be computed by solving the trust-region subproblem (10). However, standard algorithms for solving (10) are not appropriate for large-scale problems because they require the computation of a full eigensystem [2]. This is resolved in large-scale optimization approaches by restricting the solution space to a two-dimensional subspace spanned by the direction of the gradient and a direction of negative curvature. The calculation of the next approximation is then trivial. Finally, the trust radius $\Delta$ is adjusted to a new value. We were able to use the MATLAB Optimization Toolbox implementation of a trust-region method for large-scale optimization problems and the sparse matrix capabilities of MATLAB for the solution of the sparse linear systems. We therefore refer the reader to [2] and the references therein for further information.
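The paper uses the MATLAB Optimization Toolbox for the large-scale trust-region iteration; an analogous setup in Python with SciPy might look like the sketch below. Here `residuals` implements (11), `jacobian` returns its (possibly sparse) Jacobian, and `K` is the smoothness matrix; the Gauss-Newton approximation of (12) enters through the Hessian-vector product. All names are placeholders, and the `trust-krylov` solver is a stand-in for the two-dimensional-subspace method described in [2].

```python
# Sketch: minimizing Eq. (5) over the spline/wavelet coefficients x with a
# large-scale trust-region method and the Gauss-Newton Hessian of Eq. (12).
from scipy.optimize import minimize

def solve_trajectory(x0, residuals, jacobian, K, lam):
    def objective(x):
        r = residuals(x)
        return 0.5 * r @ r + 0.5 * lam * (x @ (K @ x))

    def gradient(x):
        return jacobian(x).T @ residuals(x) + lam * (K @ x)      # g of Eq. (12)

    def hess_vec(x, p):
        J = jacobian(x)
        return J.T @ (J @ p) + lam * (K @ p)                     # (J^T J + lam K) p

    return minimize(objective, x0, jac=gradient, hessp=hess_vec,
                    method="trust-krylov", options={"gtol": 1e-6})
```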


3.3 Detecting the Optimal Resolution

Criteria similar to the ones proposed in [4] for variational geometric modeling were applied to choose the optimal resolution: add more wavelets to better approximate the trajectory, and remove unneeded wavelets to obtain a solution with lower energy (as defined by the objective function (4)). While the first principle can be realized using hierarchical B-splines, the second criterion is much easier to realize in a wavelet basis because the necessary density is reflected in the magnitude of the wavelet coefficients.

After calculating the estimate for a trajectory at a certain resolution, we first check the magnitudes of the wavelet coefficients. If they are below a certain threshold, we remove the corresponding wavelets from the trajectory. The next step is the addition of higher-resolution wavelets on time intervals where the model marker positions generated by the recovered trajectory poorly match the measured markers. Wavelets centered on such intervals are added to the solution and their initial coefficients are set to zero. This procedure is repeated until a stable solution is found.
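The adaptive procedure can be summarized by the following sketch. The helper names, thresholds, and the way candidate intervals are enumerated are illustrative assumptions; `solve` stands for the trust-region optimization of Section 3.2, with newly added wavelet coefficients initialized to zero as described above.

```python
# Sketch of the coarse-to-fine loop of Sec. 3.3 (illustrative helpers and thresholds).
def adapt_resolution(solve, marker_error, wavelets_on, intervals,
                     j0=3, j_max=8, drop_tol=1e-3, err_tol_mm=15.0):
    active, level = set(), j0            # active wavelets beyond the level-j0 B-splines
    while True:
        x, coeff_of = solve(active)      # optimize; returns solution and wavelet coeffs
        pruned = {w for w in active if abs(coeff_of[w]) < drop_tol}
        added = set()
        if level < j_max:                # limit the upper resolution level
            for interval in intervals:
                if marker_error(x, interval) > err_tol_mm:
                    added |= set(wavelets_on(interval, level + 1)) - active
        new_active = (active - pruned) | added
        if new_active == active:         # stable solution: nothing removed or added
            return x, active
        if added:
            level += 1
        active = new_active
```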

4 Experiments and Discussion

We captured several motion trajectories involving full-body motion of a human performer. A marker-based measurement system, Optotrak (see http://www.ndigital.com/opto.html), was used for this purpose. Optotrak uses identifiable active markers, which is advantageous because full-body movements often cause some of the markers to be occluded. Systems using passive markers are more prone to matching errors in such cases. In our experiments, we used 22 markers distributed over the performer's body.

Experiments showed that our approach offers significant advantages compared to the straightforward approach of sequentially minimizing the criterion function (2) at consecutive measurement times. Firstly, sequential minimization requires that a sufficient number of markers be visible at each measurement time. This can make the optimization process fail when too many of them are occluded. In the sequential approach, we were able to recover part of the trajectory only by interpolating the marker positions from the times when they were not occluded into the intervals of occlusion. This reduces the quality of the recovered trajectory. These problems were alleviated by the batch recovery of a complete trajectory. Secondly, it turned out that even when the positions of all markers can be measured, the recovery of some of the joint angles becomes sensitive at some configurations due to the kinematic structure of the body as well as the choice of the marker placement on the body.

Figure 5: Trajectory of the right arm flexion/extension (horizontal axis: time in seconds; vertical axis: angle in radians)

Indeed, we were unable to estimate the torso degrees of freedom and some of the degrees of freedom at the performer's hand using the sequential approach. These problems were resolved in the proposed approach with the introduction of the regularization factor (4) in the optimization criterion. Thirdly, the recovered trajectory is smooth and can avoid the discontinuities that can arise in the sequential approach because of switching between local minima of the objective function.

Our trajectories are high dimensional (over 30 degrees of freedom), so due to space limitations we are unable to present all of their components. Some of the above findings are nicely demonstrated on the recovery of the right shoulder flexion/extension degree of freedom (see Fig. 5). The motion capture process lasted a bit more than 5 seconds and we took 60 measurements per second. The blue trajectory shows the trajectory recovered by the sequential approach. It is noisier than the other two trajectories and it contains discontinuities. The green trajectory shows the trajectory recovered using an initial cubic B-spline basis (third resolution level, 11 basis functions). While it is less noisy than the trajectory generated by the sequential approach and does not contain discontinuities, it only poorly follows the measured markers. The red trajectory shows our final result after adding some wavelet functions. It is both smooth and continuous and it follows the measured markers better. The average error between the measured and the estimated marker positions, i.e. the square root of criterion (2) divided by the number of markers, for the last two trajectories is shown in Fig. 6 (dashed: initial B-spline trajectory, solid: final trajectory). We should note here that a significant part of these errors is caused by systematic errors due to inaccuracies in our model. If these errors are larger than the predefined tolerance for the measurement error, then the wavelet-adding process should stop supplying new wavelets. Currently, this is done by limiting the upper resolution level.

Our models are built by measuring the performer and the positions of the markers attached to her/him.


Figure 6: The average error in marker positions (horizontal axis: time in seconds; vertical axis: average error in millimeters)

Work on automatic model generation, which should make the model-building process more accurate and less cumbersome, is currently under way. Another problem is the mismatch between the human and the robot. In particular, DB's joint limits are more restrictive than a human's. In this work we handled this problem by scaling and translating the reconstructed joint trajectories into the range of DB's joint limits. The regularization term in (5) was used to prevent jumps in velocities and accelerations and thus ensure the generation of motions that can be executed by the robot. This makes robot motion planning different from programming a virtual movie actor or an interactive game agent, which need only approximate rather than follow real-world physics. See [3] for a computer graphics approach to motion adaptation. Another extension would be the development of a cross-validatory procedure for the automatic determination of the smoothing parameter in criterion (5), as is standard in biomechanics [12].

A possible field for further research is the development of criterion functions that improve the style of motion [11] and can be added to the objective function (5). Although the objective function (4) does improve the style of motion by reducing its acceleration or jerk, its primary function is to make the trajectory feasible to execute. Our scheme is suitable for the generation of a library of movement primitives. Techniques for an on-line interpolation between such primitives based on perceptual inputs should be developed in the future. Another interesting area of research would be to develop control strategies to combine the learned primitives into longer movement tasks [7]. Humanoid robotics is an attractive test case for such research.

5 Summary and Conclusions

This paper presents a new approach to the formulation and optimization of joint trajectories for humanoid robots using B-spline wavelets. We first outlined the difficulties arising in the specification of full-body motions for humanoid robots and proposed to exploit the similarity between human and humanoid robot motions to generate such trajectories. We demonstrated that B-spline wavelets are suitable for the formulation and optimization of humanoid robots' trajectories at different resolution levels. Finally, we showed how to solve the resulting large-scale optimization problems to compute such trajectories. The ability to treat the large-scale optimization problems that need to be solved to generate optimal full-body motions, and to automatically infer the appropriate resolution level, distinguishes our approach from other approaches proposed for human motion capture in the literature.

Videos showing our humanoid robot performing some dance movements can be seen on DB's home page: http://www.erato.atr.co.jp/DB/home.html.

Acknowledgment: Aleš Ude is on leave from the Department of Automatics, Biocybernetics and Robotics, Jožef Stefan Institute, Ljubljana, Slovenia.

References

[1] B. Bodenheimer and C. Rose. The process of motion capture: Dealing with the data. In Computer Animation and Simulation '97, Proc. Eurographics Workshop, pp. 3–18, Budapest, Hungary, 1997. Springer Verlag.

[2] T. Coleman, M. A. Branch, and A. Grace. Optimization Toolbox User's Guide. The MathWorks, Natick, MA, 1999.

[3] M. Gleicher. Retargeting motion to new characters. In Computer Graphics, Proc. SIGGRAPH '98, pp. 33–42, 1998.

[4] S. J. Gortler and M. F. Cohen. Hierarchical and variational geometric modeling with wavelets. In Proc. 1995 Symp. on Interactive 3D Graphics, pp. 35–42, Monterey, California, 1995.

[5] T. Lozano-Perez. Robot programming. Proc. IEEE, 71(7):821–841, 1983.

[6] R. M. Murray, Z. Li, and S. S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton, New York, 1994.

[7] C. Rose, B. Guenter, B. Bodenheimer, and M. F. Cohen. Efficient generation of motion transitions using spacetime constraints. In Computer Graphics, Proc. SIGGRAPH '96, pp. 147–154, 1996.

[8] E. J. Stollnitz, T. D. DeRose, and D. H. Salesin. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann Publishers, San Francisco, 1996.

[9] S. E. Thompson and R. V. Patel. Formulation of joint trajectories for industrial robots using B-splines. IEEE Trans. Industrial Electronics, 34(2):192–199, 1987.

[10] A. Ude. Robust estimation of human body kinematics from video. In Proc. IEEE/RSJ Conf. Intelligent Robots and Systems, Kyongju, Korea, 1999.

[11] A. Witkin and M. Kass. Spacetime constraints. In Computer Graphics, Proc. SIGGRAPH '88, pp. 159–168, 1988.

[12] H. J. Woltring. A FORTRAN package for generalized, cross-validatory spline smoothing and differentiation. Advances in Engineering Software, 8(2):104–113, 1986.