
Lifting and Carrying an Object of Unknown Mass Properties and Friction on the Head by a Humanoid Robot

Riku Shigematsu1, Shintaro Komatsu1, Yohei Kakiuchi1, Kei Okada1, Masayuki Inaba1

Abstract— When a humanoid robot carries an object, it should recognize its surroundings to avoid obstacles while holding the object stably. Previous methods usually hold the object in front of the body. However, this causes visual occlusion, and holding is unstable because the object is supported only by the hands. To solve these problems, we propose methods for making a humanoid robot lift and carry an object on its head. By holding the object on the head, the robot can easily recognize its surroundings because the object is out of its field of view. In addition, since the object is supported by both hands and the head, three points in total, carrying becomes more stable. We also implement our methods on the humanoid robot JAXON and show that they can be applied to the real robot.

I. INTRODUCTION

Humanoid robots can hold and carry objects with their arms and legs. Using both arms enables them to manipulate much larger objects than a single arm can, and using legs enables them to carry objects even over uneven terrain.

In this paper, we focus on lifting and carrying an object whose size and weight allow it to be manipulated with both arms. For such an object, it is common for a robot to hold it in front of its body [1], [2], [3], [4]. However, holding an object in front has two disadvantages. One is that the visual field is reduced by the object. The other is instability, because the object is supported only by the hands. To address the former problem, Kumagai et al. use environmental memory to avoid obstacles [5]. However, this method does not work when the environment changes. Therefore, we propose methods to make a humanoid robot lift an object onto its head. By holding the object on the head, the visual field is not reduced. In addition, since the object is supported at three points (both hands and the head), carrying becomes more stable.

A key difficulty in lifting an object onto the head is that the robot must change how it holds the object during the motion. If it does not, it either comes into contact with the object or turns it over (Fig. 1).

To make a humanoid robot lift and carry an object on the head, we propose the system shown in Fig. 2. This system consists of four types of components. The Motion Generator and the Arms Impedance Parameter Controller decide the joint angles, the conditions of the arms, and the objective force values. The Minimum Pressing Force Estimator estimates the minimum force needed to hold and lift the object, using visual information. The Impedance Controller and the Full-body Balancing Controller modify the joint angles specified by the Motion Generator, using the force values measured by the sensors, the impedance parameters, and the objective force values specified by the Arms Impedance Parameter Controller. The Real Robot moves according to the modified joint angles. The Obstacle Detector detects obstacles on the ground while the object is being carried.

1 Riku Shigematsu, Shintaro Komatsu, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba are with the Department of Mechano-Informatics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan. [email protected]

Fig. 1. A robot should change its hand positions and the way it holds an object so that it can avoid coming into contact with the object (left sub-figure) and turning it over (right sub-figure).

Fig. 2. Whole system of our architecture to lift and carry an object on the head. p_impedance represents the impedance parameters. θ^all_ref and θ^all_mod represent all joint angles as specified and as modified, respectively. f_d is the objective force values. f_sensor represents the force values measured by the sensors. I is image data. P is depth cloud data. x represents the positions of obstacles.

The contributions of this paper are as follows.

• We produce a series of motions that make a humanoid robot lift an object onto the head without turning over the object.

• We propose a method that makes a humanoid robot change the way it holds an object on its knees.

• We make a humanoid robot lift an object whose mass properties and friction are unknown to the robot, using visual information.

Related work is described in Sec. II. Methods to lift and carry objects on the head are described in Sec. III. The experimental results are shown in Sec. IV. We conclude and discuss future work in Sec. V.


II. RELATED WORKS

A. Lifting an object using both arms of a humanoid robot

There are several works that make a humanoid robot lift an object with both arms. Harada et al. calculate a humanoid robot's objective ZMP from the weight and position of an object and succeed in lifting an 8.5 kg box with the humanoid robot HRP2 [1]. Ohmura et al. succeed in holding and lifting a 30 kg object using whole-body tactile skin [2]. Arisumi et al. use the body of a humanoid robot dynamically and lift a heavy object using momentum [6]. Nozawa et al. make a humanoid robot lift a heavy object whose weight and mass properties are unknown by using on-line estimation of the operational force [3].

B. Carrying an object with a humanoid robot

There are also previous works that make a humanoid robot carry an object. Harada et al. calculate the change in ZMP and succeed in avoiding falls while carrying an object [1]. Murooka et al. produce methods that make a humanoid robot select how to transport an object from among pushing, pivoting, and lifting [4]. Murooka et al. also produce methods to push a heavy object using both arms, the hands, or the hip without losing balance [7]. Kumagai et al. propose methods that make a humanoid robot walk through terrain while carrying an object, using environmental memory [5].

III. METHODS

A. Series of actions for lifting an object on the head

In this section, we explain a series of actions for lifting and carrying an object. Fig. 3 illustrates them.

Fig. 3. A series of motions for lifting an object on the head while keeping the object pose.

There are eight stages to lift a target object onto the head. The stages are listed as follows.

(a) The robot searches for a target object.
(b) The robot moves to the object so that it can lift it.
(c) The robot crouches and holds the object.
(d) The held object is lifted.
(e) The robot brings the object close to its knees.
(f) The robot changes the way it holds the object by lowering its hand positions to the torso while rotating its hands at the same time. Fig. 4 illustrates this motion more clearly.
(g) The held object is lifted onto the head.
(h) The object is placed on the head.

Fig. 4. Motions of (e) and (f) in Fig. 3.

In the above motions, we prepare postures (c), (d), (f), (g), and (h) beforehand. We first set the positions of the end effectors of the arms and legs. Then, we calculate how much the robot should open its arms from the size of the object. Finally, we calculate inverse kinematics and generate the postures. Posture (e) is calculated as Sec. III-C.2 explains.
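As a rough illustration of this posture-generation step, the sketch below derives symmetric hand targets from the object width before handing them to an IK solver; the coordinate convention, the pressing margin, and the solve_ik helper are hypothetical placeholders, not the actual JAXON software.

import numpy as np

# Minimal sketch of pre-computing a holding posture from the object size.
# The frame convention (arms open along y), the pressing margin, and solve_ik()
# are illustrative assumptions, not the robot's real kinematics interface.
def holding_hand_targets(object_center, object_width, press_margin=0.01):
    """Return left/right palm target positions that slightly squeeze the object."""
    half_open = object_width / 2.0 - press_margin   # open a little less than the box
    offset = np.array([0.0, half_open, 0.0])        # arms open along the y axis
    return object_center + offset, object_center - offset

# Each stored posture is then obtained by solving inverse kinematics for these
# hand targets together with fixed foot positions, e.g.:
#   left, right = holding_hand_targets(center, 0.40)
#   posture = solve_ik(left_hand=left, right_hand=right, feet=fixed_feet)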

B. Control of the Force

To lift an object onto the head successfully, the robot should keep pressing the object with adequate force. To realize this, we apply internal force based impedance control [8] to the end effectors of the arms. Internal force based impedance control is represented by the following equation.

M(\ddot{x} - \ddot{x}_d) + D(\dot{x} - \dot{x}_d) + K(x - x_d) = f - f_d \qquad (1)

In Eq. 1, M, D, K are 6×6 diagonal matrices representing a virtual inertia, damping, and stiffness matrix, respectively. x and x_d are six-dimensional vectors: x represents the modified position and rotation of the end effectors of the arms, and x_d represents the specified position and rotation. The modified position and rotation correspond to θ^all_mod in Fig. 2, and the specified position and rotation correspond to θ^all_ref. In these matrices and vectors, three components are related to position (x, y, and z) and the other three are related to rotation (around the x, y, and z axes). f and f_d are six-dimensional vectors representing the force values measured by the sensors and the objective force values at the end effectors of the arms, respectively. In these vectors, three components are related to force (in the x, y, and z directions) and the other three are related to torque (around the x, y, and z axes).

The robot updates the positions and rotations of the end effectors of its arms according to Eq. 1 at every time step. As Eq. 1 shows, we can change how the arms react to an applied force by changing the values of M, D, K.
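To make Eq. 1 concrete, the following is a minimal sketch of one numerical update of the pose error, assuming diagonal 6×6 gain matrices and a simple explicit-Euler integration; the function name and the integration scheme are illustrative, not the controller actually running on JAXON.

import numpy as np

# One control-cycle update of the arm end-effector pose error e = x - x_d (6D)
# according to Eq. 1: M*e_ddot + D*e_dot + K*e = f - f_d.
def impedance_step(e, e_dot, f, f_d, M, D, K, dt):
    """Return the updated pose error and its velocity after one time step dt."""
    e_ddot = np.linalg.solve(M, (f - f_d) - D @ e_dot - K @ e)
    e_dot = e_dot + dt * e_ddot      # integrate acceleration to velocity
    e = e + dt * e_dot               # integrate velocity to pose error
    return e, e_dot                  # the modified pose is then x_d + e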

We divide the series of motions illustrated in Fig. 3 into two types: motions where only the arms and the object are in contact (i.e., (c), (d), (e), (g) in Fig. 3) and motions where the body is also in contact with the object (i.e., (f), (h) in Fig. 3).

During motions where only the arms of the robot are in contact with the object, the robot should keep holding the object with adequate pressing force, because only the arms support the object. To realize this, we set small values for the three diagonal components of M, D, K that are related to position. With small values, the robot moves its arms widely to keep holding the object. Consequently, the robot can press the object with adequate force even when the object is somewhat deformed by the applied force. We also set small values for the three diagonal components of M, D, K related to rotation. This makes the robot follow torque more easily and prevents slipping in the rotational directions.

When the body of the robot is also in contact with the object, a large internal force occurs and its direction is unpredictable. In such cases, the robot should not move its arms widely in response to the applied force; otherwise, the arms sometimes vibrate and get out of control. Therefore, we set larger values for all diagonal components of M, D, K. Moreover, when an internal force occurs during motions (f) and (h) in Fig. 3, we want the arms to be able to slip along the object in the z direction (vertical direction). Thus, we set smaller values for the diagonal components of M, D, K that correspond to movement in the z direction and larger values in the other two directions (see Table I).

C. Use of Visual Information

To lift an object onto the head, the robot uses visual information to estimate the pose of the object. For the experiments, we prepare two checkerboards for estimating the pose (they are attached as illustrated in Fig. 6). Using the estimated pose, the robot can move into position for holding and can generate the motion that places the object on the knees (motion (e) in Fig. 3). The robot can also estimate the minimum pressing force using the estimated pose.
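As an illustration of checkerboard-based pose estimation, the sketch below uses OpenCV's corner detection and PnP solver; the calibrated camera parameters are assumed to be available, and the pattern size is given as an inner-corner count, which may differ from the 6 × 3 board description below. This is not the actual perception pipeline on JAXON.

import cv2
import numpy as np

# Hedged sketch of estimating the board pose from one camera image.
# camera_matrix and dist_coeffs come from camera calibration (assumed given).
def estimate_board_pose(image, camera_matrix, dist_coeffs,
                        pattern_size=(6, 3), square_size=0.03):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    # 3D corner coordinates in the board frame (all corners lie in the z = 0 plane).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size
    ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None   # board pose in the camera frame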

1) Move for Lifting the Object: The robot moves so that it can lift the object. In this paper, we only consider box-shaped objects. To lift the object, the robot simply moves to a position in front of it. The distance between the robot and the object is chosen so that the knees of the robot do not collide with the object when it bends down.

2) Place the Object on the Knees: In order for the robot to lift the object, the object should first be placed on the knees successfully (motion (e) in Fig. 3). Since the pose of the object varies during lifting, the robot should calculate how much its arms must be moved to place the object on the knees. Fig. 5 illustrates this process. We choose a point where an edge of the object can be placed (this region is shown in red in Fig. 5) and calculate how much the arms should be moved so that the nearest edge of the object lands on the chosen point. Then, we calculate inverse kinematics to judge whether such a movement is executable. We also check that motion (f) in Fig. 3 is executable by calculating inverse kinematics. If both motions are executable, the robot executes them; otherwise, we choose another point and perform the same check again, as sketched below. In our program, several discrete points in the region are chosen first, and the check is executed in order from the points closest to the torso.
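The candidate-point search described above can be summarized as in the sketch below; knee_region_points (ordered from closest to the torso, as in Fig. 5) and the solve_ik feasibility check are hypothetical placeholders for the robot's own planning and IK software.

# Sketch of selecting a knee placement point for which motions (e) and (f) are
# both reachable; solve_ik() is a hypothetical feasibility check, not a real API.
def select_knee_placement(object_edge_pos, knee_region_points, solve_ik):
    for target in knee_region_points:              # closest-to-torso points first
        displacement = target - object_edge_pos    # required arm movement
        if solve_ik("place_on_knees", displacement) and \
           solve_ik("lower_hands_and_rotate", displacement):
            return displacement                    # both (e) and (f) are executable
    return None                                    # no feasible placement found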

Fig. 5. Calculating how much the robot should move its arms to place the object on the knees.

3) Estimate Minimum Pressing Force: To lift an object with unknown mass properties and friction, the robot should estimate the minimum pressing force needed to hold the object. A previous work uses changes in the robot's force sensor values to estimate the minimum force to lift an object [9]. That method works when the robot grips the object with its hands, or when the direction of the applied force is the same as the direction in which the object is lifted. However, when the robot lifts an object that has no handles (e.g., a cardboard box), it must first hold the object and then lift it; in such a case, the direction of the applied force differs from the lifting direction, and the previous method cannot be used. Therefore, we estimate the minimum pressing force to hold and lift an object in a different way.

We use visual information to estimate the minimum pressing force. The robot tries to lift the object slightly while holding it with a certain initial force. If the robot judges that the object has been lifted while keeping its orientation (i.e., without rotating), it considers the force sufficient to hold and lift the object. Otherwise, the robot puts the object back down and repeats the same operation while holding it with a larger force. The robot repeats this process until it finds the minimum pressing force or reaches the maximum force. This procedure is implemented as Algorithm 1.

D. Modify Posture by Estimated Weight of the Object

To keep its balance while lifting and carrying the object, the robot should modify its posture in accordance with the position and weight of the object. We estimate the weight of the object from the change in the foot force sensor values between motions (c) and (d) in Fig. 3. We modify the posture by assuming the weight is loaded on both hands evenly, as sketched below.
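A minimal sketch of this weight estimate is shown below, assuming the total vertical foot forces before grasping and after lifting are available from the foot force sensors; the variable names are illustrative.

GRAVITY = 9.81  # [m/s^2]

# Object mass inferred from the increase in total vertical foot force between
# motions (c) and (d) in Fig. 3. The estimated weight is then assumed to load
# both hands evenly when the reference posture is modified.
def estimate_object_mass(total_foot_fz_before, total_foot_fz_after):
    delta_fz = total_foot_fz_after - total_foot_fz_before  # [N]
    return max(delta_fz, 0.0) / GRAVITY                     # [kg]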

IV. EXPERIMENT

We implement the proposed methods on the real robot JAXON [10] and conduct three experiments. The first experiment is to lift an object of unknown mass properties and friction onto the head; it shows that a humanoid robot can successfully lift an object even though its mass properties and friction are unknown. The second experiment is to detect a step and climb it while carrying an object on the head; it shows that occlusion is reduced by lifting the object onto the head, that the robot can detect an obstacle while carrying the object, and that the robot can carry the object stably. The last experiment is to lift and carry an object in succession; it shows that our methods enable a humanoid robot to lift and carry the object successively.

Algorithm 1 Minimum force estimation
function MINIMUM FORCE ESTIMATION
    f_max, f_ini ← maximum / initial force
    δf ← force interval
    z_th, θ_th ← position / rotation thresholds
    f ← f_ini
    while f < f_max do
        Lift the object by z
        Measure how far the object was lifted: δz
        Measure how much the object turned: δθ
        if δz > z_th and abs(δθ) < θ_th then
            return f
        else
            Put down the object by z
            f ← f + δf
        end if
    end while
end function
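For reference, a compact sketch of Algorithm 1 is shown below; the robot primitives hold_with_force, lift_by, put_down_by, and measure_object_pose are hypothetical stand-ins for the actual JAXON motion and vision interfaces, and the default parameters are the values given in Sec. IV-B.

# Hedged sketch of Algorithm 1. The robot.* primitives are hypothetical
# placeholders; only the control flow mirrors the algorithm above.
def minimum_force_estimation(robot, f_max=100.0, f_ini=10.0, delta_f=15.0,
                             lift_z=0.200, z_th=0.140, theta_th_deg=10.0):
    """Return a pressing force [N] that lifts the object without turning it over."""
    f = f_ini
    while f < f_max:
        robot.hold_with_force(f)                  # press the object with force f
        robot.lift_by(lift_z)                     # try to lift it a little
        dz, dtheta = robot.measure_object_pose()  # lift height and rotation (from vision)
        if dz > z_th and abs(dtheta) < theta_th_deg:
            return f                              # lifted without rotating: f is enough
        robot.put_down_by(lift_z)                 # reset and retry with a larger force
        f += delta_f
    return f_max                                  # fall back to the maximum force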

A. Robot and Object

To show that our methods can be applied to a real robot, we implement them on the robot and make it lift and carry an object on its head. We use the humanoid robot JAXON [10], which is about 188 cm tall, weighs about 127 kg, and has 33 degrees of freedom (8 in each arm, 6 in each leg, 3 in the torso, and 2 in the neck). The robot has six-axis force sensors on both hands and both feet.

As the object to be lifted and carried, we prepare a 40 cm cubic cardboard box containing plastic bottles filled with water as weights. We attach two checkerboards, one on the upper surface and one on the front of the box, to estimate the pose from visual information. The checkerboard on the upper surface is 6 × 3 with 3 cm squares, and the one on the front is 5 × 4 with 2.5 cm squares. The box is shown in Fig. 6.

For lifting and carrying, we attach silicone rubber pads to both hands (shown in Fig. 7). They are attached so that the wrist parts do not come into contact with the object.

B. Parameter settings

Fig. 6. A 40 cm cubic cardboard box used in the experiments. Plastic bottles filled with water are contained as weights.

Fig. 7. Silicone rubber pads attached to both hands of the robot. They have a thickness of 2.5 cm.

We set the diagonal components of M, D, K as shown in Table I. In Table I, the motions are the types of motions illustrated in Fig. 3, and the impedance values are the diagonal components of K in Eq. 1. The diagonal components of D and M related to position are obtained by multiplying the corresponding values of K by 60 and 40, respectively; note that these multipliers change according to the robot being used. The diagonal components of D and M related to rotation are obtained by multiplying the corresponding values of K by 40 and 40, respectively; these values are also changed according to the robot. We increase the diagonal components of M, D, K during motion (e) compared to (c), (d), and (g) in Fig. 3 so that the robot avoids vibrating when it comes into contact with the object.

The numbers in Table I were determined experimentally. The magnitude relationships do not change according to the robot being used, but the concrete numbers do.

Motions       | Position stiffness [N/m] (x, y, z) | Rotation stiffness [(N·m)/rad] (x, y, z)
(c), (d), (g) | 10, 10, 10                         | 10, 10, 10
(e)           | 20, 20, 20                         | 20, 20, 20
(f), (h)      | 200, 200, 20                       | 1000, 1000, 1000

TABLE I. Diagonal components of K in Eq. 1.
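The gain schedule in Table I can be written out as data, together with the D and M derivation described above (multipliers 60 and 40 for position, 40 and 40 for rotation); the dictionary keys are illustrative, and the multipliers are robot-dependent.

# Diagonal K values from Table I: (position [N/m], rotation [(N*m)/rad]).
K_DIAG = {
    "c_d_g": ([10, 10, 10],   [10, 10, 10]),
    "e":     ([20, 20, 20],   [20, 20, 20]),
    "f_h":   ([200, 200, 20], [1000, 1000, 1000]),
}

def derive_d_m(k_pos, k_rot):
    """Return the D and M diagonals derived from the K diagonals (robot-dependent)."""
    d = [60 * k for k in k_pos] + [40 * k for k in k_rot]
    m = [40 * k for k in k_pos] + [40 * k for k in k_rot]
    return d, m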

In Algorithm 1, we set f_max to 100 N, f_ini to 10 N, δf to 15 N, z to 200 mm, z_th to 140 mm, and θ_th to 10 deg. In addition, we add 30 N to f after estimating the minimum pressing force. This is because our system interpolates trajectories based on Hoff-Arbib interpolation [11], [12]; as a result, the distance between the two hands is not always exactly as instructed during the trajectory. To reduce this influence, we add this margin to the estimated minimum pressing force.
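As a configuration-style summary, the concrete values above can be collected as constants; the names are illustrative, not the actual configuration file of the system.

F_MAX = 100.0        # [N]   maximum pressing force tried in Algorithm 1
F_INI = 10.0         # [N]   initial pressing force
DELTA_F = 15.0       # [N]   force increment per trial
LIFT_Z = 0.200       # [m]   trial lifting height z
Z_TH = 0.140         # [m]   minimum lift height to call a trial successful
THETA_TH_DEG = 10.0  # [deg] allowed object rotation during a trial
FORCE_MARGIN = 30.0  # [N]   added to the estimate to absorb interpolation effects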

C. Experimental Results

Fig. 8 shows JAXON lifting the box onto its head. The weight of the box is about 3 kg, and snapshots are taken every 8 seconds. During the experiment, the position and orientation of the box are calculated by detecting the two checkerboards attached to the box (illustrated in Fig. 6). Fig. 9 shows the change in the force values on the right hand in the x direction (the pressing direction) during the experiment in Fig. 8. The graph shows that the robot successfully finds the minimum pressing force needed to hold and lift an object of unknown mass properties and friction.

Fig. 8. Lifting an object onto the head. The weight of the box is about 3 kg. Snapshots are taken every 8 seconds.

Fig. 9. Force changes on the right hand in the x direction (pressing direction) during the experiment shown in Fig. 8.

Fig. 10 shows an experiment in which the robot carries the box while recognizing its surroundings. This experiment shows that visual occlusion and instability are successfully reduced. We prepare a step as an obstacle, shown in Fig. 11; it is 15 cm high, 80 cm wide, and 40 cm deep. Once the robot detects the step, it walks up to a certain distance from it and then climbs it. The weight of the box is about 6.4 kg in this experiment. Because the robot can detect the position of the step while carrying the box, it successfully climbs the step and avoids falling. In this experiment, the robot recognizes its surroundings using the MultiSense sensor mounted on the head of JAXON, shown in Fig. 12. The position of the step is detected using the depth cloud captured by the MultiSense.

Fig. 10. Carrying an object and climbing a step. The weight of the box is about 6.4 kg. Snapshots are taken every 3 seconds.

Fig. 11. A step prepared as an obstacle while carrying the object on the head. The height is about 15 cm, the width is about 80 cm, and the depth is about 40 cm.

Fig. 12. MultiSense attached to the head of JAXON.

The whole process for detecting the step is illustrated in Fig. 13. The names used in the figure and in this explanation correspond to the program names in the pcl_ros (http://wiki.ros.org/pcl_ros) or jsk_recognition (https://jsk-recognition.readthedocs.io) packages. We first cut the input depth cloud and obtain two clouds, one above the front floor and one near the front floor, using PassThrough filtering. We cluster the depth cloud and get the cluster indices above the front floor using Euclidean Clustering. We detect the floor surface from the depth cloud near the front floor using Organized Multi Plane Segmentation. We use the Cluster Point Indices Decomposer to align the clustered depth cloud to the detected surface, and we obtain the obstacle positions by making bounding boxes of the aligned clustered cloud.
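The sketch below mirrors the logic of this pipeline on a plain N × 3 point cloud using numpy only; the real system uses the pcl_ros and jsk_recognition nodelets named above, so the filtering bands, the grid-based stand-in for Euclidean Clustering, and the mean-height stand-in for plane segmentation are all illustrative assumptions.

import numpy as np

# Numpy-only sketch of the step-detection logic in Fig. 13 (not the actual
# pcl_ros / jsk_recognition pipeline). points is an N x 3 cloud in a frame
# whose z axis is vertical; thresholds are illustrative.
def detect_obstacles(points, floor_band=(-0.05, 0.05), above_band=(0.05, 0.5),
                     cell=0.05, min_points=30):
    z = points[:, 2]
    # PassThrough-style filtering: points near the front floor and above it.
    floor_pts = points[(z > floor_band[0]) & (z < floor_band[1])]
    above_pts = points[(z > above_band[0]) & (z < above_band[1])]

    # Crude floor estimate (stand-in for Organized Multi Plane Segmentation).
    floor_height = float(np.mean(floor_pts[:, 2])) if len(floor_pts) else 0.0

    # Grid-based grouping of the above-floor points (stand-in for Euclidean
    # Clustering), followed by per-cluster bounding boxes.
    keys = np.floor(above_pts[:, :2] / cell).astype(int)
    boxes = []
    for key in {tuple(k) for k in keys}:
        cluster = above_pts[np.all(keys == key, axis=1)]
        if len(cluster) < min_points:
            continue
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)
        lo[2] = floor_height                 # align the box bottom to the floor
        boxes.append((lo, hi))               # axis-aligned obstacle bounding box
    return boxes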

Fig. 14 shows snapshots of an experiment in which the robot lifts and carries the box in succession. This experiment shows that the proposed methods enable a real robot to lift and carry the object successively.

Fig. 13. Visual information processing for detecting obstacles.

Fig. 14. Snapshots during lifting and carrying in succession. The weight of the box is about 3.0 kg.

V. CONCLUSION AND FUTURE WORKS

In this paper, we present methods that make a humanoid robot lift and carry an object of unknown mass properties and friction on its head. To achieve this, we propose

• a series of motions that make a humanoid robot lift an object onto the head without turning over the object;

• a method to change the way of holding an object on the knees;

• a method to estimate the minimum pressing force for holding and lifting an object with both arms.

We implemented the proposed methods on the humanoid robot JAXON and showed that they enable the robot to lift an object onto its head. We also showed that the robot can recognize its surroundings without a reduced visual field and can avoid an obstacle.

There are two major improvements needed in future work. First, we should produce a method that enables the robot to estimate where it should stand to hold an object whose shape is not box-like. Second, we should produce a way to estimate the position and orientation of the object held on the head, so that the robot can hold and carry the object stably for a long time. These two future studies will make a humanoid robot able to lift and carry a wider variety of objects with more stability.

REFERENCES

[1] Kensuke Harada, Shuuji Kajita, Hajime Saito, Mitsuharu Morisawa, Fumio Kanehiro, Kiyoshi Fujiwara, Kenji Kaneko, and Hirohisa Hirukawa. A humanoid robot carrying a heavy object. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA), pp. 1712–1717. IEEE, 2005.

[2] Yoshiyuki Ohmura and Yasuo Kuniyoshi. Humanoid robot which can lift a 30kg box by whole body contact and tactile feedback. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1136–1141. IEEE, 2007.

[3] Shunichi Nozawa, Ryohei Ueda, Youhei Kakiuchi, Kei Okada, and Masayuki Inaba. A full-body motion control method for a humanoid robot based on on-line estimation of the operational force of an object with an unknown weight. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2684–2691. IEEE, 2010.

[4] Masaki Murooka, Shintaro Noda, Shunichi Nozawa, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba. Manipulation strategy decision and execution based on strategy proving operation for carrying large and heavy objects. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 3425–3432. IEEE, 2014.

[5] Iori Kumagai, Fumihito Sugai, Shunnichi Nozawa, Youhei Kakiuchi, Kei Okada, Masayuki Inaba, and Fumio Kanehiro. Complementary integration framework for localization and recognition of a humanoid robot based on task-oriented frequency and accuracy requirements. In Proceedings of the 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pp. 683–688. IEEE, 2017.

[6] Hitoshi Arisumi, Sylvain Miossec, Jean-Remy Chardonnet, and Kazuhito Yokoi. Dynamic lifting by whole body motion of humanoid robots. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 668–675. IEEE, 2008.

[7] Masaki Murooka, Shunichi Nozawa, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba. Whole-body pushing manipulation with contact posture planning of large and heavy object for humanoid robot. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 5682–5689. IEEE, 2015.

[8] R. C. Bonitz and Tien C. Hsia. Internal force-based impedance control for cooperating manipulators. IEEE Transactions on Robotics and Automation, Vol. 12, No. 1, pp. 78–89, 1996.

[9] Shunichi Nozawa, Masaki Murooka, Shintaro Noda, Kunio Kojima, Yuta Kojio, Yohei Kakiuchi, Kei Okada, and Masayuki Inaba. Unified humanoid manipulation of an object of unknown mass properties and friction based on online constraint estimation. In Proceedings of the 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pp. 249–256. IEEE, 2017.

[10] Kunio Kojima, Tatsuhi Karasawa, Toyotaka Kozuki, Eisoku Kuroiwa, Sou Yukizaki, Satoshi Iwaishi, Tatsuya Ishikawa, Ryo Koyama, Shintaro Noda, Fumihito Sugai, et al. Development of life-sized high-power humanoid robot JAXON for real-world use. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 838–843. IEEE, 2015.

[11] Bruce Hoff and Michael A. Arbib. Models of trajectory formation and temporal interaction of reach and grasp. Journal of Motor Behavior, Vol. 25, No. 3, pp. 175–192, 1993.

[12] Shunichi Nozawa, Eisoku Kuroiwa, Kunio Kojima, Ryohei Ueda, Masaki Murooka, Shintaro Noda, Iori Kumagai, Yu Ohara, Yohei Kakiuchi, Kei Okada, et al. Multi-layered real-time controllers for humanoid's manipulation and locomotion tasks with emergency stop. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 381–388. IEEE, 2015.
