
Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Okayama, Japan, September 20-22, 2004

Artificial Emotion of Face Robot through Learning in Communicative Interactions with Human

Fumio Hara
Tokyo University of Science
1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, JAPAN
E-mail: hara@rs.kagu.tus.ac.jp

Abstract

It is pointed out that the human-robot interface, or human interface in general, must be newly interpreted as intelligence of communicative interaction between robot and human, and is again a key issue for the development of a new species of robot able to serve humans. We present a concept of "active human interface (or media)" composed of three functions and essential to such new robots. When applying such a machine to human service, psychological familiarity of the robot form is another issue in ensuring psychological acceptance. We refer again to the existence of the uncanny valley in familiarity with robot appearance or form.

We have been developing a life-like face robot that will serve as a human media realizing such intelligence of communicative interaction. Technological issues and the performance of the face robot's recognition function for human emotion and its actuation function for facial expressions are briefly summarized to convey the state of the art of the face robot Mark II.

Basic studies for characterizing face robot behavior through communicative interactions with a human partner are then described through two test results: imitation of facial expressions and personality creation in the face robot's responses. We point out that the learning method employed by the face robot in communicative interactions is also a key issue, and show that the most suitable learning method for a human partner is the OR type supervised reinforced one.

Finally we discuss the value system, in relation to unsupervised learning of the face robot in communicative interaction, that characterizes its emotion, or the selection criteria or bias applied when taking a certain responsive facial expression to the partner's state of mind. At the end, concluding remarks and future studies are briefly pointed out.

1 Introduction

As pointed out by Takeuchi and Nagao [1] and Hara and Kobayashi [2], face-to-face conversation between humans is considered an ideal model in designing a human interface or human media for human-machine communication, since our daily communication is fully based on such communicative interactions. The major feature of human face-to-face communication is the multiplicity of communication channels. A channel is a communication medium associated with a particular encoding method, e.g. an auditory channel for carrying voices and a visual channel for face actions including nodding motion of the head, facial expressions, etc. Thus face-to-face communication is a multimodal communicative interaction between humans, and we construct social relationships with each other through such communications. Concerning face-to-face communication, Mehrabian [3] indicated that only 7% of a message is due to linguistic language, 38% is due to paralanguage, and 55% is due to facial expressions. This implies that facial expression is a major modality in human face-to-face communication [4].

Recently this style of communication has attracted keen interest even among roboticists, since they need to develop intelligent human interfaces for social robots, which are considered highly promising substitutes for human service work such as care-taking, housekeeping and so on. Especially for the development of humanoid robots that have life-like bodies with faces, arms, legs and also voice organs, such intelligent human interfaces are a necessity, since humanoid robots will in future be used for service work.

In contrast, when we review the conventional type of human interface usually employed in industrial plants, it is pointed out that, when an accident happens in such a system, one of the major factors causing the accident is so-called human error, or in other words mis-recognition, mis-judgment, and/or mis-operation. The important technology to prevent such human error should be developed from the perspective of human cognitive affordance [5]. For example, the display of the state of an industrial plant must be more easily understandable to the


human operator, not by using a huge number of digital-mode data displays but by employing analogue-mode data displays that match human cognitive competence more closely.

From the discussion above, the interface between a human operator, or more generally a human, and an artifact system such as a plant, machine or robot must be more intelligent in the communicative interaction between human and artifact system, or in other words, proactive to the human's recognition, judgment and operation.

In the following sections, we will first describe a new type of human interface that should constitute intelligence of communicative interaction with humans: the "Active Human Interface (AHI)". Its three essential functions are briefly depicted; the three technological components of the active human interface are 1) a recognition function for the agent partner's psychological state, 2) a decision-making function for what kind of response to make to the partner, and 3) an actuation function to express the response to the human partner.

When such an interface is applied to human services, the psychological familiarity of the interface morphology or form is another issue in ensuring psychological acceptance. The uncanny valley of familiarity with machine form, first pointed out by Masahiro Mori [6], is briefly described. This characteristic is discussed in conjunction with the active human interface when employing a face robot.

We will then show the technological issues and performance of the intelligent interface; that is, the face robot's recognition function for human emotion and its actuation function for facial expressions are briefly summarized to convey the state of the art of the face robot Mark II [7].

The characterization of face robot behavior as an intelligent interface through communicative interactions with a human partner is then described through two test examples: imitation of facial expressions, and personality creation in the face robot's response to the human partner's behavior when we employ supervised reinforced learning. The learning method to be employed by the face robot in communicative interactions with a human partner is also a key issue: we will show an effective reinforced learning method under communicative interaction with a human, the so-called OR type reinforced learning method.

Finally we discuss the value system, in relation to unsupervised learning of the face robot in communicative interaction, by which our face robot characterizes its emotion, or the selection criteria or bias applied when taking a certain responsive facial expression to the partner's state of mind.

At the end, concluding remarks and future studies are briefly pointed out.

2 Active Human Interface and Artificial Emotion

When we think about human-robot interactive communication as an advanced form of human-machine interface, the robot or machine must possess a special interface to communicate transactional information as well as psychological information interactively with a human user, as in human-to-human communication. The machine interface needs communication functions similar to those that evolved in human beings. These functions, at least the three pointed out by Arbib [8] and already mentioned in the previous section, are briefly summarized here again (see the sketch after this list):

(1) Recognition: understanding the user's or human partner's state of mind or emotion through the information expressed by the partner and detected via visual and/or auditory channels.

(2) Computing: selecting the artificial emotion state that is best for pro-acting the partner's cognitive competence.

(3) Actuating expression: displaying the emotion state via an appropriate modality of communication channels such as facial expression, prosodic voice or gesture.
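A minimal Python sketch of the three functions as an abstract interface follows; the class, method and parameter names are our own illustration, not from the paper or any face-robot codebase:

```python
from abc import ABC, abstractmethod

class ActiveHumanInterface(ABC):
    """The three functions an AHI must provide, per the list above."""

    @abstractmethod
    def recognize(self, visual, audio):
        """(1) Estimate the partner's state of mind from the visual
        and/or auditory channels."""

    @abstractmethod
    def select_emotion(self, partner_state):
        """(2) Choose the artificial emotion state best suited to
        pro-acting the partner's cognitive competence."""

    @abstractmethod
    def express(self, emotion_state):
        """(3) Display the chosen state by facial expression,
        prosodic voice or gesture."""

def interaction_step(ahi: ActiveHumanInterface, visual, audio):
    # One recognition-computing-actuation cycle of the interface.
    partner_state = ahi.recognize(visual, audio)
    emotion_state = ahi.select_emotion(partner_state)
    ahi.express(emotion_state)
```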

On the human side, he/she is already equipped with these three functions. If a machine or robot is equipped with them, then a human user can immediately undertake mutually interactive communication with the machine. Such interactive communication may make it possible for the human user's intelligence to be activated more effectively. From the discussion above, we have reached the concept of an intelligent human interface that enhances human cognitive competence, i.e., the Active Human Interface (AHI). The AHI is, of course, a multimodal communication medium.

When the concept of the active human interface is extended to a social robotic system that is equipped with those three functions and is used in communicative interaction with a human partner, it is not an interface but may be called a media. Thus we arrive at a more general concept of a proactive artifact system, i.e. Active Human Media (AHM). The face robot explained in this talk is an example realizing such active human media. The face robot is equipped with the first function, real-time automatic recognition of human facial expressions, and the third one, realistic facial expressions of at least 6 basic emotions. The second function of the AHI is then the most important key issue in developing such an intelligent human interface media, and may be called artificial intelligence of emotion, which decides what response must be made to the human partner.

Any model-based artificial emotion or emotional computing is known to have an essential drawback: the model is unworkable in situations that were not taken


into account when the model was constructed. To partly solve this problem, we are undertaking a different approach to constructing an artificial emotion: a life-like artifact system characterized as AHM may be equipped with both a value system and a learning mechanism. The artifact system may commence learning the recognition-expression coordination by properly changing the coordination weights in the learning mechanism (reinforcement learning) along the course of communicative interaction with its human partner. After a certain amount of communicative experience, the artifact system of artificial emotion may have self-organized the recognition-expression coordination subject to its value system. The recognition-expression coordination may then select a particular emotion state in any given situation, strongly shaped by the human partner's character. Thus the established recognition-expression coordination may be called an artificial emotion, one that might be more flexible and individual-dependent.
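One way to read this recognition-expression coordination is as a weight matrix adjusted by reward during interaction. The following Python sketch is only a hedged illustration under that reading; the matrix layout, the softmax response rule and all parameter values are assumptions, not the paper's implementation:

```python
import numpy as np

N_EMOTIONS = 6  # surprise, fear, disgust, anger, happiness, sadness

# W[i, j]: tendency to answer recognized partner emotion i with robot
# expression j; uniform at first, so early behavior is effectively random.
W = np.ones((N_EMOTIONS, N_EMOTIONS)) / N_EMOTIONS

def respond(recognized: int, temperature: float = 0.5) -> int:
    """Sample a response expression with a softmax preference over W."""
    p = np.exp(W[recognized] / temperature)
    p /= p.sum()
    return int(np.random.choice(N_EMOTIONS, p=p))

def reinforce(recognized: int, expressed: int, reward: float, lr: float = 0.1):
    """Strengthen or weaken one recognition-expression link; over many
    interactions the weights self-organize the coordination."""
    W[recognized, expressed] += lr * reward
```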

3 Virtual Communication

According to Nagao's definition of communication [9], human communication is a sort of behavior for sharing something with other persons. He pointed out the difficulty of realizing communication between a human and a machine, and also voiced suspicion about the necessity of human-machine communication. This is mostly due to the extreme difficulty of implementing an "artificial mind or artificial emotion" in a machine. However, as pointed out by Hara [10], a human user would be able to feel as if he/she were mentally involved and sharing something with a machine when the machine works with at least the three functions of the AHI. Social robotics, which must deal with communicative interaction with humans, has recently become a demanding field, both in robotics for practical application to human care and in the cognitive sciences for understanding the higher intelligence of human communication.

As we do not know whether or not a life-like machine really understands a human mind or feeling and can have the mental-sharing feeling, we must say that there exists no real communication between a human and such a machine. However, on the human side, he/she may very possibly feel as if he/she were involved in the communicative interaction and be satisfied with the messages exchanged with the machine through such an active human interface [Reeves and Nass 11]. This is still an assumption on which our active human interface is based, and we need to construct psychological support or evidence for it. We thus point out that "communication" between a human and a machine or life-like robot through such an interface may be called "virtual communication", to clearly differentiate it from human-to-human communication.

In virtual communication we deal with at least two types of information or message: one is transactional information, such as information composed of computer language, and the other is psychological information, such as the user's feeling. In this paper we focus on the latter, since we are interested in the aspect of communication between a human and a life-like robot where the psychological message may play the major role in evoking feeling/affection/emotion in the human user. The psychological state of mind is said to be so important that it may sometimes govern a user's intelligent activity [Toda et al. 12].

4 Uncanny Valley against Life-like Forms

The life-like form of our face robot is assumed to be fully accepted and to work to pro-act the human partner's recognition, judgment, and operation in the communicative interaction; in short, the face robot works as active human media for its partner. Its task is thus obviously assumed to be, for example, enhancing its partner's cognitive competence in recognition, judgment, and/or operation. The environment to which the face robot is exposed must therefore always include humans as its interacting partners under a given communication context. The communication context here is, for example, the situation or scene in which the face robot exchanges certain psychological as well as transactional information with its partner.

Note here that the acceptability or familiarity of the life-like form of the face robot to the human partner is again another key issue in designing and constructing such social robots. M. Mori [6] pointed out in 1970 that the more life-like we make robots (i.e. the more similar they become to us), the more familiar or believable they become, until ultimately, in the case of 100% similarity with healthy human beings, the familiarity level reaches a maximum. However, the transition has a local minimum, characterized by a sharp drop in familiarity when robots appear very life-like and might sometimes be mistaken for real. In this case robots can cause an uncanny and unpleasant feeling, where still-existing differences (possibly very small ones) suddenly make us realize that the robots are not real, thus violating our expectations [Dautenhahn 13]. Figure 1 illustrates the uncanny valley in the familiarity feeling toward life-like forms.


Figure 1: Uncanny valley of familiarity or believability of robotic appearance, plotted against similarity to a human (lack of liveness).

Dautenhahn [13] concluded that believable design of robots is a matter of balance: finding the appropriate level of similarity with humans, taking into account movement and appearance, and possibly many other factors. Various aspects of how the agent looks and behaves need to be consistent. Of course, depending on the technological development of the three components of the life-like face robot stated above, namely 1) recognition of the human partner's state of emotion, 2) selection of emotion state or emotional computing, and 3) expression of the emotion state, the form or morphology of the face robot should be consistent with its task environment. For example, when at some future time the face robot attains higher and more realistic intelligence or emotion for complex communication with human partners, it must have a healthy human-like face.

5 Technological State-of-the-Art of the Face Robot Mark II

5.1 Real-Time Recognition of Facial Expressions

In this section we present a brief description of a technology for real-time recognition of human facial expressions using a layered neural network, and show the recognition performance for six typical facial expressions: surprise, fear, disgust, anger, happiness and sadness [Kobayashi et al. 14].

To obtain face image data for a full human face, we use a CCD camera with a focal length of 18 mm and a resolution of 256x240 pixels. Figure 2 shows a block diagram of the setup for obtaining face image data, with a transputer for further processing of the image data. By using correlation values of the pixel brightness distribution between the one currently obtained along a vertical line segment crossing the region including the eye and eyebrow and the nominal one along the vertical line crossing the iris, we can identify the positions of the irises of both eyes.
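A hedged Python sketch of such correlation-based iris localization follows; the function, the argument names and the use of normalized cross-correlation are our own assumptions about one way to realize the step described above:

```python
import numpy as np

def find_iris_column(gray, eye_rows, nominal_profile):
    """Scan vertical line segments across the eye/eyebrow region and
    return the column whose brightness profile correlates best with
    the nominal profile through the iris.

    gray: 2-D uint8 image (e.g. 240x256); eye_rows: row slice covering
    the eye region; nominal_profile: 1-D reference brightness profile
    of matching length.
    """
    t = nominal_profile.astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best_col, best_score = -1, -np.inf
    for col in range(gray.shape[1]):
        p = gray[eye_rows, col].astype(float)
        p = (p - p.mean()) / (p.std() + 1e-9)
        score = float(np.dot(p, t)) / t.size  # normalized cross-correlation
        if score > best_score:
            best_col, best_score = col, score
    return best_col
```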

Figure 2: Block diagram of the transputer (TRP with i860, image board with DSP) and black-and-white CCD camera used to obtain face image data.

Figure 3 shows an example of the brightness distribution of image data along a vertical line segment. The accuracy of iris positioning was examined, and the maximum error was found to be less than 3 mm for a 9 mm iris size. The time needed to obtain the iris positions for both eyes was about 40 ms, which is sufficiently short.

Figure 3: Brightness distribution of image data along a vertical line segment crossing an eye.

Using the center positions of both irises, we determined the areas including the eyebrows, eyes, and mouth, respectively. We then selected 13 vertical line segments crossing the eyes, eyebrows, and upper and lower lips, and


obtained the change in the brightness-distribution shape data of the human face between a face expressing a certain emotion state and the neutral one. The facial information data stating the change in brightness-distribution shape were input to a layered neural network that had already been trained on many facial information data of the 6 typical facial expressions, for recognizing the 6 basic facial expressions. Figure 4 shows the thirteen vertical line segments used for obtaining the facial information data.

Figure 4: 13 vertical line segments for obtaining facial information.

Once the positions of the two irises were identified in the face image data obtained by the CCD camera, the normalized facial information data were immediately input to the trained neural network and the recognition result was output. One cycle of facial expression recognition took less than 100 ms. This is a real-time scale applicable to the face robot's communicative interaction experiments. The average correct recognition rate for the six basic facial expressions of surprise, fear, anger, disgust, happiness, and sadness reached 85%. Table 1 shows the correct recognition rates and mis-recognition rates for the six typical facial expressions.
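A hedged Python sketch of this feature-extraction and recognition step follows; the data structures for the segments, the neutral-face profiles and the network object are assumptions introduced only for illustration:

```python
import numpy as np

def facial_information(gray, segments, neutral_profiles):
    """Change in brightness distribution along the 13 vertical line
    segments, relative to the neutral face.

    segments: list of (row_slice, column) pairs located via the irises;
    neutral_profiles: matching list of 1-D neutral-face profiles.
    """
    feats = [gray[rows, col].astype(float) - neutral
             for (rows, col), neutral in zip(segments, neutral_profiles)]
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)  # normalize before the network

def recognize_expression(net, gray, segments, neutral_profiles):
    """Feed the features to the trained layered network; the strongest
    of the 6 outputs names the recognized basic expression."""
    scores = net.predict(facial_information(gray, segments, neutral_profiles))
    return int(np.argmax(scores))
```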

Table 1: Correct recognition rates and mis-recognition rates for the six typical facial expressions (input emotion vs. recognized emotion; average correct rate 85%).

5.2 Emotion Expression on the Face Robot

Face robots Mark I and Mark II have been developed in our laboratory [Hara and Kobayashi 15; Hara et al. 7]. Facial expression on both robots is designed using anatomical and psychological knowledge of human facial expressions. According to Ekman and Friesen [16], almost every facial expression is composed of a certain combination of 44 Action Units. The action units are the components of a coding system for describing facial expression. Thus a facial expression is designed by the particular combination of face components that move when generating the facial expression. For the six basic facial expressions, the 14 action units shown in Table 2 are selected, and the combinations of these action units for the six facial expressions are shown in Table 3. The displacement magnitude for each action unit is determined experimentally. In order to realize these 14 action units on the face robot, we selected 19 points, empirically or from anatomical knowledge of face-muscle morphology, on the face skin; their locations and moving directions are shown in Figure 5. We have found that these points, called control points, are sufficient to realize the 14 action units through particular combinations of some of them.

Table 2: Action units required for the six basic facial expressions

AU No.  Appearance change
1       Inner Brow Raiser
2       Outer Brow Raiser
4       Brow Lowerer
5       Upper Lid Raiser
6       Cheek Raiser
7       Lid Tightener
9       Nose Wrinkler
10      Upper Lip Raiser
12      Lip Corner Puller
15      Lip Corner Depressor
17      Chin Raiser
20      Lip Stretcher
25      Lips Part
26      Jaw Drop

Table 3: Combination of action units for the six basic facial expressions

Facial Exp.  AUs
Surprise     1+2+5+26
Fear         1+2+4+5+20+25,26
Disgust      4+9+17
Anger        4+5+7+10+25,26
Happy        6+12+26
Sad          1+4+15
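The mapping from an expression to its AU combination, and from AUs to control-point displacements, can be illustrated with the hypothetical Python sketch below. The AU combinations follow Table 3; the control-point indices and displacement magnitudes are invented placeholders, since the paper determined those experimentally:

```python
# Expression -> Action Unit combination (after Table 3).
AU_COMBINATIONS = {
    "surprise": [1, 2, 5, 26],
    "disgust": [4, 9, 17],
    "sad": [1, 4, 15],
}

# AU -> {control point index: displacement}; indices into the 19 control
# points of Figure 5 and the magnitudes here are made-up placeholders.
AU_TO_CONTROL_POINTS = {
    1: {0: 2.0, 1: 2.0},   # Inner Brow Raiser
    2: {2: 2.5, 3: 2.5},   # Outer Brow Raiser
    5: {4: 1.5, 5: 1.5},   # Upper Lid Raiser
    26: {18: 4.0},         # Jaw Drop
}

def control_point_targets(expression):
    """Sum the displacement contributions of every AU in the
    expression's combination, per control point."""
    targets = {}
    for au in AU_COMBINATIONS[expression]:
        for point, disp in AU_TO_CONTROL_POINTS.get(au, {}).items():
            targets[point] = targets.get(point, 0.0) + disp
    return targets

print(control_point_targets("surprise"))  # e.g. {0: 2.0, 1: 2.0, ...}
```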



Figure 5: Positions of the control points and their moving directions.

The micro-actuator employed to pull the control points (see Figure 5) on the face skin of the robot is composed of shape memory alloy (SMA) fine wire, specially coiled in an actuator module [Hara et al. 7]. The force pulling the control points is controlled by the electric current applied to the SMA wire. The response time of the SMA actuators for facial expression generation was found to be satisfactory [7]. Visual evaluation of the correct expression rate for the 6 typical facial expressions gave an average value of 83%, and the correct and mis-expression rates are shown in Table 4. Photo 1 shows the 6 typical facial expressions: surprise, fear, disgust, anger, sadness and happiness.

Table 4: Expressiveness of each facial expression on the face robot.

Photo 1: The 6 typical facial expressions displayed on the face robot.

6 Communicative Interaction and Learning

6.1 Reflex of Facial Expression

We carried out the following experiment to demonstrate the capability of our face robot to execute a communicative interaction with a human partner. A human partner is placed just in front of the face robot, facing it directly. When the human partner starts to make one of the six facial expressions, the face robot simultaneously starts to obtain the partner's face image. Using the face-image brightness distribution, the face robot first determines the locations of the two irises and then obtains the brightness distribution along the 13 vertical lines. The normalized facial information data are input to the trained neural network, and recognition of the facial expression is performed successively. If the face robot obtains the same recognition result three times, it decides that the partner's facial expression is the one obtained. The recognition result is then transferred to the computer, which determines the displacements of the control points needed for that facial expression, and the micro-actuators are driven to pull the corresponding face-skin control points. Thus the face robot simply reflexes the facial expression displayed by the human partner. Photo 2 shows an example of a facial expression (anger) directly reflexed by the face robot [Hara et al. 17].
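Read as pseudocode, this reflex behavior is a simple confirm-then-mirror loop. The Python sketch below assumes callable camera, recognizer and actuation interfaces that the paper does not specify:

```python
def reflex_loop(grab_frame, recognize, actuate, n_confirm=3):
    """Mirror the partner's expression once the same recognition result
    has been obtained three times in a row, as described above."""
    last, streak = None, 0
    while True:
        frame = grab_frame()          # face image from the CCD camera
        emotion = recognize(frame)    # neural-network recognition result
        streak = streak + 1 if emotion == last else 1
        last = emotion
        if streak >= n_confirm:
            actuate(emotion)          # drive the SMA-pulled control points
            streak = 0
```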


Photo 2: Angry facial expression displayed on the face robot, directly reflexing the human partner's facial expression shown on the right-hand side.

6.2 Shaping Face Robot Personality

We prepared another experiment on communicative interaction of the face robot with a human partner, and investigated the face robot personality that appears in its response behavior to the human partner [Iida et al. 18].

We implemented a reinforcement learning algorithm (Q-learning) in the face robot computer. The reward essentially needed in reinforcement learning is given by the human partner for each face robot response made to the partner's action. When the response made by the face robot is preferable to the partner, the partner gives a positive reward to the learning algorithm; when it is not preferable, a negative one is given. After a course of such communicative interactions, the face robot had organized a follower personality, even though the face robot initially responded in a random manner. Note that the response behavior of the face robot was observed by several university students, and their evaluations were categorized into the 12 personality classifications usually employed in Y-G (Yatabe-Guilford) personality analyses. The distribution of the scores obtained in the communicative interaction was very similar to that typical of a follower-type personality: social, lively and somewhat subjective. Figure 6 shows the distribution of the scores obtained in the course of communicative interaction between a human partner and the face robot implemented with the reinforced learning algorithm [Iida et al. 18].

The major factors determining the face robot personality were found to be the manner of interaction and dynamic features of the face robot hardware, such as the speed of head rotation.
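For concreteness, here is a minimal Q-learning sketch with the human partner as the reward source. The state and action sets, the epsilon-greedy policy and the parameter values are illustrative assumptions; the paper does not specify them:

```python
import random

N_STATES, N_ACTIONS = 6, 6   # partner's expression -> robot's response

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_response(state, eps=0.2):
    """Epsilon-greedy response; early behavior is close to random."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def human_reinforced_update(state, action, reward, next_state,
                            alpha=0.1, gamma=0.9):
    """One Q-learning step; the reward (+1 or -1) comes from the human
    partner judging whether the robot's response was preferable."""
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])
```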

Figure 6: Face robot personality shaped through the learning process in the communicative interaction experiment (scores over Y-G personality categories such as Sociable, Lively, Pessimistic, Whimsical, Selfish, Short-Tempered and Discouraged).

6.3 Learning Algorithm for Communicative Interactions

We have found that the conventional reinforced learning algorithm has some difficulty when applied to face robot learning through human instruction; that is, the way humans instruct the face robot was found to have certain characteristics that cause the difficulty [Iida et al. 18].

We therefore investigated a learning algorithm suitable for communicative interaction between a human partner and the face robot, in which the human partner gives a reward to the face robot when its behavior is preferable. We designed three kinds of rewarding for conventional reinforced Q-learning: (1) AND type reinforcing, (2) OR type reinforcing, and (3) COMPARATIVE type reinforcing [Iida et al. 19]. AND type reinforcing is the conventional scheme, in which the reward is given only to the behavior composed of the logical product of the executed actions. The OR type gives the reward to the behavior as well as to the elemental actions employed in that behavior. The COMPARATIVE type gives it to the behaviors composed of the elemental actions contained in both the one-step-before and the present behaviors whenever the reward changes.
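The following Python sketch spells out one reading of these three crediting schemes over a table of behavior values, where a behavior is a set of elemental actions (control-point moves). The table layout and learning rate are assumptions, and the COMPARATIVE rule in particular is our interpretation of the brief description above:

```python
def and_type(Q, behavior, reward, lr=0.1):
    """Conventional: credit only the exact executed combination."""
    key = frozenset(behavior)
    Q[key] = Q.get(key, 0.0) + lr * reward

def or_type(Q, behavior, reward, lr=0.1):
    """Credit the combination and each elemental action it contains."""
    and_type(Q, behavior, reward, lr)
    for action in behavior:
        key = frozenset([action])
        Q[key] = Q.get(key, 0.0) + lr * reward

def comparative_type(Q, prev_behavior, behavior, reward_change, lr=0.1):
    """When the reward changes, credit the elemental actions shared by
    the previous and present behaviors."""
    for action in set(prev_behavior) & set(behavior):
        key = frozenset([action])
        Q[key] = Q.get(key, 0.0) + lr * reward_change
```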

An experiment was designed to produce a happy facial expression on the face robot using 10 control points, three around each eyebrow and four around the mouth. 5 test subjects were asked to give rewards of +1, 0 or -1 depending on whether the face robot looked happy at each learning step. We implemented the three types of reinforced learning algorithm in the face robot computer, and each of the 5 subjects conducted the experiment. The results show the following: the happy faces generated differ slightly among the 5 subjects. By evaluating the cumulative reward over the course of the three types of reinforced learning, we can identify the most effective learning method among the three. Figure 7 shows the


average value, taken over the 5 subjects, of the cumulative reward plotted against the learning step number, and reveals that the OR type and COMPARATIVE type learning methods are effective in the communicative interaction between the human partner and the face robot [Iida et al. 19].

Figure 7: Comparison of the averaged cumulative reward against learning steps.

7 Value System and Artificial Emotion

We assume that the face robot communicatively interacting with its human partner may be schematically depicted as in Figure 8. Influenced by the partner's expressions of emotion state {b_i} and by its own {a_i}, the face robot may take the next facial expression a_{i+1} in order to make the partner satisfied at the end of a series of events {E_i(a_i, b_i)} under the given communication context. Thus the face robot must select a particular series of facial expressions {a_i}. When the face robot can display a preference on its facial expression, i.e. give preference to a_i rather than a_j at a given time stage, we should say the face robot has a value system that gives a higher preference bias to a_i than to the others. If the face robot can learn this bias in the course of its facial responses to the partner's actions or facial expressions {b_i}, the face robot may look as if it had emotional computing or artificial emotion.

Figure 8: Schematic drawing of the communicative interaction between the face robot and the human partner in their environment (the partner emits expressions b_i, the face robot expressions a_i).

In relation to this preference, it is easily seen that a human would rather approach what he/she prefers than something disliked, which means that human behavior usually contains information about his/her preference or bias toward a given environment. Thus we may utilize the preference information exhibited in the human partner's behavior for the value system when the face robot is involved in communicative interactions. We examined the possibility of this hypothesis in a preliminary experiment for generating a value system for the face robot [Hara and Kobayashi 20] and found that this approach may work.
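A hedged sketch of such a value system as a learned preference bias over candidate expressions follows; the additive combination with the coordination scores and the approach/avoid reading of the partner's behavior are assumptions for illustration only:

```python
import numpy as np

N_EXPRESSIONS = 6
bias = np.zeros(N_EXPRESSIONS)  # value-system preference per expression a_i

def next_expression(coordination_scores):
    """Pick a_{i+1} by combining the recognition-expression coordination
    with the value-system bias (additive weighting is an assumption)."""
    return int(np.argmax(np.asarray(coordination_scores) + bias))

def update_bias(expression, partner_approached: bool, lr=0.05):
    """Shift preference toward expressions the partner visibly approaches,
    away from those he/she avoids, mining the preference information
    contained in the partner's behavior b_i."""
    bias[expression] += lr if partner_approached else -lr
```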

At the present stage of our research, we think the conventional reinforcement learning algorithm, especially the Q-learning-based S-temperature learning algorithm [Sawada et al. 21], may be applied to self-organize the artificial emotion in the face robot. This may be thought a challenging research subject, but if the self-organization of such artificial emotion through the learning mechanism and an implemented value system succeeds, the face robot will become far more valuable in applications as well as in scientific investigation of higher-level intelligence.

8 Concluding Remarks and Future Works

This paper first summarized the concept of active human interface/media and its three basic functions, and then briefly stated the framework of the face robot with emphasis on communicative interaction with a human partner. The technological aspects of the three functions that must be implemented in the face robot were depicted in some detail. Two primitive examples of face robot interaction with a human partner were explained. We discussed the learning method by which the face robot can attain a preferable facial expression in the course of interaction with a human partner. We proposed one new potential approach to a self-organization scheme for artificial emotion based on a value system and a reinforcement learning algorithm.

When the technological as well as economic problems associated with the development of a face robot with emotional computing or artificial emotion are solved in the future, the following can be pointed out as potential applications:

(1) Interactive entertainment robots
(2) Interactive art-media robots
(3) Education software robots
(4) Personal service robots
(5) Cognitive science research platform robots

For any specific application of the technology stated here, however, it should always be noted that the most important framework is the task environment for the specific robotic system, if you want it really accepted by humans. For a given task environment, the robot form or appearance may


be determined, and the level of robotic intelligence, including artificial emotion, may be determined for the robot to be accepted psychologically by its human partners.

The subject stated here is one of the challenging research areas in robotics as well as psychology, and the research is just at its beginning stage. Thus there may be many research subjects in developing an interdisciplinary science of robots communicatively interacting with human partners. Some of them are 1) technological ones, such as a high-performance vision system, other communication channels, soft mechanics of facial motion, emotional voice generation and so on; and 2) psychological ones, such as avoidance of the uncanny valley, psychological acceptance of robotic motions, artificial emotion, the value system and so on.

Finally, we must point out that the concept of active human media may be applied to more general practical fields in which artifact systems and human partner(s) should collaborate through mutual communicative interactions.

    Acknowledgements

Most of the research work stated in this resume was done under the Research Program for the Future, JSPS, 96P00803, from 1996 to 2000. Many thanks go to Prof. H. Kobayashi, who helped me a lot during the work related to this paper, and to those who were involved in the Research Program.

    References

[1] A. Takeuchi and K. Nagao (1993); Communicative facial displays as a new conversational modality, Proc. ACM/IFIP/INTERCHI 93, pp. 187-193

[2] H. Kobayashi and F. Hara (1991); The recognition of basic facial expressions by neural network, Proc. IJCNN 91, Vol. 3, pp. 460-466

[3] A. Mehrabian (1968); Communication without words, Psychology Today, Vol. 2, No. 4

[4] M. F. Vargas (1987); An Introduction to Nonverbal Communication (in Japanese translation), Shincho-bunko, p. 255

[5] K. Mori (1995); Cognitive Psychology, Iwanami-Shoten, p. 182

[6] M. Mori (2004); Uncanny Valley, ROBOCON, No. 28, pp. 49-51 (originally reported in 1970 in "Energy", Vol. 7, No. 4)

[7] F. Hara, H. Akazawa, and H. Kobayashi (2001); Realistic facial expressions by SMA driven face robot, Proc. IEEE RO-MAN 01, pp. 504-511

[8] M. A. Arbib (1992); Neural network and brain, Science Publishing, p. 507 (translated into Japanese)

[9] M. Nagao (1993); Some problems in communication between human and machine, J. Artificial Intelligence, Vol. 8, No. 6, pp. 705-708

[10] F. Hara (1994); A new paradigm for robot and human communication, JSME Proc. Robotics and Mechatronics, No. 940-21, pp. 1-9

[11] B. Reeves and C. Nass (1996); The media equation, Cambridge University Press

[12] M. Toda and K. Higuchi (1994); Emotion and social factors in communication, Proc. IEEE RO-MAN 94, pp. 12-17

[13] K. Dautenhahn (2002); Design spaces and niche spaces of believable social robots, Proc. IEEE RO-MAN 02, pp. 192-197

[14] H. Kobayashi, A. Tange, and F. Hara (1995); Real time recognition of 6 basic facial expressions, Proc. IEEE RO-MAN 95, pp. 179-186

[15] F. Hara and H. Kobayashi (1997); State-of-art in component technology for an animated face robot - its component technology development for interactive communication with humans, J. of RSJ Advanced Robotics, Vol. 11, No. 6, pp. 585-604

[16] P. Ekman and W. V. Friesen (1975); Unmasking the Face, Prentice Hall

[17] F. Hara, H. Kobayashi and F. Iida (1998); An interactive face robot able to create virtual communication with human, Proc. Virtual Environments 98, pp. 182-194

[18] F. Iida, M. Tabata and F. Hara (1999); Generating Personality Character in a Face Robot through Interaction with Human, Proc. IEEE RO-MAN 98, pp. 481-486

[19] F. Iida and F. Hara (2000); Behavior learning of face robot based on the characteristics of human instruction, J. RSJ, Vol. 18, No. 6, pp. 839-846

[20] F. Hara and H. Kobayashi (2004); Intelligence of Face - Emergence of Artificial Emotion through Face Robot Interaction (in Japanese), Kyouritu Syuppan, p. 124

[21] T. Sawada, S. Ichikawa and F. Hara (1999); Autonomous action-mode change in a two mobile robotic system S-temperature based on-line learning, IROS Proceedings, Vol. 1, pp. 393-399