Gesture Communication in Social Robots:
A Case Study of a Robotic Teddy Bear
by
Jamy Li
A thesis submitted in conformity with the requirements
for the degree of Master of Applied Science
Graduate Department of Mechanical and Industrial Engineering
University of Toronto
© Copyright by Jamy Li, 2008
Abstract
Gesture Communication in Social Robots: A Case Study of a Robotic Teddy Bear
Jamy Li, 2008
Master of Applied Science
Department of Mechanical & Industrial Engineering
University of Toronto
Understanding how humans perceive robot gestures will aid the design of robots capable of social
interaction with humans. This thesis examines the generation and interpretation of gestures in a
simple robot capable of head and arm movement using four experiments. In Study 1, four
participants created gestures with corresponding messages and emotions based on 12 scenarios
provided. The resulting gestures were judged by 12 participants in a second study. Ratings of liking
were higher for gestures conveying positive emotion and with more arm movements. Emotion and
message understanding were relatively low, but situational context considerably improved message
understanding and to a lesser extent emotion recognition. In Study 3, five novices and five
puppeteers created gestures conveying six basic emotions which were shown to 12 Study 4
participants. Puppetry experience had minimal effect on emotion recognition. The results obtained
are used to develop design guidelines for gesture communication in social robots.
Acknowledgments
This work couldn’t have been completed without the help of many people. Thanks first to Mark
Chignell, professor and advisor, for guiding me in my research endeavours and offering supportive
advice in any language;
To Professor Michiaki Yasumura, Naoko Ishida and Sachi Mizobuchi for making my
experience living, researching and travelling in Japan one of my most memorable;
To the Japan Society for the Promotion of Science (JSPS), Bell University Labs and the
Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support;
To members of the Interactive Media Lab in Toronto for listening to all my rants, and to
Flora Wan for suggesting some statistical analyses;
To members of the iDL lab at Keio University, and especially to Ryo Yoshida for all his
invaluable research and translation help while I was conducting experiments in Japan;
To my experimental participants;
To Dr. Sharon Straus for advising me on research related to my interests in the medical field;
To my committee members Professor Michael Gruninger and Professor Khai Truong for
their recommendations and insights;
And to my family and friends for everything.
Table of Contents
Chapter One: An Introduction......................................................................................................... 1
1.1 Gestures of social robots ........................ 2
1.2 The “Robot Equation” ........................ 3
1.3 Objectives ........................ 4
1.4 Benefits ........................ 5
1.5 Boundaries of thesis ........................ 5
1.6 Structure of thesis ........................ 6
1.7 Summary and research questions ........................ 6
Chapter Two: Literature Review ...................................................................................................... 7
2.1 Gesture as non-verbal communication ........................ 8
2.2 Classes of gestures ........................ 8
2.2.1 Communication of emotions ........................ 9
2.2.2 Communication of messages ........................ 11
2.2.3 Content communication hypotheses ........................ 12
2.3 Emotions in gestures ........................ 12
2.3.1 Emotional valence ........................ 12
2.3.2 Emotion type ........................ 13
2.3.3 Emotion valence and type hypotheses ........................ 13
2.4 Motion characteristics of gestures ........................ 13
2.4.1 Perception of emotion ........................ 14
2.4.2 Lifelikeness and social presence ........................ 14
2.4.3 Motion characteristics hypotheses ........................ 15
2.5 Contextual determinants ........................ 15
2.5.1 Context hypothesis ........................ 16
2.6 Individual differences ........................ 16
2.6.1 Personality and event coding ........................ 16
2.6.2 Expertise ........................ 18
2.6.3 Individual difference hypotheses ........................ 19
2.7 Summary ........................ 19
Chapter Three: Studies 1 and 2 ........................ 21
3.1 Methodology ........................ 22
3.1.1 Study 1 ........................ 22
3.1.1.1 Apparatus ........................ 22
3.1.1.2 Participants ........................ 23
3.1.1.3 Procedure ........................ 24
3.1.2 Study 2 ........................ 25
3.1.2.1 Apparatus ........................ 25
3.1.2.2 Participants ........................ 25
3.1.2.3 Procedure ........................ 25
3.1.3 Measures ........................ 27
3.1.3.1 Attitude toward robots ........................ 27
3.1.3.2 Personality ........................ 28
3.1.3.3 Gesture understanding ........................ 28
3.1.3.4 Gesture complexity ........................ 28
3.1.3.5 Other measures ........................ 29
3.2 Results ........................ 29
3.2.1 Communication of emotions and messages ........................ 29
3.2.2 Effect of emotional valence ........................ 30
3.2.3 Effect of gesture complexity ........................ 34
3.2.4 Effect of situational context ........................ 36
3.2.5 Effect of personality ........................ 37
3.2.6 Effect of author ........................ 38
3.3 Discussion and design guidelines ........................ 39
3.4 Summary ........................ 43
Chapter Four: Studies 3 and 4 ....................................................................................................... 44
4.1 Methodology ........................ 45
4.1.1 Study 3 ........................ 45
4.1.1.1 Apparatus ........................ 45
4.1.1.2 Participants ........................ 45
4.1.1.3 Procedure ........................ 46
4.1.2 Study 4 ........................ 46
4.1.2.1 Apparatus ........................ 46
4.1.2.2 Participants ........................ 46
4.1.2.3 Procedure ........................ 47
4.1.3 Measures ........................ 47
4.2 Results ........................ 48
4.2.1 Qualitative observations ........................ 48
4.2.2 Effect of puppetry experience ........................ 48
4.2.3 Correlation and factor analysis of emotion ratings ........................ 50
4.3 Discussion and design guidelines ........................ 52
4.4 Summary ........................ 54
Chapter Five: Conclusion............................................................................................................... 55
5.1 Summary ........................ 56
5.2 Main findings and contributions ........................ 56
5.3 Limitations and future work ........................ 58
5.4 Final words ........................ 59
References ...................................................................................................................................... 61
List of Tables
Table 1. Classification of gestures ........................ 9
Table 2. Hypotheses tested in this thesis ........................ 20
Table 3. List of Scenarios ........................ 24
Table 4. List of Messages and Emotions ........................ 26
Table 5. Characteristics of perceived emotions† ........................ 30
Table 6. Correlations between personality agreement and ratings ........................ 38
Table 7. Design guidelines for robotic gestures based on results of Studies 1 and 2 ........................ 42
Table 8. Effect of puppeteer experience on liking, lifelikeness and gesture recognition ........................ 49
Table 9. Simple correlations among emotion ratings by gesture viewers ........................ 51
Table 10. Rotated factor loadings for observer ratings of 6 emotion types, likeability and lifelikeness ........................ 51
Table 11. Design guidelines for robot gestures based on results from Studies 3 and 4 ........................ 53
Table 12. Hypotheses tested in this thesis and findings ........................ 57
Table 13. Design guidelines for robot gestures based on results from all studies ........................ 58
List of Figures
Figure 1. Participant manipulating robot bear used in this study ........................ 5
Figure 2. Encoding and decoding in social signals; modified from (Argyle, 1994) ........................ 8
Figure 3. Study methodology, modeled after Figure 2 ........................ 22
Figure 4. Participants viewed each gesture either as a software animation (a) or as a video of the bear robot (b) ........................ 27
Figure 5. Rating of likeability versus emotion valence. Error bars show 95% confidence intervals ........................ 31
Figure 6. Rating of lifelikeness versus emotion valence. Error bars show 95% confidence intervals ........................ 32
Figure 7. Head movement complexity versus valence. Error bars show 95% confidence intervals ........................ 33
Figure 8. Movement complexity versus valence for right and left arms. Error bars show 95% confidence intervals ........................ 34
Figure 9. Lifelikeness versus arm and head movement complexity. Error bars show 95% confidence intervals ........................ 35
Figure 10. Author-viewer message agreement versus arm and head movement complexity. Arm movement influences agreement; head movement does not. Error bars show 95% confidence intervals ........................ 36
Figure 11. Emotion and message agreement accuracy versus experimental condition. Contextual information (scenario) improves accuracy. Error bars show 95% confidence intervals ........................ 37
Figure 12. Likeability versus gesture author. Gestures created by authors 1 and 2 were liked more than those by authors 3 and 4. Error bars show 95% confidence intervals ........................ 39
List of Appendices
Appendix A: Materials for Studies 1 and 2 ........................ 68
Appendix B: Materials for Studies 3 and 4 ........................ 80
Appendix C: Participant information for Studies 1-4 ........................ 91
Appendix D: Coding schema for Studies 1 and 2 ........................ 93
Appendix E: Content communication, binomial test ........................ 95
Appendix F: Emotional valence, ANOVAs and contrasts ........................ 96
Appendix G: Gesture complexity, ANOVAs ........................ 99
Appendix I: Personality, correlations ........................ 108
Appendix J: Author, ANOVA ........................ 109
Appendix K: Puppetry experience, ANOVA ........................ 112
Appendix L: Emotional valence, factor analysis, ANOVA and discriminant analysis ........................ 114
Chapter One: An Introduction
“As the tongue speaketh to the ear, so the gesture speaketh to the eye.”
Sir Francis Bacon, 1605
This chapter introduces the focus of this thesis. It first outlines the role of gestures in social
robots and how this area of study relates to the broad fields of human-robot interaction and
social psychology. It then describes the objectives of this work, its benefits, its scope, the
structure of this document and our main hypotheses regarding robot gesture communication.
1.1 Gestures of social robots
Robots today are much different from what they were 10 or 15 years ago. No longer are
they designed to function merely as manufacturing aids; they have also become
household pets (e.g., Sony’s AIBO), domestic helpers (iRobot’s Roomba), healthcare
assistants (RIKEN Japan’s Ri-Man), weight loss coaches (Intuitive Automata’s Autom) and
emotional companions (e.g., PARO and MIT’s Kismet and Leo). Researchers in Japan have
developed humanoid robots with life-like silicone skin that gesture, speak, blink and even
appear to breathe (Ishiguro, 2005). With such advances the roles of robots are becoming
increasingly social. This has resulted in a shift from robots designed for traditional human-
robot interaction (HRI), such as mechanical manipulation for teleoperation, to those in
which social human-robot interaction is a key consideration. This new breed of robots has
been called “socially interactive robots” (Fong, Nourbakhsh & Dautenhahn, 2003) or “social
robots” (Breazeal, 2003).
In light of this transformation, many authors have subsequently called for better
design of robots capable of engaging in meaningful social interactions with people (e.g.
Breazeal, 2003; Fong et al., 2003). How can this be achieved? The use of gestures has been
identified as one crucial aspect in the design of such robots (Fong et al., 2003). We choose to
focus on gestures for three main reasons: 1) few guidelines exist for the design of robotic
gestures; 2) previous research in HRI that has investigated robotic gestures considers gesture
creation but does not convincingly evaluate people’s understanding of those gestures; and 3)
we believe studying gesture understanding is key to improving human-robot interaction for
modern robots, which often have limited ability for speech and facial expression.
Current practice in the design of robot gestures has robot inventors devising gestures
based on their own experience and sometimes drawing upon fields such as dance (e.g.
Mizoguchi, Sato, Takagi, Nakao & Hatamura, 1997). These methods may be convenient
but little work has been done to evaluate their success. Can viewers understand robot
gestures, and if so, what messages or emotions can be conveyed? What contextual or motion
characteristics lead to gestures that can be better understood? And are robot engineers even
qualified to design these gestures—or should they consult a “gesture expert” such as a
psychologist, actor, puppeteer or other professional? This thesis describes exploratory work to
answer these pertinent questions in HRI.
1.2 The “Robot Equation”
To investigate how people understand gestures of social robots, we ground our work
in the field of human gesturing. Just as interpersonal behaviour—how people interact with
other people—influences how people treat computers (Reeves & Nass, 1996) and has been
used extensively to motivate human-computer interaction (HCI) design, we apply these same
concepts to understand how people respond to robot gestures and to motivate their design.
Gestures play an important role in human-human interaction. Put in the words of
Sir Francis Bacon: “As the tongue speaketh to the ear, so the gesture speaketh to the eye.”
(Bacon, 1605) Humans use gestures to communicate and convey a range of social cues,
including support for conversation (e.g., nodding for agreement) and certain types of
emotion (e.g., fist-clenching in the case of anger) (Argyle, 1994; Blake & Shiffar, 2007).
Moreover, this behaviour is so ingrained in us that we automatically and subconsciously look
for and apply these social cues to the movements of entities that are decidedly not human:
people are aroused by sausages moving in a commercial as if they were living beings (Reeves
& Nass, 1996) and attribute mental states such as desires and emotions to simple geometric
shapes moving on a screen (Heider & Simmel, 1944). Previous research has suggested this
effect, called the “Media Equation” (Reeves & Nass, 1996), is also applicable to robots (what
we term a “Robot Equation”): people respond to an AIBO robot by applying personality-
based social rules (Lee, Peng, Jin & Yan, 2006) and will nod at robots during conversation
even when the robots give no similar gestural cues (Sidner, Lee, Morency & Forlines, 2006).
Although it may seem likely that humans process robotic gestures in similar ways as human
gestures, little research has been done to show this is the case. These issues have implications
for the design of social robots and the social psychology of human interaction with technology.
1.3 Objectives
The overall goal of this research is to provide a comprehensive review and case study
of the role of robot gesturing in human-robot interaction. We investigate three research
questions related to the processes of gesture creation and gesture understanding so as to
enhance the design of robots capable of social interaction. First, what types of information
do robot gestures convey to people (RQ1)? Second, what factors affect how well this
information is conveyed (RQ2)? Third, are puppeteers better than amateurs in creating these
gestures (RQ3)? We use human-robot interaction as the domain of study and refer to social
psychology to motivate our hypotheses related to gesture communication. We thereby also
investigate whether the rules of human gesture interpretation will transfer to robots that
move differently from humans and have far fewer degrees of freedom for movement.
To investigate these questions, we present an extensive case study of a teddy bear
robot (Figure 1) and focus on movements made with the head and arms only; we exclude
facial expressions and characteristics of voice or speech content so as to examine how
communicative a robot can be if it only has limited gestures available to use. The
methodology employed in this thesis is based on paired experimental studies: a first study in
which participants create a corpus of robot gestures and a second in which a different set of
participants view those gestures. We conducted one paired experiment in Fujisawa, Japan to
investigate RQ1 and RQ2 and one paired experiment in Toronto, Canada to look at RQ3.
Figure 1. Participant manipulating robot bear used in this study.
1.4 Benefits
A better understanding of robot gesture communication and the “Robot Equation”
has four main benefits. First, it informs the design of social robot gestures in a practical sense.
We use the results from our research to derive guidelines for the design of robotic gestures.
Second, for our studies we develop a corpus of gesture videos and animations that can be
used by other researchers interested in the topic. Third, our case study of the RobotPHONE
bear robot directly assists other researchers using this robot. A fourth benefit is insight into
how people make judgments about robots and whether effects present in social situations
with humans translate into interaction with robots as suggested by the “Robot Equation.”
1.5 Boundaries of thesis
The focus of this work is to investigate robot gesture communication. We investigate
robotic gestures in the absence of any sound or facial expression produced by the robot. We
therefore exclude conversational gestures such as those employed by embodied conversational
agents (ECAs) (Cassell, 2000; Cassell & Thorisson, 1999) and concentrate on “stand-alone”
gestures only. While the communicative ability of robot gestures can be enhanced using
speech, sounds or facial movement, this is outside our scope of research.
Here we use one specific type of robot called RobotPHONE, which is modelled after
a teddy bear plush toy and is designed for use in a household environment. While our results
are directly applicable only to humanoid robots that are limited to arm and head movements
such as the one used in our studies, we expect that our results may be generalizable to more
sophisticated robots. Nevertheless, gestures of more complex robots—which may move their
torso, legs and possibly non-human limbs such as tails—remain beyond the boundaries of
this research. We also focus on body gestures as opposed to hand gestures because of the
form of our robot. While we conduct studies in both Japan and Canada, we do not make
cross-cultural comparisons because the studies differed in their methodology.
It is important to note that we use robot gesture understanding to denote how
humans perceive the gestures of a robot, not its reverse—how robots can be designed to
better recognize and understand human gestures.
1.6 Structure of thesis
This thesis is structured as follows. Chapter two reviews literature in social
psychology and human-robot interaction used to inform this work and presents the
hypotheses investigated. Chapter three describes the methodology and results of the first
paired studies conducted in Japan, which explored many factors in gesture communication.
Chapter four describes the second paired studies conducted in Canada to investigate
expertise in gesture authoring. Conclusions and future work are presented in Chapter five.
1.7 Summary and research questions
In this chapter we introduced the focus of this thesis—the gesture communication of
social robots—and described the objectives, benefits and boundaries of our research. We
then outlined the structure of the coming sections. The following research questions are
addressed in this thesis:
RQ1: What information do robot gestures communicate?
RQ2: What factors influence this communication?
RQ3: Are puppeteers better than amateurs at designing gestures?
Chapter Two: Literature Review
“Why do people always gesture with their hands when they talk on the phone?”
Jonathan Carroll
The area of robot gesture communication is interdisciplinary in nature and this thesis draws
from previous research in a number of subjects. In particular it is situated within the fields of
social psychology, human-robot interaction (HRI) and human-computer interaction (HCI).
This chapter provides an overview of relevant literature in these fields with references to
enable the reader to explore topics of interest in greater depth. It is pragmatic in that it
focuses on behavioural effects in gesture communication and does not explore neural,
evolutionary or other bases for such behaviour. We focus on body gestures predominantly
but present some relevant literature on hand gestures as well.
The following sections are divided by topic in gesture communication, with each
section first describing relevant literature in social psychology and then any related findings
in HRI or HCI if they exist. We use the “Robot Equation” (Section 1.2) as a theory
predicting social responses to robot gestures and generate original research hypotheses for
robot gestures by looking at literature in human gesture perception.
2.1 Gesture as non-verbal communication
Communication has long been believed to be a primary function of gesturing.
Gestures have been found to be synchronized with speech and therefore closely linked to
human cognition (McNeill, 1987). Human gestures are a form of non-verbal
communication (NVC). Other types of NVC include facial expression, gaze and posture
(Argyle, 1994).
Argyle (1994) characterizes all signals in social communication with a simple model
of coding and decoding as illustrated in Figure 2. A person codes some information in their
gesture which another person views and decodes. Sometimes the sender is unaware of any
coding, such as with unintentional hand movements while talking, and sometimes both
sender and receiver are incognizant, such as with the dilation of pupils during sexual
attraction (Argyle, 1994). Here we alternatively refer to the encoding step as gesture creation
and the decoding step as gesture understanding.
Several factors can affect the coding-decoding process: type of information conveyed
(e.g. emotional or semantic content), motion characteristics, situational context and
individual differences (personality, demographic). These are described in the following
sections.
Figure 2. Encoding and decoding in social signals; modified from (Argyle, 1994).
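The coding-decoding loop in Figure 2 can also be expressed as a minimal sketch in code. This is purely illustrative: the motion vocabulary, the mappings, and the names `encode` and `decode` are hypothetical inventions for exposition, not part of Argyle's model or of the studies in this thesis.

```python
from dataclasses import dataclass


@dataclass
class Gesture:
    """The observable signal: motion only, carrying no record of the sender's intent."""
    head_moves: list
    arm_moves: list


def encode(emotion: str) -> Gesture:
    # Hypothetical sender mapping from intended emotion to motion. A real
    # sender (a person manipulating the bear) does this implicitly, and
    # different senders may encode the same emotion differently.
    mapping = {
        "happiness": Gesture(head_moves=["nod"], arm_moves=["wave", "wave"]),
        "anger": Gesture(head_moves=["shake"], arm_moves=["raise_both"]),
    }
    return mapping[emotion]


def decode(gesture: Gesture) -> str:
    # Hypothetical viewer heuristic: the receiver sees only the motion and
    # must infer the intent from it.
    return "happiness" if "wave" in gesture.arm_moves else "anger"


# Author-viewer agreement, the measure used in the studies: does the label
# the viewer decodes match the one the author encoded?
for intent in ("happiness", "anger"):
    assert decode(encode(intent)) == intent
```

The point of the sketch is that the decoder has access only to the `Gesture` object, never to the sender's intent, so communication succeeds only insofar as the two mappings happen to agree.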
2.2 Classes of gestures
If gestures are communicative, what information do they tell us? Many authors have
identified and classified different roles for gestures during interpersonal communication (e.g.,
Argyle, 1994; McNeill, 1987; Beattie, 2003; Efron, 1941; Ekman & Friesen, 1969; Nehaniv,
2005). In particular, Nehaniv (2005) identified five broad classes of gestures which are
summarized in Table 1. These categories are quite extensive and encompass the myriad
8

classifications of other researchers, some of which we relate in Table 1. Here we focus on two
types of gestures: ones that communicate emotional content (class 2 in Nehaniv’s
classification) and semantic content (class 3), which is related to the gesturer’s “intended
meaning” or “message” (Searle, 1969). Examples of gestures that communicate emotion
include fist-clenching that indicates aggression and moving of hands when excited. Examples
of gestures that convey meaning include waving hello and using hand movement to illustrate
shapes and sizes.
2.2.1 Communication of emotions
In communication between humans, emotions play an important role and have a
strong tendency to be affected by body motion—for example, people can be emotionally
engaged when watching dance performances. Previous research suggests that emotions can be
identified in videos of body gestures without speech or facial expressions using standard
recognition tasks such as selection from a list of emotions (Montepare, Koff, Zaitchik &
Albert, 1999; Atkinson, Dittrich, Gemmell & Young, 2004; de Meijer, 1989; Dittrich,
Troscianko, Lea & Morgan, 1996). Basic emotions are also identifiable with point-light
animations of arm movement (Pollick, Paterson, Bruderlin & Sanford, 2001), body
movement (Blake & Shiffrar, 2007) and paired conversations (Clarke, Bradshaw, Field,
Hampson & Rose, 2005), although recognition rates are sometimes low (as in the case of
arm movement). However, Rosenthal and DePaulo (1979) found that using facial expression
only to judge emotion resulted in much higher accuracy than either body only or tone of
voice only—suggesting that although body gestures in isolation may be used to judge
emotion, in real life people tend to rely on facial expression as the key indicator of emotion.
This is supported by Frijda (1986), whose descriptions of the movements associated with
basic emotions mostly involve the face. It is therefore of interest to ask whether in the
absence of an expressive face robots can still convey emotions using simple gestures with the
head and arms.
Table 1. Classification of gestures.

Class 1: “Irrelevant”/Manipulative gestures
  Characteristics and goal: influence on non-animate environment; manipulation of objects; side effects of body motion
  Examples: grasping a cup; motion of arms when walking

Class 2: Expressive gestures
  Characteristics and goal: expressive marking; communication of affective states
  Examples: excited raising and moving of hands while talking; fist-clenching (aggression)
  Related classifications:
    “Emotion gestures” (Argyle, 1996, p. 36): same
    “Affect displays” (Ekman & Friesen, 1969): same

Class 3: Symbolic gestures
  Characteristics and goal: signal in communicative interaction; communicative of semantic content
  Examples: waving ‘hello’; illustrating size/shape with hands; gesture ‘language’ used in broadcasting, military, etc.; holding two fingers to indicate ‘two’
  Related classifications:
    “Illustrators” (Argyle, 1994, p. 35; Ekman & Friesen, 1969): iconic and resemble their referents (e.g., hand movements to illustrate shape)
    “Iconics” (McNeill, 1992, pp. 12, 236; Beattie, 2003): have a direct relationship to the idea they convey (e.g., hand movements to represent bending when describing a character bending a tree)
    “Metaphorics” (McNeill, 1992, p. 14; Beattie, 2003): pictorial like iconics but represent an abstract idea (e.g., raising hands as though to offer an object to the listener to represent retelling of a story)
    “Emblems” (McNeill, 1992, p. 56; Argyle, p. 35; Ekman & Friesen, 1969): arbitrary, conventionalized meanings (e.g., hitchhike sign, clapping, religious signs)

Class 4: Interactional gestures
  Characteristics and goal: regulation of interaction with a partner
  Examples: nodding head to indicate listening
  Related classifications:
    “Regulators” (Ekman & Friesen, 1969): same
    “Reinforcers” (Argyle, 1994, p. 35): encourage what has gone before or encourage the person to talk more (e.g., head nods)
    “Beats” (McNeill, 1992, p. 15) / “Batons” (Efron, 1941; Ekman & Friesen, 1969): move with the rhythm of speech, as if beating to music (e.g., flick of hand or fingers up or down, or back and forth)

Class 5: Referential/Pointing gestures
  Characteristics and goal: use spatial locations as referents of absent agents; indicate objects, agents or topics by pointing
  Examples: pointing of all kinds
  Related classifications:
    “Deictics” (McNeill, 1992, p. 18): same
We predict that the social responses observed with human gestures will also be present
with robotic gestures. Literature in HRI includes several examples of the use of body
movement to show emotions: Mizoguchi et al. (1997) employed ballet-like poses for a
mobile robot; Scheeff, Pinto, Rahardja, Snibbe and Tow (2000) used smooth motions for a
teleoperational robot (“Sparky”); and Lim, Ishii and Takanishi (1999) used different walking
motions for a bipedal humanoid robot. Marui and Matsumaru (2005) used the same robot
bear employed in this study to look at how participants used head and arm movements to
convey emotion. However, while previous work has investigated the creation of emotion-
conveying gestures, few studies have examined in detail whether those emotions are in fact
accurately recognized. We believe this missing component to be crucial to the study of how
robot gestures can effectively support interaction with humans in social settings.
2.2.2 Communication of messages
Experimental evidence that gestures in isolation convey semantic,
non-emotional information has shown that some gestures have conventionalized or
emblematic meanings (“thumbs up” or “raised fist”, for example) (Sidner, Lee, Morency &
Forlines, 2006) and that body movement assists non-verbal communication by
providing redundant information (Bartneck, 2002). Breazeal and Fitzpatrick (2000) claim
that gestures are viewed as semantically meaningful even when they are not intended to be—
as is the case for eye gaze, which shows locus of attention.
With respect to gestures made during conversation, speakers gesture more when face-
to-face with listeners (Cohen, 1977), communicative effectiveness in describing shapes is
improved with gestures (Graham & Argyle, 1975), and gesture cues convey initiation and
termination, turn-taking and feedback information. When gestures are communicated with
speech, people tend to judge meaning based on what they hear rather than what they see
(Krauss, Morrel-Samuels & Colasante, 1991)—indicating that in both emotion and message
communication with other humans, gestures may guide interpretation but only in a
supporting role (to facial expression and to speech, respectively). With respect to gestures
viewed in isolation, these gestures do convey meaning but viewers identify the correct
interpretation only slightly better than chance (Krauss, Chen & Chawla, 1996).
The use of movement of technological entities to convey semantic meaning has been
investigated mostly through embodied conversational agents. Cassell (Cassell, 2000; Cassell
and Thorisson, 1999) describes humanoid computer agents such as a virtual real estate agent
capable of human-like movements to support conversational speech. However, many of
today’s robots are not able to employ speech, so it is of interest to investigate whether in lieu
of verbal communication, the gestures of such agents are able to convey meaning.
2.2.3 Content communication hypotheses
In light of previous research described above, we develop two hypotheses related to
content communication for robot gestures.
Hypothesis 1: Gestures viewed in isolation will be able to communicate emotional content, but to
a limited extent only.
Hypothesis 2: Gestures viewed in isolation will be able to communicate semantic content, but to a
limited extent only.
2.3 Emotions in gestures
2.3.1 Emotional valence
Emotions such as anger, happiness and sadness can be categorized using the
dimension of valence (Schlosberg, 1954), which refers to whether an emotion is positive,
negative or neutral. Previous work in social psychology has found positive emotion
expression to have beneficial consequences: people who express positive rather than negative
emotions experience greater interpersonal attraction from others (Staw, Sutton & Pelled
1994). We anticipate this effect to be present for the emotional expression of robot gestures.
Moreover, in addition to being more liked, people who express positive emotions are
also attributed with other favourable traits, a notion called the “halo effect” (Staw et al.,
1994). For example, Asch (1961) found that people judged to be “warm” rather than “cold”
were also judged to be more generous, wise and good-natured. How much an employee is
liked can also lead to overly positive evaluations of their performance (Cardy & Dobbins,
1986).
If gestures that express positive emotion are more liked, they may also be judged as
more lifelike according to the halo effect.
2.3.2 Emotion type
Ekman, Friesen and Ellsworth (1972) identified six basic emotions that can be
detected in facial expression: happiness, surprise, anger, sadness, fear and disgust/contempt.
These emotions have agreed-upon meanings across several populations and cultures
(Ekman, 1999). Ekman’s six basic emotions can also be expressed and recognized in body
movements in lieu of facial expression (Shaarani & Romano, 2006).
Some studies have shown that certain emotions are conveyed better in particular
channels: the face, for example, best shows happiness, while the body best conveys
tension and relaxation (Ekman & Friesen, 1969). It follows that for a particular
non-verbal communication (NVC) channel some emotions are more easily recognized than
others. Studies have also shown that subjects who see videotapes of body movement only
can more accurately judge negative emotion than those who view the face only (Ekman &
Friesen, 1969). Given that some
emotions can be recognized better than others in perception of human motion, it is of
interest to see whether this same effect is present in robot motion.
2.3.3 Emotion valence and type hypotheses
Hypothesis 3: Gestures conveying positive emotions will be more liked than those conveying
negative or neutral emotions.
Hypothesis 4: Gestures conveying positive emotions will appear more lifelike than those conveying
negative or neutral emotions.
2.4 Motion characteristics of gestures
There are three main classes of motion characteristics that affect the perception of
body gestures (Atkinson, Tunstall & Dittrich, 2007):
1) Structural information: an object’s form and composition
2) Kinematics: an object’s displacement, velocity and acceleration
3) Dynamics: an object’s mass and force
Significant importance has been placed on the role of kinematics and dynamics in
visual perception, as evinced by the ubiquity of point-light experiments which exclude static
information and focus on motion information. These studies have found that motion
characteristics affect people’s perception of a gesture in terms of how well they can
understand the gesture and how lifelike it appears. Here we look at gesture complexity as a
measure of kinematic and dynamic information.
2.4.1 Perception of emotion
The kinematics information found in body movements provides sufficient guidance
for people to perceive emotional expressions (Atkinson et al., 2007). In one study motion
characteristics affected perception of emotion in point-light experiments featuring knocking
and drinking arm movements (Pollick et al., 2001): fast, jerky movements were more
associated with anger and happiness while slow, smooth movements were linked with sadness.
Differences in kinematics of arm movements have been found to accurately predict how well
viewers are able to distinguish between emotion types such as joy, anger and sadness (Sawada,
Suda & Ishii, 2003).
2.4.2 Lifelikeness and social presence
Previous work has established that both adults and children use the patterns and
timing of motion to judge life or sentience (reviewed in Rakison & Poulin-Dubois, 2001).
These cues include autonomy (i.e., “does the movement appear self-directed?”), speed (“is
the movement speed similar to the speed of human motion?”) and goal-directedness (“does
the movement appear to be achieving a goal?”) (Leslie, 1994; Morewedge, Preston &
Wegner, 2007; Opfer, 2002; Premack, 1990). However, some cues such as autonomy are
subjective and too vague to be of use (Gelman, 1995), while others such as speed may be
difficult to measure in robotic movement due to the absence of appropriate sensors. Thus,
this study looks instead at changes in an object’s movement direction, which have been
found to influence perceptions of animate life for both children and adults (Bassili, 1976;
Rochat, Morgan & Carpenter, 1998; Tremoulet & Feldman, 2000). As the objects
investigated in these studies are of different forms than robots (e.g., Tremoulet and Feldman
(2000) used a moving particle) we do not make specific hypotheses on the directionality of
complexity effects.
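The direction-change measure of complexity can be made concrete with a small sketch. The following is illustrative only, not the thesis's actual analysis code; the function name, the use of per-joint angle traces, and the jitter threshold `eps` are all assumptions.

```python
def count_direction_changes(angles, eps=1.0):
    """Count reversals of movement direction in one joint-angle trace.

    angles: sequence of joint angles (e.g., degrees) sampled over time.
    eps: minimum step magnitude treated as real movement, so that small
         sensor jitter is not counted as a direction change (assumed here;
         the thesis does not describe jitter handling).
    """
    # Keep only meaningful movement steps (magnitude at least eps).
    steps = [b - a for a, b in zip(angles, angles[1:]) if abs(b - a) >= eps]
    # A direction change is a sign flip between consecutive steps.
    return sum(1 for s, t in zip(steps, steps[1:]) if (s > 0) != (t > 0))

# A wave-like arm trace: up, down, up again -> two direction changes.
wave = [0, 20, 40, 20, 0, 20, 40]
print(count_direction_changes(wave))  # 2
```

A gesture's overall complexity score could then be the sum of this count over the robot's six motor traces, though how counts were aggregated across joints is likewise an assumption here.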
Closely related to lifelikeness is the idea of social presence, a term discussed
more frequently in the HRI literature. As experienced by a person interacting with a robot, social
presence has been defined as “access to another intelligence” (Biocca, 1997). Kidd and
Breazeal (2003) showed that people judge a robot that moved its eyes to be more socially
present and engaging of their senses than a similar animated character. Lee, Peng, Jin & Yan
(2006) showed that judgments of social presence are affected by robot personality evinced
through movement and appearance. These studies suggest participants can feel engaged with
a social or intelligent presence when interacting with a robot and this effect is influenced by
motion characteristics.
2.4.3 Motion characteristics hypotheses
Hypothesis 5: Gesture motion characteristics will differ between emotions.
Hypothesis 6: Gesture complexity (as measured by changes in direction) will influence judgments
of lifelikeness.
Hypothesis 7: Gesture complexity (as measured by changes in direction) will influence how
accurately gesture meaning is recognized.
2.5 Contextual determinants
Research in social psychology has shown that in human-to-human communication,
the perceived meaning of a gesture depends on its social and environmental context (Argyle,
1994; McNeill, 2005). For example, depending on the situation an open palm can mean
different things—“give me something” versus “let’s work together” versus “it’s your turn.”
Clarke, Bradshaw, Field, Hampson and Rose (2005) found that knowledge of social context
aided perception of emotion in point-light body movement videos: subjects could better
judge the emotion of a pair of point-light actors when both were seen together instead of
each in isolation. From a theoretical perspective, this may be because an individual’s
experience of different emotions is characterized by “situational meaning structures,” which
are based on cognition of how the situation affects an individual and his or her judgment of
whether that affect is desirable (Frijda, 1986). In other words, the same event can cause a
variety of different responses in people depending on how it is interpreted, what aspects are
emphasized or overlooked, or what preconceptions are present—all of which are affected by
contextual knowledge.
Similarly, for gestures to convey semantic meaning they must be associated with
semantic content (Krauss et al., 1996). Although this is typically done with accompanying
speech, we predict that situational context in the form of a narrated scenario will also provide
the semantic content needed to improve identification.
2.5.1 Context hypothesis
Hypothesis 8: Knowledge of situational context will improve understanding of both emotional and
semantic content in robot gestures.
2.6 Individual differences
How do individual differences affect the encoding and decoding of gestures? In
watching point-light videos of movements of the human body, observers can not only
determine the meaning of the gestures (such as emotion conveyed) but are also able to
accurately judge characteristics of the human gesturer such as gender (Barclay, Cutting &
Kozlowski, 1978) and identity (Loula, Prasad, Harber & Shiffrar, 2005). Likewise, robotic
movement may offer cues as to who authored the gesture.
2.6.1 Personality and event coding
Personality can be generally defined as “a dynamic and organized set of characteristics
possessed by a person that uniquely influences his or her cognitions, motivations, and
behaviours in various situations” (Ryckman, 2004). It influences how a person interacts with
their environment. Personalities differ in distinct and classifiable ways. However, there are a
number of different theories that purport to describe the nature of personality differences
and their classification. Here we employ the five-factor model, or “Big Five” (Goldberg,
1981), which describes personality using five mutually-independent, measurable dimensions:
• Extraversion (talkative, assertive) versus Introversion (quiet, reserved)
• Emotional stability (calm, stable) versus Neuroticism (anxious, moody)
• Agreeableness (friendly, sympathetic) versus Disagreeableness (unfriendly, cold)
• Conscientiousness (organized, self-disciplined) versus Unconscientiousness
(disorganized, careless)
• Openness to experience (imaginative, complex) versus Closed to experience
(conventional, uncreative)
The Big Five model has been validated experimentally (McCrae & Costa, 1987), and
has become the dominant approach to modeling personality in psychology (De Raad &
Perugini, 2002). Some research has shown that other models, such as Eysenck's three-factor
model and the Myers-Briggs Type Indicator (MBTI), measure aspects of the Big Five model
(Eysenck & Eysenck, 1991; McCrae & Costa, 1989).
Compelling evidence suggests that people’s ability to perceive the gestures of others is
linked to their own experience in producing gestures. This idea is known as the common
coding principle (Prinz, 1997) or the theory of event coding (Hommel et al., 2001), which
claims the visual representations used for movement perception are tied to the motor
representations used for planning and executing self-produced movements (Blake & Shiffrar,
2007).
In particular, if gesture production is tied to gesture perception, viewers should be
most perceptive toward movements that they frequently use themselves and least sensitive to
movements that are unfamiliar. Previous studies have shown this to be the case: for example,
observers are best able to judge whether a pair of point-light animations depict the same
person when the videos are of themselves (Loula et al., 2005) instead of other people. Since
gesture expression is linked to perception and there is evidence that personality affects the
expression of gestures (Campbell & Rushton, 1978), we predict that personality will
influence gesture understanding. Specifically, we predict that higher personality agreement
between gesture author and observer will improve the recognition accuracy of the gesture’s
meaning.
Additionally, prior research has suggested that people like to interact with
personalities that are similar to their own. In psychology this is known as the “law of
similarity-attraction” (Reeves & Nass, 1996). This behaviour seems to be present in
situations where communication is not face-to-face with another human, such as interacting
with a computer’s “personality” in text displays: Reeves and Nass (1996) found that
dominant personality users prefer dominant personality computers while submissive users
prefer submissive computers. Thus, we predict that higher similarity of personality between
gesture author and observer will also improve ratings of likeability.
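Testing these predictions requires a numeric notion of personality similarity between author and viewer. The thesis does not state its metric at this point; the sketch below shows one plausible operationalization (inverse Euclidean distance over the five TIPI dimensions), with the function name and scoring range as assumptions.

```python
import math

# The five Big Five dimensions measured by the TIPI, each assumed scored 1-7.
TRAITS = ["extraversion", "agreeableness", "conscientiousness",
          "emotional_stability", "openness"]

def big_five_similarity(author, viewer):
    """Similarity between two Big Five profiles (dicts of trait -> score).

    Returns a value in (0, 1]; 1.0 means identical profiles. This
    inverse-distance form is one plausible choice, not the thesis's
    stated metric.
    """
    dist = math.sqrt(sum((author[t] - viewer[t]) ** 2 for t in TRAITS))
    return 1.0 / (1.0 + dist)

author = {"extraversion": 6, "agreeableness": 5, "conscientiousness": 4,
          "emotional_stability": 5, "openness": 6}
viewer = dict(author)  # identical profile
print(big_five_similarity(author, viewer))  # 1.0
```

Under Hypotheses 9 and 10, this similarity score would be correlated with recognition accuracy and likeability ratings across author-viewer pairs.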
2.6.2 Expertise
Are some individuals better at designing robot gestures than others? If so, what
characteristics affect skilfulness in gesture creation?
Previous studies have indicated that there are differences in gesture communication
among experts and non-experts. Trafton et al. (2006) studied use of iconic gestures by both
experts and “journeymen” (i.e., apprentices) in the domain of neuroscience and
meteorological forecasting. They found that experts perform more gestures than non-experts
to convey spatial information. One possible reason is that experts and novices have been
found to employ different knowledge structures in a variety of activities, including playing
chess (Chase & Simon, 1974) and solving physics problems (Chi, Feltovich & Glaser, 1981;
Larkin, 1983). We expect that differences in the ability to create gestures will also exist
between experts and amateurs.
This still leaves the issue of who constitutes a gesture expert. The answer is
contingent in part on the nature or use of the robot: in designing gestures for a dancing
robot, for example, it would make sense to consult professional dancers, and for a
meteorologist robot a human meteorologist. But for a robot that is designed for common
everyday interaction, which profession would be best suited for gesture design?
We chose puppeteers in our research because they have experience manipulating
puppets with form factors similar to our robot’s. We were also able to recruit a
sufficient number of them as participants.
2.6.3 Individual difference hypotheses
Hypothesis 9: Viewers will be better able to judge gestures created by authors with more similar
personalities.
Hypothesis 10: Viewers will find gestures created by authors with more similar personalities to be
more likeable.
Hypothesis 11: Puppeteers will create gestures that are more lifelike than amateurs.
Hypothesis 12: Puppeteers will create gestures that convey emotion better than amateurs.
2.7 Summary
This chapter reviewed literature in social psychology, HRI and HCI that informs the
hypotheses tested in the research studies described in the next chapters. Table 2 presents a
summary of the 12 hypotheses and lists the paired experiment that was used to test each.
Table 2. Hypotheses tested in this thesis.

Content communication
  H1: Gestures viewed in isolation will be able to communicate emotional content, but to a limited extent only. (Studies 1 & 2)
  H2: Gestures viewed in isolation will be able to communicate semantic content, but to a limited extent only. (Studies 1 & 2)
Emotion valence
  H3: Gestures conveying positive emotions will be more liked than those conveying negative or neutral emotions. (Studies 1 & 2)
  H4: Gestures conveying positive emotions will appear more lifelike than those conveying negative or neutral emotions. (Studies 1 & 2)
Motion characteristics
  H5: Gesture motion characteristics will differ between emotions. (Studies 1 & 2)
  H6: Gesture complexity (as measured by changes in direction) will influence judgments of lifelikeness. (Studies 1 & 2)
  H7: Gesture complexity (as measured by changes in direction) will influence how accurately gesture meaning is recognized. (Studies 1 & 2)
Contextual determinants
  H8: Knowledge of situational context will improve understanding of both emotional and semantic content in robot gestures. (Studies 1 & 2)
Individual differences
  H9: Viewers will be better able to judge gestures created by authors with more similar personalities. (Studies 1 & 2)
  H10: Viewers will find gestures created by authors with more similar personalities to be more likeable. (Studies 1 & 2)
  H11: Puppeteers will create gestures that are more lifelike than amateurs. (Studies 3 & 4)
  H12: Puppeteers will create gestures that convey emotion better than amateurs. (Studies 3 & 4)
Chapter Three: Studies 1 and 2
“It's a rather rude gesture, but at least it's clear what you mean.”
Katharine Hepburn
The previous chapter reviewed literature and presented hypotheses related to the creation
and understanding of simple robot gestures. The current chapter describes a set of paired
studies conducted to test the hypotheses. Studies 1 and 2 were conducted near Tokyo, Japan.
These studies provide research into what types of information gestures convey and whether a
wide range of factors influence gesture understanding, lifelikeness and likeability. This
chapter first discusses the research methodology, including the procedures and measures used,
followed by the major results on gesture encoding and decoding. The chapter concludes by
outlining how the experimental results can be applied to human-robot interaction and by
presenting preliminary design guidelines for robot gestures.
3.1 Methodology
We investigate issues of robotic gesturing within the overall framework of human
interpersonal behaviour. Our methodology is consequently linked to Argyle’s model of
coding and decoding (Figure 2) so as to model real-life communication of social signals as
closely as possible. Figure 3 illustrates this methodology.
We conducted two related studies: in the first study, participants created gestures
using the bear robot; in the second study, participants viewed and judged the gestures created
by participants in the first study.
Figure 3. Study methodology, modeled after Figure 2: a human author creates a robot
gesture (Study 1), which a human viewer then interprets (Study 2).
The following sections discuss the apparatus, participant group and procedure used
for each of the two studies.
3.1.1 Study 1
3.1.1.1 Apparatus
The robot used for this study had the form and size of a traditional teddy bear. It was
developed at the University of Tokyo as an IP phone system called “RobotPHONE”
(Sekiguchi et al., 2001; Sekiguchi et al., 2004) and is offered for sale in Japan. The bear has
six motors (two in the head and two in each of the arms) that allow it to move its head and
each of its arms both vertically and laterally. It can perform such movements as nodding its
head and waving its arms, but is unable to move its torso and does not have elbows. Its
movements were recorded by a computer connected via USB for later playback either as a
software animation or on the bear itself. A Sony Vaio laptop PC running Windows XP was
used to run the recording software.
We use a simple robot for two main reasons. First, we aim to investigate a lower
bound to gesture understanding. Second, a simple robot allows users to manipulate it easily,
as if moving a normal teddy bear. This allows untrained individuals to create gestures and
avoids the need to “program” robotic movement.
3.1.1.2 Participants
Four participants (two female, two male) ranging in age from 21 to 35 (mean 25)
were recruited from within the Keio University community. The study was conducted at the
Shonan Fujisawa Campus located south of Tokyo. The participants did not have previous
experience with robotic bears and were confident in using technology according to the
Technology Profile Inventory (DeYoung & Spence, 2004) we administered. Participants
varied on their prior attitudes toward robots. No participants had strongly negative attitudes
toward the social influence of robots (M=15.4, SD=5.4, on a scale from 5, least negative
attitude, to 25, most negative attitude, with midpoint 15) although two had negative
attitudes toward emotional interaction with robots (M=7.3, SD=4.5, on a scale from 3, least
negative attitude, to 15, most negative attitude, midpoint 9) according to the Negative
Attitudes toward Robots Scale (Nomura et al., 2006). Women appeared to have more
negative attitudes than men, although our sample was small.
We did not use experts to create gestures because it was unclear what expertise was
needed (e.g. psychology, law enforcement, puppetry, drama). To our knowledge no previous
research has looked at the impact of individual differences on robotic gesture creation.
Further, using amateurs reflects current practice whereby robot gestures are created by
engineers without formal experience in gesture creation and reflects the intended use of the
RobotPHONE teddy bear robot used in this study (where lay users gesture to each other via
the robot).
3.1.1.3 Procedure
In the first study, participants created a message-conveying gesture for each of 12
scenarios (presented as one or two sentences of text, see Table 3) by direct manipulation of
the robot bear as if playing with a normal teddy bear; no “programming” was needed.
Participants were asked to write down the message and/or emotion that they thought was
being conveyed. They were given time to practice the gesture before recording it and were
given the option to re-record if they were dissatisfied with their result. Participants also filled
out demographic information, surveys on their attitude towards technology and a personality
questionnaire based on the Big Five index (Gosling et al., 2003). At the end of the session,
participants were asked how they came up with gestures in a semi-structured interview,
which was video recorded.
Table 3. List of Scenarios.

1. You have just returned home.
2. You are in the living room watching TV. You laugh at a funny show.
3. You reach your hand to the bear to pick it up.
4. You pat the bear's head.
5. You have been working at your desk for a long time. You yawn.
6. You take your clothes off.
7. You start eating dinner.
8. You say "goodnight" to the bear.
9. You start crying.
10. You start playing music on your computer.
11. You are sitting at your computer and working.
12. You start drinking a beer.
We chose to collect our own corpus of gestures in an experimental setting for three
main reasons. First, it allowed us to use a specialized robot to address particular research
questions. Second, we could obtain personality, demographic and prior attitude data from
gesture creators. Third, we did not know of any substantial pre-existing corpus of robot
gesture videos.
3.1.2 Study 2
3.1.2.1 Apparatus
This experiment was conducted on a Sony Vaio laptop PC running Windows XP.
Participants viewed robot gestures as bear animations on the RobotPHONE software
interface and as Apple Quicktime videos of the bear robot performing each gesture.
3.1.2.2 Participants
Twelve participants (three female, nine male) ranging in age from 18 to 60 (mean 26)
were recruited from the Keio University population, as in Study 1. All Study 1 participants
were excluded from participating in Study 2. Participants’ attitudes toward the social influence of
robots were not extreme (M=15.4, SD=5.2), nor were their attitudes toward emotional
interaction with robots (M=8.44, SD=2.39).
3.1.2.3 Procedure
In the second study, participants were shown the gestures created in Study 1 (four
authors for each of 12 scenarios for a total of 48 gestures) and attempted to identify the
message and emotion being conveyed from lists of messages and emotions provided to them.
Only discrete emotions were provided; emotion valences (i.e. positive, negative or neutral)
were excluded. These lists were coded by two investigators and one interpreter from
responses in Study 1 (see Table 4); similar codes were merged. It is important to note that
while the scenarios provided influenced message creation, they did not necessarily define
those messages: for example, given the scenario “You start drinking a beer”, participants
generated the following messages: “Don’t do that!”, “Good job”, “That looks delicious” and
“Take care of me.” Clearly, messages such as these do not uniquely identify the scenario that
motivated them.
Table 4. List of Messages and Emotions.

Messages: I want to ask a question; Good job; Welcome home; Take care of me; What
happened?; I'm bored; It's stupid; I don't understand; I want to shake hands; I want to
be hugged; Thank you; I want more; Please touch me; Let's play; You seem tired; No;
Don't do that; That looks delicious; It's okay; Good night; I'm hungry.

Emotions: I am happy; I am interested; I love you; I am confused; I am embarrassed; I
am sad; I am feeling awkward; I am angry; I am surprised; Neutral/none.
Participants were also asked to rate lifelikeness of the gesture, how much they liked
the gesture and their degree of confidence in the correctness of the emotion and message that
they had assigned to the gesture. The study was structured as a fully within-subjects
experiment with two factors: medium and context. For medium, two conditions were used:
in one condition an animated version of the gesture was shown as a software avatar
(“animation”), and in the other a video of the bear robot was shown (“robot”) (shown in
Figure 4). This factor was included only to test methodological validity, i.e., whether
results would differ between a 3D animation (similar to an embodied conversational agent) and a
video of the actual robot. Due to variability in movement during gesture playback (e.g.,
sometimes a motor in the bear would stall) video recordings were used instead of a co-
present robot to ensure that gestures were reproduced correctly and that all participants saw
exactly the same gestures.
Figure 4. Participants viewed each gesture either as a software animation (a) or as a video
of the bear robot (b).
Context also had two conditions: either the appropriate scenario was read to the
participant immediately preceding the gesture (“context”), or else information about the
scenario was not provided (“no context”). Both factors were balanced and experimental order
randomized for each participant.
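A small sketch shows one way such a balanced, randomized design could be generated. The exact counterbalancing scheme is not detailed above, so the equal split of the 48 gestures across the four medium-by-context cells, and the function and condition names, are assumptions for illustration.

```python
import random
from collections import Counter

def make_trial_order(n_gestures=48, seed=None):
    """Assign each gesture to one medium x context cell, balanced across
    cells, then shuffle presentation order for one participant.
    """
    cells = [(medium, context)
             for medium in ("animation", "robot")
             for context in ("context", "no context")]
    rng = random.Random(seed)
    # Equal numbers of gestures per cell (48 / 4 = 12); the assignment is
    # shuffled so each participant gets a different gesture-to-cell mapping.
    assignment = [cells[i % len(cells)] for i in range(n_gestures)]
    rng.shuffle(assignment)
    trials = list(zip(range(1, n_gestures + 1), assignment))
    rng.shuffle(trials)  # randomize presentation order
    return trials

order = make_trial_order(seed=1)
print(Counter(cell for _, cell in order))  # 12 trials in each of the 4 cells
```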
As in the first study, participants completed demographic, technology and
personality questionnaires. At the end of the session, participants were asked to discuss how
they judged what messages and emotions the gestures were conveying in an interview.
3.1.3 Measures
3.1.3.1 Attitude toward robots
Prior attitudes toward robots were assessed using the “Negative Attitude toward
Robots Scale (NARS)” (Nomura et al., 2006). It consists of three subscales: S1, negative
attitude toward situations of interaction with robots (sample item: “I would feel paranoid
talking with a robot”); S2, negative attitude toward the social influence of robots (sample
item: “Something bad might happen if robots developed into human beings”); and S3,
negative attitude toward emotions in interaction with robots (sample item, reversed: “I
would feel relaxed talking with robots”). Each item is rated on a five-point Likert scale
ranging from “1: Strongly disagree” to “5: Strongly agree.” Subscale scores are calculated by
summing the scores of the component items, with higher scores indicating more negative
attitudes. The minimum and maximum scores for S1 are 6 and 30, for S2 are 5 and 25 and
for S3 are 3 and 15, respectively (note subscales do not have the same number of items).
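As an illustration, the subscale scoring just described can be sketched as follows. This is our reconstruction, not the official NARS scoring script, and the item-to-subscale assignments and which items are reverse-scored are illustrative only:

```python
# Sketch of NARS subscale scoring. Item ordering and which items are
# reverse-scored are illustrative assumptions, not the actual NARS numbering.
def score_nars_subscale(responses, reverse_items=()):
    """Sum 5-point Likert responses; reverse-scored items map 1<->5 (i.e., 6 - r)."""
    total = 0
    for i, r in enumerate(responses):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (6 - r) if i in reverse_items else r
    return total

# S3 has 3 items (range 3-15); suppose its third item is reverse-scored
# (e.g., "I would feel relaxed talking with robots").
s3 = score_nars_subscale([4, 3, 2], reverse_items={2})  # 4 + 3 + (6 - 2) = 11
```

The subscale minima and maxima quoted above (e.g., 6 and 30 for the six-item S1) follow directly from summing all-1 or all-5 responses.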
3.1.3.2 Personality
Personality was assessed using the “Ten-Item Personality Inventory (TIPI)” (Gosling,
2003) based on the Big-Five Inventory (Goldberg, 1981). Each of five personality
dimensions (extraversion, agreeableness, conscientiousness, emotional stability and openness
to experience) is rated with two items using seven-point Likert scales ranging from “1:
disagree strongly” to “7: agree strongly.”
3.1.3.3 Gesture understanding
Two measures of understanding or “successful transmission” of gesture message and
emotion were used. The first represented the amount of agreement between author and
viewer and was calculated by comparing the author’s written meaning with each viewer’s
selected codes. The second represented consensus agreement among viewers (referred to as
“inter-viewer” emotion and message agreement). For emotions, inter-viewer agreement was
based on the frequency of the most selected code among viewers without regard to the
author’s intended emotion; for messages, the two highest frequencies of codes were used
because two selections were made.
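The two agreement measures can be sketched in code. This is our reconstruction of the description above; the exact normalization (dividing by the number of viewers) is an assumption:

```python
from collections import Counter

# Sketch of the two agreement measures described above (reconstruction;
# normalization by the number of viewers is an assumption).
def author_viewer_agreement(author_code, viewer_codes):
    """Proportion of viewers whose selected code matches the author's intent."""
    return sum(c == author_code for c in viewer_codes) / len(viewer_codes)

def inter_viewer_agreement(viewer_codes, top_k=1):
    """Consensus among viewers: frequency of the most-selected code(s),
    ignoring the author's intent. For messages, top_k=2 because viewers
    made two selections per gesture."""
    counts = Counter(viewer_codes)
    top = sum(n for _, n in counts.most_common(top_k))
    return top / len(viewer_codes)

votes = ["I am happy", "I am happy", "I am sad", "Neutral/none"]
author_viewer_agreement("I am happy", votes)  # 0.5
inter_viewer_agreement(votes)                 # 0.5
```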
3.1.3.4 Gesture complexity
Gesture complexity was measured in two ways: (a) head movements, determined by
counting the number of changes in direction of the robot’s head; and (b) arm movements,
determined by similar counting with the robot’s arms (and summing over both arms). For
the analyses below both measures were divided into two approximately equal groups
representing high movement and low movement.
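The counting and grouping can be sketched as follows. Treating the movement trace as a sampled joint-angle sequence and using a median split are our assumptions; the thesis counted changes from video and split into "approximately equal groups":

```python
# Sketch: count changes in movement direction from a sampled joint-angle
# trace (an assumption; the thesis tallied direction changes by observation).
def direction_changes(angles):
    """Number of sign reversals in the frame-to-frame velocity."""
    deltas = [b - a for a, b in zip(angles, angles[1:]) if b != a]
    return sum(d1 * d2 < 0 for d1, d2 in zip(deltas, deltas[1:]))

head = [0, 10, 20, 10, 0, 5]   # up, up, down, down, up
direction_changes(head)        # 2 reversals

# One way to form two approximately equal groups: a median split.
def median_split(counts):
    median = sorted(counts)[len(counts) // 2]
    return ["high" if c >= median else "low" for c in counts]
```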
3.1.3.5 Other measures
Ratings of liking, lifelikeness and confidence were made on single-item seven-point
Likert scales ranging from “1: Strongly Disagree” to “7: Strongly Agree” (e.g., “I liked this
gesture”). Multiple-item scales were impractical as we obtained these measures for each of 48
trials per participant and did not want to excessively burden participants; longer scales also
did not seem necessary to assess simple opinions.
3.2 Results
3.2.1 Communication of emotions and messages
Hypotheses 1 and 2 were supported. Gestures viewed in isolation carry both
emotional and semantic information. The ability of people to detect intended emotions was
assessed in the following way. First the 432 emotion judgments (note that not all trials had a
judged emotion because people had the option of selecting no emotion for a gesture) were
scored as either correct or incorrect based on whether or not the selected emotion agreed
with the emotion that had been attached to the gesture by its author. Next a binomial test
was used to see if the observed proportion of correct responses (22%) was greater than the
one in ten (10%) correct responses that would have been expected by chance alone. Using
the Z approximation to the binomial test as implemented in SPSS, the proportion of correct
emotion responses was found to be significantly greater than chance (p<.001) although fully
78% of the emotions were not judged correctly. Thus our expectation that the emotions
associated with gestures would be judged relatively poorly, but still better than chance, was
confirmed.
Of the 432 message judgments 176 (40%) were correct (i.e., agreed with the author's
intended message). As assessed by a binomial test, this proportion was significantly greater
(p<.001) than the two out of 21 (9.5%) correct responses that would have been expected by
chance alone (since there were 21 message options in total and participants were asked to
choose two of them per gesture). While messages were also judged relatively poorly (being
correct only 40% of the time) they were judged more accurately than emotions (which were
correct only 22% of the time). This provides evidence that, at least for our corpus of gestures,
head and arm movements convey message information better than emotional
information.
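The two significance tests above can be reproduced with a short sketch. We use the standard Z approximation to the one-sided binomial test; SPSS's exact procedure may differ slightly (e.g., continuity correction), so this is illustrative:

```python
import math

# Z approximation to the one-sided binomial test (SPSS's exact procedure
# may apply a continuity correction; this sketch omits it).
def z_test_proportion(successes, n, p0):
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = math.erfc(z / math.sqrt(2)) / 2  # upper-tail normal probability
    return z, p_value

# Emotions: 22% of 432 judgments (~95) correct vs. one-in-ten chance.
z_emo, p_emo = z_test_proportion(95, 432, 0.10)

# Messages: 176 of 432 correct vs. 2/21 chance (two picks from 21 options,
# so the author's single intended message is selected by chance 2/21 of the time).
z_msg, p_msg = z_test_proportion(176, 432, 2 / 21)
```

Both tests give p far below .001, matching the reported results.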
3.2.2 Effect of emotional valence
To test Hypothesis 3 and Hypothesis 4 we analyzed the characteristics of robot bear
gestures that people perceive as expressing different emotions. These figures are presented in
Table 5.
Table 5. Characteristics of perceived emotions.†
Emotion | Head movements | Right arm* movements | Left arm* movements | Gesture time (sec) | Like rating | Lifelikeness rating
Positive emotions:
I am happy | 2.33 (0.39) | 4.62 (0.52) | 4.46 (0.55) | 7.29 (0.36) | 4.53 (0.12) | 4.67 (0.12)
I am interested | 5.13 (1.08) | 5.80 (0.85) | 5.37 (0.97) | 7.87 (0.59) | 4.83 (0.23) | 4.76 (0.22)
I love you | 2.00 (0.53) | 7.70 (0.96) | 7.13 (0.98) | 8.47 (0.65) | 4.70 (0.22) | 5.43 (0.22)
Overall | 3.01 (0.38) | 5.47 (0.41) | 5.16 (0.44) | 7.64 (0.28) | 4.64 (0.10) | 4.83 (0.10)
Negative emotions:
I am confused | 5.16 (0.92) | 6.62 (0.96) | 5.11 (1.14) | 8.98 (0.72) | 3.80 (0.16) | 4.18 (0.22)
I am embarrassed | 3.19 (0.68) | 3.25 (0.68) | 2.25 (0.56) | 7.59 (0.49) | 4.09 (0.21) | 4.34 (0.21)
I am sad | 3.48 (0.70) | 3.90 (0.88) | 2.65 (0.60) | 8.19 (0.73) | 3.81 (0.22) | 4.52 (0.22)
I am feeling awkward | 3.36 (0.59) | 3.64 (0.74) | 2.38 (0.54) | 7.85 (0.57) | 3.87 (0.21) | 4.47 (0.20)
I am angry | 3.39 (0.66) | 3.67 (0.68) | 3.02 (0.58) | 7.65 (0.58) | 3.92 (0.18) | 4.18 (0.17)
Overall | 3.75 (0.33) | 4.28 (0.37) | 3.16 (0.34) | 8.06 (0.28) | 3.89 (0.09) | 4.32 (0.10)
Neutral emotions:
I am surprised | 4.18 (0.69) | 4.06 (0.71) | 3.20 (0.76) | 7.33 (0.55) | 4.90 (0.19) | 4.59 (0.21)
(neutral) | 3.24 (0.57) | 2.52 (0.47) | 1.96 (0.44) | 6.64 (0.39) | 3.24 (0.13) | 3.58 (0.14)
Overall | 3.58 (0.44) | 3.08 (0.40) | 2.42 (0.40) | 6.89 (0.32) | 3.84 (0.13) | 3.95 (0.13)
† Means with standard errors in brackets. Like and lifelikeness ratings on a 7-point Likert scale (1: strongly disagree; 7: strongly agree).
* Right and left are from the perspective of the viewer, not the bear.
Hypothesis 3 was supported. A one-way repeated measures ANOVA was conducted
with the emotion valence of the gesture (as defined by positive, negative, or neutral emotion)
as the within-subject factor and the likeability rating as the dependent measure. Emotion
valence had a significant effect (F(2,22) = 12.56, p < .001, r = .60) on the likeability of a
gesture. As shown in Figure 5, gestures conveying positive emotions are rated as more likable
(M = 4.64), followed by gestures conveying negative emotions (M = 3.89) and neutral
emotions (M = 3.84). A simple contrast was done for the emotion valence variable, using
positive emotions as the control category to which negative and neutral emotions were
compared. The results show a significant difference in both positive vs. negative emotions,
F(1,11) = 18.00, p < .005, r = .79, as well as positive vs. neutral emotions, F(1,11) = 28.34, p
< .001, r = .84. This confirms that viewers like gestures conveying positive emotions more
than those conveying negative and neutral emotions. Using G*Power software, all effect sizes
were sufficient to achieve power over .80 (α = .05 and N = 12), which is a desirable level
according to Cohen (1977).
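The effect sizes reported alongside each F statistic appear to follow the common conversion r = √(F / (F + df_error)); this is our inference rather than something stated in the text, but it reproduces the reported values:

```python
import math

# Inferred conversion from F to effect size r (not stated explicitly in the
# text, but consistent with the values reported for the contrasts above).
def r_from_f(f, df_error):
    return math.sqrt(f / (f + df_error))

r_from_f(12.56, 22)  # ~.60, the omnibus valence effect
r_from_f(18.00, 11)  # ~.79, the positive-vs-negative contrast
```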
Figure 5. Rating of likeability versus emotion valence. Error bars show 95% confidence intervals.
Hypothesis 4 was also supported. A one-way repeated measures ANOVA was
conducted with the emotion valence conveyed by the gesture as the within-subject factor and
the lifelikeness rating as the dependent measure. As with the previous analysis, emotion
valence had a significant effect (F(2,22) = 9.07, p < .001, r = .41) on perceived
lifelikeness of a gesture. Gestures conveying positive emotions appear more lifelike (M =
4.83), followed by gestures conveying negative emotions (M = 4.32) and neutral emotions
(M = 3.95), as shown in Figure 6. Furthermore, simple contrasts revealed that gestures with
positive emotions were judged to be significantly more lifelike versus negative emotions,
F(2,22) = 7.59, p < .05, r = .51, and versus neutral emotions, F(2,22) = 16.86, p < .001, r =
.66. This supports the hypothesis that gestures conveying positive emotions are perceived to
be more lifelike than those conveying negative and neutral emotions. All effect sizes were
sufficient to achieve power over .80 (α = .05 and N = 12).
Figure 6. Rating of lifelikeness versus emotion valence. Error bars show 95% confidence intervals.
Hypothesis 5 was supported. Looking at the distribution of head and arm
movements in Table 5, positive emotions tend to be associated with the highest number of
right and left arm movements and the lowest number of head movements. In particular, the
emotion “I love you” is associated with an average of two changes in head direction but seven
changes in both right and left arm direction. Conversely, negative emotions have more even
numbers of head, right arm and left arm movements, while neutral emotions have fewer
movements overall, as expected. A one-way (emotion valence) repeated measures ANOVA
showed that valence had a significant effect on both right arm and left arm
movements, F(2,22)=6.06, p=.008 and F(2,22)=7.14, p=.004 respectively. A simple contrast
comparing valence groups for head movements showed that negative emotions used
significantly more movements than positive emotions, F(2,22)=5.46, p=.039; Figure 7
illustrates this. Similar contrasts showed that positive emotions used more right arm and left
arm movements than neutral emotions, F(2,22)=10.8, p=.007 and F(2,22)=13.0, p=.004
respectively. Arm movements for each valence group are shown in Figure 8.
Figure 7. Head movement complexity versus valence. Error bars show 95% confidence intervals.
Figure 8. Movement complexity versus valence for right and left arms. Error bars show 95%
confidence intervals.
Interestingly, right arm movements are used more than left arm movements across all
emotion types. This is likely because our gesture authors were right-handed and manipulated
the bear while facing it (“right” and “left” arms for the bear are from the gesture viewer’s
perspective).
3.2.3 Effect of gesture complexity
To examine Hypothesis 6, two three-way repeated measures ANOVAs were
conducted: one with medium, context and gesture arm movement as within-subject factors,
and one with medium, context and gesture head movement as factors. The results support
our hypothesis that gesture complexity influences perception of lifelikeness. The main effect
of arm movement complexity was statistically significant, F(1,11)=7.530, p=.019. Figure 9
shows the ratings of lifelikeness that resulted from movements of the arm and head. Looking
at the arms condition reveals that gestures with a large number of changes in movement
direction are perceived as being much more lifelike (M = 4.68, SD = .147) than those with
fewer arm movements (M = 4.22, SD = .123). This effect was not found in the case of head
movements, and there were no main effects of medium or context on lifelikeness. Since there
were no significant effects involving medium in this or any subsequent analysis, our results
appear to hold regardless of the medium used.
Figure 9. Lifelikeness versus arm and head movement complexity. Error bars show 95% confidence
intervals.
Gesture complexity was also found to affect emotion and message communication as
predicted in Hypothesis 7. Arm movement complexity was found to have significant effects
on emotion agreement among viewers, F(1,11)=624.530, p<.001, message agreement among
viewers, F(1,11)=257.012, p<.001, and message agreement between author and viewer,
F(1,11)=6.009, p=.032, which improved from 34.8% to 43.8% as shown in Figure 10.
However, while more arm movements resulted in an increase in both author-viewer and
inter-viewer measures of message agreement, they also resulted in a decrease in emotion
agreement. This result supports the notion that emotions and messages are perceived in
different ways, and that gesture characteristics that may be conducive to one type of
communication may not be beneficial for the other.
Figure 10. Author-viewer message agreement versus arm and head movement complexity. Arm
movement influences agreement; head movement does not. Error bars show 95% confidence
intervals.
3.2.4 Effect of situational context
Hypothesis 8 was supported. A two-way repeated measures analysis of variance
(ANOVA) was conducted with medium and context as within-subjects factors. Situational
context improved both message and emotion identification accuracy. Context had a
significant main effect on author-viewer message agreement, F(1,11) = 93.849, p < .001, and
a borderline significant main effect on author-viewer emotion agreement, F(1,11) = 4.46, p
= .058. Figure 11 shows what percentage of viewers identified the correct message and
emotion across experimental conditions. Comparing the more heavily shaded bars, it is clear
that participants were able to identify the gesture author’s intended message much more
accurately when situational context was provided (M = 56.5%, SD = 3.85%) versus when it
was excluded (M = 20.1%, SD = 1.31%). Similarly for emotions (the lighter shaded bars),
context improved emotion recognition (M = 26.7%, SD = 3.61%) compared with no
context (M = 15.2%, SD = 3.48%). There was no main effect of medium. No relationship was
found between context and participants’ ratings of identification confidence, although in post-
study interviews subjects reported that judging gestures was easier when the scenario was
provided.
Figure 11. Emotion and message agreement accuracy versus experimental condition. Contextual
information (scenario) improves accuracy. Error bars show 95% confidence intervals.
3.2.5 Effect of personality
To examine Hypothesis 9 and Hypothesis 10 we correlated the personality agreement
between authors and viewers with viewers’ ratings and recognition scores, as shown in Table
6. Hypothesis 9 was not supported: an observer’s accuracy at judging the message or emotion
conveyed by a gesture did not increase when the gesture was authored by someone with a more
similar personality. Hypothesis 10, however, was supported: there was a positive relationship
between personality similarity between gesture author and viewer, and how much the viewer
liked the gesture, r=.301, p=.019 (Pearson, pairwise, one-tailed). This is a medium effect
(Cohen, 1977) accounting for about 9% of the variance of the measures.
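A minimal sketch of the computation behind this claim follows; the correlation was computed in SPSS, so this hand-rolled version is illustrative only:

```python
import math

# Minimal Pearson correlation (illustrative; the thesis used SPSS).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# An r of .301 accounts for r^2 ~ 9% of the variance:
variance_explained = 0.301 ** 2  # ~0.091
```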
Table 6. Correlations between personality agreement and ratings.
Personality similarity | Likeability | Lifelikeness | Confidence in emotion | Confidence in message | Correct emotion | Correct message
Pearson correlation | .301* | .236 | -.095 | -.086 | -.019 | .062
Sig. (1-tailed) | .019 | .053 | .260 | .281 | .448 | .338
Note: N = 48 (each pairing of 4 authors and 12 viewers).
3.2.6 Effect of author
Apart from the above analyses, the effect of individual authors was investigated using
a one-way (gesture author) repeated measures ANOVA. Who authors a gesture was found to
be an important factor in how viewers perceived that gesture. Author had a significant main
effect on the degree to which viewers agreed on a gesture’s message, F(3,33) = 49.4, p<.001,
and emotion, F(3,33) = 629, p<.001. As shown in Figure 12, gestures created by authors 1
and 2 were more liked by subjects in the second study than gestures created by authors 3 and
4 across both context conditions, F(3,33)=6.78 (p<.05). For authors 1 and 2, ratings of
liking are generally positive (above 4, the neutral point), whereas ratings for authors 3 and 4
are negative (below the neutral point). The gestures by authors 1 and 2 were also judged to
be more lifelike than those by authors 3 and 4, F(3,33) = 3.14 (p<.05).
Figure 12. Likeability versus gesture author. Gestures created by authors 1 and 2 were liked more
than those by authors 3 and 4. Error bars show 95% confidence intervals.
3.3 Discussion and design guidelines
The present results indicate that simple robot gestures can provide useful information
concerning emotions and meaning. However, by themselves gestures do not provide a lot of
information, and accuracy (particularly for judgments of emotion) is low, although
significantly better than chance. In practice, however, people are aided by their knowledge of
the current situation in which gestures are made. Thus most realistic use of social robots
involves settings where the context in which gestures are made is known. This is fortunate,
since our results show that context has a large impact on how well gestures are understood,
boosting message understanding accuracy close to the 60% level for the types of gestures and
scenarios used in this study. However, based on the results of the present study, even when
context is known, assessment of emotion in simple gestures is relatively poor at less than
30% accuracy.
Besides situational context, movement complexity, authorship and valence of
emotion affected gesture decoding.
Arm movements, but not head movements, were found to facilitate the perception of
lifelikeness (sentience). This is in spite of the fact that the robot used in this study was
capable of only simple arm movements and did not have a wrist, elbow or shoulder with
which to make more subtle arm movements.
Gestures conveying positive emotions were more liked and perceived to be more
lifelike than those conveying either negative or neutral emotions. Closer inspection reveals
that gestures associated with positive emotions have more arm movements, indicating that
the results on emotional valence and movement complexity may be inter-related, and that a
possible reason for the beneficial impact of arm movements is that they convey positive
emotion to viewers.
We also found different emotions had distinct movement patterns, with some
emotions favouring arm movements over head movements. This complements work by
Marui on temporal and spatial tendencies for robot bear gestures conveying different
emotions (Marui & Matsumaru, 2005).
While this research is a first step in the systematic study of the interpretation of
simple robot gestures and should be seen as exploratory, seven tentative design guidelines can
be inferred from this study, which are summarized in Table 7. First, there is a need to boost
emotion understanding when interacting with social robots. This can be done by adding
facial expressiveness, limiting the number of possible emotions that need to be considered or
including simple statements of emotion either in words or non-speech sounds (e.g.,
chuckling for happiness).
Second, incorporating gestures that convey positive emotions and have many arm
movements will improve how much people like the gesture, how lifelike the robot appears
and how well the gesture’s meaning is understood. Gestures that convey negative emotions,
on the other hand, may be disliked or difficult to understand because these emotions are
unexpected from robotic agents.
Third, social robots should employ contextual information when deciding which
gestures to express to users, since users will be using similar information in decoding the
robot’s actions. As an example, a robotic pet may use information about the time of day,
lighting conditions, movement in its environment and the number of people around it to
determine whether it should employ gestures indicative of excitement or boredom.
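The robotic-pet example above can be sketched as a simple rule. All names and thresholds here are hypothetical illustrations of the guideline, not an implemented system:

```python
# Hypothetical sketch of guideline 3: a robotic pet choosing between
# excitement and boredom gestures from simple context cues. Function name,
# inputs and thresholds are illustrative assumptions.
def choose_gesture_family(hour, light_level, motion_events, people_nearby):
    lively = people_nearby > 0 and motion_events > 5   # activity around the robot
    awake_hours = 8 <= hour <= 21 and light_level > 0.2  # daytime, well-lit room
    return "excitement" if lively and awake_hours else "boredom"

choose_gesture_family(hour=15, light_level=0.8, motion_events=9, people_nearby=2)
# -> "excitement"
```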
A fourth design guideline is that the context in which gestures of social robots occur
should be made clear to users, so that people are aided by this situational knowledge. A
robotic companion, for instance, may make a verbal remark about a door being opened and
then execute a gesture that conveys the message “Welcome home.” Alternatively, one non-
verbal option is to have a robot execute a referential gesture to indicate locus of attention
(e.g., turning the head or pointing toward the agent it is reacting to) before engaging in the
response gesture.
Three additional guidelines encourage higher personality similarity between gesture
author (which may be referred to as the robot’s personality) and viewer. Based on the theory
of event coding, gesture perception is linked to gesture production, so people will be most
sensitive to movements they frequently act out themselves. Thus, having users design a
robot’s gestures on their own would be one way to achieve more likeable gestures.
However, rather than explicitly requiring individuals to author a robot’s gestures
(which may be laborious or infeasible with certain robots or users), this may be done
implicitly. If a person is playing with a robot and moving its parts, these movements may be
recorded and played back to the user at a later time. Moreover, movements may be paired
with contextual information gathered by the robot to determine appropriate situations to
employ each gesture. For example, if a child is playing with a robot and laughing loudly, the
movements of the robot may be associated with a happy scenario. Another option would be
to mimic a user’s gestures without any overt input from them. As video processing
techniques improve, it may be possible for robots to capture, analyze and reproduce
movements of the people with whom they are interacting. This would be most feasible in an
android or humanoid robot but could be achieved by other robot forms with some mapping
between human movements and the robot’s movements (e.g., a mapping of arm movement
to flippers in a seal-like robot).
A third way to achieve personality similarity is simply to choose an author whose
personality resembles that of the intended user group, if that group’s personality is
consistent and known ahead of time.
Table 7. Design guidelines for robotic gestures based on results of Studies 1 and 2.
Hypothesis | Experimental result (decoding) | Design guideline (encoding/decoding)
H2 | Gestures in isolation do not convey emotion well | Boost emotion understanding (e.g., by including vocal expression)
H3, H4 | Gestures conveying positive emotions are more liked and lifelike | Use gestures conveying positive emotions with many arm movements to achieve lifelikeness
H6 | More arm movements improve lifelikeness | (as above)
H8 | Context improves gesture understanding | Use contextual information in deciding which gestures the robot will express; make sure the context in which the robot is gesturing is made known to the user
H10 | Higher personality similarity between author and viewer improves gesture likeability | Have the user create gestures for their own robot; mimic the user’s gestures; select an author with a personality similar to the viewer’s
In addition to the development of design guidelines, the verification of our
experimental hypotheses taken together provides additional evidence that there is indeed a
“Robot Equation” that exists as a corollary to the “Media Equation.” Specifically looking at
gesture communication, several effects present in human-to-human interaction are also
present in human-robot interaction with respect to understanding of semantic and emotional
meaning and perception of lifelikeness. Our research thereby suggests that when people
interact with humanoid robots, they respond to a robot’s gestures in much the same way as
they would to a human’s.
A limitation of this study is that we used only four authors who had no special
training in developing gestures for robots. Thus our results are likely to provide a low
estimate of the type of interpretability that is possible with a simple social robot. It is possible
that the people we asked to create gestures were not very skilful and that puppeteers,
choreographers or other types of “experts” could in fact create much more interpretable and
lifelike gestures. Interestingly, we found that even in a small sample of authors, differences in
gesture creation skill existed. As this issue of how to create gestures—and who should create
them—is particularly important for designers of social robots, we conduct another paired
experiment (Studies 3 and 4) to find out what types of expertise or characteristics are most
beneficial in designing gestures for social robots.
3.4 Summary
In Study 1, four participants used a robotic teddy bear to create gestures based on 12
scenarios provided. These gestures were replayed to 12 viewers in Study 2 as animations and
videos without sound or facial expression. The results from these two studies demonstrate
that even gestures of a simple social robot capable of arm and head movement only can
convey emotional and semantic meaning. How lifelike viewers judge a gesture to be is
affected by emotional valence (positive emotion expression suggests greater lifelikeness) and
complexity of arm movements (more arm movements improve gesture lifelikeness). How
well viewers are able to decode a gesture’s intended meaning is found to be affected by
knowledge of the gesture’s situational context (providing context improves understanding),
complexity of arm movement (more arm movements boost understanding) and who
authored the gesture. In the next paired experiment (Chapter 4) we describe additional
research conducted to identify the characteristics of a gesture “expert.”
Chapter Four: Studies 3 and 4
“An expert knows all the answers - if you ask the right questions.”
Claude Lévi-Strauss
Studies 1 and 2 described in the previous chapter provided an overview of factors that
influence robot gesture communication. The current chapter describes a set of paired studies
conducted to test hypotheses focusing on the effect of expertise. Since the first set of studies
used only four gesture creators, all of whom were amateurs, we explored the topic of
individual differences in gesture creation in greater depth with Studies 3 and 4, focusing on
expressive gestures. These studies were conducted at the University of Toronto in Canada
with participation from five experienced puppeteers. This chapter discusses research
methodology, including the procedure and measures used, followed by the major results
related to how puppetry experience affects gesture understanding. The chapter concludes by
outlining how the results may guide design of robot gestures.
4.1 Methodology
For Studies 3 and 4 we use the same paired-study methodology as employed by
Studies 1 and 2. Participants create gestures using the bear robot in Study 3, which are then
viewed and judged by participants in Study 4. The following sections discuss the apparatus,
participant group and procedure used for each of the two studies. Our protocol was approved
by the Ethics Review Office at the University of Toronto.
4.1.1 Study 3
4.1.1.1 Apparatus
Study 3 used the same robotic teddy bear (“RobotPHONE”) as in Study 1. The
study environment was a university lab room similar to that used in Study 1.
4.1.1.2 Participants
Ten people (four female, six male) ranging in age from 21 to 65 years old (M = 31.5,
SD = 11.8) participated in Study 3. Five participants were recruited from the University of
Toronto community and had no prior experience in puppetry. The remaining five
participants were recruited from the puppetry community (students of puppetry and
professional puppeteers) in Toronto and had between three and 59 years of puppetry
experience (M = 20.6, SD = 22.6). Participants had no impairments that affected their hand
motor skills. Six participants had English as their native language while four did not. Five
participants had graduate or professional school experience, three had bachelor degree
experience, one had college/technical college experience and one did not respond.
To assess prior attitudes, we administered the Negative Attitude toward Robots Scale
(Nomura et al., 2006). No participants scored as having negative attitudes toward situations
of interaction with robots (M = 12.8; SD = 1.62). Four participants had negative attitudes
toward social influence of robots (M = 14.6, SD = 3.24) while four participants had negative
attitudes toward emotions in interaction with robots (M = 9.8, SD = 1.81).
4.1.1.3 Procedure
In Study 3, participants created an emotion-conveying gesture for each of six basic
emotions identified by Ekman, Friesen and Ellsworth (1972): anger, disgust, fear, happiness,
sadness and surprise. They directly manipulated the robot bear as in Study 1. Participants
were given time to practice gesturing with the bear and were given the option to view and re-
record each gesture if they were dissatisfied with their result. Participants also filled out
demographic information, surveys on their attitude towards robots and a personality
questionnaire. At the end of the session participants were asked a single semi-structured
interview question on their opinion of creating gestures with the bear.
4.1.2 Study 4
4.1.2.1 Apparatus
This part of the experiment was conducted on a Dell Inspiron desktop PC running
Windows XP. Participants viewed videos of the bear robot performing each gesture as .wmv
files using Windows Media Player. The study environment was a university room similar to
that used in Study 2.
4.1.2.2 Participants
Twelve people (three female, eight male, one unspecified) ranging in age from 21 to
30 years old (M = 25.1, SD = 2.57) participated in Study 4. All participants were recruited
from the University of Toronto community. All Study 3 participants were excluded from
participating in Study 4. Eight participants had English as their native language while four
did not. Four participants had completed or were currently in graduate or professional school,
while the remaining eight had bachelor degree experience.
No participants had negative attitudes toward situations of interaction with robots
(M = 12.1, SD = 3.4). Six participants had negative attitudes toward social influence of
robots (M = 15.3, SD = 2.75) while three participants had negative attitudes toward
emotions in interaction with robots (M = 8.58, SD = 2.07).
4.1.2.3 Procedure
In Study 4 participants were shown the corpus of gestures created in Study 3 (ten
authors for each of six emotions for a total of 60 gestures). They rated whether they liked the
gesture and whether they thought the gesture was lifelike using seven-point Likert scales
ranging from “1: strongly disagree” to “7: strongly agree”. They also rated the degree to
which they believed each of the six emotions was being conveyed by the robot.
The study was structured as a two-way repeated measures experiment. The factor of
puppetry experience had two conditions: amateur and expert. The factor of emotion type
had six conditions: anger, disgust, fear, happiness, sadness and surprise. Experimental order
was randomized for each participant. As in the first study, participants completed
questionnaires on demographics, personality and attitude toward robots. At the end of the
session, participants were asked an interview question to describe how they judged whether
the bear was conveying each of the six emotions.
4.1.3 Measures
Negative attitudes toward robots were accounted for by dividing the participant
group into those that had negative attitudes (defined as above the midpoint of each of the
NARS subscales) and those who had neutral or positive attitudes.
How well observers recognized gestures was assessed using two measures. One
measure, “rating of correct emotion,” is simply the observer’s selected rating for the emotion
that the author was attempting to convey. If the emotion was sadness and the observer selected
7 on the Likert scale for “I’m sad,” this measure would be 7. The other measure, “points over
mean,” is a normalized measure calculated as the observer’s selected rating for the correct
emotion minus the average of the other five ratings. In the above example, this measure
would be 7 – (average of observer’s ratings for all emotions except sadness).
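The two recognition measures can be sketched in a few lines of Python (the ratings below are hypothetical values for illustration, not data from the study):

```python
import numpy as np

# Hypothetical observer ratings (1-7 Likert) for one gesture, one per emotion.
ratings = {"anger": 2, "disgust": 3, "fear": 2, "happiness": 1,
           "sadness": 7, "surprise": 2}
intended = "sadness"  # the emotion the author was attempting to convey

# Measure 1: "rating of correct emotion" is the rating for the intended emotion.
rating_of_correct = ratings[intended]

# Measure 2: "points over mean" subtracts the mean of the other five ratings.
others = [v for k, v in ratings.items() if k != intended]
points_over_mean = rating_of_correct - np.mean(others)

print(rating_of_correct)  # 7
print(points_over_mean)   # 7 - mean(2, 3, 2, 1, 2) = 5.0
```

The normalization in the second measure corrects for observers who rate every emotion highly (or lowly), so a gesture only scores well if the intended emotion stands out from the rest.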
4.2 Results
4.2.1 Qualitative observations
From observations recorded during Study 3, all individuals with puppetry experience
remarked that having the robot touch its face would be extremely useful, and that its
inability to do so severely limited its ability to convey emotion. (The bear robot used did not have the
range of motion to allow for touching of the face in a dynamic way, especially due to lack of
elbow joints.) For example, one puppeteer mentioned wanting to have the bear touch its
mouth and stomach to indicate disgust. Not being able to do this, she said, “I can’t do
disgusted.” Upon reading the emotion “I’m afraid,” another puppeteer remarked, “I can’t
hide its eyes (laugh).” The control group of amateurs made very few remarks about the
limitations in teddy bear movement.
Overall, puppeteers spent more time than the control group creating gestures, often
taking up the entire 45 minutes scheduled. All puppeteers, as well as one amateur,
manipulated the bear robot with the robot on their lap and facing away from them; all other
amateurs manipulated the robot with it facing them on a table.
4.2.2 Effect of puppetry experience
To investigate the effect of puppetry experience (Hypothesis 11 and Hypothesis 12)
we first conducted a one-way (puppetry experience) repeated measures ANOVA separately for each
emotion type (anger, disgust, fear, happiness, sadness and surprise). Table 8 presents the
results.
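Because the puppetry-experience factor has only two levels, each per-emotion repeated measures F(1, 11) is equivalent to the square of a paired-samples t statistic across the twelve viewers. A minimal sketch of that equivalence, using randomly generated placeholder ratings rather than the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-participant mean liking ratings: 12 viewers x 2 conditions.
amateur = rng.normal(4.4, 0.7, size=12)
puppeteer = rng.normal(4.5, 0.7, size=12)

# Paired t-test across viewers; for a two-level within factor, F(1, 11) = t^2.
t, p = stats.ttest_rel(amateur, puppeteer)
F = t ** 2
print(f"F(1,11) = {F:.3f}, p = {p:.3f}")
```

Running one such test per emotion type and per dependent measure (liking, lifelikeness, and the two recognition measures) reproduces the structure of Table 8.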
Table 8. Effect of puppeteer experience on liking, lifelikeness and gesture recognition.
             Amateur     Puppeteer    F(1,11)   p      η²     Power
             M (SD)      M (SD)
Liking:
Anger 4.43 (.85) 4.57 (.61) .595 .457 .051 .109
Disgust 4.40 (.43) 4.48 (.82) .107 .750 .010 .060
Fear 4.45 (.63) 4.45 (1.04) .000 1.00 .000 .050
Happiness 4.49 (.92) 4.50 (.64) .004 .948 .000 .050
Sadness 4.37 (.65) 4.58 (.59) .949 .351 .079 .145
Surprise 4.52 (.49) 4.60 (.96) .103 .754 .009 .060
Lifelikeness:
Anger 4.15 (.97) 4.28 (1.10) .263 .618 .023 .076
Disgust 4.32 (.73) 4.13 (1.30) .412 .534 .036 .090
Fear 4.20 (.93) 4.22 (.99) .002 .964 .000 .050
Happiness 4.20 (.96) 4.52 (.91) 2.931 .115 .210 .346
Sadness 4.33 (.94) 4.33 (1.02) .001 .973 .000 .050
Surprise 4.23 (.86) 4.38 (1.12) .265 .617 .024 .076
Rating of correct emotion:
Anger 3.40 (.86) 3.37 (1.09) .014 .908 .001 .051
Disgust 3.35 (.88) 3.72 (.97) 2.14 .172 .163 .267
Fear 3.40 (1.02) 3.32 (1.23) .075 .790 .007 .057
Happiness 3.10 (.94) 2.97 (.92) .248 .628 .022 .074
Sadness 3.40 (.89) 3.20 (.85) .725 .413 .062 .122
Surprise 3.50 (1.14) 3.43 (.90) .056 .818 .005 .055
Rating over average:
Anger .247 (.829) .123 (.845) .146 .710 .013 .064
Disgust .010 (.622) .420 (.576) 3.25 .099† .228 .377
Fear -.197 (.546) .270 (.786) 5.09 .045* .316 .539
Happiness -.160 (.951) -.337 (.674) .250 .627 .022 .074
Sadness .027 (.725) -.157 (.703) .350 .566 .031 .084
Surprise .247 (.630) .230 (.832) .003 .958 .000 .050
† significant at the p < .10 level. * significant at the p < .05 level.
Hypothesis 11 was not supported. Ratings of both liking and lifelikeness did not
differ significantly between gestures created by experts and those created by amateurs for any
emotion type.
Hypothesis 12 was partially supported. The normalized measure of recognition was
significantly better for expert-created gestures than for novice-created gestures for the emotion of
fear. Disgust was also conveyed better by puppeteers’ gestures at the p < .10 significance
level. In general, recognition of emotions was very low; ratings of correct emotion were all
under 4 (representing neither agreeing nor disagreeing that the bear was conveying a given
emotion). The high variance of these ratings indicates viewers were not very consistent with
their emotion judgments of gesture videos.
Closer inspection reveals that a possible reason that puppeteers created better gestures
for fear is that viewers mistook fear for happiness or surprise significantly more with novice-
created gestures. For gestures designed to convey fear, normalized ratings of happiness were
significantly higher for novices (M=.423, SD=.820) than for experts (M=-.444, SD=.723),
F(1,11)=8.96, p=.012. The same tendency was found for surprise: ratings for novice gestures
(M=.505, SD=.821) were significantly higher than for puppeteer gestures (M=-.428,
SD=.600), F(1,11)=6.52, p=.027.
4.2.3 Correlation and factor analysis of emotion ratings
In post-study interviews participants commented that some emotions were difficult
to distinguish from one another. To investigate this issue, a simple correlation analysis was
performed between all of the emotions. Results are presented in Table 9. All pairs of negative
emotions (anger, fear, disgust and sadness) were positively correlated with each other, with
the highest correlation between anger and disgust (r = .59, p < .01, Pearson, two-tailed). The
negative emotions of anger and disgust were also negatively correlated with happiness (r = -.11,
p < .01 and r = -.16, p < .01, respectively). The positive emotion of happiness and the
neutral emotion of surprise were positively correlated (r = .47, p < .01).
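A pairwise Pearson correlation of this kind is simple to compute directly; a small sketch with illustrative arrays (not the study ratings):

```python
import numpy as np
from scipy import stats

# Hypothetical viewer ratings of anger and disgust for the same ten gestures.
anger = np.array([5, 4, 6, 2, 3, 5, 4, 6, 1, 2])
disgust = np.array([4, 4, 5, 3, 2, 6, 3, 5, 2, 1])

# Pearson r with a two-tailed p-value, as reported in Table 9.
r, p = stats.pearsonr(anger, disgust)
print(f"r = {r:.2f}, p = {p:.3f}")
```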
Table 9. Simple correlations among emotion ratings by gesture viewers.
Emotion Disgust Fear Happiness Sadness Surprise
Anger 0.59** 0.30** -0.11** 0.27** 0.01
Disgust 0.39** -0.16** 0.28** 0.00
Fear -0.06 0.38** 0.20**
Happiness -0.30** 0.47**
Sadness -0.20**
** significant at the .01 level
Given that these measures were correlated, we ran a factor analysis on the rating data. For
ratings of both non-puppeteer and puppeteer gestures, three components were identified as
shown in Table 10: the first factor consisted of ratings of disgust, anger, fear and sadness
(labelled negative valence); the second factor consisted of lifelikeness and liking (labelled
affinity); the third factor consisted of surprise, happiness and sadness (labelled
positive/neutral valence). Together these three factors accounted for 69.8% and 67.9% of
the total variance for the non-puppeteer and puppeteer groups respectively.
Table 10. Rotated factor loadings for observer ratings of 6 emotion types, likeability and
lifelikeness.

Rating         Factor 1:           Factor 2:          Factor 3:
               Negative valence    Affinity           Positive/neutral valence
               (26.7% variance)    (21.3% variance)   (20.7% variance)
Disgust        .809
Anger          .762
Fear           .745
Sadness        .544                                   -.454
Lifelikeness                       .921
Likeability                        .906
Surprised                                             .851
Happy                                                 .813

Note. N = 712. Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
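The extraction method named in Table 10 (principal components followed by varimax rotation) can be sketched with numpy alone. This is a minimal illustration on random placeholder data, not the actual ratings, and `varimax` here is a textbook implementation rather than the routine the original analysis used:

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=20, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix (standard SVD algorithm)."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lam = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (Lam**3 - (gamma / p) * Lam @ np.diag(np.diag(Lam.T @ Lam)))
        )
        R = u @ vh
        d_new = s.sum()
        if d != 0 and d_new / d < 1 + tol:
            break
        d = d_new
    return Phi @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(712, 8))       # placeholder: 712 observations x 8 rating scales
C = np.corrcoef(X, rowvar=False)    # correlation matrix of the 8 scales
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1][:3]    # keep three components, as in Table 10
loadings = vecs[:, idx] * np.sqrt(vals[idx])
rotated = varimax(loadings)
```

Because varimax is an orthogonal rotation, it redistributes variance across factors to simplify interpretation without changing each variable's communality (its row sum of squared loadings).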
Based on the factor analysis, we coded conveyed emotions into positive and negative
valence. A two-way (puppeteer x valence) repeated measures ANOVA was conducted with
recognition score as the dependent measure, but did not reveal any main effects for puppetry
experience or valence, or any interaction effects between the two factors (see Appendix L). A
discriminant analysis was also conducted with valence as the grouping variable and all viewer
ratings as independent variables, but it did not reveal any variables that predicted group membership (see
Appendix L).
4.3 Discussion and design guidelines
In this experiment puppetry experience had a limited effect on how well robot
gestures were designed. Viewers judged the gestures of both puppeteer and novice groups as
similar in likableness and lifelikeness. Puppeteers created gestures that better portrayed fear
than novices, but not other emotions. The likely explanation for why this effect was not
more pronounced was that the RobotPHONE bear used in this study did not have enough
movement capabilities for the experience of the puppeteers to fully come into play. For such
a simple and constrained robot, non-puppeteers were able to create gestures that were similar
in effectiveness to those of puppeteers. Indeed, the puppeteers complained about the limitations of
the bear (e.g., its inability to touch its face). Nevertheless, this gives some indication that an
individual’s prior experience in manipulating expressive avatars such as puppets may assist in
the design of more complex robotic movements.
Viewers’ ratings of negative emotions (anger, fear, disgust and sadness) were
positively correlated with each other, as were their ratings of the positive/neutral emotions of
happiness and surprise. This suggests that people may be able to judge emotional valence of
robot gestures more easily than specific emotions. Factor analysis revealed that people did
indeed rate gestures consistently for emotional valence. In addition, liking and lifelikeness
were judged similarly, suggesting these measures were related to a general affinity for a
gesture. These results point to the difficulty of emotional expressivity based on robotic
movement alone without the aid of sound, voice or facial expression. As research in
embodied conversational agents shows, however, expressive gestures may be less important
than interactional gestures for robots capable of speech (Cassell & Thorisson, 1999).
Four preliminary design guidelines can be suggested based on experimental results.
First, since emotional understanding was difficult in this experiment as in the previous one,
there is consistent evidence that expressive gestures should be accompanied by voice or facial
expression to make their meaning clear. Moreover, since viewers appeared better able to
judge emotional valence than specific emotion types such as fear or happiness, emotional robot
gestures may be designed with a general valence in mind instead of a specific emotion type.
Since a gesture author’s puppetry expertise did not have much of an impact on the
ability of viewers to correctly identify the gesture’s emotion, it may be safe to use novice
authors to design the gestures of robots with simple movements such as the one used in our
experiment. However, for robots capable of more advanced movements—especially with the
ability to touch their faces—puppeteers may be able to create more effective emotion-
conveying gestures.
Table 11. Design guidelines for robot gestures based on results from Studies 3 and 4.

Result: Gestures in isolation do not convey emotion well. a
  Guideline: Boost emotion understanding (e.g. by including vocal expression).

Result (H2): Viewers are consistent when rating gestures for emotional valence.
  Guideline: Create gestures that express different emotional valences rather than specific emotions.

Result (H12): Expressive gestures created by experienced puppeteers are only marginally better than those created by novices. b
  Guidelines: May use novice authors to design gestures for simple-movement robots; puppeteers may be more beneficial for advanced-movement robots (esp. with face-touching).

a Also found in Studies 1 and 2.
b Only gestures representing fear made by puppeteers received higher recognition scores compared to those made by novices.
4.4 Summary
In Study 3 ten participants (five experienced puppeteers, five novices) used a robotic
teddy bear to create emotion-conveying gestures for each of Ekman’s six basic emotions
(1972). These were videotaped and shown to 12 viewers in Study 4 without sound, facial
expression or contextual knowledge. Viewers were better able to identify the emotion of fear
(but not the other emotions) with gestures designed by puppeteers rather than novices, based
on their post-viewing ratings. However, emotion recognition was low in general. Viewers
were also able to consistently rate gestures for emotional valence as being positive/neutral or
negative.
We propose that the limited benefit of puppetry experience is partly due to the
physical limitations of the robot itself, especially since all puppeteer authors commented on
the difficulty of having the robot express movements without being able to have the
bear’s hands touch its face and stomach. The effect of puppetry expertise on gesture creation
may therefore be more pronounced with robots capable of refined arm and hand movement.
Chapter Five: Conclusion
“Gesture reveals a new dimension of the mind.”
David McNeill, 1992
This chapter synthesizes and summarizes the different work described in the thesis on robot
gesture communication. We revisit the main research questions addressed in this work and
present the main findings from two paired experiments used to investigate various factors
affecting the creation and understanding of robot gestures. We then highlight the important
contributions of our work and show how they can be applied to inform the design of
human-robot interaction. Lastly we examine the limitations of the thesis and suggest how
these may be addressed in future work.
5.1 Summary
The goal of this research is to explore the relatively new research topic of gesture
communication in social robots using a simple robotic teddy bear as a case study. We ask
three main research questions: What information do robot gestures communicate? What
factors affect that communication? Are puppeteers better than amateurs at designing robot
gestures?
We investigated these questions using two paired experiments. Studies 1 and 2 were
exploratory in that they tested what types of content robot gestures convey and the influence
of emotional valence, motional characteristics, situational context and individual differences.
Study 1 had four amateur authors create gestures for each of twelve motivating scenarios.
These were then shown to twelve observers in Study 2 who judged the meaning of the
gestures as well as the likeability and lifelikeness. Studies 3 and 4 focused on the effect of
expertise—specifically that of puppeteers—on gesture communication. Five experienced
puppeteers and five novices created emotion-conveying gestures in Study 3 that were viewed
and judged by twelve participants in Study 4.
5.2 Main findings and contributions
This thesis presents the first comprehensive look at robot gesture communication.
Table 12 presents the 12 hypotheses tested in the thesis and indicates whether evidence was
found for each of these.
The main contributions of the thesis are that we have:
- Developed design guidelines for robot gestures based on our experimental findings
- Provided evidence of the “Robot Equation” (showed how interpersonal effects between humans can be observed in HRI)
- Reviewed literature in social psychology, HCI and HRI pertaining to gesture communication
- Generated a set of testable research hypotheses in the field of robot gestures
- Generated a corpus of robotic gesture videos and animations
We have therefore contributed to the field of robot gestures in terms of practical
design recommendations and a theoretical framework to predict social responses. In
particular, we list in Table 13 the design guidelines developed in this work and their
relationship to the hypotheses and experimental results from all studies.
Table 12. Hypotheses tested in this thesis and findings.

Content communication
  H1: Gestures viewed in isolation will be able to communicate emotional content, but to a limited extent only. (Studies 1 & 2: Yes)
  H2: Gestures viewed in isolation will be able to communicate semantic content, but to a limited extent only. (Studies 1 & 2: Yes)

Emotion valence and type
  H3: Gestures conveying positive emotions will be more liked than those conveying negative or neutral emotions. (Studies 1 & 2: Yes)
  H4: Gestures conveying positive emotions will appear more lifelike than those conveying negative or neutral emotions. (Studies 1 & 2: Yes)

Motion characteristics
  H5: Gesture motion characteristics will differ between emotions. (Studies 1 & 2: Yes)
  H6: Gesture complexity (as measured by changes in direction) will influence judgments of lifelikeness. (Studies 1 & 2: Yes)
  H7: Gesture complexity (as measured by changes in direction) will influence how accurately gesture meaning is recognized. (Studies 1 & 2: Partial a)

Contextual determinants
  H8: Knowledge of situational context will improve understanding of both emotional and semantic content in robot gestures. (Studies 1 & 2: Yes)

Individual differences
  H9: Viewers will be better able to judge gestures created by authors with more similar personalities. (Studies 1 & 2: No)
  H10: Viewers will find gestures created by authors with more similar personalities to be more likeable. (Studies 1 & 2: Yes)
  H11: Puppeteers will create gestures that are more lifelike than amateurs. (Studies 3 & 4: No)
  H12: Puppeteers will create gestures that convey emotion better than amateurs. (Studies 3 & 4: Partial b)

a Effect only present for arm movement and differs between emotions and messages.
b Effect only present for fear emotion.
Table 13. Design guidelines for robot gestures based on results from all studies.

Result: Gestures in isolation do not convey emotion well.
  Guideline: Boost emotion understanding (e.g. by including vocal expression).

Result (H2): Viewers are consistent when rating gestures for emotional valence.
  Guideline: Create gestures that express different emotional valences rather than specific emotions.

Results (H3, H4): Gestures conveying positive emotions are more liked and lifelike. (H6): More arm movements improve lifelikeness.
  Guideline: Use gestures conveying positive emotions with many arm movements to achieve lifelikeness.

Result (H8): Context improves gesture understanding.
  Guidelines: Use contextual information in deciding what gestures the robot will express; make sure the context in which the robot is gesturing is made known to the user.

Result (H10): Higher personality similarity between author and viewer improves gesture likeability.
  Guidelines: If user personality is known, select an author with a similar personality to design the gesture; have the user create gestures for their own robot; mimic gestures of the user.

Result (H12): Expressive gestures created by experienced puppeteers are only marginally better than those created by novices.
  Guidelines: May use novice authors to design gestures for simple-movement robots; puppeteers may be more beneficial for advanced-movement robots (esp. with face-touching).
5.3 Limitations and future work
One of the challenges in carrying out research on human-robot interaction is that
many possible types of robots can be studied. This research used a relatively simple robot to
establish a baseline or lower bound for the types of gesture interpretation that are possible
and to facilitate easy manipulation by subjects. The results may only be directly applied to
social robots that are limited to arm and head movements such as the one used here;
nevertheless, we expect that our results may be generalizable to more sophisticated robots.
This supposition will need to be tested through future inquiry into how the gestures of more
complex robots are interpreted (including movements of the torso, legs and perhaps non-human
limbs such as tails). However, there is evidence that a person’s ability to discern
information such as emotion from visual behaviour is based on motion information such as
kinematics and dynamics rather than static information such as appearance or shape
(Atkinson et al., 2007). Point-light videos, for example, have been found to be as good as
full-light videos (where the whole body or face is shown) in such judgement tasks as gender
recognition (Hill, Jinno & Johnston, 2003).
A further limitation of this study is that we used only puppeteers as our gesture
“experts”; it is possible that individuals from other professions such as choreographers,
anthropologists or social psychologists could create more interpretable and lifelike gestures
than puppeteers. However, we would expect that other experts would not fare much better
with a limited-movement robot such as the teddy bear robot employed in this study. As the
issue of who should design gestures is of particular importance to social robot development,
our work calls for additional research with more complex robots to identify what types of
expertise, skills or characteristics are most beneficial in designing gestures.
This work found evidence that people respond to robot gestures in ways similar to how they
respond to human gestures (the “Robot Equation”). Further research can therefore explore what
additional social communication rules are applicable to human-robot interaction.
Although we conducted experiments in both Canada and Japan, this work does not
explore cross-cultural differences because of disparities in study methodology. While we
expect many of the effects identified here to be common across many cultures, some
characteristics of non-verbal communication are culturally-dependent (Argyle, 1994) and
their impact on robot gesture understanding should be explored in greater depth.
5.4 Final words
This thesis provides the first comprehensive look at the novel area of gesture
understanding in human-robot interaction. As in human non-verbal communication,
gestures of the body play an important role in HRI. While the visions of human-like
androids capable of integrating seamlessly in human society depicted by such shows as Star
Trek and Battlestar Galactica may well come to pass at some point in the future, it seems
likely that social robots will continue to have limited capabilities for the foreseeable future,
making it all the more important that simple gestures be expressive and meaningful. The
results of this research provide reason to believe that, in the absence of full artificial
intelligence, it should still be possible for interaction designers to use interface elements such
as gestures to increase the expressive power and usability of simple social robots.
References
Argyle, M. (1994). The psychology of interpersonal behaviour (5th ed.). London, UK: Penguin
Books.
Asch, S. E. (1946). Forming impressions of personality. Journal of Abnormal and Social
Psychology, 41, 258-290.
Atkinson, A., Tunstall, M., & Dittrich, W. (2007). Evidence for distinct contributions of
form and motion information to the recognition of emotions from body gestures.
Cognition, 104(1), 59-72.
Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion
perception from dynamic and static body expressions in point-light and full-light
displays. Perception, 33, 717–746.
Bacon, F. (1605). The advancement of learning, Book 2. London: Oxford University Press.
Barclay, C. D., Cutting, J. E., & Kozlowski, L. T. (1978). Temporal and spatial factors in
gait perception that influence gender recognition. Perception and Psychophysics, 23,
145–152.
Bartneck, C. (2002). eMuu—an emotional embodied character for the ambient intelligent
home. Ph.D. dissertation, Technical University Eindhoven, The Netherlands.
Bassili, J. N. (1976). Temporal and spatial contingencies in the perception of social events.
Journal of Personality and Social Psychology, 33, 680–685.
Beattie, G. (2003). Visible thought: The new psychology of body language. London, UK:
Routledge.
Biocca, F. (1997). The cyborg’s dilemma: Progressive embodiment in virtual environments.
Journal of Computer-Mediated Communication, 3(2). Available:
http://www.ascusc.org/jcmc/vol3/issue2/biocca2.html.
Blake, R., & Shiffrar, M. (2007). Perception of human motion. Annual Review of Psychology,
58, 47–73.
Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42, 167–175.
Breazeal, C., & Fitzpatrick, P. (2000). “That certain look: Social amplification of animate
vision,” in Proceedings of the AAAI Fall Symposium on Society of Intelligence Agents—
The Human in the Loop, 2000.
Campbell, A., & Rushton, J. (1978). Bodily communication and personality. British Journal
of Social & Clinical Psychology, 17(1), 31-36.
Cardy, R. L., & Dobbins, G. H. (1986). Affect and appraisal accuracy: Liking as an integral
dimension in evaluating performance. Journal of Applied Psychology, 71, 672-678.
Cassell, J. (2000). Embodied conversational interface agents. Commun. ACM, 43(4), 70-78.
Cassell, J., & Thorisson, K. R. (1999). The power of a nod and a glance: envelope vs.
emotional feedback in animated conversational agents. Applied Artificial Intelligence,
13(4), 519-538.
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of
physics problems by experts and novices. Cognitive Science, 5, 121–152.
Clarke, T. J., Bradshaw, M. F., Field, D. T., Hampson, S. E., & Rose, D. (2005). The
perception of emotion from body movement in point-light displays of interpersonal
dialogue. Perception, 34(10), 1171–1180.
Cohen, A. A. (1977). The communicative functions of hand illustrators. Journal of
Communication, 27, 54-63.
Cohen, J. (1977). Statistical Power Analysis for the Behavioral Sciences (Revised Edition). New
York: Academic Press.
de Meijer, M. (1989). The contribution of general features of body movement to the
attribution of emotions. Journal of Nonverbal Behavior, 13, 247–268.
De Raad, B., & Perugini, M. (2002). Big Five assessment. Hogrefe and Huber Publishers.
DeYoung, C., & Spence, I. (2004). Profiling information technology users: en route to
dynamic personalization. Computers in Human Behavior, 20(1), 55-65.
Dittrich, W. H., Troscianko, T., Lea, S., & Morgan, D. (1996). Perception of emotion from
dynamic point-light displays represented in dance. Perception, 25, 727–738.
Efron, D. (1941). Gesture and environment. Morningside Heights, NY: King’s Crown Press.
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories,
origins, usage, and coding. Semiotica, 1, 49-98.
Ekman, P., Friesen, W.V., & Ellsworth, P. (1972). Emotion in the human face: Guidelines for
research and an integration of findings. New York: Pergamon Press.
Ekman, P. (1999). Handbook of Cognition and Emotion. Sussex, U.K.: John Wiley & Sons,
Ltd.
Eysenck, H., & Eysenck, S.B.G. (1991). The Eysenck Personality Questionnaire - Revised.
Sevenoaks: Hodder and Stoughton.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots.
Robotics and Autonomous Systems, 42, 143–166.
Frijda, N. (1986). Emotions. Cambridge: Cambridge University Press.
Gelman, R., Durgin, F., & Kaufman, L. (1995). Distinguishing between animates and
inanimates: not by motion alone. In D. Sperber, D. Premack & A. Premack (Eds.),
Causal cognition: A multi-disciplinary debate (150-184). Oxford: Oxford
University Press.
Goldberg, L. (1981). Language and individual differences: The search for universals in
personality lexicons. Review of Personality and Social Psychology, 2(1), 141-165.
Gosling, E., Rentfrow, P., & Swann Jr., W. (2003). A very brief measure of the Big-Five
personality domains. Journal of Research in Personality, 37, 504–528.
Graham, J. A., & Argyle, M. (1975). A cross-cultural study of the communication of extra-
verbal meaning by gestures. International Journal of Psychology, 10, 57-67.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American
Journal of Psychology, 57, 243–259.
Hill, H., Jinno, Y., & Johnston, A. (2003). Comparing solid-body with point-light
animations. Perception, 32, 561–566.
Hommel, B., Musseler, J., Aschersleben, G., & Prinz, W. (2001). The theory of event
coding (TEC): A framework for perception and action planning. Behav. Brain Sci.,
24, 849–937.
Ishiguro, H., & Minato, T. (2005). Development of androids for studying on human-robot
interaction, in Proceedings of 36th International Symposium on Robotics, TH3H1, Dec.
2005.
Kidd, C., & Breazeal, C. (2003). Comparison of Social Presence in Robots and Animated
Characters.
Krauss, R., Chen, Y., & Chawla, P. (1996). Nonverbal behavior and nonverbal
communication: What do conversational hand gestures tell us? In M. P. Zanna (Ed.),
Advances in Experimental Social Psychology, 28 (389-450). San Diego, CA: Academic Press.
Krauss, R. M., Morrel-Samuels, P., & Colasante, C. (1991). Do conversational hand
gestures communicate? Journal of Personality and Social Psychology, 61, 743-754.
Larkin, J. H. (1983). The role of problem representation in physics. In D. Gentner & A. L.
Stevens (Eds.), Mental models (75–98). Hillsdale, NJ: Lawrence Erlbaum Associates,
Inc.
Lee, K. M., Peng, W., Jin, S.-A., & Yan, C. (2006). Can robots manifest personality? An
empirical test of personality recognition, social responses, and social presence in
human–robot interaction. Journal of Communication, 56, 754–772.
Leslie, A. M. (1994). ToMM, ToBy, and agency: Core architecture and domain specificity.
In L. Hirschfield & S. Gelman (Eds.), Mapping the mind: Domain specificity in
cognition and culture (119–148). Cambridge, England: Cambridge University Press.
Lim, H., Ishii, A., & Takanishi, A. (1999). “Basic emotional walking using a biped
humanoid robot,” in Proceedings of the IEEE SMC, 1999.
Loula, F., Prasad, S., Harber, K., & Shiffrar, M. (2005). Recognizing people from their
movement. Journal of Experimental Psychology: Human Perception and Performance,
31, 210–220.
Marui, N., & Matsumaru, T. (2005). Emotional motion of human-friendly robot:
Emotional expression with bodily movement as the motion media. Nippon Robotto
Gakkai Gakujutsu Koenkai Yokoshu, 23, 2H12.
McCrae, R., & Costa Jr., P. (1989). Reinterpreting the Myers-Briggs Type Indicator from
the Perspective of the Five-Factor Model of Personality. Journal of Personality, 57(1),
17–40.
McCrae, R., & Costa Jr., P. (1987). Validation of the five-factor model of personality across
instruments and observers. Journal of Personality and Social Psychology, 52(1), 81–90.
McNeill, D. (2005). Gesture and Thought. Chicago: The University of Chicago Press.
McNeill, D. (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago: The
University of Chicago Press.
McNeill, D. (1987). Psycholinguistics: A new approach. New York: Harper & Row.
Mizoguchi, H., Sato, T., Takagi, K., Nakao, M., & Hatamura, Y. (1997). “Realization of
expressive mobile robot,” in Proceedings of the International Conference on Robotics
and Automation, 1997, pp. 581–586.
Montepare, J., Koff, E., Zaitchik, D., & Albert, M. (1999). The use of body movements and
gestures as cues to emotions in younger and older adults. Journal of Nonverbal
Behavior, 23(2), 133-152.
Morewedge, C., Preston, J., & Wegner, D. (2007). Timescale bias in the attribution of mind.
Journal of Personality and Social Psychology, 93(1), 1–11.
Nehaniv, C. (2005). “Classifying types of gesture and inferring intent,” in Proc. AISB’05
Symposium on Robot Companions, The Society for the Study of Artificial Intelligence
and Simulation of Behaviour, pp.74–81.
Nomura, T., Suzuki, T., Kanda, T., & Kato, K. (2006). Altered attitudes of people toward
robots: Investigation through the negative attitudes toward robots scale, in Proc.
AAAI-06 Workshop on Human Implications of Human-Robot Interaction, 29-35.
Opfer, J. (2002). Identifying living and sentient kinds from dynamic information: the case of
goal-directed versus aimless autonomous movement in conceptual change. Cognition,
86, 97-122.
Pollick, F. E., Paterson, H. M., Bruderlin, A., & Sanford, A. J. (2001). Perceiving affect
from arm movement. Cognition, 82, B51–61.
Premack, D. (1990). The infant’s theory of self-propelled objects. Cognition, 36, 1–16.
Prinz, W. (1997). Perception and action planning. Eur. J. Cogn. Psychol, 9, 129–54.
Rakison, D. H., & Poulin-Dubois, D. (2001). Developmental origin of the animate–inanimate
distinction. Psychological Bulletin, 127(2), 209–228.
Reeves, B., & Nass, C. (1996). The Media Equation. Cambridge: Cambridge University
Press.
Rochat, P., Morgan, R., & Carpenter, M. (1998). Young infants’ sensitivity to movement
information specifying social causality. Cognitive Development, 12, 537–561.
Rosenthal, R., & DePaulo, B. (1979). Sex differences in eavesdropping on nonverbal cues.
Journal of Personality and Social Psychology, 37(2), 273–285.
Ryckman, R. (2004). Theories of Personality. Belmont, CA: Thomson/Wadsworth.
Sawada, M., Suda, K., & Ishii, M. (2003). Expression of emotions in dance: relation
between arm movement characteristics and emotion. Perceptual and Motor Skills, 97,
697–708.
Scheeff, M., Pinto, J., Rahardja, K., Snibbe, S., & Tow, R. (2000). “Experiences with Sparky:
A social robot,” in Proceedings of the Workshop on Interactive Robot Entertainment,
2000.
Schlosberg, H. (1954). Three dimensions of emotion. Psychological Review, 61(2), 81–88.
Searle, J. R. (1969). Speech acts: An essay in the philosophy of language. Cambridge: Cambridge
University Press.
Sekiguchi, D., Inami, M., & Tachi, S. (2004). “The design of internet-based
RobotPHONE,” in Proceedings of 14th International Conference on Artificial Reality,
223–228.
Sekiguchi, D., Inami, M., & Tachi, S. (2001). “RobotPHONE: RUI for interpersonal
communication,” in CHI2001: Extended Abstracts, 277–278.
Shaarani, A. S., & Romano, D. M. (2006). “Basic Emotions from Body Movements,” in
(CCID 2006) The First International Symposium on Culture, Creativity and Interaction
Design. HCI 2006 Workshops, The 20th BCS HCI Group conference. Queen Mary,
University of London, UK.
Sidner, C., Lee, C., Morency, L.-P., & Forlines, C. (2006). “The effect of head-nod
recognition in human-robot conversation,” in Proc. of ACM SIGCHI/SIGART
conference on HRI, 2006, 290–296.
Staw, B. M., Sutton, R. I., & Pelled, L. H. (1994). Employee positive emotion and favorable
outcomes at the workplace. Organization Science, 5(1), 51–71.
Trafton, J., Trickett, S., Stitzlein, C., Saner, L., Schunn, C., & Kirschenbaum, S. (2006).
The Relationship Between Spatial Transformations and Iconic Gestures. Spatial
Cognition & Computation, 6(1), 1–29.
Tremoulet, P. D., & Feldman, J. (2000). Perception of animacy from the motion of a single
object. Perception, 29, 943–951.
Appendix A: Materials for Studies 1 and 2
This section presents the experimental materials used in Studies 1 and 2. Because these studies were conducted in Japan, the materials are provided in both English and Japanese.
Consent Form (Study 1)
クマ型ロボットを用いたメッセージを伝えるジェスチャーに関する研究(実験1)
実験者: Jamy Li (Primary investigator): 080-3084-0243 [email protected]
You are being invited to participate in a research study. Before agreeing to participate, it is important that you read and understand the following explanation of the proposed study procedures. The following information describes the purpose, procedures, benefits and risks associated with the study. It also describes your right to refuse to participate or to withdraw from the study at any time. In order to decide whether you wish to participate in this research study, you should understand enough about its risks and benefits to be able to make an informed decision. This is known as the informed consent process. If you have any questions please contact the study investigators (contact information listed above). Please make sure all your questions have been answered to your satisfaction before signing this document. 以下に本実験の目的、手続き、効用とリスクについてご説明します。これはインフォームド・コンセントと呼ば
れる手続きです。この実験にご参加いただくにあたって、内容をよく理解し、リスクと効用を知っていただい
た上でご参加下さい。あなたには実験に参加しない権利、いつでも中断する権利があります。もし何かご質
問がありましたら遠慮なく実験者にお尋ね下さい。すべての疑問が解消されましたら、参加の同意のご署
名をお願いいたします。 Background & Purpose of Research: How can we design robots to interact with people in more social ways? Since people use gestures to convey information, we are investigating how the gestures of a robot bear can do the same. The purpose of this study is to identify different gestures of a robot bear that may communicate messages to an observer, and to investigate how different situations affect how gestures are perceived. 本研究の目的と背景 我々(実験者)は、人と社会的に関わるロボットのデザインに関心を持っています。人は情報を伝えるため
に様々なジェスチャーを使いますが、本実験では、クマ型ロボットのジェスチャーがどの程度同じことができ
るかを知りたいと考えています。本実験の目的は、ロボットのジェスチャーが観察者にどのようなメッセージ
を伝えるか、また、状況が異なった時にそのジェスチャーの知覚がどのように影響を受けるかを調べること
です。 Procedures: This study will involve creating gestures with a toy bear robot. First we will ask you to complete questionnaires about your demographic information, experience with technology and personality. Then you will be presented with a series of scenarios from everyday life, and for each scenario you will be asked to create a gesture for the bear that you think conveys a message. At the end of the session you will be asked some questions about how you came up with the gestures. The bear will be video-recorded during the session and you will be audio-recorded. The entire session will last approximately 1 hour. 手続き この実験ではクマのぬいぐるみの形をしたロボットのジェスチャーを用います。実験では、まずあなたの年
齢性別などの一般的な情報や、テクノロジーに対する経験、性格特性などに関する質問をアンケート用紙
で答えていただきます。その後、日常生活における状況を提示しますので、それぞれの状況において、適
当なメッセージを伝えていると思われるジェスチャーを、実際にクマのロボットを動かして作っていただきま
す。最後にそのジェスチャーをどのような考えで作ったのかをお聞きします。実験中、クマロボットの動きと
あなたの音声をビデオで撮影します。 実験全体にかかる時間は約 1 時間を想定しています。
Benefits: Information learned from this study will be used to gain insight into how people respond to gestures in a robot. We will evaluate the gestures created in a future study, and the results have potential to guide the design of new social robots and improve human-robot interaction. 効用 この実験で得られる情報は、人間がロボットのジェスチャーにどのように反応するかを知る手がかりを提供し
ます。作成されたジェスチャーは次の段階の実験で用いる予定です。この結果は社会的ロボットをデザイン
し、人間とロボットのインタラクションを改善するのに役立つことが期待されます。 Risks: There is minimal risk involved in this study. リスク この実験は危険なものではありません。リスクはきわめて低いと考えられます。 Privacy & Confidentiality: All information obtained during the study will be held in strict confidence. We ask that you keep the proceedings of the study confidential. You will be identified with a study number only, and the coding record will be stored in a secure cabinet accessible to the investigators only. No information identifying you will be transferred outside the investigators in this study. Original tapes, transcripts, and written observations will be stored in a secure cabinet until the end of the study. All tapes will be destroyed after the analysis is completed and manuscript written and published. プライバシーについて 本実験で知りえた情報は慎重に管理し、上で述べた研究の目的以外に用いることはありません。 テープ、トランスクリプト(発言を文字に書き起こしたもの)、手書きの観察メモなどのデータおよび分析中の
記録は実験関係者のみがアクセスできる安全な場所で管理します。分析の終了後はテープは廃棄します
分析された結果は学術的な目的で公開されることがありますが、あなた個人が特定できるような情報を実験
関係者以外に公開することはありません。 また、今後の実験に影響を与える恐れがありますので、本実験の内容を他の方に口外なさらないようお願
いいたします。 Participation: Your participation in this study is voluntary. You can choose to not participate or you may withdraw at any time without any consequences. 実験への参加 本実験への参加は自由意志によるものです。あなたは参加しないこともできますし、参加を途中でやめるこ
ともできます。 Publication of research findings: We will publish our results in aggregate form only. Transcripts and video recordings will not be released. Summary of data and quotes from transcripts may be published, but not in a manner which allows the data of individuals to be identified. 結果の公表 本実験の分析結果を、学術的な目的で公表する場合があります。 その際、要約されたデータやトランスクリプトからの引用を用いることがありますが、あなた個人が特定できる
ような情報は公開しません。
Honorarium: As an appreciation for your participation, you will receive JPY¥1500 after the study session. 謝礼 貴重なお時間を割いてご協力いただくお礼として、実験終了後に 1500 円をお渡しします。 Questions: If you have any general questions about the study, please call Jamy Li at 080-3084-0243. 質問 もし研究全般に関してご質問があれば、Jamy Li (080-3084-0243)にご連絡ください。
同意書 (Consent) I have had the opportunity to discuss this study, and my questions have been answered to my satisfaction. I consent to take part in the study with the understanding that I may withdraw at any time without any consequences. I have received a signed copy of this consent form. I voluntarily consent to participate in this study. 私は本研究についての説明を受け、私の疑問に対して納得のいく回答が得られました。 私はいつでも参加を中断できることを理解した上で、自分の意思でこの実験に参加することに同意します。
私はサイン済みの同意書のコピーを受け取りました。 Participant to complete: Do you understand that you have been asked to participate in a research study? 本実験に参加を求められていることを理解しましたか? はい いいえ Have you received and read the attached information sheet? 添付された情報シートを受け取り、読みましたか? はい いいえ Do you understand the risks and benefits involved in taking part in this study? 本実験の効用とリスクについて理解しましたか? はい いいえ Do you understand that you are free to refuse to participate or withdraw from the study at any time? 参加を拒否することも実験途中で参加をやめることもできることを理解しましたか?
はい いいえ Do you understand that all videotaped/audiotaped components will be identified by number and your name will be kept confidential? あなたの名前が明かされないよう、すべての映像/音声データは数字で名付けられた上で分析されること
を理解しましたか? はい いいえ Do you understand that you will not be identified in any publications that may result from this study? 本研究の結果が公表されるとしても、あなた個人は特定されないことを理解しましたか?
はい いいえ I agree to participate in this study and I allow the study team to conduct the interview. 私はこの実験に参加し、実験者からインタビューを受けることに同意します。 ____________________________ ___________ ______________________ Signature of Participant Date Witness 参加者の署名 日付 証人の署名 ____________________________ ______________________ Printed Name Printed Name 参加者の名前 証人の名前 I believe that the person signing this form understands what is involved in the study and voluntarily agrees to participate. 上記参加者は、実験内容について理解し、自発的に参加に同意しました。 ____________________________ _______________ Signature of Investigator/Designee Date 実験者のサイン 日付
Instructions for Subject (Study 1) 実験1の教示 The purpose of this experiment is to create gestures with the toy bear in front of you. Here is a list of scenarios based on situations in everyday life. <hand over sheet> For each scenario, please do the following: First, read the scenario out loud. Pretend that you are in this scenario. Then, pretend that you see the bear robot. Create a gesture that the bear will make. The gesture must try to convey a message to you. Take your time to experiment with different movements of the bear. Keep in mind that the bear can only move its head and its two arms at the shoulder (it cannot move its elbows or legs). Once you have decided on your gesture, tell the experimenter you are ready to record the gesture and do the gesture again. If you make a mistake you can start again. After you are done with the gesture, write down the bear's message (i.e. what is the bear trying to tell you?) on the sheet of paper. If you think the gesture has more than one message, write down all of them. Last, answer the two statements. For each statement circle a number to indicate how much you agree or disagree with that statement. If you have any questions at any point in the experiment please ask us. この実験では、目の前のクマのぬいぐるみ型ロボットでジェスチャーを作ってもらいます。 これは日常生活の中の状況に基づいたシナリオのリストです。”紙を渡す” それぞれのシナリオに対して以下のことをしてください: まず、シナリオを声に出して読んでください。 シナリオの中に入ったつもりで想像してください。 次に、その状況でクマのぬいぐるみを見たと仮定して、クマがどんなジェスチャーをするかを考えて作ってください。時
間を使っていろいろな動きを試してみてください。 クマロボットが動かせるのは頭(縦方向と横方向)と両腕(肘は動きません)のみで、 肘や足は動きません。 ジェスチャが完成したら実験者に記録する準備ができたと伝え、記録のためにジェスチャを行なってください。もし間
違えてしまった場合はやり直すことができます。 ジェスチャーを終えたら、クマロボットがどのようなメッセージをあなたに伝えてきそうかを考え、紙に記入してください。
もしメッセージが複数ある場合には、それらをすべて書いてください。 最後に、二つの質問に答えてください。それぞれの質問に対して、どのくらい当てはまると思うか数字を選んでくださ
い。 実験中に何か質問があればいつでも質問してください。
Pre-study Questionnaire (Study 1) 実験前の質問(実験1) PART A – DEMOGRAPHIC INFORMATION デモグラフィックな情報 For each question, please circle the appropriate answer. 以下の質問で、あてはまるものを選んでください。 1. Please select your age group あなたの年齢にあてはまるグループ 1) 18-20 2) 21-25 3) 26-30 4) 31-35 5) 36-40 6) 41-50 7) 51-60 8) 61 歳以上 2. What is your gender? (Optional) 性別(任意) 1) Male 男性 2) Female 女性 3. Are you a native speaker of English? あなたは英語のネイティブスピーカーですか? 1) Yes はい 2) No いいえ 4. What is the highest level of education completed? 最終学歴は? 1) Less than High School 高校以下 2) High School Grad/GED 高校 3) Some College/Tech.College Graduate 短大・専門学校 4) University Graduate 大学(学部) 5) Post-Graduate 大学院 PART B – TECHNOLOGY PROFILE INVENTORY テクノロジーに対する考え We would like to know how comfortable you are with using computer technology. For each of the following questions, please circle the number that corresponds with your comfort level. 以下の文章があなたのコンピュータやロボットに対する考えや態度に対しどの程度一致しているかを 5 段階でお答えください。
Strongly disagree
全く同意しな
い
Disagree 同意しない
Neutral どちらとも
いえない
Agree 同意する
Strongly agree
強く同意す
る
1) Learning about computers can be fun even when it isn’t useful. コンピュータについて学ぶことは、たとえそれが役立たなくても楽し
い。 1 2 3 4 5
2) I don’t want to know more about computers than I have to. コンピュータについて、必要以上のことは学びたいとは思わない。
1 2 3 4 5
3) I am confident in my ability to master new skills with computers. コンピュータに関わる新しいスキルをマスターする能力に自信がある。
1 2 3 4 5
4) I can solve most problems with computers by myself. コンピュータに関わる大抵の問題は自分で解決できる。
1 2 3 4 5
5) I don’t care about how technology works as long as I can do what I want with it. テクノロジーは、それを使って自分がしたいことができればよく、しくみ
自体には興味はない。
1 2 3 4 5
6) I want to use new technologies as soon as possible. 新しいテクノロジーはいち早く使ってみたい。
1 2 3 4 5
7) I would feel uneasy if robots really had emotions. もしロボットが本当に感情を持ったら不安だ。
1 2 3 4 5
8) Something bad might happen if robots developed into living beings. ロボットが生き物に近づくと、人間にとってよくないことがありそうな気
がする。
1 2 3 4 5
9) I would feel relaxed talking with robots. ロボットと会話すると、とてもリラックスできるだろう。
1 2 3 4 5
10) If robots had emotions, I would be able to make friends with them. ロボットが感情を持ったら、親しくなれるだろう。
1 2 3 4 5
11) I feel comforted being with robots that have emotions. 感情的な動きをするロボットを見ると、気分がいやされる。
1 2 3 4 5
PART C – PERSONALITY QUESTIONNAIRE 性格に関する質問 以下の項目が、あなたの性格を表す言葉としてどの程度適当であるかを 7 段階でお答えください。
• Extraverted, enthusiastic. 外向的、熱中しやすい 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
Critical, quarrelsome. 批判的、短気な 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Dependable, self-disciplined. 頼りになる、自制力のある 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Anxious, easily upset. 心配症、動揺しやすい 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Open to new experiences, complex. 新しい経験を好む、複雑な 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Reserved, quiet. 控え目な、静かな 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Sympathetic, warm. 思いやりのある、温かい 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Disorganized, careless. いい加減な、不注意な 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Calm, emotionally stable. 冷静な、気持ちが安定した 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
• Conventional, uncreative. 型にはまった、非創造的な 全く同意しない 同意しない どちらかといえ
ば 同意しない
どちらでも ない
どちらかといえ
ば 同意する
同意する 強く同意する
1 2 3 4 5 6 7
Scenarios (Study 1) 状況(実験1) Please create a gesture using the robot bear for each of the following scenarios. Write down the message(s) you think the gesture conveys, then circle a number for each statement to indicate the extent to which you agree or disagree with that statement. クマ型ロボットを用いて、下の1~11 の状況に応じたジェスチャーを作ってください。また、そのジェスチャーが伝える
メッセージを下の枠に記入してください。最後に、作成したジェスチャーに対するあなたの確信度を表す2つのステー
トメントを 7 段階で評価してください。 Scenario 1: You have just returned home.
あなたは今家に帰ってきました。
Message(s) Conveyed by Gesture: ジェスチャーが
伝えるメッセー
ジ
Statement 全く同意でき
ない 同意できない どちらかとい
えば同意でき
ない
どちらともい
えない どちらかとい
えば同意する 同意する 強く同意する
This gesture is appropriate for the scenario. このジェスチャーはこの状
況に適切である。
1 2 3 4 5 6 7
This gesture and this message agree with each other. このジェスチャーとメッセー
ジはよく合っている。
1 2 3 4 5 6 7
Scenario 2: You are in the living room watching TV. You laugh at a funny show.
あなたは居間にいて、面白い番組をテレビで見て笑っています。
Message(s) Conveyed by Gesture: ジェスチャーが
伝えるメッセー
ジ
Statement 全く同意でき
ない 同意できない どちらかとい
えば同意でき
ない
どちらともい
えない どちらかとい
えば同意する 同意する 強く同意する
This gesture is appropriate for the scenario. このジェスチャーはこの状
況に適切である。
1 2 3 4 5 6 7
This gesture and this message agree with each other. このジェスチャーとメッセー
ジはよく合っている。
1 2 3 4 5 6 7
Note: Additional sheets omitted.
Interview Questions (Study 1) インタビュー(実験1)
1. How did you create your gestures? What strategies, ideas or inspirations did you use? あなたはどのようにジェスチャーを作りましたか?どんなストラテジー、考え、ひらめきがあったか教えてください。 Receipt of Honorarium Form (Study 1) 領収証(実験1) 領収証 研究プロジェクト: クマ型ロボットを用いたメッセージを伝えるジェスチャーに関する研究(実験1) 実験者: Jamy Li (Primary investigator): 080-3084-0243 [email protected] 私は上記の研究の実験参加への謝礼として 1500 円を受け取りました。 ____________________________ ___________ 参加者の署名 日付 ____________________________ 参加者の名前
Instructions for Subject (Study 2) 実験 2 の教示 Your task in this experiment is to view bear animations and bear robot videos and try to identify what message the bear is communicating to you. You will be shown a series of animations and robot videos. For each viewing, fill in the information on this sheet <hand over sheet>. Select 3 messages from the list provided that you think are the best matches and indicate how much you agree or disagree that the message matches the gesture. Some of the videos will have associated scenarios and some will not. For those with scenarios, pretend that you are in the scenario when you see the bear. If you have any questions at any point in the experiment please ask us. この実験では、クマのアニメーションとクマのロボットの映像を見て、 クマがあなたに伝えようとしているメッセージが何かを 考えてもらいます。 いくつかのアニメーションとロボットの映像を見ていただきます。 それぞれに対し、この用紙の項目に記入してください(紙を渡す) リストにあるメッセージの中からそれに最も合うと思うものを3つ選び、 選んだメッセージがそのジェスチャにどの位合うか評価してください。 ビデオにはシナリオがあるものとないものがあります。 シナリオがあるものに関しては、あなたがシナリオの中にいて クマを見ていると想定してください。 実験中に何か質問があればいつでも質問してください。
Interview Questions (Study 2) インタビュー(実験2)
1. How did you judge what message the gestures were conveying? What strategies, ideas or inspirations did you use?
あなたはどのようにしてジェスチャーが伝えようとするメッセージを判断しましたか?どんなストラテジー、考
え、ひらめきがあったか教えてください。 2. Was it easier to identify the message with the bear animation or the bear robot? クマのアニメーションとクマのロボットのどちらがメッセージを判断しやすかったですか? 3. Was it easier to identify the message with a scenario or without one?
シナリオつきとシナリオなしではどちらが判断しやすかったですか?
It was easier to identify the message with the bear animation than with the bear robot クマのアニメーションのほうがクマのロボットよりもメッセージを判断しやすかった。 It was easier to identify the message with a scenario than without one シナリオつきのほうがシナリオなしよりも判断しやすかった。
Scenario List
1. You have just returned home. あなたは今家に帰ってきました。
2. You are in the living room watching TV. You laugh at a funny show. あなたは居間にいて、面白い番組をテレビで見て笑っています。
3. You reached toward the bear to pick it up. あなたはこのクマを抱き上げようとして、クマの方に手を伸ばしました。
4. You patted the bear's head. あなたはクマの頭をなでました。
5. After working (studying) at your desk for a long time, you yawned. 机に向かって長時間仕事(勉強)をしていたあなたは、あくびをしました。
6. You took off your clothes. あなたは服を脱ぎました。
7. You started eating dinner. あなたは夕食を食べ始めました。
8. You said "good night" to the bear. あなたはクマに「おやすみ」と言いました。
9. You started crying. あなたは泣き始めました。
10. You started listening to music on your computer. あなたはコンピュータで音楽を聴きはじめました。
11. You are working at your computer. あなたはコンピュータに向かって作業をしています。
12. You started drinking beer. あなたはビールを飲み始めました。
Appendix B: Materials for Studies 3 and 4
This section contains the experimental materials for Studies 3 and 4.
Consent Form (Study 3)
Title of research project: A Study on Message- and Emotion-Conveying Gestures with a Bear Robot (Study 1 of 2) Investigators: Mark Chignell (Primary investigator): 416-978-8951 [email protected] Li: 416-946-3995
[email protected] You are being invited to participate in a research study. Before agreeing to participate, it is important that you read and understand the following explanation of the proposed study procedures. The following information describes the purpose, procedures, benefits and risks associated with the study. It also describes your right to refuse to participate or to withdraw from the study at any time. In order to decide whether you wish to participate in this research study, you should understand enough about its risks and benefits to be able to make an informed decision. This is known as the informed consent process. If you have any questions please contact the study investigators (contact information listed above). Please make sure all your questions have been answered to your satisfaction before signing this document. Background & Purpose of Research: How can we design robots to interact with people in more social ways? Since people use gestures to convey information, we are investigating how the gestures of a robot bear can do the same. The purpose of this study is to identify different gestures of a robot bear that may communicate messages to an observer, and to investigate how different situations affect how gestures are perceived. Procedures: This study will involve creating gestures with a toy bear robot. First we will ask you to complete questionnaires about your demographic information, experience with technology and personality. Then you will be presented with a series of emotions and messages, and for each you will be asked to create a gesture for the bear that you think conveys that emotion or message. The entire session will last approximately 45 min. Benefits: Information learned from this study will be used to gain insight into how people respond to gestures in a robot. 
We will evaluate the gestures created in this study in a follow-up study, and the results have potential to guide the design of new social robots and improve human-robot interaction. Risks: There is minimal risk involved in this study.
Privacy & Confidentiality: All information obtained during the study will be held in strict confidence. We ask that you keep the proceedings of the study confidential. You will be identified with a study number only, and the coding record will be stored in a secure cabinet accessible to the investigators only. No information identifying you will be transferred outside the investigators in this study. Original tapes, transcripts, and written observations will be stored in a secure cabinet until the end of the study. All recordings will be destroyed after the analysis is completed and manuscript written and published. Participation: Your participation in this study is voluntary. You can choose to not participate or you may withdraw at any time without any consequences. Publication of research findings: We will publish our results in aggregate form only. Transcripts and recordings will not be released. Summary of data and quotes from transcripts may be published, but not in a manner which allows the data of individuals to be identified. Honorarium: As an appreciation for your participation, you will receive CDN$15.00 after the study session. Questions: If you have any general questions about the study, please call Jamy Li at 416-946-3995. If you have questions about your rights as a research subject, please contact the Ethics Review Office at 416-946-3273 or email: [email protected] Consent I have had the opportunity to discuss this study and my questions have been answered to my satisfaction. I consent to take part in the study with the understanding I may withdraw at any time. I voluntarily consent to participate in this study. Participant Name: ___________________________ Participant Signature: ________________________ Date: _____________________________________
Questionnaires (Study 3) PART A – DEMOGRAPHIC INFORMATION For each question, please circle the appropriate answer. 1. Please select your age group 1) 18-20 2) 21-25 3) 26-30 4) 31-35 5) 36-40 6) 41-50 7) 51-60 8) 61 or over 2. What is your gender? (Optional) 1) Male 2) Female
3. Are you a native speaker of English? 1) Yes 2) No 4. What is your highest level of education in progress or completed? 1) Less than High School 2) High School 3) College/Technical College 4) University (Bachelors) 5) Post-Graduate or Professional Degree
PART B – TECHNOLOGY ATTITUDE QUESTIONNAIRE We would like to know how comfortable you are with robots. For each of the following questions, please circle the number that corresponds with your comfort level.
Strongly disagree
Disagree Neutral Agree Strongly agree
1) I would feel uneasy if robots really had emotions. 1 2 3 4 5 2) Something bad might happen if robots developed
into living beings. 1 2 3 4 5
3) I would feel relaxed talking with robots. 1 2 3 4 5 4) I would feel uneasy if I was given a job where I had
to use robots. 1 2 3 4 5
5) If robots had emotions, I would be able to make friends with them. 1 2 3 4 5
6) I feel comforted being with robots that have emotions. 1 2 3 4 5
7) The word “robot” means nothing to me. 1 2 3 4 5 8) I would feel nervous operating a robot in front of
other people. 1 2 3 4 5
9) I would hate the idea that robots or artificial intelligences were making judgments about things. 1 2 3 4 5
10) I would feel very nervous just standing in front of a robot. 1 2 3 4 5
11) I feel that if I depend on robots too much, something bad might happen. 1 2 3 4 5
12) I would feel paranoid talking with a robot. 1 2 3 4 5 13) I am concerned that robots would be a bad
influence on children. 1 2 3 4 5
14) I feel that in the future society will be dominated by robots. 1 2 3 4 5
PART C – PERSONALITY QUESTIONNAIRE Here are a number of personality traits that may or may not apply to you. Please write a number for each statement to indicate the extent to which you agree or disagree with that statement. You should rate the extent to which the pair of traits applies to you, even if one characteristic applies more strongly than the other. This questionnaire is adapted from the Big Five Personality test from Gosling et al. (2003). Disagree strongly
Disagree moderately
Disagree a little
Neither agree nor disagree
Agree a little
Agree moderately
Agree strongly
1 2 3 4 5 6 7 I see myself as:
1. _____ Extraverted, enthusiastic.
2. _____ Critical, quarrelsome.
3. _____ Dependable, self-disciplined.
4. _____ Anxious, easily upset.
5. _____ Open to new experiences, complex.
6. _____ Reserved, quiet.
7. _____ Sympathetic, warm.
8. _____ Disorganized, careless.
9. _____ Calm, emotionally stable.
10. _____ Conventional, uncreative.
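For reference, the ten items above are the Ten-Item Personality Inventory (TIPI; Gosling et al., 2003). The thesis does not give a scoring procedure, but under the standard TIPI key, items 2, 4, 6, 8 and 10 are reverse-scored (8 minus the rating on the 1-7 scale) and each Big Five dimension is the mean of one normal and one reversed item. A minimal sketch of that arithmetic follows; the function name and data layout are illustrative, not from the thesis.

```python
# Hypothetical sketch: scoring the Ten-Item Personality Inventory (TIPI)
# used in Part C above, assuming the standard scoring key from Gosling et al.
# Item numbers follow the questionnaire order printed above.

def score_tipi(responses):
    """responses: dict mapping item number (1-10) to a rating on the 1-7 scale."""
    def rev(x):
        return 8 - x  # reverse-score on a 1-7 Likert scale

    # Each dimension averages one normal item and one reverse-scored item.
    return {
        "Extraversion": (responses[1] + rev(responses[6])) / 2,
        "Agreeableness": (rev(responses[2]) + responses[7]) / 2,
        "Conscientiousness": (responses[3] + rev(responses[8])) / 2,
        "Emotional Stability": (rev(responses[4]) + responses[9]) / 2,
        "Openness": (responses[5] + rev(responses[10])) / 2,
    }

# Example: a respondent who answers 7 on every item lands at the scale
# midpoint on each dimension, since (7 + rev(7)) / 2 = (7 + 1) / 2 = 4.0.
print(score_tipi({i: 7 for i in range(1, 11)}))
```

Because each pair mixes a trait-positive and a trait-negative item, uniform responding cancels out, which is one reason the reversed items are included.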
Gestures
Please manipulate the robot bear to convey each of the following: 1) I’m angry.
2) I’m disgusted.
3) I’m afraid.
4) I’m happy.
5) I’m sad.
6) I’m surprised.
Honorarium Receipt I acknowledge that I have received an honorarium of $15.00 for participation in the study entitled “A Study on Message- and Emotion-Conveying Gestures with a Bear Robot”. Participant Name: ___________________________ Participant Signature: ________________________ Date: _____________________________________
Scripted Instructions (Study 3) The purpose of this experiment is to create gestures by manipulating the toy bear in front of you. You will be asked to create 6 different gestures, each corresponding to an emotion on this sheet. Before you begin, please spend some time familiarizing yourself with how the robot moves. You may only move its head and two arms. Each gesture should begin at a neutral position with the head facing straight and both arms down. Each gesture can be up to 30 seconds in length. Let me know if you have any questions and when you’re ready to start.
Consent Form (Study 4) Title of research project: A Study on Emotion-Conveying Gestures with a Bear Robot (Study 2 of 2) Investigators: Mark Chignell (Primary investigator): 416-978-8951 [email protected] Li: 416-946-3995 [email protected]
You are being invited to participate in a research study. Before agreeing to participate, it is important that you read and understand the following explanation of the proposed study procedures. The following information describes the purpose, procedures, benefits and risks associated with the study. It also describes your right to refuse to participate or to withdraw from the study at any time. In order to decide whether you wish to participate in this research study, you should understand enough about its risks and benefits to be able to make an informed decision. This is known as the informed consent process. If you have any questions please contact the study investigators (contact information listed above). Please make sure all your questions have been answered to your satisfaction before signing this document. Background & Purpose of Research: How can we design robots to interact with people in more social ways? Since people use gestures to convey information, we are investigating how the gestures of a robot bear can do the same. The purpose of this study is to identify different gestures of a robot bear that may communicate messages to an observer, and to investigate how different situations affect how gestures are perceived. Procedures: This study will involve watching videos of gestures of a toy bear robot. First we will ask you to complete questionnaires about your demographic information, attitude about robots and personality. Then you will be presented with a series of gesture videos, and for each you will be asked to complete a brief questionnaire. You will be asked a short interview question about the videos at the end of the study. The entire session will last approximately 1 hour. Benefits: Information learned from this study will be used to gain insight into how people respond to gestures in a robot. The results have potential to guide the design of new social robots and improve human-robot interaction.
Risks: There is minimal risk involved in this study.
Privacy & Confidentiality: All information obtained during the study will be held in strict confidence. We ask that you keep the proceedings of the study confidential. You will be identified with a study number only, and the coding record will be stored in a secure cabinet accessible to the investigators only. No information identifying you will be transferred outside the investigators in this study. Original transcripts and written observations will be stored in a secure cabinet until the end of the study. All recordings will be destroyed after the analysis is completed and manuscript written and published. Participation: Your participation in this study is voluntary. You can choose to not participate or you may withdraw at any time without any consequences. Publication of research findings: We will publish our results in aggregate form only. Transcripts and recordings will not be released. Summary of data and quotes from transcripts may be published, but not in a manner which allows the data of individuals to be identified. Honorarium: As an appreciation for your participation, you will receive CDN$12.00 after the study session. Questions: If you have any general questions about the study, please call Jamy Li at 416-946-3995. If you have questions about your rights as a research subject, please contact the Ethics Review Office at 416-946-3273 or email: [email protected] Consent I have had the opportunity to discuss this study and my questions have been answered to my satisfaction. I consent to take part in the study with the understanding I may withdraw at any time. I voluntarily consent to participate in this study. Participant Name: ___________________________ Participant Signature: ________________________ Date: _____________________________________
Questionnaires (Study 4) PART A – DEMOGRAPHIC INFORMATION For each question, please circle the appropriate answer. 1. Please select your age group 1) 18-20 2) 21-25 3) 26-30 4) 31-35 5) 36-40 6) 41-50 7) 51-60 8) 61 or over 2. What is your gender? (Optional) 1) Male 2) Female
3. Are you a native speaker of English? 1) Yes 2) No 4. What is your highest level of education in progress or completed? 1) Less than High School 2) High School 3) College/Technical College 4) University (Bachelors) 5) Post-Graduate or Professional Degree
PART B – ROBOT ATTITUDE QUESTIONNAIRE We would like to know how comfortable you are with robots. For each of the following questions, please circle the number that corresponds with your comfort level.
Strongly disagree
Disagree Neutral Agree Strongly agree
1) I would feel uneasy if robots really had emotions. 1 2 3 4 5 2) Something bad might happen if robots developed
into living beings. 1 2 3 4 5
3) I would feel relaxed talking with robots. 1 2 3 4 5 4) I would feel uneasy if I was given a job where I had
to use robots. 1 2 3 4 5
5) If robots had emotions, I would be able to make friends with them. 1 2 3 4 5
6) I feel comforted being with robots that have emotions. 1 2 3 4 5
7) The word “robot” means nothing to me. 1 2 3 4 5 8) I would feel nervous operating a robot in front of
other people. 1 2 3 4 5
9) I would hate the idea that robots or artificial intelligences were making judgments about things. 1 2 3 4 5
10) I would feel very nervous just standing in front of a robot. 1 2 3 4 5
11) I feel that if I depend on robots too much, something bad might happen. 1 2 3 4 5
12) I would feel paranoid talking with a robot. 1 2 3 4 5 13) I am concerned that robots would be a bad
influence on children. 1 2 3 4 5
14) I feel that in the future society will be dominated by robots. 1 2 3 4 5
87
PART C – PERSONALITY QUESTIONNAIRE
Here are a number of personality traits that may or may not apply to you. Please write a number for each statement to indicate the extent to which you agree or disagree with that statement. Rate the extent to which the pair of traits applies to you, even if one characteristic applies more strongly than the other. This questionnaire is adapted from the Big Five personality measure of Gosling et al. (2003).

1 = Disagree strongly, 2 = Disagree moderately, 3 = Disagree a little, 4 = Neither agree nor disagree, 5 = Agree a little, 6 = Agree moderately, 7 = Agree strongly

I see myself as:
1. _____ Extraverted, enthusiastic.
2. _____ Critical, quarrelsome.
3. _____ Dependable, self-disciplined.
4. _____ Anxious, easily upset.
5. _____ Open to new experiences, complex.
6. _____ Reserved, quiet.
7. _____ Sympathetic, warm.
8. _____ Disorganized, careless.
9. _____ Calm, emotionally stable.
10. _____ Conventional, uncreative.
Honorarium Receipt I acknowledge that I have received an honorarium of $12.00 for participation in the study entitled “A Study on Emotion-Conveying Gestures with a Bear Robot”. Participant Name: ___________________________ Participant Signature: ________________________ Date: _____________________________________
88
Scripted Instructions (Study 4) The purpose of this experiment is to watch video-taped gestures of a robotic teddy bear. You will be asked to view 60 different gestures. For each gesture, watch the gesture and answer the questions on this sheet. Please let me know if you have any questions.
89
Gesture 1

1 = Disagree strongly, 2 = Disagree moderately, 3 = Disagree a little, 4 = Neither agree nor disagree, 5 = Agree a little, 6 = Agree moderately, 7 = Agree strongly
I like this gesture. 1 2 3 4 5 6 7
This gesture is lifelike. 1 2 3 4 5 6 7
This gesture’s meaning is:
I’m angry. 1 2 3 4 5 6 7
I’m disgusted. 1 2 3 4 5 6 7
I’m afraid. 1 2 3 4 5 6 7
I’m happy. 1 2 3 4 5 6 7
I’m sad. 1 2 3 4 5 6 7
I’m surprised. 1 2 3 4 5 6 7

Note: Gestures 2-60 used identical rating sheets; additional sheets omitted.
90
Appendix C: Participant information for Studies 1-4
91
92
Appendix D: Coding schema for Studies 1 and 2
Messages
1. I want to ask a question – 質問があるんだけど
2. Good job (I am …) – お疲れさまー
3. Welcome home – おかえりー
4. Take care of me – かまって
5. What happened? – どうしたの?
6. I'm bored – 退屈だ
7. It's stupid – バカじゃないの
8. Hmm... I don't understand – うーん...よくわからない
9. I want to shake hands – 握手したい
10. I want to be hugged – 抱きしめて
11. Thank you – ありがとー
12. I want more – もっと
13. Please touch me – 触って
14. Let's play – 遊ぼうよ
15. You seem tired – お疲れのようだね
16. No – いいえ
17. Don't do that! – やめてー
18. That looks delicious – それおいしそうだなー
19. It's okay – よしよし
20. Good night – おやすみなさい
21. I'm hungry – お腹すいた

Emotions
1. I am happy – うれしい
2. I am interested – 面白い
3. I am surprised – 驚いた
4. I am confused – 混乱している
5. I am embarrassed – 恥ずかしい
6. I am sad – 悲しい
7. I am feeling awkward – 気まずい
8. I am angry – 怒っている
9. I love you – あなたが好きです
10. (neutral) – 感情なし
93
gesture | scen. | part. | Message | e/m codes
1 | 1 | 1 | Yeay. Welcome home. Yeay-! | 1 3
2 | 1 | 2 | Welcome home- I missed you-. | 3
3 | 1 | 3 | I'm hungry | 10 21
4 | 1 | 4 | Hey. (The bear is doing something. It realized that I'm home and says t | 10 3
5 | 2 | 1 | What happened? What happened? What? What? | 2 3 4 5 1
6 | 2 | 2 | I'm bored. Take care of me-. | 6 4
7 | 2 | 3 | It's stupid- | 10 7
8 | 2 | 4 | Hmm.. I don't understand. (The point I'm laughing at?) | 4 8
9 | 3 | 1 | Yeay. You're going to hug me! | 1 10 13
10 | 3 | 2 | I'm happy-. | 1
11 | 3 | 3 | Hand shake! | 10 9 13
12 | 3 | 4 | Oh, then.. (Realizes that I'm going to hug and hold up the arms so it'll be | 10 10 13
13 | 4 | 1 | (looking in my eyes) Thank you-. It feels good. | 1 11 13
14 | 4 | 2 | I'm happy-. I want more | 1 12
15 | 4 | 3 | More-! | 10 12
16 | 4 | 4 | What happened? Do you want to touch me more? | 4 3 5 1
17 | 5 | 1 | Keep it up. Hey hey, don't sleep-. | 10 2 15
18 | 5 | 2 | Let's play for a break-. | 10 14
19 | 5 | 3 | Good job- | 10 2
20 | 5 | 4 | If you're tired, shall we do some exercise? Together? | 10 14 15 1
21 | 6 | 1 | No-. This is embarrassing-. | 5 7 16 17*
22 | 6 | 2 | Do that where I can't see! | 5 7 17
23 | 6 | 3 | … (do nothing) | 10
24 | 6 | 4 | No-. (embarrassing) | 5 7 16 17*
25 | 7 | 1 | Oh. That looks delicious-. Yum yum. (Licks the fingers) | 2 18
26 | 7 | 2 | Chew enough before you swallow. | (none)
27 | 7 | 3 | Let me see it! | 2
28 | 7 | 4 | Hmm. Are meals so (good/bad or happy/unhappy)? | 2 1
29 | 8 | 1 | Okay. (falls into a doze) Good night. | 10 20
30 | 8 | 2 | Good night (yawn) | 10 20
31 | 8 | 3 | Good night | 10 20
32 | 8 | 4 | 1. Good night-. 2. I'll be watching so no strangers (strange things) come | 10 20
33 | 9 | 1 | What? What happened? [bear's name] is sad, too. Shiku-shiku (weeping) | 4 6 5
34 | 9 | 2 | Don't cry-. Good [boy/girl]. Good [boy/girl]. [heart mark] | 9 19
35 | 9 | 3 | It's ok- | 19
36 | 9 | 4 | Hey-. Hey-. Look at me- attentions- Take care of me (He doesn't really mean it, bu | 4
37 | 10 | 1 | Oh music! Yeay yeay. Go go. Tada- (pose) | (none)
38 | 10 | 2 | Zun, da-, zuzun- da- (dancing on the beat.) | (none)
39 | 10 | 3 | Dance! | (none)
40 | 10 | 4 | When I hear music, my body reacts somehow. (I want the bear robot to become a "robot" in th | (none)
41 | 11 | 1 | (wants to play but) Kyoro-kyoro (looks around). I'll be quiet. | 10 14
42 | 11 | 2 | I'm bored-. | 10 6
43 | 11 | 3 | Play with me! | 10 14
44 | 11 | 4 | Oops, I made eye contact with her! (Tries to see somewhere else) (The | 7
45 | 12 | 1 | Oh. It's beer. I want some. Beer- beer- beer-! | 2 18
46 | 12 | 2 | Good job-! (for the day) | 10 2
47 | 12 | 3 | Don't drink too much! | 10 17
48 | 12 | 4 | Hey look! Take care of me! (Bangs something next to the bear and tries | 8 4

Note: e/m codes refer to the emotion and message categories of the coding schema above; the source table does not delimit which codes are emotions (e) and which are messages (m), so they are listed as given.
Note: Some gestures are uncoded and excluded from analyses of recognition.
94
Appendix E: Content communication, binomial test This section presents data used to investigate content communication in Studies 1 and 2.
Binomial Test (emotion recognition)

Category | N | Observed Prop. | Test Prop. | Asymp. Sig. (1-tailed)
Group 1: correct e = 0 | 338 | .8 | .1 | .000(a)
Group 2: correct e = 1 | 94 | .2 | |
Total | 432 | 1.0 | |

a. Based on Z approximation.

Binomial Test (message recognition)

Category | N | Observed Prop. | Test Prop. | Asymp. Sig. (1-tailed)
Group 1: correct m = 0 | 267 | .603 | .095 | .000(a)
Group 2: correct m = 1 | 176 | .397 | |
Total | 443 | 1.000 | |

a. Based on Z approximation.
Note: “correct e” is correct emotion; “correct m” is correct message; 0 represents incorrect recognition of emotion/message; 1 represents correct recognition.
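The significance values above are reported "Based on Z Approximation". As a rough consistency check, here is a minimal Python sketch of that approximation (without the continuity correction SPSS may apply, so values can differ slightly from the SPSS output; the .1 test proportion is taken from the table above):

```python
import math

def binomial_z_test_one_tailed(successes: int, n: int, p0: float) -> float:
    """One-tailed binomial test via the normal (Z) approximation."""
    p_hat = successes / n                     # observed proportion
    se = math.sqrt(p0 * (1 - p0) / n)         # standard error under H0
    z = (p_hat - p0) / se
    # upper-tail probability of |z| under the standard normal
    return 0.5 * math.erfc(abs(z) / math.sqrt(2))

# Emotion recognition: 94 of 432 judgments correct, test proportion .1
p = binomial_z_test_one_tailed(94, 432, 0.1)
```

The resulting p-value is far below .001, consistent with the Asymp. Sig. of .000 reported above.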
95
Appendix F: Emotional valence, ANOVAs and contrasts This section presents data used to investigate emotional valence in Studies 1 and 2.
Mauchly's Test of Sphericity(b)

Within Subjects Effect: valence

Measure | Mauchly's W | Approx. Chi-Square | df | Sig. | Greenhouse-Geisser ε(a) | Huynh-Feldt ε(a) | Lower-bound ε(a)
head | .712 | 3.392 | 2 | .183 | .777 | .881 | .500
rightarm | .903 | 1.015 | 2 | .602 | .912 | 1.000 | .500
leftarm | .978 | .225 | 2 | .894 | .978 | 1.000 | .500
time | .407 | 8.983 | 2 | .011 | .628 | .671 | .500
like | .744 | 2.956 | 2 | .228 | .796 | .909 | .500
lifelike | .913 | .907 | 2 | .635 | .920 | 1.000 | .500
correctE | .712 | 3.392 | 2 | .183 | .777 | .881 | .500
correctM | .885 | 1.219 | 2 | .544 | .897 | 1.000 | .500

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept. Within Subjects Design: valence.
Note: Shows that the assumption of sphericity holds for all measures except time (p = .011), for which corrected tests may be used.
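Where sphericity is violated, the Greenhouse-Geisser epsilon scales both degrees of freedom of the F test. A minimal sketch using the time row from the table above:

```python
# Greenhouse-Geisser correction: multiply effect and error df by epsilon.
# Values taken from the sphericity table above (time measure).
eps = 0.628                    # Greenhouse-Geisser epsilon for time
df_effect, df_error = 2, 22    # uncorrected df for valence and its error term
corrected_df = (eps * df_effect, eps * df_error)   # approx. (1.256, 13.816)
```

These corrected values match the Greenhouse-Geisser rows for time in the Univariate Tests table that follows (1.256 and 13.813, up to rounding of epsilon).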
Multivariate Tests(d,e)

Within Subjects Effect: valence

Statistic | Value | F | Hypothesis df | Error df | Sig. | Partial Eta Squared | Noncent. Parameter | Observed Power(a)
Pillai's Trace | 1.387 | 4.525 | 16.000 | 32.000 | .000 | .693 | 72.393 | .999
Wilks' Lambda | .093 | 4.282(b) | 16.000 | 30.000 | .000 | .695 | 68.517 | .998
Hotelling's Trace | 4.611 | 4.035 | 16.000 | 28.000 | .001 | .697 | 64.559 | .996
Roy's Largest Root | 2.684 | 5.369(c) | 8.000 | 16.000 | .002 | .729 | 42.952 | .980

a. Computed using alpha = .05.
b. Exact statistic.
c. The statistic is an upper bound on F that yields a lower bound on the significance level.
d. Design: Intercept. Within Subjects Design: valence.
e. Tests are based on averaged variables.
96
Univariate Tests

Corrections per row: SA = Sphericity Assumed; GG = Greenhouse-Geisser; HF = Huynh-Feldt; LB = Lower-bound.

Source: valence

Measure | Correction | Type III SS | df | Mean Square | F | Sig. | Partial Eta Squared | Noncent. Parameter | Observed Power(a)
head | SA | 6.423 | 2 | 3.212 | 1.327 | .286 | .108 | 2.655 | .256
head | GG | 6.423 | 1.553 | 4.136 | 1.327 | .284 | .108 | 2.062 | .225
head | HF | 6.423 | 1.761 | 3.647 | 1.327 | .285 | .108 | 2.338 | .240
head | LB | 6.423 | 1.000 | 6.423 | 1.327 | .274 | .108 | 1.327 | .184
rightarm | SA | 38.124 | 2 | 19.062 | 6.064 | .008 | .355 | 12.129 | .837
rightarm | GG | 38.124 | 1.824 | 20.903 | 6.064 | .010 | .355 | 11.061 | .809
rightarm | HF | 38.124 | 2.000 | 19.062 | 6.064 | .008 | .355 | 12.129 | .837
rightarm | LB | 38.124 | 1.000 | 38.124 | 6.064 | .032 | .355 | 6.064 | .612
leftarm | SA | 45.029 | 2 | 22.515 | 7.141 | .004 | .394 | 14.281 | .894
leftarm | GG | 45.029 | 1.957 | 23.015 | 7.141 | .004 | .394 | 13.971 | .889
leftarm | HF | 45.029 | 2.000 | 22.515 | 7.141 | .004 | .394 | 14.281 | .894
leftarm | LB | 45.029 | 1.000 | 45.029 | 7.141 | .022 | .394 | 7.141 | .682
time | SA | 6.299 | 2 | 3.150 | .761 | .479 | .065 | 1.522 | .163
time | GG | 6.299 | 1.256 | 5.017 | .761 | .427 | .065 | .956 | .136
time | HF | 6.299 | 1.341 | 4.697 | .761 | .435 | .065 | 1.021 | .139
time | LB | 6.299 | 1.000 | 6.299 | .761 | .402 | .065 | .761 | .126
like | SA | 5.579 | 2 | 2.789 | 12.226 | .000 | .526 | 24.452 | .989
like | GG | 5.579 | 1.592 | 3.503 | 12.226 | .001 | .526 | 19.469 | .971
like | HF | 5.579 | 1.819 | 3.067 | 12.226 | .000 | .526 | 22.235 | .983
like | LB | 5.579 | 1.000 | 5.579 | 12.226 | .005 | .526 | 12.226 | .889
lifelike | SA | 3.678 | 2 | 1.839 | 4.873 | .018 | .307 | 9.745 | .744
lifelike | GG | 3.678 | 1.840 | 1.998 | 4.873 | .021 | .307 | 8.968 | .716
lifelike | HF | 3.678 | 2.000 | 1.839 | 4.873 | .018 | .307 | 9.745 | .744
lifelike | LB | 3.678 | 1.000 | 3.678 | 4.873 | .049 | .307 | 4.873 | .521
correctE | SA | .637 | 2 | .319 | 7.752 | .003 | .413 | 15.504 | .918
correctE | GG | .637 | 1.553 | .410 | 7.752 | .006 | .413 | 12.040 | .855
correctE | HF | .637 | 1.761 | .362 | 7.752 | .004 | .413 | 13.653 | .888
correctE | LB | .637 | 1.000 | .637 | 7.752 | .018 | .413 | 7.752 | .718
correctM | SA | .237 | 2 | .118 | 4.353 | .026 | .284 | 8.706 | .693
correctM | GG | .237 | 1.794 | .132 | 4.353 | .031 | .284 | 7.809 | .656
correctM | HF | .237 | 2.000 | .118 | 4.353 | .026 | .284 | 8.706 | .693
correctM | LB | .237 | 1.000 | .237 | 4.353 | .061 | .284 | 4.353 | .478

Source: Error(valence)

Measure | Correction | Type III SS | df | Mean Square
head | SA | 53.228 | 22 | 2.419
head | GG | 53.228 | 17.085 | 3.115
head | HF | 53.228 | 19.374 | 2.747
head | LB | 53.228 | 11.000 | 4.839
rightarm | SA | 69.153 | 22 | 3.143
rightarm | GG | 69.153 | 20.063 | 3.447
rightarm | HF | 69.153 | 22.000 | 3.143
rightarm | LB | 69.153 | 11.000 | 6.287
leftarm | SA | 69.367 | 22 | 3.153
leftarm | GG | 69.367 | 21.522 | 3.223
leftarm | HF | 69.367 | 22.000 | 3.153
leftarm | LB | 69.367 | 11.000 | 6.306
time | SA | 91.032 | 22 | 4.138
time | GG | 91.032 | 13.813 | 6.591
time | HF | 91.032 | 14.752 | 6.171
time | LB | 91.032 | 11.000 | 8.276
like | SA | 5.019 | 22 | .228
like | GG | 5.019 | 17.517 | .287
like | HF | 5.019 | 20.006 | .251
like | LB | 5.019 | 11.000 | .456
lifelike | SA | 8.303 | 22 | .377
lifelike | GG | 8.303 | 20.244 | .410
lifelike | HF | 8.303 | 22.000 | .377
lifelike | LB | 8.303 | 11.000 | .755
correctE | SA | .904 | 22 | .041
correctE | GG | .904 | 17.085 | .053
correctE | HF | .904 | 19.374 | .047
correctE | LB | .904 | 11.000 | .082
correctM | SA | .598 | 22 | .027
correctM | GG | .598 | 19.735 | .030
correctM | HF | .598 | 22.000 | .027
correctM | LB | .598 | 11.000 | .054

a. Computed using alpha = .05.
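The Partial Eta Squared column follows directly from the sums of squares, which gives a quick consistency check on the rows above. A minimal sketch for the head measure:

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    # Partial eta squared = SS_effect / (SS_effect + SS_error)
    return ss_effect / (ss_effect + ss_error)

# head measure, valence effect: SS = 6.423, error SS = 53.228 (from the table)
eta = partial_eta_squared(6.423, 53.228)
```

Rounded to three decimals this gives .108, matching the table.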
97
Tests of Within-Subjects Contrasts

Source: valence (Level 1 = negative, Level 2 = neutral, Level 3 = positive)

Measure | Contrast | Type III SS | df | Mean Square | F | Sig. | Partial Eta Squared | Noncent. Parameter | Observed Power(a)
head | Level 1 vs. Level 3 | 12.834 | 1 | 12.834 | 5.462 | .039 | .332 | 5.462 | .568
head | Level 2 vs. Level 3 | 3.564 | 1 | 3.564 | .530 | .482 | .046 | .530 | .102
rightarm | Level 1 vs. Level 3 | 6.294 | 1 | 6.294 | .811 | .387 | .069 | .811 | .131
rightarm | Level 2 vs. Level 3 | 72.212 | 1 | 72.212 | 10.848 | .007 | .497 | 10.848 | .850
leftarm | Level 1 vs. Level 3 | 31.744 | 1 | 31.744 | 4.728 | .052 | .301 | 4.728 | .509
leftarm | Level 2 vs. Level 3 | 88.933 | 1 | 88.933 | 13.010 | .004 | .542 | 13.010 | .906
time | Level 1 vs. Level 3 | 7.625 | 1 | 7.625 | 3.872 | .075 | .260 | 3.872 | .435
time | Level 2 vs. Level 3 | .303 | 1 | .303 | .025 | .878 | .002 | .025 | .052
like | Level 1 vs. Level 3 | 8.702 | 1 | 8.702 | 18.000 | .001 | .621 | 18.000 | .971
like | Level 2 vs. Level 3 | 8.021 | 1 | 8.021 | 32.843 | .000 | .749 | 32.843 | .999
lifelike | Level 1 vs. Level 3 | 3.706 | 1 | 3.706 | 4.964 | .048 | .311 | 4.964 | .529
lifelike | Level 2 vs. Level 3 | 6.849 | 1 | 6.849 | 12.090 | .005 | .524 | 12.090 | .885
correctE | Level 1 vs. Level 3 | .006 | 1 | .006 | .150 | .706 | .013 | .150 | .065
correctE | Level 2 vs. Level 3 | .875 | 1 | .875 | 7.382 | .020 | .402 | 7.382 | .697
correctM | Level 1 vs. Level 3 | .262 | 1 | .262 | 7.127 | .022 | .393 | 7.127 | .682
correctM | Level 2 vs. Level 3 | .428 | 1 | .428 | 6.310 | .029 | .365 | 6.310 | .629

Source: Error(valence)

Measure | Contrast | Type III SS | df | Mean Square
head | Level 1 vs. Level 3 | 25.848 | 11 | 2.350
head | Level 2 vs. Level 3 | 73.921 | 11 | 6.720
rightarm | Level 1 vs. Level 3 | 85.390 | 11 | 7.763
rightarm | Level 2 vs. Level 3 | 73.223 | 11 | 6.657
leftarm | Level 1 vs. Level 3 | 73.854 | 11 | 6.714
leftarm | Level 2 vs. Level 3 | 75.195 | 11 | 6.836
time | Level 1 vs. Level 3 | 21.660 | 11 | 1.969
time | Level 2 vs. Level 3 | 134.363 | 11 | 12.215
like | Level 1 vs. Level 3 | 5.318 | 11 | .483
like | Level 2 vs. Level 3 | 2.686 | 11 | .244
lifelike | Level 1 vs. Level 3 | 8.213 | 11 | .747
lifelike | Level 2 vs. Level 3 | 6.231 | 11 | .566
correctE | Level 1 vs. Level 3 | .466 | 11 | .042
correctE | Level 2 vs. Level 3 | 1.304 | 11 | .119
correctM | Level 1 vs. Level 3 | .404 | 11 | .037
correctM | Level 2 vs. Level 3 | .746 | 11 | .068

a. Computed using alpha = .05.
valence (Estimated Means)

Measure | valence | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound
head | 1 | 3.888 | .317 | 3.191 | 4.586
head | 2 | 3.399 | .456 | 2.396 | 4.402
head | 3 | 2.854 | .363 | 2.056 | 3.652
rightarm | 1 | 4.524 | .402 | 3.639 | 5.410
rightarm | 2 | 2.796 | .402 | 1.910 | 3.681
rightarm | 3 | 5.249 | .503 | 4.140 | 6.357
leftarm | 1 | 3.396 | .397 | 2.521 | 4.270
leftarm | 2 | 2.300 | .431 | 1.350 | 3.250
leftarm | 3 | 5.022 | .465 | 3.998 | 6.047
time | 1 | 8.323 | .265 | 7.740 | 8.906
time | 2 | 7.367 | .869 | 5.455 | 9.279
time | 3 | 7.526 | .285 | 6.899 | 8.153
like | 1 | 3.843 | .205 | 3.391 | 4.295
like | 2 | 3.877 | .210 | 3.415 | 4.339
like | 3 | 4.694 | .134 | 4.399 | 4.990
lifelike | 1 | 4.297 | .160 | 3.945 | 4.650
lifelike | 2 | 4.098 | .226 | 3.601 | 4.595
lifelike | 3 | 4.853 | .215 | 4.379 | 5.327
correctE | 1 | .164 | .042 | .071 | .257
correctE | 2 | .457 | .078 | .286 | .628
correctE | 3 | .187 | .031 | .119 | .256
correctM | 1 | .361 | .042 | .267 | .454
correctM | 2 | .319 | .052 | .205 | .434
correctM | 3 | .508 | .049 | .400 | .616

Note: valence value of 1 is negative valence, 2 is neutral valence, 3 is positive valence.
98
Appendix G: Gesture complexity, ANOVAs This section presents data used to investigate gesture complexity in Studies 1 and 2.

Arm movement (Estimated Means, armMove)

Measure | armMove | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound
like | 1 | 3.799 | .153 | 3.462 | 4.136
like | 2 | 4.445 | .138 | 4.142 | 4.749
lifelike | 1 | 4.217 | .123 | 3.946 | 4.488
lifelike | 2 | 4.680 | .147 | 4.356 | 5.005
confidenceE | 1 | 5.251 | .125 | 4.976 | 5.527
confidenceE | 2 | 5.260 | .115 | 5.008 | 5.513
confidenceM | 1 | 5.501 | .169 | 5.129 | 5.873
confidenceM | 2 | 5.779 | .146 | 5.458 | 6.100
correctE | 1 | .213 | .029 | .148 | .278
correctE | 2 | .220 | .031 | .153 | .288
correctM | 1 | .348 | .020 | .304 | .392
correctM | 2 | .438 | .034 | .362 | .514
agreeE | 1 | 4.534 | .030 | 4.468 | 4.600
agreeE | 2 | 3.505 | .026 | 3.447 | 3.563
agreeM | 1 | 9.329 | .025 | 9.274 | 9.384
agreeM | 2 | 10.089 | .044 | 9.992 | 10.187

Note: arm movement value of 1 is low movement, 2 is high movement.
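The confidence intervals in these means tables follow the usual mean plus or minus t times standard error construction. A sketch for the first row (like, low arm movement), assuming the two-tailed critical value t(.975, 11) of approximately 2.201 for the 11 error degrees of freedom:

```python
# 95% CI as mean +/- t_crit * SE.
# Assumption: df = 11, so t_crit is approximately 2.201 (hardcoded here).
mean, se, t_crit = 3.799, 0.153, 2.201
ci = (mean - t_crit * se, mean + t_crit * se)   # approx. (3.462, 4.136)
```

These bounds reproduce the first row of the table above.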
99
100
Tests of Within-Subjects Contrasts

All contrasts are linear; effect df = 1 and error df = 11 throughout. Columns: Type III SS, Mean Square, F, Sig., then the corresponding error Type III SS and error Mean Square.

Source: medium

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .088 | .088 | .176 | .683 | 5.514 | .501
lifelike | .170 | .170 | .159 | .697 | 11.744 | 1.068
confidenceE | .007 | .007 | .023 | .881 | 3.084 | .280
confidenceM | .003 | .003 | .017 | .900 | 1.660 | .151
correctE | .002 | .002 | .041 | .844 | .570 | .052
correctM | .003 | .003 | .045 | .837 | .791 | .072
agreeE | .087 | .087 | .104 | .753 | 9.239 | .840
agreeM | .027 | .027 | .056 | .817 | 5.310 | .483

Source: context

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .956 | .956 | 1.816 | .205 | 5.794 | .527
lifelike | .832 | .832 | 2.034 | .182 | 4.499 | .409
confidenceE | .688 | .688 | 6.090 | .031 | 1.242 | .113
confidenceM | .318 | .318 | 2.175 | .168 | 1.609 | .146
correctE | .363 | .363 | 5.268 | .042 | .758 | .069
correctM | 3.769 | 3.769 | 100.343 | .000 | .413 | .038
agreeE | .060 | .060 | .261 | .619 | 2.532 | .230
agreeM | 4.688 | 4.688 | 9.825 | .010 | 5.248 | .477

Source: armMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | 10.022 | 10.022 | 20.078 | .001 | 5.491 | .499
lifelike | 5.152 | 5.152 | 7.530 | .019 | 7.526 | .684
confidenceE | .002 | .002 | .008 | .930 | 2.672 | .243
confidenceM | 1.854 | 1.854 | 10.022 | .009 | 2.035 | .185
correctE | .001 | .001 | .048 | .830 | .294 | .027
correctM | .195 | .195 | 6.009 | .032 | .357 | .032
agreeE | 25.423 | 25.423 | 624.530 | .000 | .448 | .041
agreeM | 13.869 | 13.869 | 257.012 | .000 | .594 | .054

Source: medium * context

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .599 | .599 | 3.757 | .079 | 1.754 | .159
lifelike | .635 | .635 | 3.740 | .079 | 1.869 | .170
confidenceE | .410 | .410 | 1.880 | .198 | 2.401 | .218
confidenceM | .522 | .522 | 1.389 | .263 | 4.131 | .376
correctE | .002 | .002 | .041 | .843 | .606 | .055
correctM | .023 | .023 | .978 | .344 | .254 | .023
agreeE | .073 | .073 | .125 | .731 | 6.432 | .585
agreeM | .584 | .584 | 1.360 | .268 | 4.728 | .430

Source: medium * armMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .423 | .423 | 5.270 | .042 | .883 | .080
lifelike | .057 | .057 | .546 | .476 | 1.147 | .104
confidenceE | .217 | .217 | 1.398 | .262 | 1.706 | .155
confidenceM | .162 | .162 | .997 | .340 | 1.784 | .162
correctE | .008 | .008 | .210 | .656 | .433 | .039
correctM | .001 | .001 | .014 | .908 | .614 | .056
agreeE | 2.203 | 2.203 | 5.965 | .033 | 4.062 | .369
agreeM | .961 | .961 | 1.396 | .262 | 7.574 | .689

Source: context * armMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .078 | .078 | .232 | .640 | 3.714 | .338
lifelike | .001 | .001 | .008 | .930 | 1.482 | .135
confidenceE | .315 | .315 | 3.341 | .095 | 1.039 | .094
confidenceM | .076 | .076 | .363 | .559 | 2.317 | .211
correctE | .001 | .001 | .040 | .846 | .254 | .023
correctM | .040 | .040 | .797 | .391 | .546 | .050
agreeE | .141 | .141 | .345 | .569 | 4.491 | .408
agreeM | 1.142 | 1.142 | .941 | .353 | 13.338 | 1.213

Source: medium * context * armMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .000 | .000 | .003 | .960 | 1.531 | .139
lifelike | .990 | .990 | 2.302 | .157 | 4.729 | .430
confidenceE | .417 | .417 | 3.283 | .097 | 1.396 | .127
confidenceM | .058 | .058 | .288 | .602 | 2.202 | .200
correctE | .106 | .106 | 1.916 | .194 | .611 | .056
correctM | .008 | .008 | .139 | .716 | .629 | .057
agreeE | .624 | .624 | 1.039 | .330 | 6.611 | .601
agreeM | .123 | .123 | .141 | .714 | 9.576 | .871
101
Head movement (Estimated Means, headMove)

Measure | headMove | Mean | Std. Error | 95% CI Lower Bound | 95% CI Upper Bound
like | 1 | 4.060 | .128 | 3.777 | 4.343
like | 2 | 4.307 | .147 | 3.983 | 4.631
lifelike | 1 | 4.402 | .099 | 4.183 | 4.621
lifelike | 2 | 4.549 | .130 | 4.263 | 4.834
confidenceE | 1 | 5.256 | .117 | 4.997 | 5.514
confidenceE | 2 | 5.210 | .105 | 4.978 | 5.442
confidenceM | 1 | 5.655 | .149 | 5.328 | 5.983
confidenceM | 2 | 5.519 | .180 | 5.123 | 5.916
correctE | 1 | .208 | .027 | .148 | .268
correctE | 2 | .188 | .032 | .118 | .257
correctM | 1 | .394 | .024 | .342 | .447
correctM | 2 | .340 | .040 | .252 | .427
agreeE | 1 | .364 | .002 | .359 | .368
agreeE | 2 | .290 | .002 | .284 | .295
agreeM | 1 | .404 | .001 | .402 | .406
agreeM | 2 | .401 | .002 | .397 | .405

Note: head movement value of 1 is low movement, 2 is high movement.
102
103
Tests of Within-Subjects Contrasts

All contrasts are linear; effect df = 1 and error df = 11 throughout. Columns: Type III SS, Mean Square, F, Sig., then the corresponding error Type III SS and error Mean Square.

Source: medium

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .144 | .144 | .168 | .690 | 9.431 | .857
lifelike | .019 | .019 | .014 | .906 | 14.686 | 1.335
confidenceE | .039 | .039 | .156 | .701 | 2.731 | .248
confidenceM | .003 | .003 | .014 | .909 | 2.621 | .238
correctE | .001 | .001 | .036 | .852 | .307 | .028
correctM | .081 | .081 | 1.914 | .194 | .468 | .043
agreeE | .001 | .001 | .330 | .577 | .041 | .004
agreeM | .001 | .001 | 1.410 | .260 | .009 | .001

Source: context

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | 1.089 | 1.089 | 2.292 | .158 | 5.223 | .475
lifelike | .581 | .581 | 1.683 | .221 | 3.800 | .345
confidenceE | .796 | .796 | 3.495 | .088 | 2.505 | .228
confidenceM | .495 | .495 | 1.917 | .194 | 2.841 | .258
correctE | .236 | .236 | 2.283 | .159 | 1.136 | .103
correctM | 2.592 | 2.592 | 53.541 | .000 | .533 | .048
agreeE | .002 | .002 | 3.193 | .101 | .008 | .001
agreeM | .008 | .008 | 8.244 | .015 | .011 | .001

Source: headMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | 1.466 | 1.466 | 5.953 | .033 | 2.709 | .246
lifelike | .515 | .515 | 2.693 | .129 | 2.103 | .191
confidenceE | .049 | .049 | .558 | .471 | .972 | .088
confidenceM | .446 | .446 | 2.423 | .148 | 2.024 | .184
correctE | .010 | .010 | .236 | .637 | .469 | .043
correctM | .072 | .072 | 1.741 | .214 | .453 | .041
agreeE | .132 | .132 | 412.684 | .000 | .004 | .000
agreeM | .000 | .000 | 1.716 | .217 | .001 | 9.87E-005

Source: medium * context

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | 1.131 | 1.131 | 6.543 | .027 | 1.901 | .173
lifelike | .786 | .786 | 2.827 | .121 | 3.057 | .278
confidenceE | .292 | .292 | 1.210 | .295 | 2.651 | .241
confidenceM | .520 | .520 | 1.881 | .198 | 3.041 | .276
correctE | .006 | .006 | .118 | .737 | .565 | .051
correctM | .012 | .012 | .218 | .650 | .621 | .056
agreeE | .007 | .007 | 2.191 | .167 | .036 | .003
agreeM | .002 | .002 | 3.144 | .104 | .008 | .001

Source: medium * headMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .000 | .000 | .001 | .977 | 4.980 | .453
lifelike | .012 | .012 | .027 | .872 | 4.760 | .433
confidenceE | .006 | .006 | .038 | .848 | 1.746 | .159
confidenceM | .012 | .012 | .183 | .677 | .702 | .064
correctE | .059 | .059 | .984 | .342 | .654 | .059
correctM | .008 | .008 | .195 | .667 | .436 | .040
agreeE | .001 | .001 | .472 | .506 | .015 | .001
agreeM | .003 | .003 | 1.330 | .273 | .022 | .002

Source: context * headMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .747 | .747 | 6.376 | .028 | 1.289 | .117
lifelike | .042 | .042 | .111 | .745 | 4.130 | .375
confidenceE | .109 | .109 | .637 | .442 | 1.882 | .171
confidenceM | .635 | .635 | 1.983 | .187 | 3.522 | .320
correctE | .000 | .000 | .012 | .914 | .284 | .026
correctM | .067 | .067 | 1.129 | .311 | .649 | .059
agreeE | .000 | .000 | .137 | .718 | .025 | .002
agreeM | .000 | .000 | .271 | .613 | .017 | .002

Source: medium * context * headMove

Measure | SS | Mean Square | F | Sig. | Error SS | Error MS
like | .001 | .001 | .005 | .947 | 3.452 | .314
lifelike | .018 | .018 | .040 | .845 | 5.021 | .456
confidenceE | .007 | .007 | .042 | .841 | 1.871 | .170
confidenceM | .132 | .132 | .871 | .371 | 1.671 | .152
correctE | .025 | .025 | .495 | .496 | .550 | .050
correctM | 4.74E-005 | 4.74E-005 | .001 | .982 | .956 | .087
agreeE | .001 | .001 | .245 | .631 | .046 | .004
agreeM | .000 | .000 | .181 | .679 | .019 | .002
104
Appendix H: Situational context, ANOVA This section presents data used to investigate situational context in Studies 1 and 2.
Multivariate Tests(c)

Effect | Statistic | Value | F | Hypothesis df | Error df | Sig. | Partial Eta Squared | Noncent. Parameter | Observed Power(a)
Intercept (Between Subjects) | Pillai's Trace | 1.000 | 380596.5(b) | 8.000 | 4.000 | .000 | 1.000 | 3044771.9 | 1.000
Intercept (Between Subjects) | Wilks' Lambda | .000 | 380596.5(b) | 8.000 | 4.000 | .000 | 1.000 | 3044771.9 | 1.000
Intercept (Between Subjects) | Hotelling's Trace | 761193.0 | 380596.5(b) | 8.000 | 4.000 | .000 | 1.000 | 3044771.9 | 1.000
Intercept (Between Subjects) | Roy's Largest Root | 761193.0 | 380596.5(b) | 8.000 | 4.000 | .000 | 1.000 | 3044771.9 | 1.000
medium (Within Subjects) | Pillai's Trace | .150 | .088(b) | 8.000 | 4.000 | .998 | .150 | .707 | .058
medium (Within Subjects) | Wilks' Lambda | .850 | .088(b) | 8.000 | 4.000 | .998 | .150 | .707 | .058
medium (Within Subjects) | Hotelling's Trace | .177 | .088(b) | 8.000 | 4.000 | .998 | .150 | .707 | .058
medium (Within Subjects) | Roy's Largest Root | .177 | .088(b) | 8.000 | 4.000 | .998 | .150 | .707 | .058
context (Within Subjects) | Pillai's Trace | .994 | 77.001(b) | 8.000 | 4.000 | .000 | .994 | 616.008 | 1.000
context (Within Subjects) | Wilks' Lambda | .006 | 77.001(b) | 8.000 | 4.000 | .000 | .994 | 616.008 | 1.000
context (Within Subjects) | Hotelling's Trace | 154.002 | 77.001(b) | 8.000 | 4.000 | .000 | .994 | 616.008 | 1.000
context (Within Subjects) | Roy's Largest Root | 154.002 | 77.001(b) | 8.000 | 4.000 | .000 | .994 | 616.008 | 1.000
medium * context (Within Subjects) | Pillai's Trace | .706 | 1.202(b) | 8.000 | 4.000 | .459 | .706 | 9.619 | .171
medium * context (Within Subjects) | Wilks' Lambda | .294 | 1.202(b) | 8.000 | 4.000 | .459 | .706 | 9.619 | .171
medium * context (Within Subjects) | Hotelling's Trace | 2.405 | 1.202(b) | 8.000 | 4.000 | .459 | .706 | 9.619 | .171
medium * context (Within Subjects) | Roy's Largest Root | 2.405 | 1.202(b) | 8.000 | 4.000 | .459 | .706 | 9.619 | .171

a. Computed using alpha = .05.
b. Exact statistic.
c. Design: Intercept. Within Subjects Design: medium + context + medium * context.
105
Tests of Within-Subjects Contrasts

All contrasts are linear; effect df = 1 and error df = 11 throughout. Columns: Type III SS, Mean Square, F, Sig., Partial Eta Squared, Noncent. Parameter, Observed Power(a), then the corresponding error Type III SS and error Mean Square.

Source: medium

Measure | SS | Mean Square | F | Sig. | Partial Eta Squared | Noncent. | Power | Error SS | Error MS
like | .071 | .071 | .192 | .670 | .017 | .192 | .069 | 4.053 | .368
lifelike | .051 | .051 | .101 | .756 | .009 | .101 | .060 | 5.583 | .508
confidenceE | .009 | .009 | .064 | .804 | .006 | .064 | .056 | 1.456 | .132
confidenceM | .004 | .004 | .038 | .849 | .003 | .038 | .054 | 1.249 | .114
correctE | 1.32E-005 | 1.32E-005 | .001 | .979 | .000 | .001 | .050 | .209 | .019
correctM | .002 | .002 | .118 | .738 | .011 | .118 | .061 | .230 | .021
mutualAgreeE | .053 | .053 | .149 | .707 | .013 | .149 | .064 | 3.895 | .354
mutualAgreeM | .025 | .025 | .150 | .706 | .013 | .150 | .065 | 1.860 | .169

Source: context

Measure | SS | Mean Square | F | Sig. | Partial Eta Squared | Noncent. | Power | Error SS | Error MS
like | .494 | .494 | 1.985 | .187 | .153 | 1.985 | .251 | 2.740 | .249
lifelike | .280 | .280 | 1.713 | .217 | .135 | 1.713 | .223 | 1.795 | .163
confidenceE | .324 | .324 | 4.356 | .061 | .284 | 4.356 | .478 | .818 | .074
confidenceM | .093 | .093 | 1.281 | .282 | .104 | 1.281 | .179 | .800 | .073
correctE | .158 | .158 | 4.460 | .058 | .288 | 4.460 | .487 | .390 | .035
correctM | 1.591 | 1.591 | 93.849 | .000 | .895 | 93.849 | 1.000 | .186 | .017
mutualAgreeE | .106 | .106 | 1.083 | .320 | .090 | 1.083 | .159 | 1.073 | .098
mutualAgreeM | 2.127 | 2.127 | 9.411 | .011 | .461 | 9.411 | .797 | 2.486 | .226

Source: medium * context

Measure | SS | Mean Square | F | Sig. | Partial Eta Squared | Noncent. | Power | Error SS | Error MS
like | .350 | .350 | 4.648 | .054 | .297 | 4.648 | .503 | .827 | .075
lifelike | .289 | .289 | 5.484 | .039 | .333 | 5.484 | .570 | .580 | .053
confidenceE | .167 | .167 | 1.673 | .222 | .132 | 1.673 | .219 | 1.101 | .100
confidenceM | .248 | .248 | 1.804 | .206 | .141 | 1.804 | .233 | 1.514 | .138
correctE | .001 | .001 | .054 | .821 | .005 | .054 | .055 | .234 | .021
correctM | .003 | .003 | .166 | .692 | .015 | .166 | .066 | .185 | .017
mutualAgreeE | .254 | .254 | .792 | .392 | .067 | .792 | .129 | 3.533 | .321
mutualAgreeM | .457 | .457 | 2.813 | .122 | .204 | 2.813 | .334 | 1.787 | .162

a. Computed using alpha = .05.
106
Descriptive Statistics (N = 12 for all cells)

Variable | medium | context | Mean | Std. Deviation
like_mean | video | excluded | 3.9944 | .77845
like_mean | video | provided | 4.3681 | .46284
like_mean | animation | excluded | 4.0883 | .59784
like_mean | animation | provided | 4.1206 | .60847
lifelike_mean | video | excluded | 4.3182 | .65843
lifelike_mean | video | provided | 4.6261 | .61723
lifelike_mean | animation | excluded | 4.4080 | .40886
lifelike_mean | animation | provided | 4.4053 | .51289
confide_mean | video | excluded | 5.3821 | .41374
confide_mean | video | provided | 5.0997 | .58135
confide_mean | animation | excluded | 5.2906 | .34272
confide_mean | animation | provided | 5.2445 | .52523
maxconfidm_mean | video | excluded | 5.4916 | .69603
maxconfidm_mean | video | provided | 5.7236 | .63139
maxconfidm_mean | animation | excluded | 5.6545 | .41398
maxconfidm_mean | animation | provided | 5.5987 | .60388
correcte_mean | video | excluded | .1575 | .15314
correcte_mean | video | provided | .2625 | .18364
correcte_mean | animation | excluded | .1467 | .12315
correcte_mean | animation | provided | .2712 | .16819
correctm_mean | video | excluded | .2001 | .12725
correctm_mean | video | provided | .5795 | .13726
correctm_mean | animation | excluded | .2010 | .10112
correctm_mean | animation | provided | .5499 | .17954
N_agreee_mean | video | excluded | 4.0230 | .48964
N_agreee_mean | video | provided | 3.9712 | .38949
N_agreee_mean | animation | excluded | 3.9437 | .43385
N_agreee_mean | animation | provided | 4.1831 | .44990
N_agreem_mean | video | excluded | 9.3947 | .36781
N_agreem_mean | video | provided | 10.0108 | .30653
N_agreem_mean | animation | excluded | 9.5439 | .33149
N_agreem_mean | animation | provided | 9.7697 | .47304

Note: in the source, the first suffix digit represents medium (0 is video, 1 is animation); the second represents context (0 is excluded, 1 is provided).
107
Appendix I: Personality, correlations This section presents data used to investigate personality in Studies 1 and 2.
Correlations
Pearson correlations (N = 48; significance 1-tailed)

Variable | personalityagree_mean | like_mean | lifelike_mean | confide_mean | maxconfidm_mean | correcte_mean | correctm_mean
personalityagree_mean | 1 | .301* | .236 | -.095 | -.086 | -.019 | .062
  Sig. (1-tailed) | . | .019 | .053 | .260 | .281 | .448 | .338
like_mean | .301* | 1 | .422** | -.008 | .197 | -.008 | -.071
  Sig. (1-tailed) | .019 | . | .001 | .478 | .090 | .478 | .317
lifelike_mean | .236 | .422** | 1 | -.039 | .040 | -.058 | .087
  Sig. (1-tailed) | .053 | .001 | . | .396 | .393 | .347 | .278
confide_mean | -.095 | -.008 | -.039 | 1 | .550** | .169 | -.128
  Sig. (1-tailed) | .260 | .478 | .396 | . | .000 | .125 | .194
maxconfidm_mean | -.086 | .197 | .040 | .550** | 1 | -.019 | .084
  Sig. (1-tailed) | .281 | .090 | .393 | .000 | . | .449 | .286
correcte_mean | -.019 | -.008 | -.058 | .169 | -.019 | 1 | .135
  Sig. (1-tailed) | .448 | .478 | .347 | .125 | .449 | . | .180
correctm_mean | .062 | -.071 | .087 | -.128 | .084 | .135 | 1
  Sig. (1-tailed) | .338 | .317 | .278 | .194 | .286 | .180 | .

*. Correlation is significant at the 0.05 level (1-tailed).
**. Correlation is significant at the 0.01 level (1-tailed).
Correlations

Pearson correlations (N = 48; significance 2-tailed)

Variable | personalityagree_mean | like_mean | lifelike_mean | confide_mean | maxconfidm_mean | correcte_mean | correctm_mean
personalityagree_mean | 1 | .301* | .236 | -.095 | -.086 | -.019 | .062
  Sig. (2-tailed) | . | .038 | .106 | .519 | .561 | .897 | .676
like_mean | .301* | 1 | .422** | -.008 | .197 | -.008 | -.071
  Sig. (2-tailed) | .038 | . | .003 | .956 | .180 | .957 | .634
lifelike_mean | .236 | .422** | 1 | -.039 | .040 | -.058 | .087
  Sig. (2-tailed) | .106 | .003 | . | .792 | .785 | .693 | .555
confide_mean | -.095 | -.008 | -.039 | 1 | .550** | .169 | -.128
  Sig. (2-tailed) | .519 | .956 | .792 | . | .000 | .250 | .388
maxconfidm_mean | -.086 | .197 | .040 | .550** | 1 | -.019 | .084
  Sig. (2-tailed) | .561 | .180 | .785 | .000 | . | .897 | .571
correcte_mean | -.019 | -.008 | -.058 | .169 | -.019 | 1 | .135
  Sig. (2-tailed) | .897 | .957 | .693 | .250 | .897 | . | .360
correctm_mean | .062 | -.071 | .087 | -.128 | .084 | .135 | 1
  Sig. (2-tailed) | .676 | .634 | .555 | .388 | .571 | .360 | .

*. Correlation is significant at the 0.05 level (2-tailed).
**. Correlation is significant at the 0.01 level (2-tailed).
Note: The data in the bottom table is identical to the top table except it is presented as a two-tailed analysis.
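The relation between the two significance conventions in the correlation table above can be sketched as follows. This is an illustrative computation on synthetic data (not the study data), using the standard result that the one-tailed p-value for a directional hypothesis in the observed direction is half the two-tailed value.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for two of the 48 gesture-level rating means.
rng = np.random.default_rng(0)
x = rng.normal(size=48)
y = 0.4 * x + rng.normal(size=48)

# Two-tailed test, as in the lower table of the original output.
r, p_two = stats.pearsonr(x, y)

# One-tailed p for the directional hypothesis H1: r > 0,
# as in the upper (1-tailed) table.
p_one = p_two / 2 if r > 0 else 1 - p_two / 2
```

Whichever direction the sample correlation takes, the smaller of the two directional p-values is always half the two-tailed value, which is why the two SPSS tables carry the same information.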
Appendix J: Author, ANOVA

This section presents data used to investigate author effects in Studies 1 and 2.
Mauchly's Test of Sphericity (b)

Within Subjects                                                        Epsilon (a)
Effect            Measure        Mauchly's W  Approx. Chi-Sq.  df  Sig.   G-G     H-F    Lower-bound
author            liking             .647         4.238         5  .518   .760    .969      .333
                  lifelikeness       .778         2.443         5  .786   .864   1.000      .333
                  correctE           .551         5.802         5  .328   .725    .911      .333
                  correctM           .464         7.462         5  .190   .744    .942      .333
                  agreeE             .547         5.866         5  .322   .760    .969      .333
                  agreeM             .609         4.820         5  .440   .808   1.000      .333
context           (all 6 measures)  1.000          .000         0   .    1.000   1.000     1.000
author * context  liking             .483         7.076         5  .217   .662    .808      .333
                  lifelikeness       .725         3.129         5  .681   .840   1.000      .333
                  correctE           .351        10.167         5  .072   .593    .700      .333
                  correctM           .486         7.024         5  .221   .709    .883      .333
                  agreeE             .527         6.229         5  .287   .747    .947      .333
                  agreeM             .199        15.703         5  .008   .499    .560      .333

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept; Within Subjects Design: author + context + author*context
Univariate Tests (Source: author; error term Error(author); SS = Type III Sum of Squares)

Measure       Correction            SS      df      MS        F       Sig.   Error SS  Error df  Error MS
liking        Sphericity Assumed   7.417    3      2.472     6.786    .001    12.022    33        .364
              Greenhouse-Geisser   7.417    2.280  3.253     6.786    .003    12.022    25.077    .479
              Huynh-Feldt          7.417    2.908  2.551     6.786    .001    12.022    31.986    .376
              Lower-bound          7.417    1.000  7.417     6.786    .024    12.022    11.000   1.093
lifelikeness  Sphericity Assumed   2.156    3       .719     3.138    .038     7.556    33        .229
              Greenhouse-Geisser   2.156    2.592   .832     3.138    .047     7.556    28.511    .265
              Huynh-Feldt          2.156    3.000   .719     3.138    .038     7.556    33.000    .229
              Lower-bound          2.156    1.000  2.156     3.138    .104     7.556    11.000    .687
correctE      Sphericity Assumed    .261    3       .087     1.960    .139     1.465    33        .044
              Greenhouse-Geisser    .261    2.176   .120     1.960    .160     1.465    23.939    .061
              Huynh-Feldt           .261    2.733   .096     1.960    .146     1.465    30.063    .049
              Lower-bound           .261    1.000   .261     1.960    .189     1.465    11.000    .133
correctM      Sphericity Assumed    .118    3       .039      .717    .549     1.814    33        .055
              Greenhouse-Geisser    .118    2.231   .053      .717    .513     1.814    24.543    .074
              Huynh-Feldt           .118    2.825   .042      .717    .542     1.814    31.078    .058
              Lower-bound           .118    1.000   .118      .717    .415     1.814    11.000    .165
agreeE        Sphericity Assumed    .217    3       .072   628.558    .000      .004    33        .000
              Greenhouse-Geisser    .217    2.280   .095   628.558    .000      .004    25.077    .000
              Huynh-Feldt           .217    2.908   .075   628.558    .000      .004    31.985    .000
              Lower-bound           .217    1.000   .217   628.558    .000      .004    11.000    .000
agreeM        Sphericity Assumed    .030    3       .010    49.410    .000      .007    33        .000
              Greenhouse-Geisser    .030    2.424   .012    49.410    .000      .007    26.669    .000
              Huynh-Feldt           .030    3.000   .010    49.410    .000      .007    33.000    .000
              Lower-bound           .030    1.000   .030    49.410    .000      .007    11.000    .001
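The Greenhouse-Geisser rows in the table above are the sphericity-assumed rows with both degrees of freedom multiplied by the epsilon estimate from Mauchly's test (for the author effect on liking, epsilon = .760). A quick arithmetic check using values taken from the two tables:

```python
# Greenhouse-Geisser correction: multiply both dfs by epsilon.
# Values from the Mauchly and Univariate Tests tables above
# (author effect, measure = liking).
eps_gg = 0.760
df_effect, df_error = 3, 33

df_effect_gg = df_effect * eps_gg   # 2.28, as reported (table: 2.280)
df_error_gg = df_error * eps_gg     # ~25.08 (table: 25.077; epsilon is rounded in print)
```

The F statistic itself is unchanged by the correction; only the reference distribution's degrees of freedom (and hence the p-value) shift.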
Tests of Within-Subjects Contrasts (Source: author; error term Error(author); SS = Type III Sum of Squares)

Measure       Contrast     SS      df     MS          F      Sig.   Error SS  Error df   Error MS
liking        Linear      5.362    1     5.362     23.249    .001     2.537      11        .231
              Quadratic    .064    1      .064       .234    .638     2.988      11        .272
              Cubic       1.991    1     1.991      3.371    .093     6.497      11        .591
lifelikeness  Linear      1.263    1     1.263      5.147    .044     2.699      11        .245
              Quadratic    .248    1      .248       .817    .385     3.333      11        .303
              Cubic        .645    1      .645      4.657    .054     1.524      11        .139
correctE      Linear       .230    1      .230      8.522    .014      .297      11        .027
              Quadratic    .010    1      .010       .272    .612      .411      11        .037
              Cubic        .021    1      .021       .299    .596      .758      11        .069
correctM      Linear       .009    1      .009       .182    .678      .536      11        .049
              Quadratic    .003    1      .003       .092    .767      .316      11        .029
              Cubic        .107    1      .107      1.220    .293      .963      11        .088
agreeE        Linear       .129    1      .129   1929.464    .000      .001      11    6.71E-005
              Quadratic    .069    1      .069    498.791    .000      .002      11        .000
              Cubic        .019    1      .019    132.585    .000      .002      11        .000
agreeM        Linear       .001    1      .001      4.257    .064      .003      11        .000
              Quadratic    .024    1      .024    285.879    .000      .001      11    8.35E-005
              Cubic        .004    1      .004     20.280    .001      .002      11        .000
context       [contrast rows illegible in source]
Estimated Marginal Means: 2. author

Measure       author   Mean   Std. Error   95% CI Lower   95% CI Upper
liking          1      4.427     .207         3.972          4.882
                2      4.422     .170         4.047          4.797
                3      3.824     .128         3.543          4.105
                4      3.922     .176         3.533          4.310
lifelikeness    1      4.506     .163         4.148          4.864
                2      4.652     .140         4.344          4.960
                3      4.329     .109         4.088          4.570
                4      4.272     .121         4.005          4.539
correctE        1       .291     .039          .204           .378
                2       .253     .043          .159           .347
                3       .170     .054          .050           .290
                4       .173     .035          .095           .251
correctM        1       .403     .047          .299           .507
                2       .443     .045          .345           .542
                3       .345     .054          .227           .464
                4       .407     .043          .313           .501
agreeE          1       .316     .003          .309           .323
                2       .271     .002          .266           .276
                3       .341     .003          .335           .347
                4       .402     .002          .398           .407
agreeM          1       .412     .002          .407           .417
                2       .396     .003          .390           .402
                3       .381     .002          .376           .386
                4       .428     .004          .420           .436
Appendix K: Puppetry experience, ANOVA

This section presents data used to investigate puppetry experience in Studies 3 and 4. Results are presented by emotion (angry, disgusted, afraid, happy, sad, surprised).
Tests of Within-Subjects Contrasts (emotion = angry)
Source: puppeteer (Linear contrast); error term Error(puppeteer); SS = Type III Sum of Squares

Measure          SS      df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
duration        19.440   1   19.440      .       .         1.000           .           .            .000      11       .000
liking            .107   1     .107     .595    .457        .051          .595        .109         1.973      11       .179
lifelikeness      .107   1     .107     .263    .618        .023          .263        .076         4.453      11       .405
rating            .007   1     .007     .014    .908        .001          .014        .051         5.193      11       .472
pointsOverMean    .091   1     .091     .146    .710        .013          .146        .064         6.879      11       .625

a. Computed using alpha = .05
Tests of Within-Subjects Contrasts (emotion = disgusted)
Source: puppeteer (Linear contrast); error term Error(puppeteer); SS = Type III Sum of Squares

Measure          SS      df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
duration         2.160   1    2.160      .       .         1.000           .           .            .000      11       .000
liking            .042   1     .042     .107    .750        .010          .107        .060         4.298      11       .391
lifelikeness      .202   1     .202     .412    .534        .036          .412        .090         5.378      11       .489
rating            .807   1     .807    2.136    .172        .163         2.136        .267         4.153      11       .378
pointsOverMean   1.009   1    1.009    3.254    .099        .228         3.254        .377         3.410      11       .310

a. Computed using alpha = .05
Tests of Within-Subjects Contrasts (emotion = afraid)
Source: puppeteer (Linear contrast); error term Error(puppeteer); SS = Type III Sum of Squares

Measure          SS      df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
duration          .240   1     .240      .       .         1.000           .           .            .000      11       .000
liking            .000   1     .000     .000   1.000        .000          .000        .050         7.040      11       .640
lifelikeness      .002   1     .002     .002    .964        .000          .002        .050         8.618      11       .783
rating            .042   1     .042     .075    .790        .007          .075        .057         6.138      11       .558
pointsOverMean   1.307   1    1.307    5.086    .045        .316         5.086        .539         2.826      11       .257

a. Computed using alpha = .05
Tests of Within-Subjects Contrasts (emotion = happy)
Source: puppeteer (Linear contrast); error term Error(puppeteer); SS = Type III Sum of Squares

Measure          SS      df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
duration          .240   1     .240      .       .         1.000           .           .            .000      11       .000
liking            .002   1     .002     .004    .948        .000          .004        .050         4.178      11       .380
lifelikeness      .602   1     .602    2.931    .115        .210         2.931        .346         2.258      11       .205
rating            .107   1     .107     .248    .628        .022          .248        .074         4.733      11       .430
pointsOverMean    .187   1     .187     .250    .627        .022          .250        .074         8.234      11       .749

a. Computed using alpha = .05
Tests of Within-Subjects Contrasts (emotion = sad)
Source: puppeteer (Linear contrast); error term Error(puppeteer); SS = Type III Sum of Squares

Measure          SS      df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
duration         6.000   1    6.000      .       .         1.000           .           .            .000      11       .000
liking            .271   1     .271     .949    .351        .079          .949        .145         3.140      11       .285
lifelikeness      .000   1     .000     .001    .973        .000          .001        .050         3.905      11       .355
rating            .240   1     .240     .725    .413        .062          .725        .122         3.640      11       .331
pointsOverMean    .202   1     .202     .350    .566        .031          .350        .084         6.338      11       .576

a. Computed using alpha = .05
Tests of Within-Subjects Contrasts (emotion = surprised)
Source: puppeteer (Linear contrast); error term Error(puppeteer); SS = Type III Sum of Squares

Measure          SS      df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
duration         3.840   1    3.840      .       .         1.000           .           .            .000      11       .000
liking            .042   1     .042     .103    .754        .009          .103        .060         4.458      11       .405
lifelikeness      .135   1     .135     .265    .617        .024          .265        .076         5.605      11       .510
rating            .027   1     .027     .056    .818        .005          .056        .055         5.253      11       .478
pointsOverMean    .002   1     .002     .003    .958        .000          .003        .050         6.455      11       .587

a. Computed using alpha = .05
Appendix L: Emotional valence, factor analysis, ANOVA and discriminant analysis

This section presents data used to investigate emotion ratings in Studies 3 and 4.

Factor analysis for novice and puppeteer groups
Rotated Component Matrix (a)

Item           Component 1   Component 2   Component 3
disgusted         .809
angry             .762
afraid            .745
sad               .544          (see note)
lifelikeness                     .921
liking                           .906
surprised                                      .851
happy                                          .813

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. Small loadings are suppressed in the display. sad also shows a cross-loading of -.454; its component column is illegible in the source.
a. Rotation converged in 5 iterations.

Total Variance Explained (Rotation Sums of Squared Loadings)

Component   Total   % of Variance   Cumulative %
1           2.134      26.680          26.680
2           1.703      21.292          47.973
3           1.658      20.728          68.701

Extraction Method: Principal Component Analysis.
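Each "% of Variance" entry above is the component's rotation sum of squared loadings divided by the number of rated items (eight), times 100; the small discrepancies against the printed table are rounding in the reported sums. A minimal arithmetic check:

```python
import numpy as np

# Rotation sums of squared loadings from the Total Variance Explained
# table above; the scale has 8 rated items.
loading_sums = np.array([2.134, 1.703, 1.658])
pct_variance = 100 * loading_sums / 8   # ~26.68, ~21.29, ~20.73
cumulative = np.cumsum(pct_variance)    # ~68.7 cumulative, as reported
```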
Two-way (valence x emotion) repeated measures ANOVA for novice group

Estimated Marginal Means: valence (a)

Measure          valence   Mean   Std. Error   95% CI Lower   95% CI Upper
liking              1      4.413     .113         4.165          4.660
                    2      4.500     .187         4.088          4.912
lifelikeness        1      4.250     .199         3.811          4.689
                    2      4.219     .240         3.689          4.748
rating              1      3.388     .202         2.942          3.833
                    2      3.300     .270         2.706          3.894
pointsOverMean      1       .022     .064         -.118           .162
                    2       .043     .157         -.302           .389
anger               1      3.575     .200         3.135          4.015
                    2      3.525     .259         2.955          4.095
disgust             1      3.379     .233         2.866          3.893
                    2      3.450     .202         3.006          3.894
fear                1      3.254     .278         2.642          3.866
                    2      2.934     .240         2.406          3.463
happy               1      3.288     .194         2.860          3.715
                    2      3.308     .176         2.920          3.697
sadness             1      3.276     .231         2.768          3.784
                    2      3.325     .286         2.695          3.955
surprise            1      3.475     .240         2.947          4.003
                    2      3.100     .297         2.446          3.754
posValence          1      3.377     .202         2.933          3.822
                    2      3.200     .213         2.732          3.668
negValence          1      3.372     .222         2.884          3.861
                    2      3.308     .205         2.856          3.759
valenceRating       1      3.372     .222         2.884          3.861
                    2      3.200     .213         2.732          3.668

a. puppeteer (1=yes) = 0
Note: valence value of 1 is negative emotion, 2 is positive emotion
Tests of Within-Subjects Contrasts (novice group) (b)
Source: valence (Linear contrast); error term Error(valence); SS = Type III Sum of Squares

Measure          SS     df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
liking          .046    1    .046      .765    .400        .065          .765       .126          .660      11       .060
lifelikeness    .006    1    .006      .085    .776        .008          .085       .058          .770      11       .070
rating          .046    1    .046      .376    .552        .033          .376       .087         1.345      11       .122
pointsOverMean  .003    1    .003      .018    .895        .002          .018       .052         1.695      11       .154
anger           .015    1    .015      .128    .727        .011          .128       .062         1.290      11       .117
disgust         .030    1    .030      .195    .667        .017          .195       .069         1.696      11       .154
fear            .614    1    .614    11.826    .006        .518        11.826      .878          .571      11       .052
happy           .003    1    .003      .011    .920        .001          .011       .051         2.686      11       .244
sadness         .014    1    .014      .042    .841        .004          .042       .054         3.717      11       .338
surprise        .843    1    .843     5.581    .038        .337         5.581      .577         1.661      11       .151
posValence      .188    1    .188     1.224    .292        .100         1.224      .173         1.690      11       .154
negValence      .025    1    .025      .668    .431        .057          .668      .116          .412      11       .037
valenceRating   .178    1    .178     1.796    .207        .140         1.796      .232         1.090      11       .099

a. Computed using alpha = .05
b. puppeteer (1=yes) = 0
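In these contrast tables, F is the effect mean square over the error mean square, and partial eta squared is SS_effect / (SS_effect + SS_error). Recomputing the one reliable novice contrast (fear) from the sums of squares above reproduces the reported statistics to rounding:

```python
# Fear contrast, novice group (values from the table above).
ss_effect, df_effect = 0.614, 1
ss_error, df_error = 0.571, 11

# F = MS_effect / MS_error
f_stat = (ss_effect / df_effect) / (ss_error / df_error)   # ~11.83 (table: 11.826)

# Partial eta squared = SS_effect / (SS_effect + SS_error)
partial_eta_sq = ss_effect / (ss_effect + ss_error)        # ~.518 (table: .518)
```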
Two-way (valence x emotion) repeated measures ANOVA for puppeteer group

Estimated Marginal Means: valence (a)

Measure          valence   Mean   Std. Error   95% CI Lower   95% CI Upper
liking              1      4.518     .194         4.092          4.945
                    2      4.550     .155         4.208          4.892
lifelikeness        1      4.240     .273         3.639          4.841
                    2      4.447     .222         3.958          4.937
rating              1      3.400     .254         2.842          3.958
                    2      3.200     .244         2.662          3.738
pointsOverMean      1       .164     .140         -.145           .473
                    2      -.053     .181         -.451           .344
anger               1      3.500     .257         2.935          4.065
                    2      3.433     .269         2.841          4.025
disgust             1      3.439     .251         2.886          3.992
                    2      3.400     .245         2.861          3.939
fear                1      3.308     .299         2.649          3.967
                    2      3.108     .247         2.566          3.651
happy               1      2.941     .193         2.515          3.367
                    2      3.083     .212         2.617          3.549
sadness             1      3.221     .246         2.680          3.762
                    2      3.217     .191         2.795          3.638
surprise            1      3.196     .266         2.609          3.782
                    2      3.225     .260         2.653          3.797
posValence          1      3.067     .215         2.594          3.540
                    2      3.154     .218         2.674          3.634
negValence          1      3.368     .246         2.826          3.909
                    2      3.290     .207         2.834          3.746
valenceRating       1      3.368     .246         2.826          3.909
                    2      3.154     .218         2.674          3.634

a. puppeteer (1=yes) = 1
Note: valence value of 1 is negative emotion, 2 is positive emotion
Tests of Within-Subjects Contrasts (puppeteer group) (b)
Source: valence (Linear contrast); error term Error(valence); SS = Type III Sum of Squares

Measure          SS     df    MS        F      Sig.   Partial Eta Sq.  Noncent.  Obs. Power(a)  Error SS  Error df  Error MS
liking          .006    1    .006      .045    .836        .004          .045       .054         1.465      11       .133
lifelikeness    .258    1    .258     2.099    .175        .160         2.099       .263         1.351      11       .123
rating          .240    1    .240     1.449    .254        .116         1.449       .196         1.823      11       .166
pointsOverMean  .284    1    .284     1.470    .251        .118         1.470       .198         2.124      11       .193
anger           .027    1    .027      .147    .708        .013          .147       .064         1.991      11       .181
disgust         .009    1    .009      .069    .798        .006          .069       .057         1.441      11       .131
fear            .240    1    .240     1.067    .324        .088         1.067       .157         2.475      11       .225
happy           .122    1    .122      .378    .551        .033          .378       .087         3.534      11       .321
sadness         .000    1    .000      .000    .985        .000          .000       .050         3.176      11       .289
surprise        .005    1    .005      .028    .870        .003          .028       .053         2.011      11       .183
posValence      .046    1    .046      .256    .623        .023          .256       .075         1.971      11       .179
negValence      .037    1    .037      .342    .570        .030          .342       .083         1.177      11       .107
valenceRating   .274    1    .274     1.691    .220        .133         1.691       .221         1.780      11       .162

a. Computed using alpha = .05
b. puppeteer (1=yes) = 1
Discriminant analysis for valence, novice group

Group Statistics (a)

valence   Variable       Mean   Std. Deviation   Valid N (listwise)
1.00      liking         4.42       1.241             239
          lifelikeness   4.25       1.488             239
          angry          3.57       1.676             239
          disgusted      3.38       1.628             239
          afraid         3.24       1.730             239
          happy          3.29       1.837             239
          sad            3.28       1.753             239
          surprised      3.47       1.876             239
2.00      liking         4.49       1.243             117
          lifelikeness   4.21       1.449             117
          angry          3.51       1.735             117
          disgusted      3.41       1.641             117
          afraid         2.90       1.647             117
          happy          3.27       1.851             117
          sad            3.32       1.865             117
          surprised      3.07       1.731             117
Total     liking         4.44       1.240             356
          lifelikeness   4.24       1.473             356
          angry          3.55       1.693             356
          disgusted      3.39       1.630             356
          afraid         3.13       1.708             356
          happy          3.29       1.839             356
          sad            3.29       1.788             356
          surprised      3.34       1.837             356

Unweighted and weighted N are equal.
a. puppeteer (1=yes) = 0
Note: valence value of 1 is negative emotion, 2 is positive emotion
Tests of Equality of Group Means (a)

Variable       Wilks' Lambda      F     df1   df2   Sig.
liking             .999          .213    1    354   .645
lifelikeness      1.000          .050    1    354   .823
angry             1.000          .086    1    354   .769
disgusted         1.000          .019    1    354   .891
afraid             .991         3.228    1    354   .073
happy             1.000          .009    1    354   .926
sad               1.000          .039    1    354   .843
surprised          .989         3.838    1    354   .051

a. puppeteer (1=yes) = 0
Discriminant analysis for valence, puppeteer group

Group Statistics (a)

valence   Variable       Mean   Std. Deviation   Valid N (listwise)
1.00      liking         4.51       1.345             237
          lifelikeness   4.23       1.618             237
          angry          3.51       1.789             237
          disgusted      3.45       1.740             237
          afraid         3.29       1.784             237
          happy          2.96       1.824             237
          sad            3.22       1.902             237
          surprised      3.21       1.904             237
2.00      liking         4.55       1.320             119
          lifelikeness   4.45       1.577             119
          angry          3.45       1.867             119
          disgusted      3.42       1.764             119
          afraid         3.08       1.678             119
          happy          3.10       1.911             119
          sad            3.24       1.858             119
          surprised      3.24       1.909             119
Total     liking         4.52       1.335             356
          lifelikeness   4.30       1.605             356
          angry          3.49       1.813             356
          disgusted      3.44       1.746             356
          afraid         3.22       1.750             356
          happy          3.01       1.852             356
          sad            3.22       1.885             356
          surprised      3.22       1.903             356

Unweighted and weighted N are equal.
a. puppeteer (1=yes) = 1
Note: valence value of 1 is negative emotion, 2 is positive emotion
Tests of Equality of Group Means (a)

Variable       Wilks' Lambda      F     df1   df2   Sig.
liking            1.000          .056    1    354   .812
lifelikeness       .996         1.457    1    354   .228
angry             1.000          .077    1    354   .781
disgusted         1.000          .025    1    354   .873
afraid             .997         1.202    1    354   .274
happy              .999          .472    1    354   .493
sad               1.000          .009    1    354   .925
surprised         1.000          .023    1    354   .879

a. puppeteer (1=yes) = 1
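With only two valence groups (df1 = 1), Wilks' Lambda and F in these equality-of-means tables are linked by Lambda = df2 / (df2 + F). Recomputing Lambda from the reported F values reproduces the table to three decimals:

```python
# Two-group discriminant case (df1 = 1): Wilks' Lambda = df2 / (df2 + F).
def wilks_lambda(f_stat, df2):
    return df2 / (df2 + f_stat)

# Checked against the puppeteer-group table above.
lam_lifelike = wilks_lambda(1.457, 354)   # table reports .996
lam_afraid = wilks_lambda(1.202, 354)     # table reports .997
```

Equivalently, F = ((1 - Lambda) / Lambda) * (df2 / df1), which is why the variables with Lambda printed as 1.000 all have F values near zero.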