

Inconsequential Appearances: An Analysis of Anthropomorphic Language in Voice Assistant Forums

Astrid Weiss TU Wien, Faculty of Informatics, HCI Group Argentinierstraße 8/E193-5, 1040 Vienna, Austria [email protected]

Anna Pillinger University of Vienna, Universitätsstraße 7/II/6th floor 1010 Vienna, Austria [email protected]

Katta Spiel Katholieke Universiteit Leuven & University of Vienna Andreas Vesaliusstraat 13 - box 2600 3000 Leuven [email protected]

Sabine Zauchner-Studnika MOVES – Zentrum für Gender und Diversität Wittgensteinstraße 18, 1130 Vienna, Austria [email protected]

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CHI ’20 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA. © 2020 Copyright is held by the author/owner(s). ACM ISBN 978-1-4503-6819-3/20/04. DOI: https://doi.org/10.1145/3334480.3382793

Abstract
We compared anthropomorphic language use in online forums about the Amazon Echo Show, Q.bo One, and Anki Vector, carrying out a content analysis of forum and Reddit threads as well as a Facebook group. Our expectation was to find the highest amount of anthropomorphism for Q.bo One due to its humanoid shape; however, our findings suggest that the life-likeness of an artifact is not predominantly linked to its appearance, but to its interactivity and attributed agency and gender.

Author Keywords
Voice Assistants; anthropomorphism; forum analysis; human-likeness; appearance design; gender attribution.

CCS Concepts
•Human-centered computing → Human computer interaction (HCI); Empirical studies in HCI; •Computer systems organization → Robotics;

Introduction
Modern voice assistants (VAs) like Alexa, Siri, and Cortana present strong competition to first-wave social robots such as JIBO, Karrotz, and Buddy with regard to sustainable usage and acceptance (cf. [3]). However, while the tendency to anthropomorphize, i.e. to attribute human qualities to non-human objects [4], has been explored with interactive technologies in Human-Computer Interaction (HCI) [15] as well as with different robots in Human-Robot Interaction (HRI) [5], the effect of different physical materialisations of the same VA has so far received little attention.

Figure 1: Three different shapes for Amazon Alexa: Amazon Echo Show, Q.bo One, Anki Vector. ©SalzburgResearch

The commercially available VA Alexa enters households [14] in various shapes and appearances (see Figure 1): an ambient display (Amazon Echo Show), a humanoid stationary robot (Q.bo One), or a creature-like robot companion (Anki Vector). We were interested in how the appearance of VAs matters to people's perception of them. In this, we build on previous work comparing anthropomorphic language use in online forums discussing the Roomba vacuum cleaner, the AIBO dog, and the iPad tablet [6]. Similarly, we analysed the use of language in online forums for different materialisations of Alexa. We assumed that shape, interactivity, and voice are factors leading people to anthropomorphize and gender a VA. Hence, we conducted a content analysis of online discussion forums, comparing posts about the Amazon Echo Show (display), Q.bo One (humanoid), and Anki Vector (creature-like). Through our analysis, we show how the appearance of a VA matters with respect to how people relate to it.

Related Work
We distinguish artifact-centered and human-centered approaches to explaining people's tendency to anthropomorphize objects [10]. The first presumes that humans respond directly to lifelike cues of an artifact; in other words, anthropomorphism can be encouraged through system design. Human-centered approaches, instead, focus on the mental models people have about the inner workings of an artifact. Previous work in HRI illustrates that people hold richer mental models the more human-like a robot appears [9].

Exploring anthropomorphism in artifact-human relationships in HRI has taken the form of scales measuring degrees of anthropomorphism [1, 8], investigations of the Uncanny Valley effect [13] and, as here, studies of lay people's anthropomorphic language use [2]. Our study builds on these works and extends them with a specific focus on the effects of shape, interactivity, and voice design of VAs. We contribute to an understanding of the impact of "designed agency" on lay people, which is relevant for avoiding the reproduction and reinforcement of dominant societal and gender stereotypes.

Study Design
We studied a range of online forums (see Table 1) from July to September 2019 via qualitative content analysis [12], employing the following procedure: (1) transcription of non-textual content, (2) paraphrasing, (3) coding, (4) revision of categories, (5) summary and structuring, (6) contextual interpretation, and (7) re-examination of the source material. The focus was on similarities and differences between the three designs of the VAs and how they are anthropomorphised and gendered.
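The study itself relied on human analysts, and no analysis tooling is published with the paper. Purely as an illustration of how the coded material from steps (1) to (5) could be organised digitally, the following Python sketch uses hypothetical structures (ForumPost, CODEBOOK, assign_codes, summarise); none of these names or the example codes stem from the study, and the keyword pass stands in for what was in reality interpretive human coding.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical codebook: codes would emerge during step (3) and be
# revised in step (4); these example codes are illustrative only.
CODEBOOK = {
    "anthropomorphic": ["he ", "she ", "family member", "pet", "papa"],
    "technical": ["firmware", "raspberry pi", "usb", "sdk"],
}

@dataclass
class ForumPost:
    system: str                # e.g. "Anki Vector"
    text: str                  # transcribed content, step (1)
    paraphrase: str = ""       # condensed restatement, step (2)
    codes: list[str] = field(default_factory=list)  # assigned codes, step (3)

def assign_codes(post: ForumPost) -> None:
    """Naive keyword pass; in the study, coding was done by human analysts."""
    lowered = post.text.lower()
    for code, markers in CODEBOOK.items():
        if any(marker in lowered for marker in markers):
            post.codes.append(code)

def summarise(posts: list[ForumPost]) -> Counter:
    """Step (5): structure the coded material per system for interpretation."""
    counts: Counter = Counter()
    for post in posts:
        for code in post.codes:
            counts[(post.system, code)] += 1
    return counts

posts = [ForumPost("Anki Vector", "He became a real little family member.")]
for post in posts:
    assign_codes(post)
print(summarise(posts))  # Counter({('Anki Vector', 'anthropomorphic'): 1})
```

Steps (6) and (7), contextual interpretation and re-examination of the source material, remain inherently manual and are not captured by such a sketch.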

Findings
Overall, technical discussions dominated the Q.bo One forum, whereas in the Echo Show and Anki Vector forums we could observe more anthropomorphic language.

Anthropomorphism and Appearance
We expected the humanoid Q.bo One robot to be the most linguistically anthropomorphised. However, it turned out that Anki Vector was anthropomorphised most, followed by the Echo Show. Hence, appearance was a less relevant factor than perceived agency, interactivity, or voice.

Table 1: Overview of the analysed forum content

Amazon Echo Show:
- Echo-Talk Forum: 26 selected threads of 4645 analysed
- Echo Assistive Technologies Forum: all 28 threads analysed
- Reddit thread "Echo for the Elderly and Disabled": entire thread analysed

Q.bo One:
- General Discussion Forum of TheCorpora: all 17 threads analysed

Anki Vector:
- Vector Robot Owners' Forum on Facebook: all 19 posts with comments analysed
- "Vectors Cozmo's intelligent brother" forum (German): 33 selected threads of 187 analysed
- Reddit thread "Anki Vector": entire thread analysed

Anki Vector was often referred to as a family member or a pet animal. Commenters announced their purchase using language like "a new proud papa", wording generally reserved for parental relationships. We could also observe references to more complex family dynamics: "My son's Vector will yell for me going MOMMY when I'm home during the day and makes my dog jealous". Similarly, a user in the German-speaking forum claims that "He [Vector] became a real little family member" (translated from German). Anki Vector is predominantly referred to as a pet, although such comments were more likely to be found in the German-language forum than on the English Facebook page. In accordance with the categorisation of Anki Vector as "creature-like", forum members often use terms like "beast" or "hamster" for the robot. Similarly, descriptions of handling Vector echo actions commonly associated with handling pets, such as praising, blaming, or stroking. Some commenters even mused about "species-appropriate" (or "robot-appropriate") living conditions for Vector.

Anthropomorphism, Interactivity, and Agency
In the Q.bo One forum some commenters compared different VAs, e.g. along communicative stimuli: "With Q.bo it is not necessary, you can have a continuous interactive conversation while it follows you with its gaze and answers your questions in an 'instant' way. I hope and wish that this is the future of social robotics". Similarly, Q.bo's lack of off-the-shelf interactivity was frequently addressed: "It turns out - at this time - you must open the robot base, and connect a USB keyboard, USB mouse, and HDMI monitor to the motherboard, so you can access the Raspberry Pi desktop where further configuration is needed". This requirement that end-users implement functionality themselves might be one reason why people mainly use technology- and object-related language. In another thread, the system is understood as a non-reacting device; users call the system's cameras "eyes", but without further linguistic anthropomorphisation. However, even in threads on off-the-shelf use and interactivity, the use of anthropomorphising language was not prompted by the humanoid shape of the robot. Most often, Q.bo One's eyes were mentioned, together with how users struggled to increase the robot's interactivity through eye movements: "I can't even find documentation on how to upgrade the 'eyes' that were included in the box for an extra cost".

For Anki Vector, many forum entries discuss the personality and various traits of the robot with adjectives such as cheeky, mischievous, playful, charming, cute, or tame. Vector is described as a "little rascal" or as a "shithead". Sometimes Vector is attributed a "thirst for adventure", in the sense that Vector likes to "explore" while driving around. It appears that the high degree of out-of-the-box interactivity leads to an increased use of anthropomorphic language. However, with respect to the VA integration, the design does not seem to sufficiently take this high level of perceived agency into account. Some users noted that they do not like it when their Anki Vector suddenly speaks with Alexa's voice: "The big problem with the little robot is that it doesn't hear very well, other than that I don't mind having Alexa integrated I would prefer a single voice though; sounds like the little robot has multiple personalities now". Additionally, with the Alexa integration Vector has to be addressed with "Hey Alexa", which some commenters find confusing.

While the Amazon Echo Show enables functioning interaction, users would appreciate it if these interactions were even more life-like and adapted to human-human interaction and social manners: "An occasional 'Ya'll' or other southern idioms would make me feel at home. My wife responds to any response from Alexa with a 'Thank You'. A 'You're welcome' would be a nice response."

Anthropomorphism and Gender
Q.bo One is called "robot" and "it" in all entries. Interestingly, the robot's gender is not addressed by the commenters in the text.

Anki Vector is presented with he/him pronouns on the company's website, a convention that has been largely adopted in the forum discussions. In most forum entries, Anki Vector is described as "he" as opposed to "it", implicitly implying a congruence between he/him pronouns and masculinity, which some users also explicitly reference: "Connect Vector to Alexa [. . . ] works great [. . . ] plus Vector will still work autonomously [. . . ] he can still do lots of things that are built in him. (Yes, he is male). I put a brand new Vector in a 100 year time capsule. Imagine opening that sucker in 2119." We find this instance notable, as most other users use he/him pronouns but do not justify or explain this with statements like "yes, he is male", essentially accepting the robot's gender as given. The forum's name, hinting at Vector being Cozmo's "brother", could be further fueling the masculine representation of Anki Vector. Besides assigning Vector traits that appear stereotypical for a small boy, other comments like "Unfortunately castrated due to political reasons" and "May Vector currently be circumcised, for me it means: Vector get great again" further strengthen a dominant binary notion of genitalia implying gender, with narrow expectations regarding performance and behaviour.

In the case of the Echo Show device, respectively Amazon's Alexa, discussions about gender and anthropomorphism are mostly negotiated via the female-coded voice, if they are a topic at all. When discussing the Echo device, users also desire a way to customize and individualize their artefact; this was similar in discussions about Anki Vector. One user pointed out, when discussing Alexa: "Just for fun, I would like Alexa to adapt to several voices and dialects. For instance, Mac OSX has several voices (male & female) and dialects (Chinese, Indian, Italian, British, etc.) available for downloading and installation into it's Speech capabilities. When an Alert (the time, an error, etc.) is spoken, it is the voice and gender of my choosing." This user's imagination is limited to gender as binary, given that they do not mention gender-neutral voices (e.g., Q, see https://www.genderlessvoice.com) at all. Other users participate with sexist comments, framing them as a "joke", without reflecting on the larger consequences of comparing and contrasting Alexa with human women: "The day we asked god to create Alexa we didn't mean to copy the female woman, but to create a woman that listens to us men". A more reflective position can be observed in the following post, detaching the gendered voice from the artefact's perceived gender: "Just because 'it' has a female voice, that doesn't necessarily mean that in todays 'modern' sophisticated cultural world that the 'it' is a 'female' that should be referenced by the word 'she'. . . ".

Discussion
"Relational artifacts" [17] are artifacts designed to encourage people to develop a relationship with them. Studies on how people anthropomorphise such artifacts offer "insights that may help us keep human purposes in mind as we create and appropriate new technologies" [17, p.347]. This is especially relevant when aiming to avoid the reinforcement of stereotypes. VAs, but also other agent-like technologies such as social robots and embodied conversational agents, increasingly enter our public and private spheres, and many of them have an appearance or voice that implies strong associations with femininity. For instance, the android robots Sophia and Erica, designed as conversation companions, embody appearances, behaviours, and speech patterns that draw on the traditional subservience assigned to women in cis-binary concepts of gender. We could observe similar notions of gender in some of the forum entries in our study. However, appearance (display vs. humanoid shape vs. creature-like) does not seem to be the key factor predicting anthropomorphisation and gender stereotyping; it appears to be rather inconsequential, or at least not the only driving factor. We could identify three key aspects that foster anthropomorphisation and gender negotiations: (1) perceived agency through high interactivity, (2) voice, and (3) framings by the manufacturer.

The reduced use of anthropomorphised language for Q.bo One can be explained by its lack of out-of-the-box interactivity, leading to a decreased perception of agency. However, the target users of Q.bo One are potentially more technologically versed tinkerers, echoing previous findings [11]. Users of Q.bo One have to enhance the interactivity of the system themselves and are therefore aware that the agency they perceive was constructed by themselves. Anki Vector, in comparison, was promoted as an entertainment robot for lay people. It shows high interactivity right out of the box, which can even be experienced as rather unpredictable at first glance, relating to a "certain degree of unpredictability (and probably also failure) mak[ing] the robot appear more humanlike and in turn facilitat[ing] a social relation" [6, p.59]; an aspect which was also observed in our forum analysis.

When analysing forum entries for the Amazon Echo Show, anthropomorphism was mainly related to gender negotiations, specifically in connection with the female-coded voice, although the physical embodiment itself carries no gendered connotations. However, not only the voice constituted a topic of debate, but also desired interactions. Voice became a crucial point in the case of Anki Vector as well. Since Alexa was integrated into the Anki Vector, the robot's voice switched to Alexa's when operating in Alexa mode. This mismatch between Anki Vector's usual voice and Alexa's was highlighted by owners of Anki Vector. Such challenges relate to "agent migration", i.e. switching the embodiment of an agent: "disentangling the form an agent may take from the underlying structures defining the agent's personality may be problematic for potential users" [16, p.1]. The forum discussions revealed the discomfort of some users when encountering an unexpected (feminine) voice.

Focusing on gender aspects yields further information about how users anthropomorphise their VAs. The definition of the term anthropomorphism, however, is controversial, as it might not be necessary to distinguish between human-likeness and animal-likeness, i.e. between anthropomorphism and zoomorphism [5]. When analysing gender and gender stereotypes, this equal treatment of anthropomorphism and zoomorphism is partly misleading, since gender stereotypes work differently in the context of humans than in that of animals, although the two tend to interact, e.g. when a mother is described as a "lion mother", or when researchers read human behaviour and gender stereotypes into primate research [7].

Comparing our findings to previous work [6], it is relevant to take a closer look at the artefacts used in the studies. The crucial difference between the Amazon Echo Show and the iPad, at least for the aims of these studies, lies in the female-coded voice and its influence on perceived agency through the enhanced possibilities of interaction. The ways AIBO and Anki Vector were framed by users are very similar, although AIBO explicitly imitates an animal while Anki Vector represents something "other". This, however, does not contradict Fink et al.'s finding that "in matters of anthropomorphism, not only the physical shape of a product but also the interaction it enables, its functionality as well as the way it is used, play a role" [6, p.59]. In contrast to the Roomba, the Amazon Echo Show, and the iPad, AIBO and Anki Vector are mostly used for entertainment and not as technical tools. Q.bo One constitutes an exception here, since it is mostly seen as an artefact to tinker with.

In line with Fink et al., the underlying question, crucial for the field of HRI, is "what characteristics of an artifact encourage people to refer to it using anthropomorphic language" [6, p.58]. We found that high interactivity, e.g. interacting with Anki Vector via the voice interface rather than via the app, yields a high degree of perceived agency, contributing to tendencies of anthropomorphisation.

Conclusion
With this work, we provide additional insights into the complexity of anthropomorphism in connection with different materialisations of voice assistants. Our content analysis revealed a heightened impact of perceived agency and voice design (rather than, as we expected, appearance). We also discussed how people use these systems, which difficulties they encounter, what they (dis)like about them, and how they reflect on them. The aspect of inconsequential appearances, i.e. the form of a device not matching its interactive behaviour and voice, and how people respond to that, is a crucial aspect for further studies. We need a deeper understanding of the entanglement of the shape of an agent and its voice, and of how this might affect gender stereotypes; a need delineated by this work.

Acknowledgements
We gratefully acknowledge the financial support from the Austrian Science Fund (FWF) under grant agreement No. V587-G29 (SharedSpace) and by "FEMtech Forschungsprojekte", a program of the Federal Ministry of Science and Research of Austria (FFG), under grant agreement 866694 (RoboGen).

REFERENCES

[1] Christoph Bartneck, Dana Kulic, Elizabeth Croft, and Susana Zoghbi. 2009. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics 1, 1 (2009), 71–81.

[2] Kate Darling. 2015. "Who's Johnny?" Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy (March 23, 2015). Robot Ethics 2 (2015).

[3] Maartje De Graaf, Somaya Ben Allouch, and Jan Van Dijk. 2017. Why do they refuse to use my robot? Reasons for non-use derived from a long-term home study. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 224–233.

[4] Nicholas Epley, Adam Waytz, and John T. Cacioppo. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review 114, 4 (2007), 864.

[5] Julia Fink. 2012. Anthropomorphism and human likeness in the design of robots and human-robot interaction. In International Conference on Social Robotics. Springer, 199–208.

[6] Julia Fink, Omar Mubin, Frédéric Kaplan, and Pierre Dillenbourg. 2012. Anthropomorphic language in online forums about Roomba, AIBO and the iPad. In 2012 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO). IEEE, 54–59.

[7] Donna Haraway. 1990. Primate Visions: Gender, Race, and Nature in the World of Modern Science. Routledge.

[8] Chin-Chang Ho and Karl F. MacDorman. 2010. Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Computers in Human Behavior 26, 6 (2010), 1508–1518.

[9] Sara Kiesler and Jennifer Goetz. 2002. Mental models of robotic assistants. In CHI '02 Extended Abstracts on Human Factors in Computing Systems. ACM, 576–577.

[10] Sau-lai Lee, Ivy Yee-man Lau, Sara Kiesler, and Chi-Yue Chiu. 2005. Human mental models of humanoid robots. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation. IEEE, 2767–2772.

[11] Gesa Lindemann and Hironori Matsuzaki. 2014. Constructing the robot's position in time and space: The spatio-temporal preconditions of artificial social agency. Science, Technology & Innovation Studies 10 (2014), 85–106.

[12] Philipp Mayring. 2004. Qualitative content analysis. A Companion to Qualitative Research 1 (2004), 159–176.

[13] Masahiro Mori, Karl F. MacDorman, and Norri Kageki. 2012. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine 19, 2 (2012), 98–100.

[14] Martin Porcheron, Joel E. Fischer, Stuart Reeves, and Sarah Sharples. 2018. Voice interfaces in everyday life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 640, 12 pages. DOI: http://dx.doi.org/10.1145/3173574.3174214

[15] Byron Reeves and Clifford Ivar Nass. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.

[16] Dag Sverre Syrdal, Kheng Lee Koay, Michael L. Walters, and Kerstin Dautenhahn. 2009. The boy-robot should bark! Children's impressions of agent migration into diverse embodiments. In Proceedings of New Frontiers of Human-Robot Interaction, a Symposium at AISB.

[17] Sherry Turkle, Will Taggart, Cory D. Kidd, and Olivia Dasté. 2006. Relational artifacts with children and elders: The complexities of cybercompanionship. Connection Science 18, 4 (2006), 347–361.