
A Design Space for Conversational In-Vehicle Information Systems

Michael Braun, BMW Group, 85748 Garching, Germany, [email protected]

Bastian Pfleging, LMU Munich, 80333 München, Germany, bastian.pfleging@ifi.lmu.de

Nora Broy, BMW Group, 85748 Garching, Germany, [email protected]

Florian Alt, LMU Munich, 80333 München, Germany, florian.alt@ifi.lmu.de

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Copyright held by the owner/author(s).
MobileHCI ’17, September 4–7, 2017, Vienna, Austria
ACM 978-1-4503-5075-4/17/09.
https://doi.org/10.1145/3098279.3122122

Abstract
In this paper we chart a design space for conversational in-vehicle information systems (IVIS). Our work is motivated by the proliferation of speech interfaces in our everyday life, which have already found their way into consumer electronics and will most likely become pervasive in future cars.

Our design space is based on expert interviews as well as a comprehensive literature review. We present five core dimensions – assistant, position, dialog design, system capabilities, and driver state – and show in an initial study how these dimensions affect the design of a prototypical IVIS.

Design spaces have paved the way for much of the work done in HCI, including but not limited to areas such as input and pointing devices, smart phones, displays, and automotive UIs. In a similar way, we expect our design space to aid practitioners in designing future IVIS but also researchers as they explore this young area of research.

Author Keywords
Automotive User Interfaces; Speech Interaction; Natural Language Interfaces; Design Space.

ACM Classification Keywords
H.5.2 [Information interfaces and presentation]: User Interfaces


Introduction
Speech interfaces have found their way into our everyday life. We ask our smart phones to call our friends, we talk to our laptop computers to enter search queries, and we tell our TV which movie to load from the streaming platform of our choice. Such interfaces are in most cases limited to rather simple, user-initiated requests and do not exploit the opportunities of conversations, where dialogs can take more complex forms. We see particular potential for such interfaces in cars, where it is important to minimize driver distraction, which usually occurs due to the need for directing visual attention towards an in-car display. With voice, the user could simply request information such as the weather forecast, the estimated time of arrival, new messages, or upcoming appointments. Such a system could also initiate a dialog to keep drivers alert in situations where they get bored due to a monotonous drive [6].

Designing speech interfaces is difficult. It is not always easy to phrase a specific request in such a way that it can be easily understood by the machine. Often users may need to interrupt the current conversation due to a situation on the road that requires their attention. In this case it may be difficult to remember where the dialog was interrupted, and users may need the support of additional visualizations. Additionally, it is not clear what effect the use of such a system has on driving performance.

To understand how such a conversational IVIS can be used in a safe and efficient manner, we chart a design space based on a thorough literature review and expert interviews. We envision this design space to contribute to the field of HCI (1) by fostering research in the area, since it points at interesting challenges and opportunities, and (2) by establishing common ground for designers of IVIS regarding important decisions in the design process.

Our work is complemented by a description of how the design space can be used to create a conversational IVIS. We report on the design and implementation of the system and present the results from an initial user study.

Related Work
When designing automotive user interfaces, standards and guidelines provide valuable input on how to structure these interfaces [17] and minimize distraction [7, 8]. At the same time, design spaces provide a foundation for ideation on new combinations of modalities and devices and thereby enable possible system improvement not through regulation, but through innovation.

A design space represents an abstraction of existing and plausible point designs which can be used to create new systems through rearrangement of components [3]. First design spaces by Buxton and Card et al. focused on the classification of input devices for HCI [2, 4], and with emerging technologies, design spaces for various upcoming fields have been presented and refined over the years. For example, Card and Mackinlay published a design space for information visualization based on first point designs [3], and Chi later improved on the taxonomy [5]. Design spaces for public displays [14], input on hand-held devices [1], and general multimodal interaction [15] have equally attracted attention in their respective communities, as have publications in the realm of automotive UIs, to which we would like to contribute.

Notable publications in the automotive context include Kern and Schmidt’s design space for driver-based automotive user interfaces, which describes fundamental input and output modalities, positioning, and graphical representations for in-vehicle UIs [12]. Rümelin et al. present a classification of interaction areas for drivers and passengers and comment on the potential of collaboration to reduce workload [20].


Recently, Riener et al. focused on the compatibility of multiple in-car systems related to gesture input [19]. They show an interaction space which helps manufacturers define generally valid gesture sets without affecting other domains. Thus, they provide input for possible future norms on consistent gesture classification. Another novel design space, focusing on automotive windshield displays, is presented by Häuslschmid et al. [10]. They analyzed existing literature and reviewed patents to summarize what has been done already and give an outlook on future opportunities.

Methodology
For our work on a conversational in-vehicle information system (IVIS), we searched for mobile and automotive interfaces, speech interaction, affective and multimodal in-car UIs, and digital assistants within the repositories of digital libraries, resulting in 85 relevant publications (e.g., [9, 11, 21]). Furthermore, we examined existing smart phone and home automation assistants (Apple Siri, Google Now, Microsoft Cortana, Amazon Alexa, SoundHound Hound) regarding interaction techniques and feedback channels. Next we looked at the current landscape of speech-enabled IVIS (e.g., BMW Voice Control System) and concept studies of conversational IVIS (e.g., Nissan Pivo, Audi AIDA) and consulted design spaces from other domains for inspiration and methodological advice. We then brought our collection of items into a focus group consisting of six automotive interface designers at BMW, who identified groupings and iterated upon the arrangement following grounded theory. Our approach consisted of clustering germane items and combining closely related topics to then discuss their placement in the design space or to dismiss them if they were found unsuitable. The resulting five categories, discussed in the following section, cover most of today’s design issues of naturalistic conversational user interfaces, with a focus on automotive applications.

Figure 1: Assistant representation of the MIT SENSEable City Lab Affective Intelligent Driving Agent (AIDA) [13]

Figure 2: Status visualizations used for Microsoft Cortana – used with permission from Microsoft

Design Space
This design space is the result of a search for expressive variables for future conversational interaction systems. It reflects the current landscape of possibilities and should be extended continuously as innovations relevant to the field emerge. The following passages describe the design space and its categories.

Assistant
In a conversational IVIS, users talk to a digital assistant, which adds a further layer between user and machine. The assistant acts as a conversational partner which can mediate between driver and car in natural language. Its speech behavior plays a major role in how it is perceived and should be modeled deliberately. Like human speakers, assistants can vary in speech granularity and personality, and they can take different genders in their speech synthesis settings. These attributes affect the assistant’s personality, which could become a distinguishing feature for future markets.

Independent of the assistant’s personality, designers have to think about how to convey the system’s status to the user. This can be achieved through personification or abstraction, through graphics, lights, and more. Even omitting the display is an option. It could make sense either to express system states (e.g., listening, processing, talking) in the representation or to translate the assistant’s emotional state into a graphical interpretation, as Microsoft did with Cortana (see Figure 2).

Position
Currently, IVIS are mostly displayed on screens in the central dashboard (central information display, CID) or on head-up displays (HUD) [20] and sometimes on the instrument cluster (IC) behind the steering wheel. Other positions like on the steering wheel, in the windshield or the mirrors, or on top of the dashboard are potential locations designers can choose from. By assigning the assistant to a dedicated spot it might even be possible to establish a mental model where it is separated from the rest of the car. Figure 4 shows possible positions for a standalone assistant in the cockpit.

Figure 3: Graphical representation of the design space for conversational in-vehicle information systems

Figure 4: Plausible display positions for a conversational IVIS

Dialog Design
The first step for a dialog is an activating action, such as establishing eye contact between humans or saying a wake-up word to your smart phone. There are many options for initiating a chat with a machine, such as push-to-talk, talk-to-talk, look-to-talk, and gesture-to-talk [16]. Variations can also occur in the choice of placement. Push-to-talk can, for example, be assigned to multiple touch-sensitive areas with different domains mapped to them for quick access, or it can be incorporated into an unused space, like the far left foot area in cars with automatic transmission [12]. Ultimately, with intelligent systems the initiative to start a dialog might come from the assistant itself. The system could remark on points of interest – if the situation permits – or propose a break when it senses cues of fatigue.

Next, some form of input is needed to tell the system about the user’s intentions. Speech is a natural input modality for humans, especially in modern times where computers can understand complex sentences. Although natural language might be preferred in many scenarios, it makes sense to keep the opportunity for different speech granularities, e.g., keywords, in mind. Natural language also comes in different tonalities, which a system might want to pay attention to in conversations to detect the user’s intentions. Additionally, we often communicate in non-lexical sounds, or grunts, which can be used to identify driver states such as emotions or moods, and affirmation or declination [22].
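As a rough illustration only, the activation paths above could be unified behind a single trigger abstraction. The names and the policy in this sketch are our own assumptions and not part of any production system:

```python
from enum import Enum, auto

class Trigger(Enum):
    """Hypothetical activation paths for an in-vehicle speech dialog."""
    PUSH_TO_TALK = auto()      # button or touch-sensitive area
    TALK_TO_TALK = auto()      # wake-up word
    LOOK_TO_TALK = auto()      # gaze towards the assistant
    GESTURE_TO_TALK = auto()   # activating hand gesture
    SYSTEM_INITIATED = auto()  # assistant starts the dialog itself

def should_open_dialog(trigger: Trigger, situation_permits: bool) -> bool:
    """User-initiated triggers always open the dialog; a system-initiated
    one defers to the driving situation (e.g., suppressed in dense traffic)."""
    if trigger is Trigger.SYSTEM_INITIATED:
        return situation_permits
    return True
```

The asymmetry mirrors the text: the driver may always start a conversation, while the assistant should only take the initiative if the situation permits.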


Other input modalities such as gestures and gaze control can enhance the speech dialog. Be it hand movements, facial expressions, or general body movements, they can be utilized as input to select domains, to control settings, or as feedback channels for the system. For example, the 2017 BMW 7 Series incorporates hand gestures to switch between songs or control the volume. These modalities can be combined with speech input in conversational interfaces to create a more natural experience by, for example, confirming commands with a subtle nod or choosing which mirror to adjust by simply looking at it.

The dialog visualization is another important topic to address, especially in the automotive setting, as drivers may not be able to follow the conversation without restriction at all times. Therefore we need to think about a sensible way of displaying information without it becoming too much of a distraction to the driver. In an accompanying study we explored different ways to format and visualize the dialog: a chat metaphor found in many smart phone apps, a condensed view with the text summarized in keywords, and a combination of keywords and additional icons representing the domain in focus (cf. Figure 5). We can also support the user by displaying a history, either on demand or in some kind of simplified form like tabs or a tag cloud. Besides the visualization of the spoken input and output, graphics (e.g., a map), charts, and diagrams can convey information that is hard to express solely by speech.

Figure 5: Dialog concepts with status visualization and different text variants embedded in an IVIS

Figure 6: Exemplary system capabilities for a modern car: (1) communication with IoT devices, (2) smart home controls, (3) environment perception through sensors

System Capabilities
System capabilities will influence the way the interface is perceived by its users and therefore need to be included in the design process, even if they cannot be altered in certain settings. They are the foundation for use cases, and their integration can have substantial influence on the success of the system. Possible capabilities include multimedia, safety and navigation features, online services, car status visualizations, smart home connectivity, and all kinds of other extensions to current systems’ range of functions. We claim that the fidelity of the speech output as well as the level of the assistant’s personality should correlate with the system capabilities. For example, if the conversational IVIS is characterized by a human-like speech output and a multifaceted personality, the user would expect to access various functions through the speech UI and not only a specific subset of features.

Driver State
Understanding and controlling the driver state is important to reduce driver distraction and mental workload, and to design an appropriate user experience [6]. Besides evaluating how different interfaces affect these aspects, the driver state can also be a separate dimension of the design space. The conversational IVIS can be adapted according to the current driver state. As outlined by Coughlin et al. (based on the Yerkes-Dodson law), the driver’s performance is ideal at an intermediate level of workload and decreases both with fatigue or inattention and with active distraction or overload [6]. Thus, the goal is to keep the driver in this ideal range. When driving on an empty and tiring highway, a system-initiated dialog might help to keep the driver alert and, thus, maintain ideal driving performance.
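A minimal sketch of such an adaptation could map an estimated workload value to a dialog behavior. The function, its thresholds, and the behavior labels below are illustrative assumptions of ours, not a specification from the study:

```python
def dialog_policy(workload: float, low: float = 0.3, high: float = 0.7) -> str:
    """Map an estimated driver workload in [0, 1] to a dialog behavior.

    The thresholds are illustrative; the aim, following Coughlin et al. [6],
    is to keep the driver in an intermediate band of workload.
    """
    if workload < low:
        return "initiate"      # monotonous drive: a dialog keeps the driver alert
    if workload > high:
        return "defer"         # overload: postpone non-critical conversation
    return "respond-only"      # ideal range: react to the driver, do not interrupt
```

In a real system the workload estimate itself would come from driving context and driver-state sensing, which is exactly what makes the driver state a dimension of the design space rather than a fixed parameter.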

Application of the Design Space
To showcase the potential of the presented design space, we applied it in a user study on GUI concepts for conversational IVIS. We developed several concepts based on the dimensions assistant and dialog design in order to investigate user preferences for the display of conversation contents while driving.


Apparatus
The prototype consists of an application that can process natural language input and respond through synthesized speech. For the evaluation, we created 7 distinct variants of the prototype which differed in 3 sub-dimensions of the design space, namely dialog visualization (full text, keywords, keywords & icons, see Figure 5), assistant representation (none, abstract), and granularity of the speech output (keywords, natural language), as shown in Figure 7.

Study Design

Figure 7: Variants of the prototype tested in the experiment

Figure 8: Ranking of system output visualizations

We used a within-subjects design and collected subjective feedback from ten participants aged 29.2 ± 8.7 years. The group included only one woman, and all of the subjects had a background in the automotive industry. As the primary task, the participants performed the Critical Tracking Task, which simulates the workload of driving [18]. The secondary task consisted of talking to the conversational IVIS prototype, following a predefined agenda. The visual output of the prototype was shown at the position of the CID.

Setup and Procedure
Participants experienced the different visualizations in permuted order. The conversation agenda included four use cases involving turn-taking between the user and the assistant. For example, the assistant would say that the car would soon run out of gas. The user would then ask for the location of the nearest gas station and get an answer from the system. Subsequently, the user had to ask a follow-up question, in this case how far away the gas station was, which was also answered by the system. Once the participants had experienced all of the concepts, they ranked them according to personal preference.

Results
Users rated the variants with less text as less distracting. However, displaying text was generally desired as it simplified error detection. The dialog visualization using keywords with accompanying icons was ranked highest, followed by keywords alone, while the full text display was rather disliked (χ² = 11.4, df = 2, p < .003, see Figure 8). All participants opted for natural language speech synthesis over a shortened audio mode (Z = −2.8, p = .005). Seven out of ten users stated that they preferred an abstract representation of the assistant, ahead of no representation at all.
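The reported χ² statistic is of the kind produced by a Friedman test over the three visualization rankings. For readers who want to run a comparable analysis on their own data, a plain-Python sketch of the statistic (without tie correction, shown on made-up rankings rather than the study's raw data) looks as follows:

```python
def friedman_chi_square(rankings):
    """Friedman chi-square statistic for within-subjects rank data.

    rankings: one list per subject, each a permutation of 1..k
    (rank 1 = most preferred). No correction for ties.
    """
    n = len(rankings)           # number of subjects
    k = len(rankings[0])        # number of conditions
    # Rank sum R_j for each condition j
    rank_sums = [sum(subject[j] for subject in rankings) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical rankings for 10 participants over 3 conditions,
# e.g. (keywords+icons, keywords, full text) -- NOT the study's data.
example = [[1, 2, 3]] * 8 + [[2, 1, 3], [1, 3, 2]]
chi2 = friedman_chi_square(example)
```

With perfect agreement among n subjects over k conditions the statistic reaches its maximum of n(k − 1); disagreement pulls it towards zero, and the resulting value is compared against a χ² distribution with k − 1 degrees of freedom.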

Conclusion
In this paper, we propose a design space for conversational user interfaces in cars. We derived the design space from analyzing conversational UIs of modern consumer electronics devices and cars. It consists of five major dimensions: the design of the assistant, which constitutes the dialog partner for the user; its position; the dialog design; and the system capabilities, all of which have to be chosen with the driver’s state in mind. We claim that this design space helps automotive UI designers to come up with novel conversational UI concepts in cars and opens exciting questions for research, e.g., how people talk to a graphical or physical assistant compared to an assistant without any representation.

In the future, we plan to use this design space to investigate, among other things, dialog visualizations in different driving contexts (e.g., bored or stressed drivers), building upon the initial study presented above. As natural language interfaces are an emerging field, we hope to encourage discussions in the HCI community and expect this design space to be heavily used and extended as new technologies emerge.


REFERENCES
1. Rafael Ballagas, Michael Rohs, Jennifer G. Sheridan, and Jan Borchers. 2008. The Design Space of Ubiquitous Mobile Input. Handbook of Research on User Interface Design and Evaluation for Mobile Technology I, 11 (2008), 1172. DOI: http://dx.doi.org/10.4018/978-1-59904-871-0

2. William Buxton. 1983. Lexical and Pragmatic Considerations of Input Structures. SIGGRAPH Computer Graphics 17, 1 (Jan. 1983), 31–37. DOI: http://dx.doi.org/10.1145/988584.988586

3. Stuart K. Card and Jock Mackinlay. 1997. The structure of the information visualization design space. In Proceedings of the 1997 IEEE Symposium on Information Visualization (InfoVis ’97). IEEE, 92–99. DOI: http://dx.doi.org/10.1109/INFVIS.1997.636792

4. Stuart K. Card, Jock D. Mackinlay, and George G. Robertson. 1991. A Morphological Analysis of the Design Space of Input Devices. ACM Trans. Inf. Syst. 9, 2 (April 1991), 99–122. DOI: http://dx.doi.org/10.1145/123078.128726

5. Ed H. Chi. 2000. A taxonomy of visualization techniques using the data state reference model. In Proceedings of the IEEE Symposium on Information Visualization 2000 (InfoVis 2000). IEEE, 69–75. DOI: http://dx.doi.org/10.1109/INFVIS.2000.885092

6. Joseph F. Coughlin, Bryan Reimer, and Bruce Mehler. 2011. Monitoring, Managing, and Motivating Driver Safety and Well-Being. IEEE Pervasive Computing 10, 3 (July 2011), 14–21. DOI: http://dx.doi.org/10.1109/MPRV.2011.54

7. Department of Transportation and National Highway Traffic Safety Administration. 2013. Visual-Manual NHTSA Driver Distraction Guidelines for In-Vehicle Electronic Devices. Federal Register 78, 81 (2013), 24817–24890. https://federalregister.gov/a/2013-09883

8. Paul A. Green. 2008. Driver Interface/HMI Standards to Minimize Driver Distraction/Overload. In Convergence 2008 Conference Proceedings. Society of Automotive Engineers, SAE International, Warrendale, Pennsylvania, USA, Article 2008-21-0002, 19 pages. http://papers.sae.org/2008-21-0002/

9. Linn Hackenberg, Sara Bongartz, Christian Härtle, Paul Leiber, Thorb Baumgarten, and Jo Ann Sison. 2013. International Evaluation of NLU Benefits in the Domain of In-vehicle Speech Dialog Systems. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’13). ACM, New York, NY, USA, 114–120. DOI: http://dx.doi.org/10.1145/2516540.2516553

10. Renate Haeuslschmid, Bastian Pfleging, and Florian Alt. 2016. A Design Space to Support the Development of Windshield Applications for the Car. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 5076–5091. DOI: http://dx.doi.org/10.1145/2858036.2858336

11. Grega Jakus, Christina Dicke, and Jaka Sodnik. 2015. A user study of auditory, head-up and multi-modal displays in vehicles. Applied Ergonomics 46, Part A (2015), 184–192. DOI: http://dx.doi.org/10.1016/j.apergo.2014.08.008


12. Dagmar Kern and Albrecht Schmidt. 2009. Design Space for Driver-based Automotive User Interfaces. In Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’09). ACM, New York, NY, USA, 3–10. DOI: http://dx.doi.org/10.1145/1620509.1620511

13. MIT SENSEable City Lab. 2009. AIDA - Affective Intelligent Driving Agent. (2009). http://senseable.mit.edu/aida

14. Jörg Müller, Florian Alt, Daniel Michelis, and Albrecht Schmidt. 2010. Requirements and Design Space for Interactive Public Displays. In Proceedings of the 18th ACM International Conference on Multimedia (MM ’10). ACM, New York, NY, USA, 1285–1294. DOI: http://dx.doi.org/10.1145/1873951.1874203

15. Laurence Nigay and Joëlle Coutaz. 1993. A Design Space for Multimodal Systems: Concurrent Processing and Data Fusion. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems (CHI ’93). ACM, New York, NY, USA, 172–178. DOI: http://dx.doi.org/10.1145/169059.169143

16. Alice Oh, Harold Fox, Max Van Kleek, Aaron Adler, Krzysztof Gajos, Louis-Philippe Morency, and Trevor Darrell. 2002. Evaluating Look-to-talk: A Gaze-aware Interface in a Collaborative Environment. In CHI ’02 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’02). ACM, New York, NY, USA, 650–651. DOI: http://dx.doi.org/10.1145/506443.506528

17. ITU-T Focus Group on Driver Distraction. 2013. Report on User Interface Requirements for Automotive Applications. Technical Report. International Telecommunication Union. https://www.itu.int/en/ITU-T/focusgroups/distraction

18. Tibor Petzoldt, Hanna Bellem, and Josef F. Krems. 2014. The Critical Tracking Task: A Potentially Useful Method to Assess Driver Distraction? Human Factors 56, 4 (2014), 789–808. DOI: http://dx.doi.org/10.1177/0018720813501864

19. A. Riener, A. Ferscha, F. Bachmair, P. Hagmüller, A. Lemme, D. Muttenthaler, D. Pühringer, H. Rogner, A. Tappe, and F. Weger. 2013. Standardization of the In-car Gesture Interaction Space. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’13). ACM, New York, NY, USA, 14–21. DOI: http://dx.doi.org/10.1145/2516540.2516544

20. Sonja Rümelin, Peter Siegl, and Andreas Butz. 2013. Could You Please... Investigating Cooperation in the Car. In Adjunct Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’13 Adjunct). 61–64. http://auto-ui.org/13/docs/aui_adjunct_proceedings_final.pdf#63

21. David L. Strayer, Joel M. Cooper, Jonna Turrill, James R. Coleman, and Rachel J. Hopman. 2015. The Smartphone and the Driver’s Cognitive Workload: A Comparison of Apple, Google, and Microsoft’s Intelligent Personal Assistants. Technical Report. AAA Foundation for Traffic Safety. https://www.aaafoundation.org/sites/default/files/strayerIIIa_FINALREPORT.pdf

22. Nigel Ward. 2006. Non-lexical conversational sounds in American English. Pragmatics & Cognition 14, 1 (2006), 129–182. DOI: http://dx.doi.org/10.1075/pc.14.1.08war