
„But Wait, There’s More!” – A Deeper Look into Gestures on Touch Interfaces

Emilie Lind Damkjær
Aalborg University
Aalborg, Denmark

[email protected]

Liv Arleth
Aalborg University
Aalborg, Denmark

[email protected]

ABSTRACT
The language used in interaction design is affected by the wide array of academic backgrounds of interaction designers. Therefore one word may have several meanings, which can be confusing when conducting research in this field. In this paper, we define three levels of interaction: macro-, micro- and nanointeractions, the latter of which is the focus of this study. We then use Buxton’s three state model to break down common gestures on touch interfaces into nanointeractions, thereby identifying where in the process of a gesture signifiers for said gesture can appear. This is useful when overloading controls on an interface. We conducted an experiment to determine whether the temporal placement of a signifier made any difference for the discoverability of an affordance, in this case double tap and long tap. To test this, we developed an application which we exposed to 64 test participants, and logged their every gesture. A questionnaire was also administered after each trial concerning the participants’ own experience with the app. Kruskal-Wallis tests were performed on the data with no significant results. No clear tendencies were found regarding whether the temporal placement of the signifier affected the discoverability of the chosen affordances. However, we believe the concept of nanointeractions to be a valuable contribution to the field of interaction design in and of itself.

Author Keywords
HCI; interaction; usability design; microinteractions; touch interactions; signifiers; feedforward; affordance; gestures; nanointeractions

INTRODUCTION
Verplank [17] says that we as interaction designers must answer three questions: How does one do? How does one feel? How does one know?

The user has some form of knowledge (know) from previous applications with which they have interacted, and perhaps a mental map of how they imagine the current application to work. When presented with some form of control (such as a button), the interface may provide the user with feedforward (feel), revealing some information as to what will happen if certain gestures are performed on the control. The user will process this information based on their previous knowledge and expectations, and perform some action (do) on the control. Based on which action is performed, the control may provide some form of feedback (such as a sound signifying success, or the sensation of a button being pressed), which will in turn inform the user how to proceed.

In our previous paper, we have discussed the concept of microinteractions, and how it is significant to consider not only the overall process of interacting with an application, but also how the user chooses each gesture based on what they know and feel [4]. However, even this approach is simplifying things, as each gesture is a process of various small actions, and the user may process information or change their course mid-gesture. This paper focuses on the nature of these types of interactions.

TERMINOLOGY
The terminology in the field of interaction design and user interfaces is not always consistent due to interaction designers, UX designers, etc. coming from many different scientific backgrounds, each with their own language. Therefore, we describe our terminology thoroughly in the hopes it will help streamline the language in the field of interaction and UX design.

Interactions
The word interaction is used to describe many different things. Sometimes it refers to the overarching task, like using an application to take a photo. Other times it refers to the action of tapping the shutter button in a camera application. And sometimes it refers to the action of tapping on the screen of a smartphone. To make it clear to which we refer, we separate the word interaction into several specialised definitions. Macrointeractions are the overarching tasks, the process itself, e.g. taking a photo. These usually benefit from the user having a good mental map of the system in question.

A microinteraction is the small interaction with a contextual purpose limited to a single gesture, e.g. clicking the shutter button to take a photo [10].

However, even a gesture as simple as a touch screen button press is comprised of several smaller actions: approaching the button with your finger, touching the button, and letting go. These are the type of action we will be calling nanointeractions.


Signifiers
In this paper, a signifier refers to a specific design aspect telling the user something about the affordances of the application. These can for example be visual, haptic, or auditory. In our previous work [4], different visual signifiers (a shadow on a button, a drag handle among others) were tested to see which one(s) best conveyed the affordance. In smartphone application design, signifiers are often in the form of text or based on design guidelines.

Affordance, Feedforward, and Feedback
The three terms affordance, feedforward and feedback are commonly used in the field of interaction design, but do not always indicate the same thing. We partially rely on Vermeulen et al.’s synthesis and definitions of these three terms [16]. However, even in their synthesis, there is some ambiguity as to the definition of affordance, and therefore we define affordance as the possible actions in relation to an object, e.g. a button can be pressed. Affordance is not always clear and can in fact be hidden. Therefore we further make a distinction between perceived affordance and hidden affordance, which are subcategories of affordance. The perceived affordance is what can easily be gathered from signifiers on an object, and hidden affordance is when an object has affordances that are not visible to the user. A long tap on a button is often a hidden affordance. It should be noted that Norman has previously used the term perceived affordance to mean what he now calls a signifier [12], and therefore several papers use these terms interchangeably. Our use of the term should not be confused with the term signifier, as they are entirely different concepts.

Feedforward tells the user something about what would happen if they perform the action allowed by the affordance, e.g. pressing a green button or a button with a text label saying "ok" on it is a confirmation action.

Feedback is what happens during or after you perform the action allowed by the affordance, e.g. the confirm action is performed and a toast appears.

Gestures
A gesture is the physical action performed on a touch interface, e.g. a tap. This is not to be confused with a microinteraction. A microinteraction involves a gesture but must also include a purpose, such as scrolling by using a swipe. In this paper, we will look at the gestures tap, double tap, long tap, and drag.

Figure 1: Buxton’s three state model [3].

Figure 2: The cycle of a user interacting with a control, modified from Verplank’s sketch [17].

Another way to describe gestures besides using words is utilising models. In this paper we use Buxton’s three state model [3] to break down gestures. As seen in Figure 1, the model consists of three states: State 0 is the state where you are out of range of the gesture; State 1 represents when you are in range; and State 2 is when the intended gesture is carried out.

To show how all these terms play together in terms of Verplank’s interaction model, we have modified his sketch according to the terms established in this section, shown in Figure 2.

BREAKING DOWN GESTURES
When Verplank speaks of interactions, he uses the example of flipping a switch and seeing the light come on [17]. However, as we have previously stated, even a gesture as simple as flipping a switch may be broken down into a series of nanointeractions. The user may discover new information during a nanointeraction, before they have completed the intended gesture.

As an example, placing one’s finger on the switch is a nanointeraction in and of itself. In this state the user may feel that it is impossible to flip the switch one way, but possible to flip it the other. The nanointeraction of applying pressure while flipping the switch gives the user some information on how much resistance the switch gives, and thus how much pressure they must apply. They may even discover a previously hidden affordance of fading the light gradually. Breaking gestures into nanointeractions in this manner, rather than treating them as one instantaneous action, is useful when designing for a touch device.

Figure 3: The tap gesture, broken into nanointeractions.


To show how gestures are broken into nanointeractions, we provide some examples visualised through diagrams inspired by Larsen [8], using Buxton’s three state model [3].

The simplest touch interaction, the tap, is illustrated in Figure 3. In our model, State 0 represents when the user’s finger is out of range of the control on the touch screen, State 1 when the user’s finger is in range of the control, and State 2 when the user interacts with the control. For the tap gesture this is when the finger touches the control. The arrow marked in red represents the moment that the system delivers feedback regarding the affordance of the control. In the case of a tap this doesn’t happen until the action is completed, i.e. when the user lifts their finger.

Figure 5 illustrates the drag gesture. Here, users may transition to State 2a by performing the nanointeraction of moving their finger once placed on the screen. Moving the finger on the screen is actually several nanointeractions, but for the sake of simplicity it is illustrated as only one in our model. The gesture is completed when the user lifts their finger, thus returning to State 1. The affordance of dragging is revealed by feedback the instant movement is detected, thereby appearing during the gesture. This differs from a tap in that rather than after the gesture is complete, the feedback is provided mid-gesture, allowing users to discover the affordance before completing the gesture. However, this takes place during a very short amount of time, so it is not always possible to react to the feedback.

The long tap gesture is depicted in Figure 6. To get to State 2a the user must hold their finger’s position for a certain amount of time; when this threshold is reached the hidden affordance is revealed, and the gesture is completed when the finger is lifted.

In many cases a long tap is combined with a drag when implementing them on touch screens. This is similar to selecting an object by clicking and holding when using a computer mouse. As can be seen in Figure 4, it adds the drag gesture to the long tap model by introducing an additional state, State 2b. Compared to a normal long tap, the signifier revealing the affordance is delayed to correspond to the signifier revealing the drag affordance instead, i.e. giving feedback when the user’s finger moves after the long tap.

Figure 4: The long tap gesture continued with the drag gesture, broken into nanointeractions.

Figure 5: The drag gesture, broken into nanointeractions.

Figure 6: The long tap gesture, broken into nanointeractions.

Figure 7: The double tap gesture, broken into nanointeractions.

The double tap gesture, illustrated in Figure 7, doesn’t have a linear state sequence like the three previously described gestures. As the name implies, it is the act of tapping twice. Like the long tap, this gesture has a temporal aspect to the nanointeractions, i.e. the time between lifting the finger and placing it back down determines whether you perform a double tap or just two separate taps. This is illustrated in Figure 7 by the presence of State 1a, which represents the temporal nature of the gesture by working as a timer state: if you time out, you go back to State 1 and have to start the gesture over again. Unless specifically designed for it, the affordance is not revealed until the user’s finger is lifted after the second tap, making it very hard to discover.

Figures 3-7 illustrate the nanointeractions of hypothetical controls with an affordance of only one gesture each. However, it is useful to overload controls so that the user may perform several gestures on one control. To illustrate how to break down a system with numerous gesture affordances, Figure 8 depicts the tap, drag, long tap, long tap+drag and double tap affordances together, broken into nanointeractions.

Figure 8: The combination of the tap, drag, long tap, long tap+drag and double tap gestures, broken into nanointeractions.

All five gestures require the user to first approach the control, and then place their finger on it, thus entering State 2 on touch interfaces. Moving one’s finger while in this state reveals the affordance of dragging, which initiates State 2a. The user may complete the dragging gesture by lifting their finger off the control. If, instead, the user rests their finger on the control while in State 2, they enter State 2b, initiating a long tap. From here, they may either complete this gesture by lifting their finger, or initiate a long tap+drag by moving their finger. If the user lifts their finger while in State 2, they have performed a regular tap. However, as both the single and double tap are afforded by this system, the single tap is not complete until a certain timer runs out, ensuring that the user did not perform a double tap. This timeout will usually be quite short, so the user does not sense a delay upon tapping a control. If, instead, they place their finger back on the control and lift it again, a double tap is performed. In Figure 8, the red arrows signify places in the process where feedback is typically provided.
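To make the combined model in Figure 8 concrete, the following is a minimal, self-contained Kotlin sketch of such an overloaded control expressed as an explicit state machine over nanointeractions. It is not the code of any particular platform or of our prototype; the state names, event types, callbacks and thresholds (longTapMs, doubleTapMs) are illustrative assumptions, touch-slop handling is omitted, and the long tap+drag branch of Figure 4 is left out for brevity.

// States mirror Figure 8: State 0 (out of range), State 1 (in range),
// State 2 (touching) and its sub-states. All names are illustrative.
enum class State { OUT_OF_RANGE, IN_RANGE, TOUCHING, DRAGGING, LONG_TAPPING, AWAITING_SECOND_TAP, SECOND_TOUCH }

sealed class GestureEvent {
    object FingerDown : GestureEvent()                     // enter State 2
    object FingerMove : GestureEvent()                     // movement beyond touch slop
    object FingerUp : GestureEvent()                       // leave State 2
    data class Tick(val elapsedMs: Long) : GestureEvent()  // time since the last transition
}

class OverloadedControl(
    private val onTap: () -> Unit,
    private val onDoubleTap: () -> Unit,
    private val onLongTap: () -> Unit,
    private val onDragStart: () -> Unit,
    private val longTapMs: Long = 500,    // assumed hold threshold
    private val doubleTapMs: Long = 300,  // assumed second-tap window
) {
    private var state = State.IN_RANGE

    fun handle(event: GestureEvent) {
        state = when (state) {
            // State 0: approach cannot be sensed on a capacitive screen, so nothing happens here.
            State.OUT_OF_RANGE -> state
            State.IN_RANGE -> if (event is GestureEvent.FingerDown) State.TOUCHING else state
            State.TOUCHING -> when (event) {
                is GestureEvent.FingerMove -> { onDragStart(); State.DRAGGING }
                is GestureEvent.Tick ->
                    if (event.elapsedMs >= longTapMs) { onLongTap(); State.LONG_TAPPING } else state
                is GestureEvent.FingerUp -> State.AWAITING_SECOND_TAP  // single tap not confirmed yet
                else -> state
            }
            State.AWAITING_SECOND_TAP -> when (event) {
                is GestureEvent.FingerDown -> State.SECOND_TOUCH
                is GestureEvent.Tick ->
                    if (event.elapsedMs >= doubleTapMs) { onTap(); State.IN_RANGE } else state
                else -> state
            }
            // The double tap completes on the second lift, as in Figure 8.
            State.SECOND_TOUCH ->
                if (event is GestureEvent.FingerUp) { onDoubleTap(); State.IN_RANGE } else state
            State.DRAGGING, State.LONG_TAPPING ->
                if (event is GestureEvent.FingerUp) State.IN_RANGE else state
        }
    }
}

fun main() {
    val control = OverloadedControl(
        onTap = { println("tap") },
        onDoubleTap = { println("double tap") },
        onLongTap = { println("long tap") },
        onDragStart = { println("drag started") },
    )
    // A quick down-up-down-up sequence within the window prints "double tap".
    listOf(GestureEvent.FingerDown, GestureEvent.FingerUp,
           GestureEvent.FingerDown, GestureEvent.FingerUp).forEach(control::handle)
}

The point of the sketch is simply that each red arrow in Figure 8 corresponds to a callback fired from a specific transition, which is exactly where a signifier can be shown.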

Illustrating interactions in this manner and thinking of gestures as a system of nanointeractions, rather than something that happens instantaneously, may prove useful when considering how to design an application with meaningful signifiers, both in terms of feedforward and feedback.

RELATED RESEARCH
In this section we discuss design guidelines for touch interfaces, examine current research in exploratory affordance design, and re-examine our previous research based on our model for gestures broken down into nanointeractions.

Design Guidelines
To ease the user’s interaction with a touch screen, the designer must consider the perceived affordances based on the feedforward given to the user. It may be beneficial to follow design guidelines, as users will have certain expectations as to how an interface will look depending on which devices they are accustomed to.

Many platform vendors, such as Apple, provide developer guidelines to designers in order to keep signifiers consistent and easily comprehensible to users. Their guidelines include adding a border or background to buttons when necessary to make them appear tappable [1]. They also recommend displaying a row of dots [1] at the bottom of the screen to show the user their position in a flat list of pages, as seen in Figure 9.

Figure 9: Dots at the bottom of the screen to indicate the existence of other pages, as well as the user’s position within the list of pages [1].

If the user is to pick one option out of a list, the designer may implement a picker, seen in Figure 10, as recommended by the guidelines. Note that Apple’s standard picker design includes signifiers for the affordance of dragging; options other than the currently active one are dimly visible and increasingly warped the further they are from the middle. This emulates a haptic cylindrical object the user may roll with their finger.

Figure 10: A picker, allowing the user to select a single item in a list of many [1].

Another form of control is the slider [1], as seen in Figure 11, which allows the user to slide a value between its minimum and maximum value (such as screen brightness). The finite track and a thumb with a drop shadow signify that the user may drag the thumb to a position on the track.

Figure 11: A slider [1].

It should be noted that for some recommended controls, Apple provides design guidelines, but no explanation as to what will signify to the user which gesture is needed. One example is the edit menu [1]. The guidelines state that by default, the user may long tap or double tap an element in a text field in order to bring out the edit menu with options to cut, copy, etc. However, they provide no explanation as to how the user would know to either long tap or double tap. Another example is refreshing a page. The affordance to drag the screen downwards is often available [1], with no signifiers to let the user know.

According to Google’s design guidelines for touch interfaces, signifiers on the interface should indicate which gestures are afforded [5]. Many examples of gestures are provided, but the guidelines fail to provide examples of signifiers to let users perceive the affordances before a gesture is executed.

Note that while Android and iOS users may have different habits stemming from the design patterns they are used to [15], neither Apple nor Google provide sufficient guidelines as to how to design signifiers for non-standard gestures (for touch screens, any gesture other than tapping). In our previous research, we found this to be a challenge as well [4].

Affordance Design
One way to signify certain affordances is by using metaphors to tap into users’ existing knowledge. Oakley et al. utilise this idea for their smartwatch prototype as a way to introduce new affordances [13]. Examples of this can be seen in Figure 12.

Figure 12: How the mute and pause/play functions are toggled, respectively [13].

A finger placed vertically across the center of the watch was implemented as a way of activating the mute function on a media player application, as the action resembles placing a finger across one’s lips. Two fingers across the watch toggle between pause and play, as the two fingers resemble the traditional ’pause’ icon, which is two vertical lines. Placing the finger horizontally along the bottom of the screen emulates the shape subtitles take on a screen, and thus enables the subtitles. This is signified only through feedback; no feedforward is provided to the user. Thus Oakley et al. relied on explaining these affordances to test users, and we do not know whether previous knowledge is sufficient to discover the affordances in this case.

It should be noted that they never quantitatively test these affordances, but rather rely on participant reports for their evaluation of the understandability. Users supported the notion of metaphors and found the pause and mute functions "intuitive", but there was no formal testing of error rates etc. Furthermore, some of the implemented gestures lack a metaphor or signifier and were thus completely hidden affordances, such as the ability to copy an existing appointment in a calendar application by picking up a finger from flat against the screen to just a fingertip.

An alternative way to signify affordances is to guide the user through nudges, rather than design the object itself with feedforward. This concept was explored by Lopes et al. [9]. Their product, Affordance++, can stimulate the user’s arms as they approach the object in question to nudge them towards the proper gesture. An example of this can be seen in Figure 13.

Figure 13: Affordance++ helps the user interact with an unfamiliar lamp [9].

They found that this is a useful way to communicate especially dynamic interactions to the user, e.g. shaking a spray can before spraying. However, it is unrealistic for real-world application, as it is unlikely users will want to wear an arm-mounted device at all times.

What is interesting about Affordance++ is that the feedforward is given not by the control itself (the lamp), but rather by an external device. Furthermore, the nudges continuously provide feedforward to the user based on which state they are currently in, directly nudging them towards the next state. This is shown in Figure 14, where we have illustrated the interaction using the previously described model.


Figure 14: Turning on the lamp, guided by Affordance++.

It should be noted that overloading would create a problem for this solution, as the possibility of moving to several different states from the current one means that there is little point in being nudged towards just one.

Another way to provide users with feedforward without visual signifiers is to rely on audio, as is the case with e.g. answering machines and automated phone call systems. Here the user is provided with all the affordances through feedforward in the form of what is usually a list of available options.

Figure 15: A chronological illustration of a hypothetical answering machine.

This system provides both feedforward and feedback. However, sometimes it provides too much of it, or gives the feedforward in a problematic order. It is also very time-consuming, especially if the user doesn’t know what they are searching for and needs to listen through all the options more than once. As illustrated by Figure 15, the user’s options are limited by lack of knowledge of said options until a certain amount of time has passed. Often this leads to an overwhelming amount of information, and to listening to the options repeatedly. Overloading is not a possible solution in this situation, since it is limited to one modality, audio.

Harrison et al. acknowledge the need for overloading, and achieve a level of finger overloading by differentiating between different types of finger input: the tip, the pad, the nail, and the knuckle [6]. They found that it is possible to build a touch surface which can accurately differentiate between these distinct inputs, but performed no tests concerning how to signify these affordances.

Pedersen and Hornbæk [14] propose touch force as a modality for overloading. They found that users can accurately control two different levels of pressure, although test users stated that the gesture took some getting used to. Moreover, users expressed fatigue after having touched the screen with increased pressure for a while, indicating that this modality is inferior as an interaction and should be used to a limited degree.

Aslan et al. propose the gazeover as a way to implement something similar to the mouseover on a mouse-and-keyboard setup, but potentially available for touch interfaces [2]. However, they do not test users’ ability to perceive this affordance on a touch interface.

Breakdown of Previous Design
In our previous study on signifiers [4] we designed four signifiers for the same control, two for dragging and two for double tapping. For the dragging control, the two signifiers were a drag handle and a drop shadow, as illustrated in Figures 16(a) and 16(b). Figure 16(c) shows when the signifier is visible in green, and when feedback for the gesture is provided in red. For both dragging signifiers the affordance was present at all times, illustrated by the green border on the box, and the feedback was limited to the regular drag feedback of moving the control. No real differences existed between the two signifiers on a functional level, only the visuals were different.

This was not the case for the two signifiers for the double tapping control. Both rely on temporal information, but their presentation and execution are different. In Figure 17(a) the border signifier is presented, and Figure 17(b) shows when the affordance signifier is present and when feedback is given. When the control was single-tapped, the inner border lit up in green and faded away if it wasn’t pressed again before the interaction entered timeout. If the control was double tapped, the outer border would also become green. This means that the affordance signifier was present the entire time, but feedback was given at three distinct places in the interaction process: at the conclusion of a single tap; in the event of timeout; and at the conclusion of a double tap.


Figure 16: (a) The drag control with a drag handle signifier. (b) The drag control with a drop shadow signifier. (c) The three-state diagram for the two dragging controls from our previous study [4].


Figure 17: (a) The double tap control with a temporal feedback border signifier. (b) The three-state diagram for the double tap border signifier from our previous study [4].

The other double tap control signifier was a pulsing animation, as can be seen in Figure 18(a), initiated when entering the screen, and then never repeated. This signifier is illustrated in Figure 18(b) as a green arrow when entering State 0. The only feedback given is when the double tap is completed.

When looking over these control signifiers with the three-state model in mind, it becomes clear why our double tap signifiers had a poor success rate. They both relied on temporal signifiers to convey the affordance of the control. In the case of the pulse signifier, it only showed up upon entering the screen, and there was no way to repeat it, making it easy for the user to miss. The temporal aspect of the border signifier was slightly better in this regard, as the user was able to make the signifier repeat itself. However, its intention of conveying that the outer border would light up if pressed again was not clear based on the temporal aspect alone. So even though there is feedback, the feedback is not particularly informative. The dragging signifiers performed as we expected, and their three-state diagram is also exactly like that of a standard dragging control, with the addition of a signifier present at all times.


Figure 18: (a) The double tap control with a temporal pulse signifier. (b) The three-state diagram for the double tap pulse signifier from our previous study [4].

A lot of creative solutions exist using different modalities to communicate affordance or implement overloading. However, many lack signifiers for the affordance overload, causing hidden affordances. This is unintuitive if the user is given no instruction, damaging the usability of the application in question. This is a research gap we intend to explore by investigating how, on touch screen devices, we can turn hidden affordances, such as double tap and long tap, into perceived affordances, thereby unlocking intuitive overloading. We will investigate this by focusing on when, during the nanointeractions of a gesture, signifiers appear.

EXPERIMENT
In this experiment we tested whether the temporal placement of a signifier for double tap and a signifier for long tap affected participants’ ability to discover the relevant affordance.

Based on the identified research gap we set up two null hypotheses, with the dependent variable being the discoverability of the relevant gesture, and the independent variable being the signifier and the temporal placement thereof:


1. The addition of a signifier does not matter for the discoverability of the affordance of the relevant gesture.

2. The temporal placement of the signifier does not matter for the discoverability of the affordance of the relevant gesture.

The Prototype
To test the hypotheses, we created a prototype app in Android Studio. This prototype was designed for the purpose of exposing test participants to nine different stimuli as controlled by the test facilitator. These stimuli are as follows:

• Four versions of the app with a double tap affordance:

– A control version with no visible signifier for the double tap affordance.

– A version in which a signifier appears upon entering the screen (State 0 in Figure 7). This signifier reappears on a five second loop.

– A version in which a signifier appears when the user touches the screen (State 2 in Figure 7).

– A version in which a signifier appears after the user has lifted their finger from the screen, thus completing a single tap (returning from State 1a to 1 in Figure 7).

• Four versions of the app with a long tap affordance:

– A control version with no visible signifier for the long tap affordance.

– A version in which a signifier appears upon entering the screen (State 0 in Figure 6). This signifier reappears on a five second loop.

– A version in which a signifier appears when the user touches the screen (State 2 in Figure 6).

– A version in which a signifier appears after the user has lifted their finger from the screen, thus completing a single tap (returning to State 1 in Figure 6).

• A confuser version, implemented in order to account for the learning factor. In this version, instead of a double tap or long tap affordance, there is a swipe affordance with no signifier indicating this.

Upon launching the app, the screen shows a menu of these nine versions. This menu is not to be seen by test participants, but is a tool for the test facilitators to control which version is applied. This menu can be seen in Figure 19(a).

Once a version has been chosen, the application redirects to the screen seen in Figure 19(b). As can be seen, it is a simple to-do list application which allows users to add items to a list of chores, mark the ones they have completed, and delete items from the list completely. Users may write the name of a chore in the text field (1), then press the button (2) to add that item to the list (3). If the text field is empty upon button press, no item is added to the list. As can be seen in Figure 19(b), each list item contains a box which may be checked upon tapping it. This box is also checked if the list item is single tapped anywhere, whether on the text or in the empty space to the right.


Figure 19: (a) The version menu on the prototype application. (b) The main screen of the to-do list app.

For each version of this app, there is a hidden affordance to delete items added to the list. For four of these versions, the trigger is to double tap, for four others it is to long tap, and for the last one (the confuser) the trigger is to swipe. The gesture in question must be performed on the item the user wishes to delete, but it does not matter whether the gesture is performed on the text, the box, or the empty space.
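For readers who want a concrete picture of how such an overloaded list item could be wired up on Android, the snippet below is a minimal sketch using the platform GestureDetector. It is not the actual prototype code; the checkItem, deleteItem and logGesture callbacks are illustrative names introduced here.

import android.view.GestureDetector
import android.view.MotionEvent
import android.view.View

// Sketch: one list-item view overloaded with single tap (check the box) and
// double tap or long tap (delete) via GestureDetector. Callback names are illustrative.
fun attachOverloadedGestures(
    item: View,
    checkItem: () -> Unit,
    deleteItem: () -> Unit,
    logGesture: (String) -> Unit,
) {
    val detector = GestureDetector(item.context, object : GestureDetector.SimpleOnGestureListener() {
        override fun onDown(e: MotionEvent) = true  // claim the gesture so later callbacks fire

        override fun onSingleTapConfirmed(e: MotionEvent): Boolean {
            logGesture("single_tap"); checkItem(); return true
        }

        override fun onDoubleTap(e: MotionEvent): Boolean {
            logGesture("double_tap"); deleteItem(); return true
        }

        override fun onLongPress(e: MotionEvent) {
            logGesture("long_tap"); deleteItem()
        }
    })
    item.setOnTouchListener { _, event -> detector.onTouchEvent(event) }
}

In an actual version of the prototype only one of the delete triggers would be active at a time, depending on which of the nine versions the facilitator has selected.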

Figure 20: The signifier for the double tap affordance.

As described previously, a signifier revealing the given affordance may appear at various times depending on which version is currently chosen. For the double tap versions (except for the control version), the signifier is a pulse of two rings which expand one after the other, then disappear. This is to emulate the idea of a double tap. A screenshot of this can be seen in Figure 20.

Figure 21: The signifier for the long tap affordance.


For the long tap versions (except for the control version), the signifier is a wheel which gradually fills out over time, indicating that the user may hold their finger on the screen for an extended period of time. A screenshot of this can be seen in Figure 21.

Throughout the application, every touch gesture performed, as well as every successfully added, marked, or deleted item, is logged for analysis purposes.
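A minimal sketch of how such per-trial logging could be captured is shown below; the idea of timestamped entries dumped to a text file after a trial mirrors what this section describes, but the class and field names are illustrative, not the prototype’s actual code.

import java.io.File

// Sketch: an in-memory event log, one instance per trial, dumped to a text file afterwards.
data class LogEntry(val timestampMs: Long, val version: String, val event: String)

class TrialLogger(private val version: String) {
    private val entries = mutableListOf<LogEntry>()

    fun log(event: String) {
        entries += LogEntry(System.currentTimeMillis(), version, event)
    }

    fun dumpTo(file: File) =
        file.writeText(entries.joinToString("\n") { "${it.timestampMs};${it.version};${it.event}" })
}

fun main() {
    val logger = TrialLogger(version = "DM")
    logger.log("single_tap")
    logger.log("double_tap")
    logger.log("item_deleted")
    logger.dumpTo(File("trial_log.txt"))
}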

Experiment Design
This evaluation consists of two experiments, one for the double tap gesture and the other for the long tap gesture. The hypotheses apply to both experiments.

The independent variable is the temporal placement of the signifier. For each gesture, four conditions were tested, plus a confuser to slightly mitigate learning effects across participants:

• A control version with no signifier (Long tap - LC, Double tap - DC).

• An enter screen version, where the signifier appeared after the screen was opened and every five seconds after this on a loop (Long tap - LE, Double tap - DE).

• A version with the signifier appearing during the middle of a tap (Long tap - LM, Double tap - DM).

• A version with the signifier appearing after a tap (Long tap - LA, Double tap - DA).

• A version with no signifier and the required gesture being swipe (C).

The dependent variable is the discoverability of the affordance of the relevant gesture. This is measured by looking at the success rate, the time until success and the number of different gestures tried before the correct one was performed. This information was collected by logging within the prototype. Another measure we used was the participants’ answers to a series of questions inspired by the NASA Task Load Index (TLX) [11]. We tested the raw TLX method (as described by Hart [7]) in a pilot trial, but since the test participant found the original scale confusing, we tweaked the questions to instead use a smaller scale of 0 to 10. The questions were as follows:

• How mentally demanding did you find the task on a scale of 0-10, where 0 is not at all mentally demanding and 10 is extremely mentally demanding?

• How much did you feel you had to rush when performing the task on a scale of 0-10, where 0 is no time pressure and 10 is extreme time pressure?

• How much success did you have in accomplishing the task, on a scale of 0-10, where 0 is no success and 10 is complete success?

• How much effort did you have to put in when accomplishing the task, where 0 is no effort at all and 10 is extreme effort?

• How irritated, stressed, annoyed, or frustrated did you feel when performing the task, where 0 is none at all and 10 is extreme frustration?

Because the experiment would otherwise become too long to recruit people off the street, each participant only tried five conditions: two of each gesture and the confuser as the middle trial, making the experiment a between-subjects design. To somewhat alleviate participants’ learning effects, the order of the conditions was determined using a Latin square design.
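As a concrete illustration of this kind of counterbalancing, the sketch below generates a simple cyclic Latin square over a set of condition labels. It is a generic toy, not our exact ordering scheme (which additionally fixed the confuser as the third trial and drew two conditions per gesture); the function and labels are ours.

// Sketch: cyclic Latin square for counterbalancing condition order.
// Each condition appears exactly once in every row (participant group)
// and exactly once in every column (trial position).
fun cyclicLatinSquare(conditions: List<String>): List<List<String>> {
    val n = conditions.size
    return List(n) { row -> List(n) { col -> conditions[(row + col) % n] } }
}

fun main() {
    cyclicLatinSquare(listOf("DC", "DE", "DM", "DA")).forEachIndexed { group, order ->
        println("Group ${group + 1}: $order")
    }
}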

The only requirement for the participants was that they not have a background in interface design. For this test, 64 random people in the age group of 14-77 were recruited off the streets of Aalborg or amongst employees at Regionshuset Nordjylland. The experiments were conducted in three different places due to recruitment issues: a space at Aalborg University, an office at Regionshuset Nordjylland and the Main Public Library in Aalborg. Out of the 64 participants, 40 were female and 24 male, 39 used iOS and 25 used Android on their smartphones, and 52 were right-handed. The age distribution is illustrated in Figure 22.

Figure 22: The age distribution of the participants, in intervals of 10 years.

The apparatus used for this experiment consisted of a Sony Xperia XZ2 Compact smartphone, a laptop for running our prototype and saving the log, a laptop for notetaking and questionnaire answers, and a video camera on a tripod to film the participant’s hands interacting with the smartphone.

The procedure for the experiment was as follows: first the participant signed a consent form and had the procedure explained to them by one of the two facilitators. They then filled out a demographics questionnaire. The video camera was turned on when the participant received the first version of the prototype. They were told to first add an item to the list, then mark an item as completed and finally to delete an item. When the participant either succeeded in the last task or gave up, they were asked some follow-up questions regarding their actions while using the version. They were then asked the TLX-inspired questions. This process repeated with the next four versions, with the third version always being the confuser condition. After the participant finished with the fifth version, the log was copied from Android Studio to a text file. The entire procedure took approximately 15 minutes from start to finish.

Figure 23: The distribution of successes and failures of the double tap (a) and long tap (b) versions.

Results
In this section we will present our results from the test. Note that we do not compare results from the different gestures, as it is unsurprising that users attempt the long tap gesture more frequently than the double tap gesture.

The Logging Data
The distribution of successes and failures of each version, excluding the confuser, is illustrated by Figures 23(a) and 23(b). A trial was considered a success if an item was deleted. A quick look at these graphs shows a tendency for participants to have an easier time discovering the affordance of long tapping than the affordance of double tapping.

An interesting tendency appears when pairing the first non-single-tap gesture performed in each trial with the OS the participants are used to: Android users are more likely to try a long tap first, while iOS users are more likely to try a swipe first. This is illustrated in Figure 24.

Figure 24: The first gestures performed by participants in each trial, presented in percentages. N/A represents the trials where participants never tried a non-single tap gesture.

Figure 25: Box plots of the time taken until success for (a) the double tap versions and (b) the long tap versions.

Due to only being able to use the successful trials for statistical tests, the sample sizes for each condition are reduced from 32 each to 10 (DC), 18 (DE), 15 (DM), 12 (DA), 27 (LC), 25 (LE), 22 (LM) and 26 (LA). These are all below 30, meaning that even though the logged measures are on a ratio scale, we should not rely on parametric tests. With samples of this size, outliers also affect the mean considerably, so in our case the median is a better representation of the central tendency of the data, which is another reason to choose a nonparametric test. Parametric tests also assume a normal distribution, which all the samples were checked for. When the distribution is not normal, we use a Kruskal-Wallis test instead of a one-way ANOVA when comparing the four double tap versions and when comparing the four long tap versions. A Kruskal-Wallis test works by assigning ranks to all the data, from smallest to largest, then dividing it back into the samples, calculating the sum of ranks for each sample and using the mean rank of each sample to calculate the test statistic, H. It determines whether one of the samples differs significantly from one of the others, but does not tell us which one; to discover this, post-hoc tests are needed. Comparing the double tap versions’ success rates to each other resulted in (H(3) = 4.65, p-value = 0.1993), meaning there is no significant difference between any of the four versions. The same procedure yielded (H(3) = 2.54, p-value = 0.4681) for the long tap versions, also resulting in no significant difference between the four versions.
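For reference, the rank-based procedure described above corresponds to the standard (tie-uncorrected) Kruskal-Wallis statistic. The formula below is not stated explicitly in our analysis, but is the textbook definition that the described procedure follows:

\[ H = \frac{12}{N(N+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(N+1) \]

where k is the number of groups (here the four versions of a gesture), n_i is the size of group i, R_i is the sum of the ranks in group i, and N = n_1 + ... + n_k. Under the null hypothesis, H approximately follows a chi-squared distribution with k - 1 degrees of freedom, which is why the results are reported as H(3) for four groups.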

Looking at completion time for the successful trials, normality checks were performed and the assumptions were not met for either double tap or long tap, therefore Kruskal-Wallis tests were performed. We got (H(3) = 2.6, p-value = 0.4489) when comparing the four double tap versions, and (H(3) = 2.65, p-value = 0.8572) when comparing the four long tap versions. The distribution of the data is illustrated in Figure 25. No significant difference was found for either gesture regarding completion time.

Comparing the number of gestures performed before success, normality checks were performed and the assumptions were not met for double tap or long tap, therefore Kruskal-Wallis tests were performed. This resulted in (H(3) = 2.62, p-value = 0.4535) for the four double tap versions, and (H(3) = 3.79, p-value = 0.2849) for the four long tap versions. Neither is significantly different. The distribution of the number of gestures is illustrated in Figure 26.

Excluding the single taps, normality checks were performed and the assumptions were not met for double tap or long tap, therefore Kruskal-Wallis tests were performed, yielding (H(3) = 3.67, p-value = 0.2999) and (H(3) = 2.61, p-value = 0.4565) for the double tap versions and long tap versions respectively. The distribution is illustrated in Figure 27. No significant difference was found.

Seeing no significant difference in the number of gestures performed, we look to the variety of gestures performed before success, excluding the single tap. Normality checks were performed and the assumptions were not met for double tap or long tap, therefore Kruskal-Wallis tests were performed. Comparing the four versions of double tap yields (H(3) = 2.63, p-value = 0.4516), and comparing the four long tap versions yields (H(3) = 2.82, p-value = 0.4199). The distribution is illustrated in Figure 28. No significant difference was found.

The Questionnaire Data
The questionnaire data is analysed based on the five questions focusing on the mental demand, temporal demand, performance, effort and frustration experienced during the trials. The results are compared between the four different versions of long tap and double tap respectively. The overall distribution of answers is visualised in Figures 29 and 30. No answers were discarded in this analysis, since the ability to statistically analyse the questionnaire results was not affected by whether a participant succeeded or not. One participant elected not to answer the questions for two of the trials, which were C and DC. Since we are uninterested in statistically analysing the confuser, the only impact this had was the sample size of DC being reduced to 31. The data gathered from the questionnaire is based on a ranking scale, and thus ordinal and non-parametric, and is analysed by performing a Kruskal-Wallis test to determine whether any significant differences exist.


Figure 26: Box plots of the number of gestures performed before success for (a) double tap and (b) long tap.


Figure 27: Box plots of the number of non-single-tap gestures performed before success for (a) double tap and (b) long tap.


Figure 28: Box plots of the number of different types of gestures performed before success, excluding the single tap. (a) shows the results for the double tap versions, and (b) shows the results for the long tap versions.


Figure 29: Box plots of TLX data from the double tap versions. (a) mental demand, (b) temporal demand, (c) performance, (d) effort, and (e) frustration.


Figure 30: Box plots of TLX data from the long tap versions. (a) mental demand, (b) temporal demand, (c) performance, (d) effort, and (e) frustration.


Comparing the answers for mental demand for the double tapping versions yielded (H(3) = 3.36, p-value = 0.3388), meaning no significant difference between the four versions was found. The long tap versions yielded (H(3) = 3.45, p-value = 0.327) and no significant difference.

Temporal demand yielded (H(3) = 1.35, p-value = 0.717) for double tapping and (H(3) = 1.63, p-value = 0.6524) for long tapping, and no significant differences.

The answers for performance resulted in (H(3) = 6.41, p-value = 0.09345) for double tapping and (H(3) = 0.54, p-value = 0.9097) for long tapping, and no significant differences.

The results for effort were (H(3) = 3, p-value = 0.3914) for double tapping and (H(3) = 4.66, p-value = 0.1982) for long tapping, and no significant differences.

The frustration data yielded (H(3) = 2.4, p-value = 0.4931) for double tapping and (H(3) = 1.34, p-value = 0.719) for long tapping, and no significant differences.

DISCUSSION
The results of our experiment showed no significant difference between the different versions. This means that we can neither reject the null hypothesis that the addition of a signifier does not matter for the discoverability of the affordance of the relevant gesture, nor the null hypothesis that the temporal placement of the signifier does not matter for the discoverability of the affordance of the relevant gesture. Even though we couldn’t reject our null hypotheses, we can still gather information from the data. Based on the logging data and what participants said during the experiment, long tap is the most common gesture when no signifiers are present to indicate that other affordances are possible. During the test it also became clear that a swipe is a common gesture when wanting to delete an item in the type of application we made, especially for iOS users. Our results show that in this experiment our signifiers did not effectively communicate the affordances of long tap or double tap.

Regarding the validity of our results, a few things should be considered. The order in which participants tried the different versions was balanced using a Latin square design, with all orderings occurring an equal number of times during the experiment. Our participants came from a wide variety of backgrounds and age groups, and even included a few different nationalities. These two factors strengthen the validity of our results. Our primary data was gathered from the log of the application. When looking through the logs, the program did not always assign the correct gesture name to what the application clearly registered, e.g. in a double tap version a double tap not resulting in a delete action, and therefore being read as two single taps, or in a long tap version a scroll being read as a long tap and success being reached that way. This was all corrected by having a person read through the log and note the results, but it does hurt the validity of our log data. Regarding the data from the questionnaire, self-reporting is always hard to validate, especially when the experiment is a between-groups setup. Having the questions based on a standard in the field (NASA TLX) heightens the validity of our results, and the fact that the results correspond quite well with the log data also strengthens them.

Reliability is based on whether the results can be replicated if the experiment is repeated under the same circumstances. This is affected negatively by the fact that we changed location three times, with one of these locations being in a public space, albeit a somewhat isolated corner of this space. The wide age range and varied backgrounds of our participants should have given us a reliable sample of the population, strengthening the reliability of our results in that aspect. Overall, both the validity and the reliability of our experiment and results are good.

During the test, we discovered some issues which may have impacted our results in some way.

We realised during the test that if the user swipes a list item slowly, it sometimes registered as a long tap, regardless of finger movement. This means that while the log data may show that some users discovered the long tap affordance, they may have actually swiped.

We also realised during the first few trials that the way we phrased the task of deleting an item was misleadingly vague and caused some test participants to believe that checking a box on the screen was sufficient. When we rephrased the assignment to specify that the item had to disappear altogether, participants understood the task better.

Several times throughout the experiment, in their search for a button to delete an item, test participants would accidentally return to the secret menu screen or to the phone’s home screen. This was accounted for when analysing the log. This may have impacted the user in two ways: the confusion may have caused the participants to feel more insecure and less inclined to try different approaches to the task. On the other hand, if a participant caught a peek of the text on the buttons, it may have revealed to them an affordance that they did not previously perceive.

The concept of nanointeractions opens up several avenues of potential future research. With more time and resources, we would have performed a large-scale within-subjects experiment with a more easily understandable signifier, in order to better determine the viability of revealing signifiers to the user while they are at a certain nanointeraction stage in a gesture. Furthermore, it may be valuable to explore the nature of changing gestures (e.g. performing a long tap, but then "cancelling" by moving one’s finger) and how designers can take advantage of this.

It may also be interesting to explore whether the order in which the different versions were experienced has an effect on the data. It is possible that, if a gesture (e.g. a long tap) is not possible in the first version a participant experienced, the participant may never attempt that gesture again in later trials where the gesture is possible. On the other hand, if the first version has the affordance of the most obvious non-tap gesture to that participant (often long tap), the participant may be more willing to try other gestures, as they have already seen that deleting an item is possible.

We also find that it would be interesting to study the relationship between the position on the screen of a visual signifier and the position at which the user performs a gesture. On touch screens, the user’s finger may block the visual signifier from their view if it appears upon touch, which may cause the user to never see the signifier, thus hindering discoverability. This is less of an issue for signifiers which are always visible.

In this report, we focused our research on two gestures: long tap and double tap. This was to keep the scope manageable. There are many other touch gestures which can be broken down into nanointeractions, and the complexity of some of them makes them especially interesting. An example is the drag gesture, which requires the user to change the position of their finger. While we have mapped a drag gesture into nanointeractions, the user may in theory change their course many times while dragging, which could be considered nanointeractions in and of themselves. Every touch gesture is different, and if we were to do further research, we would most definitely study the nature of other gestures as well.

We also chose to focus this report entirely on visual signifiers, but other types of feedback may have completely different effects on the user. Future research may reveal an interesting relationship between audio or haptic feedback and nanointeraction stages.

CONCLUSION
While the experiment in itself did not show any significant results, we argue that this project is still highly valuable for future research. Interaction design as a field suffers from differing use of terminology, as researchers have so many different backgrounds. In this report, we contribute to a more unified language when talking about human-computer interaction and the mechanics thereof. We further cement previously established language, and define terms for levels of interaction not previously discussed in detail, while also analysing current research into different affordance design angles with different modalities using our new terminology.

Our main contribution to the field is the concept of nanointeractions. Most research treats a touch screen gesture as one single interaction, without breaking it down. However, if we as interaction designers instead consider the elements which make up a gesture, the nanointeractions, it will open the door to several new considerations. In this report, we focus on the discoverability of gesture affordances depending on whether a signifier is made visible before any gesture is attempted, during a gesture, or after a gesture has been completed. The possibility of attempting to let the user perceive a previously hidden affordance as they are currently "in the middle of" a gesture has not previously been explored, and we hope that future researchers will investigate it further. Furthermore, if one thinks of touch gestures as a series of nanointeractions, one may also explore the nature of changing gestures. For example, if the user has initiated a long tap by placing their finger on a control and holding it there, but then moves the finger away from the control before lifting, they have changed their course "in the middle of" a gesture, which designers and future researchers may take into account, as it allows for new combinations of gestures to design affordances for.

ACKNOWLEDGEMENTS
In closing, the authors of this paper would like to give our thanks to everyone who volunteered to participate in our investigation. We thank Aalborg Hovedbibliotek and Region Nordjylland Regionshuset for allowing us to use their facilities for testing. We would also like to thank our supervisor, Hendrik Knoche, without whom this project would not have been possible.

REFERENCES
1. Apple, Inc. 2019. iOS Human Interface Guidelines - Buttons. https://developer.apple.com/design/human-interface-guidelines/ios/controls/. (2019). Accessed on: 06-03-2019.

2. Ilhan Aslan, Michael Dietz, and Elisabeth André. 2018. Gazeover – Exploring the UX of Gaze-triggered Affordance Communication for GUI Elements. In Proceedings of the 20th ACM International Conference on Multimodal Interaction (ICMI ’18). ACM, New York, NY, USA, 253–257. DOI: http://dx.doi.org/10.1145/3242969.3242987

3. William Buxton. 1990. A three-state model of graphical input. In Human-computer interaction - INTERACT, Vol. 90. 449–456.

4. Emilie Lind Damkjær, Liv Arleth, and Hendrik Knoche. 2018. “I Didn’t Know, You Could Do That” – Affordance Signifiers for Touch Gestures on Mobile Devices. In Interactivity, Game Creation, Design, Learning, and Innovation. Springer, 206–212.

5. Google. 2019. Material Design - Gestures. https://material.io/design/interaction/gestures.html. (2019). Accessed on: 06-03-2019.

6. Chris Harrison, Julia Schwarz, and Scott E. Hudson. 2011. TapSense: Enhancing Finger Interaction on Touch Surfaces. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST ’11). ACM, New York, NY, USA, 627–636. DOI: http://dx.doi.org/10.1145/2047196.2047279

7. Sandra G. Hart. 2006. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9 (2006), 904–908. DOI: http://dx.doi.org/10.1177/154193120605000909

8. Jeppe Veirum Larsen and Hendrik Knoche. 2017. States and Sound: Modelling User Interactions with Musical Interfaces. In New Interfaces for Musical Expression 2017. NIME.

9. Pedro Lopes, Patrik Jonell, and Patrick Baudisch. 2015. Affordance++: Allowing Objects to Communicate Dynamic Use. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2515–2524. DOI: http://dx.doi.org/10.1145/2702123.2702128

10. Rudy McDaniel. 2015. Understanding microinteractions as applied research opportunities for information designers. Communication Design Quarterly Review 3, 2 (2015), 55–62.

11. NASA. 2019. NASA TLX: Task Load Index. https://humansystems.arc.nasa.gov/groups/TLX/. (2019). Accessed on: 30-07-2019.

12. Donald A. Norman. 2008. The way I see it: Signifiers, not affordances. Interactions 15, 6 (2008), 18–19.

13. Ian Oakley, Carina Lindahl, Khahn Le, MD Rasel Islam, and DoYoung Lee. 2016. The Flat Finger: Exploring Area Touches on Smartwatches. CHI 2016 (2016).

14. Esben Warming Pedersen and Kasper Hornbæk. 2014. Expressive Touch: Studying Tapping Force on Tabletops. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). ACM, New York, NY, USA, 421–430. DOI: http://dx.doi.org/10.1145/2556288.2557019

15. Ved. 2017. Interaction Design Patterns: iOS vs Android. https://medium.com/@vedantha/interaction-design-patterns-ios-vs-android-111055f8a9b7. (2017). Accessed on: 06-03-2019.

16. Jo Vermeulen, Kris Luyten, Elise van den Hoven, and Karin Coninx. 2013. Crossing the bridge over Norman’s Gulf of Execution: revealing feedforward’s true identity. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1931–1940.

17. Bill Verplank. 2009. Interaction Design Sketchbook. (2009).
