
Master Thesis

Enhancing the interaction space

of a tabletop computing system

to design paper prototypes

for mobile applications

Candidate: Francesco Bonadiman

Supervisor: Prof. Dr.-Ing. Sebastian Möller

Co-Supervisor: Benjamin Bähr

EIT Digital Master School

Technische Universität Berlin

2016



Acknowledgments

Firstly, I would like to express my sincere gratitude to my supervisor Prof. Möller for giving me the opportunity to write this Master Thesis at the Telekom Innovation Laboratories. My deepest thanks, however, go to Benni, for his continuous support and for his patience while helping me during the implementation phases. At the same time, I would like to thank all the staff of the T-Labs buildings, for always being kind and preparing delicious cakes. I am indebted to the people who participated in the User Studies, especially the members of the Facebook group “Free Your Stuff Berlin”: thank you for devoting your time to science! I want to thank Facebook and Telegram for allowing me to keep in touch with my family and friends around the world, and Couchsurfing, for the lovely and open-minded people I met.

I am extremely grateful to the EIT Digital Master School for giving me the chance of this life-changing experience, for realizing several of my dreams and, especially, for broadening my horizons and my mind. Thanks for being not only a school, but a real colorful family.

Thank you Berlin for being so cool and young to me, so open and multicultural, so vibrant and vital. Thanks to all the friends who shared this part of my life with me: the HCID group, Pietro, my blonde Petra, my neighbor Pitt, my personal teacher Gigi and my Berliner family, Kathi, Uschi, Anne and, especially, Craig, for always “following me” along these two years.

Thank you Paris for being so full of joy and life, for the amazing views you gave me and for never letting me down during my first experience abroad. I thank the people I met there and the Paris Drink crew for the unforgettable year spent together: my sense of gratitude to Phil, Gavì, Miguel, Fatsa, Xin, Luca and, above all, Dimitris & Borče, “nice friends” for a lifetime.

Thank you Trentino for raising me and being my home for 24 years and counting. Thanks to the Da Vinci High School for the foreign languages and to the Faculty of Sciences for making me follow my ambitions. Thanks to the friends celebrating with me every time I am back: the crazy UniTN gang, my squirrel Jenny, my beloved Parri and my blood brother Poio. I thank all my relatives, in particular my grandparents and my aunts, for their unceasing support and encouragement and for trusting me whenever I promise to repair their computers.

Finally, unlimited thanks to my family for always believing in me and for making me realize “There’s no place like Home”. Thanks Angi for being the best sister I could imagine, and for making me feel so close to you with your daily ration of photos of Pucci and Crusoe. Thanks Dad, for being an out-of-the-ordinary father and a model for many things. Thanks Mum, for making me adore English and the world out there, and for always loving me unconditionally.


Declaration

I hereby declare that I have completed this thesis independently and without unauthorized external help, using only the cited sources and references.

Berlin, January 2016 . . . . . . . . . . . . . . . . . . . . . . . Francesco Bonadiman


Table of contents

Abstract
1. Introduction
2. State of the Art
2.1. Tabletop tangible interfaces
2.2. Prototyping
2.2.1. Paper-Prototyping
2.2.2. Mobile Devices
2.2.3. Existing Tools
3. The System and its Enhancement
3.1. The Existing System
3.2. Low-tech interaction techniques
3.2.1. Color detection
3.2.2. Barcode recognition
3.2.3. Further improvements
3.3. The Enhanced System
3.3.1. Modality A
3.3.2. Modality B
4. Evaluation
4.1. Experiment design
4.2. User studies
4.3. Analysis of the Results
4.4. Discussion
5. Conclusions and Future Work
6. Bibliography
Appendix A – List of tasks
Appendix B – List of instructions
Appendix C – Consent Form


Abstract

This Master Thesis presents new low-tech interaction techniques developed to enhance user interaction in tabletop computing environments. These low-tech approaches aim at providing a valuable alternative to high-tech solutions, whose interaction centers on computers and mobile devices. Electronic tools, in fact, tend to distract users and disturb the creative design process when working with low-tech methods within tabletop-based environments. The new interaction techniques were designed, implemented and finally applied to a tabletop computing system conceived to enhance paper-prototyping for mobile devices. This way, designers no longer need to continuously shift the fidelity of the media they are working with, and they are able to perform a number of different actions on the paper sketches (like editing or duplicating screens) using only low-tech means. Finally, these interaction techniques are evaluated with users to assess how fast and how mentally demanding they are, and which of the developed approaches is preferred.


1. Introduction

This Thesis describes the design and subsequent implementation of a set of new low-tech interaction techniques developed to enhance the user experience with a tabletop-based computing system.

At present, most tabletop computing environments include a number of devices and media with different degrees of fidelity, ranging from paper and office supplies to computers and smart objects. However, as mentioned in several studies and underlined by Carolyn Snyder in her book [1], continuously changing the fidelity of the tools and the modality of interaction with the system tends to disrupt the creative design process and confuse the users. The disruption is particularly severe when users working with low-tech design techniques are forced to shift to high-tech approaches: the interactions presented in this Thesis are therefore low-tech approaches created to replace the high-tech ones used in a tabletop computing environment. They are ultimately applied to Blended Prototyping1, a tabletop computing system designed for effective paper-prototyping, built by Benjamin Bähr, a researcher at the Telekom Innovation Laboratories in Berlin.

This system has been observed to considerably enhance and accelerate the development of mobile apps during early design phases; however, as said above, the combination of low- and high-tech solutions was found to be confusing. Whenever designers want to virtualize a sketch on the table or edit a prototype, they need to use an application on a tablet device. This operation, although usable and already implemented, works strongly against the principles and aims of the tabletop system itself: it requires a single user to stop the ideation process just to digitize the drawings or perform some operation on them, thereby breaking this collaborative and creative moment. Moreover, all the users involved become distracted and unfocused during this phase and, at the same time, the design process pauses and they stop interacting with each other. For this Thesis, new interaction techniques were therefore developed as an enhancement of this system, providing better ways to avoid shifting the fidelity of the media designers are working with.

Having introduced the topic of this Thesis, it is useful to briefly present how this dissertation is structured. Chapter 2 describes the State of the Art regarding the methods and technologies involved in this research: Chapter 2.1 analyzes the literature about tangible ways of interaction within tabletop environments.

1 Blended Prototyping. Retrieved January 16, 2016, from http://www.blended-prototyping.de/


Since this system is designed to enhance paper-prototyping for mobile devices, Chapter 2.2 contains a section of related literature for each of these subjects, followed by an analysis of the prototyping tools already existing on the market. As explained further on, there is a lack of adequate tools to effectively paper-prototype for mobile devices, which is the reason why Blended Prototyping was created. Chapter 3.1 therefore illustrates the existing system and explains its current state. After the description of the tabletop system, the new interaction techniques are presented: these, as said above, are alternative solutions that avoid the use of high-tech input devices, keeping the whole design process as low-tech as Paper-Prototyping. Chapter 3.2 depicts the whole enhancement process of this tabletop system, explaining the choices and assumptions that were made and giving some technical details of the implementation; at the same time, the limitations of the different approaches are justified. Chapter 3.3 describes the final version of the system, including the interaction techniques developed, which are divided into the two different designs tested in the following phase. Chapter 4 explains in detail how the system was evaluated, what the research questions were, which variables were assessed and how these were measured (Chapter 4.1). After the presentation of the User Studies (Chapter 4.2), the data is analyzed and interpreted to confirm or reject the research hypotheses by comparing the different interaction techniques tested with the users (Chapter 4.3). Thereafter, there is space for further discussion and considerations about the evaluation (Chapter 4.4). Chapter 5 concludes this Master Thesis, showcasing possible ways to explore the topic in more detail and suggesting further research that can be undertaken.


2. State of the Art

In order to fully understand what this Thesis is about, I need to explain a few concepts introduced above, which are necessary for a clear and deep view of the State of the Art regarding the topics involved. These are all mentioned in this thesis’ title: “tabletop computing system to design paper prototypes for mobile applications”. Here we can identify two main topics, which allow us to divide this chapter into two connected parts, analyzing the environment of this thesis from two different points of view and, at the same time, in a gradually more detailed and specific context.

Primarily, I am going to focus on the first part of this title, “tabletop computing system”: in this section I will analyze the literature regarding the use of physical and tangible objects in tabletop environments, which has been one of the main topics of interest within the HCI community for the last 15 years. This will be extremely useful when, later, I present and discuss the different interaction techniques I developed and applied to the tabletop computing environment of the Blended Prototyping system.

I will then describe the context in which this tabletop computing system works, highlighted in the second part of this thesis’ title: “paper prototypes for mobile applications”. Hence, I will first explain what the word Prototyping means and why this technique is so important in design and development environments; secondly, why Paper-Prototyping is so indispensable, followed by several examples of significant research papers dealing with it; ultimately, I will introduce the concept of prototyping for mobile devices, justifying why we focused on it, and likewise present a number of papers about it. Finally, I will draw up an analysis of the prototyping tools currently on the market, describing advantages and drawbacks and explaining why Blended Prototyping is different from most of them.

2.1. Tabletop tangible interfaces

As stated above, this first part of the Literature Analysis concentrates on the research and studies undertaken to exploit the potential of physical and tangible objects as means of interaction, and their consequent application to tabletop environments. Other similar environments which aim at enhancing the creative design process are also taken into consideration.

The first paper [2] presented here dates back to 1992 and introduces ClearBoard, a transparent mirror used to simulate a whiteboard as a shared drawing medium.


The purpose of this system is to allow users to sketch on the same shared surface while keeping eye contact, which is remarked to be crucial for creative processes. The tool records the image through the transparent screen to realize gaze awareness and projects it, mirrored, onto the partner’s board: this way, remote collaboration is not only supported but also improved, thanks to the focus on each other’s face. Furthermore, the paper discusses how talking over a table is a good way to keep eye contact, although it underlines how one of the two participants has to sketch upside down. On the other hand, it highlights how virtual reality could be a drawback due to the increased cognitive load.

One year later [3], Grudin joined the authors of the previous paper, Ishii and Kobayashi, to describe the evolution of ClearBoard into its second version, which includes a multiuser paint editor and a digitizer pen. This way, drawings can easily be exported and saved for later use, while external documents can be imported and expanded. Moreover, thanks to this new editor based on an intuitive user interface, the participants feel as if they are simply drawing on a sketch pad with a pencil. This system, although mostly focusing on gaze awareness, is certainly interesting since it can be considered the first natural integration of groupware with video conferencing to boost the creative design process on a shared drawing surface.

In 1995 Ishii collaborated with Fitzmaurice and Buxton to develop Bricks [4]: this paper is a milestone for this research area, since it defines the concept of Graspable User Interfaces. These consist of a physical handle, the so-called brick, which is essentially a new input device used to directly manipulate and perform operations on a virtual object. This new way of interacting with electronic objects thus aims at creating a bridge between the physical and the virtual world. The bricks can be used to execute tasks like transforming or selecting an object, moving and rotating it or, by using more handles, stretching squares. Even though this paper is more than 20 years old, it is surprising to notice how some of the interactions I developed for this Thesis share common traits with the ones Bricks introduced.

The same concept of Graspable User Interfaces is recalled in [5] under the name of Tangible Bits, meaning that users can control and manipulate virtual bits by matching them with physical objects and interactive surfaces. The term “tangible” is thereby introduced for the first time: in this case, it identifies the transformation of surfaces (like doors, windows, walls, desktops) into active interfaces, coupling the physical to the virtual world. Moreover, the focus of this paper is on expanding the users’ interaction with the cyberspace, which is often limited to traditional GUIs. For this reason, this technique aims at raising users’ awareness of the background peripheral space by emphasizing the perception of ambient light, sound, airflow and water movements. This idea of Tangible Bits is then realized through a prototype system named metaDESK, introduced in the previous paper and discussed further in [6]. This platform uses physical objects, surfaces and instruments as elements of Tangible User Interfaces (TUIs).


These allow users to manipulate bits within their center of attention, which is represented by a near-horizontal display surface complemented by an active and a passive display (lens).

The papers above mainly presented examples of Graspable/Tangible User Interfaces. Several years later (in 2005), however, Kaltenbrunner et al. [7] described a whole protocol defining them within a tabletop environment. This was done to interconnect the tables being developed at the time, by determining common gestures and properties of the objects being manipulated. This protocol, named TUIO, implements a set message, which provides information about the state of an object (e.g. position and orientation), and an alive message, which lists the objects currently on the surface according to their IDs (a minimal client sketch is given at the end of this section). The protocol was then implemented within the reacTable system, described in [8], which uses a tabletop tangible interface to generate live music. This is a round table where musicians can manipulate physical artifacts to produce different audio typologies: this is achieved through several means of interaction, such as rotating or moving the handles on the surface, caressing the synthesizer components, or connecting and disconnecting them to edit and create new kinds of sound. According to their shape and marker, these objects have distinct outputs: for instance, they can be generators, filters, controllers or mixers. They are detected by the vision engine framework this table is based on, reacTIVision, explained in detail in [9]. This framework, specifically developed to track certain markers in real time, resembles the one used by the Blended Prototyping system to detect barcodes.

Narrowing the focus onto cooperative work and paper-centric interfaces, a paper from 2009 presents CoScribe [10], a concept and prototype system which allows users to interact with both printed and projected digital documents. This is done through the same digital pen and interactions, according to a new paradigm called Pen-and-Paper User Interface (PPUI). This way, paper notes are seamlessly integrated onto the tabletop display, and learners can easily annotate them and add links and tags to both digital and physical documents: this approach, excluding the digital pen, is similar to that of Blended Prototyping. In the same year a paper describing TinkerSheets [11] was published: this is an interactive paper-based form which allows users to physically manipulate controls to set different values and parameters. This is done by either moving magnetic tokens or drawing with a pen inside the fields of the form: afterwards, an algorithm checks every dark round object within a certain threshold and detects whether it is an intended user input or not.

In the following years, other research papers experimented with different interaction techniques for media like rollable displays, flat panel displays or flexible bending projected displays; these, however, are not investigated further. For the same reason, other uses of tabletop systems (such as geometric drawing or classifying shapes) are not taken into consideration, since their aims or implementations are too distinct from the tabletop of Blended Prototyping.
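To make the TUIO exchange concrete, the sketch below shows how a client could decode the two message types just described. It assumes the standard /tuio/2Dobj profile on TUIO’s default port 3333 and the python-osc library; the handler names and logic are illustrative, not part of the TUIO reference implementation.

    # Minimal TUIO client sketch (assumptions: python-osc installed, a TUIO
    # source sending the standard /tuio/2Dobj profile on port 3333).
    from pythonosc import dispatcher, osc_server

    def on_2dobj(address, *args):
        if args[0] == "set":
            # set: session id, fiducial id, normalized position (x, y) and
            # angle in radians; velocity/acceleration fields are ignored here
            session_id, fiducial_id, x, y, angle = args[1:6]
            print(f"object {fiducial_id} (session {session_id}) "
                  f"at ({x:.2f}, {y:.2f}), rotated {angle:.2f} rad")
        elif args[0] == "alive":
            # alive: session ids of all objects currently on the surface
            print("on surface:", list(args[1:]))

    disp = dispatcher.Dispatcher()
    disp.map("/tuio/2Dobj", on_2dobj)
    osc_server.BlockingOSCUDPServer(("0.0.0.0", 3333), disp).serve_forever()

By comparing successive alive lists, a client can detect when a tangible object is added to or removed from the table, which is precisely the state information a tabletop application needs.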


2.2. Prototyping

After presenting the Literature Analysis regarding tangible tabletop interactions, I will now focus on prototyping, since this is the setting in which the Blended Prototyping system takes place.

The term prototype is composed of two Greek terms, proto (first) and typos (impression): as the name already suggests, a prototype represents an initial design which gives us a first impression of a not-yet-developed product; other definitions are “a concrete representation of part or all of an interactive system” and “a tangible artifact” [12]. Prototyping, of course, is the process of building prototypes. The purpose of this method is to offer other designers and users a clear and straightforward idea of how the envisioned design should be realized; on top of that, they can even test and try it out, giving feedback and suggestions or simply expressing their thoughts and concerns, according to the concepts of User-Centered Design [13]. Moreover, a prototype helps clarify one’s mind by offering “a new perspective and experience on one’s own ideas” and giving them a definite and “ordered structure” [14].

As mentioned above, a prototype has two main goals: testing diverse design solutions and gathering impressions and recommendations about them. This technique is completely different from normal testing, however, as it is performed well before the coding phase starts, in a so-called early stage of development. For this reason, this process saves both time (to design and code the chosen idea) and money (to pay employees). We will return to this point in more detail later on, while examining Paper-Prototyping. Moreover, a prototype is particularly helpful thanks to its iterative nature: it can be modified and improved continuously until a satisfactory result is obtained, on the basis of user feedback, usability testing and design choices. Iterative refinement, indeed, is “the only way to build a successful software user interface” according to Nielsen [15].

There are several types of prototyping techniques, and each of them may be more (or less) efficient in a certain context of use or under certain external conditions: the chosen one therefore strongly influences the results of the whole design and evaluation of the system. These techniques are usually classified according to their fidelity, which is “the degree to which the prototype accurately represents the appearance and interaction of the product”2: this can be high (computer-based functional simulation with polished graphic design), medium (somewhat detailed, but with approximated objects) or low (little or no functionality, with mostly rough and schematic sketches) [16]. This latter type is the one we will focus on.

2 Prototyping for Design and Evaluation. (1998, Fall). Retrieved January 16, 2016, from http://grouplab.cpsc.ucalgary.ca/saul/681/1998/prototyping/survey.html


Low-fidelity (Lo-Fi) prototypes are mainly used to create alternative design solutions and quickly express ideas: in this process, the functionalities and the user interaction are generally less relevant. The main goal of this technique is to show people the envisioned “look and feel” of the interface, exchanging thoughts and collecting impressions from them. The simplest version of a Lo-Fi prototype can be realized by sketching on paper with a pencil, which is exactly our next topic: Paper-Prototyping.

2.2.1. Paper-Prototyping

Paper-Prototyping, also known as Paper-based Prototyping, is an essential technique in both research and industry environments. The term indicates the process of generating paper reproductions of user interfaces and letting people test these versions rather than a real software UI, as Snyder depicts in her book. According to her, this method was probably born at the beginning of the 1990s, being “used by a few pockets of usability pioneers”; however, it soon became popular and, a few years later, it was already considered a standard in the development process of renowned enterprises.

As we can notice in our everyday life, Information and Communication Technology has evolved deeply over the last 25 years, yet the way user interfaces are built does not reflect this tendency. This can be clearly seen by simply looking at Paper-Prototyping: what is surprising about this technique, in fact, is that it has remained (nearly) unchanged during all this time, yet it is still indispensable for effortlessly creating efficient UI prototypes in the early phases of design. For instance, the tools needed are always the same: paper, pencils, highlighters, glue, scissors, post-its and other office supplies. The procedure is still extremely simple: the users are given a task which they are required to accomplish. However, they will not be interacting with a real software interface but, as the name implies, with a paper version of it: “they will click by touching the prototype buttons or links and type by writing data right on the prototype”. The dynamic behavior of the interface is simulated by a team member named “computer”, who moves papers and sketches by hand in response to end-user actions, but without explaining how the system works. Frequently there is a “facilitator” too, who guides the testing, and some “observers”, who take notes about what happens during the testing and how the users react to the tasks.

One of the most important aspects of Paper-Prototyping is that it makes it easy to form groups with users of any background or expertise: sketching with pencils on paper is such an elementary action and, moreover, the drawings do not need to look perfect.


This stimulates collaboration between users of different disciplines and qualifications, enhancing the generation and communication of ideas. Snyder expresses this concept in her book by stating that Paper-Prototyping “can be considered a method of brainstorming, designing, creating, testing, and communicating user interfaces” because, thanks to its hybrid nature, it simplifies “the communication within the team and between dev team and customers”. Paper-Prototyping is generally portrayed by means of two adjectives: fast and cheap.

● Fast, since it allows designers to quickly express ideas and generate a myriad of different design solutions, trying them out with users right away [17]. In addition, reducing the niceties and specifications of the interface saves a huge amount of time, which is crucial especially in the early phases of design. This is because the initial design does not need to be too detailed, since Paper-Prototyping is based on an iterative process: design solutions are continuously produced, tested, and then improved or discarded.

● Cheap, for several reasons. As we all know, “time is money”, so saving time normally means saving money too. Adopting Paper-Prototyping helps avoid wrong development choices and identify mistakes in an interface even before it is coded: this saves extremely precious time, which translates into an enormous amount of money. Snyder even asserts that “the benefits from early usability data are at least ten times bigger than the benefits from late usability data”; nevertheless, there is still a good number of people who do not believe they can “get enough information from something that simple and that cheap”.

Furthermore, what is probably even more fascinating and surprising about this technique is that users feel totally free to express their thoughts and critiques. As emphasized by Bähr [18], “a polished interface increases the users’ hesitation to critically communicate their experiences”, because they could be too “shy to describe their problems and issues with the software. These people will be more likely to discuss their opinions, when they are presented with a simple diagram, or even childish looking paper-based sketch interface representation”. In other words, an interface with a sketchy and unfinished look is more creatively captivating and will produce more valuable and spontaneous feedback than a very elaborate one, into which, moreover, a developer has put effort and work that would be wasted3.

3 Why Sketching and Wireframing Ideas Strengthens Designs. (2010, September 17). Retrieved January 16, 2016, from http://spyrestudios.com/why-sketching-and-wireframing-ideas-strengthens-designs/


This is confirmed by a paper [19] published back in 1996, which showed how several architects, in the early stages of design, prefer to present hand-drawn sketches to their clients rather than high-quality computer models, because sketches were observed to greatly encourage discussion. In addition, such a participatory and cooperative method smooths the transition between the ideation phase and the subsequent development. Finally, a number of published research papers, like [20], [21] and [22], have shown not only that “paper-prototypes are as effective as high-fidelity prototypes at detecting many types of usability issues”4, but also that the exact same usability issues were captured using either of the two techniques. For instance, the latter paper demonstrates that a paper prototype uncovered the same 3 major usability problems as an interactive prototype did; not only that, but changes to the interface could also be incorporated more quickly when just using paper.

As discussed above, there are several studies related to the topic of Paper-Prototyping, and most of them present hybrid tools trying to adapt this technique to interactive systems. This is exactly the concept the Blended Prototyping system is based on: exploiting the power of Paper-Prototyping while, at the same time, combining it with a computer-based tool to make it more adaptable, robust and reproducible.

The first such tool dates back to 1996 and is named “SILK: Sketching Interfaces Like Krazy” [17]: it allows designers to sketch using an electronic pad and a stylus, thereby maintaining the properties of paper and pencil, and to test the design solutions by interacting with them. The sketches can be created via gestures and storyboards can be generated to move through the screens; parts of old designs can be reviewed and reused to produce different prototypes, and the drawings are automatically converted into widgets and graphical objects. Although this paper is almost 20 years old, it is highly relevant to our goal: it already highlights the importance of quick iteration in the early stages of design, which needs no over-specification (implying loss of spontaneity), but at the same time it remarks on the necessity of flexible and fast UI construction tools. The paper even predicts a utopian vision of the future, “in which most of the UI code will be generated by UI designers using tools like SILK rather than by programmers writing the code”.

The author of this paper also contributed to the birth of DENIM, a tool presented in 2000 [23]. Similarly to SILK it supports pen-based interaction; however, it offers several different levels of refinement and sketching, which are unified through zooming from the less to the more detailed (i.e. Site maps → Storyboards → Thumbnails → Schematics → Mockups).

4 Paper Prototypes Work as Well as Software Prototypes. (2005, June). Retrieved January 16, 2016, from http://www.usability.gov/get-involved/blog/2005/06/paper-prototypes-and-software-prototypes.html


Therefore, although working with this system might look like using pen on paper, it allows designers to express ideas more precisely and to a greater level of detail, which helps the design later in the process. Indeed, once the screens are drawn and the shapes are detected as UI widgets, the designers can connect these screens to create storyboards, testing and interacting with them right away.

Lin, who contributed to DENIM, published another paper in 2003 describing Damask [24], which follows the same design approach and mostly aims at better generating UIs for different types of devices by only specifying the design patterns. The final purpose, of course, is to avoid designing a separate user interface for every single device, which is time-consuming and insufficiently supported by appropriate applications. Damask creates an abstract model of the design by using the sketches and patterns of the interface at a high level of abstraction: this is then adapted to build the various device-specific UIs.

A few years later, Landay (who previously presented SILK) introduced SketchWizard [25], a tool for generating Wizard of Oz prototypes of pen-based user interfaces. The so-called Wizard of Oz technique allows designers to simulate not-yet-implemented behaviors of an interface “behind a curtain”, while observing how the users, on the other side of the drawing canvas, react to them. This way the experimenter can easily fake complex transformations of the user’s input. This paper, too, strongly emphasizes how using such early-stage prototyping tools helps make design decisions as the interface evolves, especially for pen-based applications where, back in 2007, design and technology were described as tightly coupled.

Being still mostly prototypes and experiments, all these tools remained limited to research environments and none of them became a real final product. However, they definitely helped the design and development of the later prototyping applications we will analyze further on.

2.2.2. Mobile Devices

According to what we have read so far, Paper-Prototyping presents several great benefits. However, as discussed above, it has barely changed since its inception nearly 25 years ago, whereas a number of technological revolutions have happened in the meantime. This certainly keeps the learning process easy but, on the other hand, may provide inadequate solutions for novel and emerging technologies. An example is the advent of mobile devices: while Paper-Prototyping performs perfectly within a lab environment, a number of complications arise when it comes to using this technique for mobile testing.


Firstly, the word “mobile” itself implies moving the design process outside the laboratory, testing and evaluating the devices in real-life situations and conditions. The user interface is thus put to the test by this variety of usage contexts and environmental settings, since it needs to remain constantly usable and adaptable to the changing level of attention of the end-users. For the same reason, even monitoring and tracking the users’ interactions with the device is much harder than in a lab setting. Furthermore, there are countless different models of mobile devices, and each of them has its own potential and strengths but, simultaneously, hardware limitations and constraints which are hard (if not impossible) to properly reproduce with a paper prototype. Besides, a typical paper prototype usually deteriorates considerably during outdoor evaluation sessions.

As we can imagine, there is also a good number of research papers and studies about prototyping for mobile devices: most of them introduce tools which try to apply the core concepts of Paper-Prototyping to the agile context of mobile devices. However, although these tools were testable outside the lab, they suffered from many significant issues which prevented them from reaching the market.

The first researchers dealing with this topic were de Sá and Carriço who, from 2006 to 2009, published four papers about low-/mixed-fidelity prototyping for mobile devices. The earliest [26] focuses on how to incorporate mobility and new hardware constraints into initial-stage prototyping without losing quality or usability standards. For instance, while the screen measurements or the available components of a device may be reproduced somewhat inaccurately, the context of use and the scenarios are essential to effectively run usability testing. In order to prove this, they built identical wooden (and plastic) reproductions of mobile devices, into which they inserted paper representations to simulate the different screens.

In the second paper [27], they instead present a framework for mixed-fidelity prototyping: one can start from either hand-drawn sketches or pre-programmed components, thus adapting the prototype to the desired level of refinement and functionality. This way the sketching and writing habits are preserved and, at the same time, the generated prototypes can be tested on actual devices, “giving users a more tangible and realistic feel of the future application” (even using the Wizard-of-Oz technique). As a consequence, logging the interactions (both passively and actively) and evaluating the user experience (even by reproducing it) is certainly easier: by targeting only mobile devices, this tool supports different hardware constraints, screen sizes, functionalities and interaction modalities, which can be chosen according to the environmental conditions. Furthermore, users are able to edit and improve the prototypes directly on the mobile device while testing them in real-world settings; moreover, they can give feedback directly through diary studies and questionnaires included in the tool.


The last two papers on the topic, [28] and [29], both from 2009, are essentially variations of the previous one: they again provide a detailed description of this framework for mixed-fidelity prototyping, mostly focusing on the advantages of evaluating systems within their natural usage context. Not only does this create a more realistic usage experience, but it also promotes participatory design (with users directly contributing to building the interface) and boosts user engagement. Afterwards, together with Duarte [30], they directed their attention towards using physiological data in mobile contexts to analyze user satisfaction (or frustration) during the evaluation phase: with the constant improvement of sensors, this will likely become a common technique in the next few years.

MobiDev [31] dates back to 2010: as described in the paper, it is a tool designed for creating apps for mobile devices directly on the phone, starting from UIs sketched on paper. This of course reduces the text input effort and aims at enhancing the interactions with the system. The tool is based on image processing, which means that the elements of the screens are parsed and abstracted according to certain symbols and logic (e.g. an arrow is detected as a transition); a toy illustration of this kind of symbol parsing is sketched below.

Another paper, from 2012 [32], is extremely clear and exhaustive about the potential of Paper-Prototyping, stating in detail all the benefits (low development costs, short production time, easy-to-use approach, no over-detailed interface, …) we have introduced up to now. And, as we might expect, it highlights how this technique is not well suited to the evaluation of mobile interfaces, mostly due to the difficulties in emulating real-world contexts and realistic settings. To address these problems, it introduces a system using a digital pen to capture hand-drawn sketches and generate prototypes of different fidelities. Similarly to SILK, these are converted to a digital version (through HTML pages and elements) according to given sketching templates, and can then easily be tested in the mobile phone’s browser.
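As a toy illustration of the kind of symbol parsing such image-processing pipelines perform (this is not MobiDev’s actual algorithm, whose implementation details the paper does not reduce to code), the sketch below classifies hand-drawn contours by their approximated polygon: a roughly rectangular closed shape is read as a widget box, while a thin stroke is read as a possible transition arrow. The thresholds, file name and labels are illustrative assumptions.

    # Toy contour classifier for scanned UI sketches (illustrative only;
    # assumes OpenCV and a dark-ink-on-white sketch saved as "sketch.png").
    import cv2

    img = cv2.imread("sketch.png", cv2.IMREAD_GRAYSCALE)
    # Invert so the dark pen strokes become the foreground.
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        perimeter = cv2.arcLength(c, True)
        area = cv2.contourArea(c)
        approx = cv2.approxPolyDP(c, 0.03 * perimeter, True)
        x, y, w, h = cv2.boundingRect(c)
        if len(approx) == 4 and area > 0.5 * w * h:
            print(f"widget box at ({x},{y}), size {w}x{h}")     # e.g. a button
        elif area < 0.2 * w * h:
            print(f"possible transition stroke near ({x},{y})")  # thin line/arrow

A real tool would of course go further, for instance matching arrowheads to decide a transition’s direction and linking the stroke’s endpoints to the screens it connects.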

2.2.3. Existing Tools

As discussed above, most of the tools seen in the Literature Analysis never became marketable products and rather remained prototypes or research experiments. On the other hand, there is an astonishing number of prototyping products on the market, which I am going to introduce briefly hereafter. For a broader view of them, you may want to refer to the Market Analysis presented in my Minor Thesis5. The latter is a dissertation required to be tightly coupled to this Master Thesis, but focusing on the business and entrepreneurship side of the topic of Prototyping.

5 A tabletop system to paper-prototype for mobile applications. Retrieved January 16, 2016, from http://slideshare.net/franzonadiman/a-tabletop-system-to-paperprototype-for-mobile-applications


Unfortunately, there is no numerical evidence about market shares or numbers of users of prototyping tools: no official data exist to reveal which companies (or startups, research departments, freelancers, …) use a specific tool in their working life. Moreover, most of them normally make use of several tools at the same time to serve their purposes. It is therefore easy to see how open and dynamic this market is: thousands of websites and blogs discuss daily which tool might be the best according to different parameters and goals. And, despite the considerable number of competitors, there is no monopoly; on the contrary, new tools are continuously coming to light. We could even say that the market is overcrowded with similar products, which mostly differ in minor functionalities or details. Of course, some work better than others and are somewhat closer to what we envision; however, we are still certainly far from perfection.

Indeed, there is a noticeable and real need for Paper-Prototyping tools. Not only can this be perceived by simply observing the trend of the market in the last few years, where most of the new apps (as we will see later) are designed to generate digital versions of sketches, helping the Paper-Prototyping process by integrating manual drawings. It is also highlighted by a survey run a few years ago by Rosenfeld Media6, with nearly 200 participants who “represented a mix of roles in the UX community”: surprisingly, with 81% of preferences, it revealed that the most common tool for prototyping was simply Paper!

Unfortunately, the majority of the prototyping tools are difficult to use, or not quick and lightweight enough, or do not even permit designing prototypes starting from manual drawings. Not only that, but also very few of them address the specific requirements of mobile devices, since this is still a recent and immature field of research: most of them are still based on desktop/laptop UIs and do not allow usability testing in the field.

Thus, moving in reverse order with respect to the Literature Analysis, I will start by introducing the most mobile-friendly products, then the ones related to Paper-Prototyping, and finally I will present other classic prototyping tools. The following list draws inspiration from the evaluation run in 2013 by Emily Schwartzman, who tried to generate a prototype with each of 10 different prototyping tools and afterwards created a chart ranking them according to certain parameters, features and purposes7. She has since continued to update her chart with new products.

6 First Prototyping Survey Results | Rosenfeld Media. (2008, August 1). Retrieved January 16, 2016, from http://rosenfeldmedia.com/prototyping/first-prototyping-survey-resul/
7 Designer's Toolkit: Road Testing Prototype Tools. (2013, July). Retrieved January 16, 2016, from http://www.cooper.com/journal/2013/07/designers-toolkit-proto-testing-for-prototypes


I therefore referred to this table8 and merged it with articles9 10, blog-posts11 and my personal experience so far.

● POP: Prototyping On Paper (popapp.in): this app is probably the most similar to the Blended Prototyping system and to how the latter is supposed to work. As the name suggests it focuses on Paper-Prototyping: it allows designers to start from hand-drawn sketches and turn them into testable prototypes by easily linking screens using hotspots. It is extremely fast, easy to use and optimized for mobile interaction.

● Invision (invisionapp.com): one of the most popular among the new prototyping tools, it looks much the same as POP but only works with existing mockups, having no drawing or image-creation tool. It is intuitive, easy to learn and constantly improved; it aims at creating a quick click-through prototype which is easily shareable and exportable.

● Marvel (marvelapp.com): apart from some extra features, it is a near copy of Invision (or vice versa). Again, it is not primarily focused on paper sketches, since one can only start from actual mockups; however, it offers a good number of frames for different devices and allows one to easily connect screens or add transitions through gestures.

● Flinto (flinto.com): this app concentrates on the possibility of testing the design from the very beginning, applying corrections and improvements on the go and replacing sketches with mockups using drag-and-drop. The final prototypes are thus generated with an already high degree of fidelity.

● Balsamiq (balsamiq.com): being one of the oldest (and most appreciated) tools, it offers such limited choices that it is difficult to waste time on useless details: the “whiteboard” look forces users to focus only on content and interactions. For the same reason, however, it is often inadequate for complex tasks and it has few import/export options.

8 Designer's Toolkit: Prototyping Tools. (2015). Retrieved January 16, 2016, from http://www.cooper.com/prototyping-tools
9 6 New UX Prototyping Tools for Designers. Retrieved January 16, 2016, from http://www.core77.com/posts/39834/6-New-UX-Prototyping-Tools-for-Designers
10 Top 7 Interactive Prototyping Tools. (2015, June 23). Retrieved January 16, 2016, from http://www.coderewind.com/2015/06/top-7-interactive-prototyping-tools/
11 List of Prototyping Tools. (2015, August 19). Retrieved January 16, 2016, from http://blog.templatemonster.com/2015/08/19/list-of-prototyping-tools/


● Solidify (solidifyapp.com): specifically designed for usability testing, it allows one to easily track every interaction the user has with the system, collect feedback and create reports. On the other hand, it has no animations and no creation or editing tools for individual elements.

● Justinmind (justinmind.com): one of the youngest tools, yet flexible, elegant and with powerful functionalities to easily generate click-through prototypes from existing design patterns. It supports gesture-based interaction and offers design templates for graphics; on the other hand, there is little documentation.

● FluidUI (fluidui.com): a browser-based tool mostly used to design touch interfaces. These can be built either from existing mockups or by placing widgets onto screens via a drag-and-drop editor using a Zooming User Interface. There are many different elements and libraries for several devices; however, it has a medium learning curve.

● Pixate (pixate.com): this tool offers live simulation of the prototype and updates in real time on the device, which makes it perfect for testing complex animations, interactions and gestures. However, being based on layers and actions (instead of UI elements), it has a moderate learning curve.

● Axure (axure.com): one of the oldest (and still ruling) prototyping tools for companies. Thus, it has an enormous user base, an active community and dedicated forums for support. Unfortunately, it has a pretty steep learning curve due to the numerous and advanced functions, such as desktop animations, group workflow and version control.

● Proto.io (proto.io): this tool is extremely time-consuming and pretty difficult to digest. However, it has video training and support documentation to help the learning process. The widget library is huge and every element or setting can be edited with a single click. It supports team workflow with different roles for every project.

● UXPin (uxpin.com): even though it was built “by UX designers for UX designers”, the interface is frustrating and buggy. Prototypes can be generated from existing mockups or external files. It is similar to Justinmind but focuses more on team collaboration, with features like VOIP, screen-sharing and video-conferencing.

I placed all these products on the graph below, positioning them according to the two criteria I consider essential for a tool designed for Paper-Prototyping for mobile devices: on the X-axis, the ease-of-use, which could be described as a combination of time and effort needed to create a working prototype; on the Y-axis, the suitability for paper sketches and mobile usability testing.


Figure 1 - Market differentiation for prototyping tools and possible location of Blended Prototyping system (from 6).

As can be noticed in Figure 1, the green area shows where the Blended Prototyping system would be located compared to the previously presented products: it aims at making Paper-Prototyping extremely agile and is totally focused on building a working mobile version for usability testing. POP is probably the most similar tool and, together with the other three apps at the top of the list, can be considered a “direct competitor”12: these products are comparable (despite some fundamental differences) and target the same customers. They are fast, light and perfect for producing many different ideas and testing and evaluating them right away. The other products (from Balsamiq on), by contrast, are “indirect competitors”, because they allow one to obtain (almost) the same results, but by using their functionalities differently from how they are meant to be used. Indeed, they usually force users to digitally redesign the prototype from scratch, require more time to get familiar with the tool, provide a huge number of features, and encourage the user to focus on details which are not important in the early design phase.

12 Understanding Your Competition | Small Business BC. (2015, February 12). Retrieved January 16, 2016, from http://smallbusinessbc.ca/article/understanding-your-competition/


I decided to exclude several other tools from this analysis, since they are either too similar to the ones listed, too concentrated on wireframing and layouts, or have too small a user base. Here are some of them: Wireframesketcher (wireframesketcher.com), FieldTest (fieldtestapp.com), UX-App (ux-app.com), Principle (principleformac.com), Prototypes (prototypesapp.com), HotGloo (hotgloo.com), Form (relativewave.com/form), Protoshare (protoshare.com), Briefs (giveabrief.com), Moqups (moqups.com), Codiqa (codiqa.com), Mockups.me (mockups.me), Mockflow (mockflow.com), Flairbuilder (flairbuilder.com), Webflow (webflow.com), Framer.js (framerjs.com), Origami (facebook.github.io/origami), Wireframe (wireframe.cc), Indigo Studio (infragistics.com/products/indigo-studio).

All of these products, however, base their design process on computer or mobile applications: therefore, the previously described benefits of collaborative sketching do not apply in their case. As Bill Buxton stated in [33], you first have to get “the right design, before proceeding with getting the design right”; but computer programs and other electronic tools are not designed to support the numerous ideas and conflicting opinions needed to get the right design. And this is one of the reasons why the Blended Prototyping system is still based on a traditional structure: paper sketches on a table.


3. The System and its Enhancement

In this Chapter I am first going to present the original version of the Blended Prototyping system, describing why it is so different from other tabletop-based environments and how it aims at improving the Paper-Prototyping process. Then I will present the interaction techniques I developed, explaining the considerations and assumptions that brought me to implement these approaches. The whole enhancement process and the final version of the system are thus described: I will illustrate the different interaction methods using pictures and giving some technical details of their implementation. Furthermore, I will justify the limitations of these interaction techniques.

3.1. The Existing System

The previous chapter gave us a precise idea of which functionalities and specifications an ideal tabletop system for paper-prototyping mobile applications should have. In particular, it should make it possible to sketch quickly, creating multiple design alternatives without requiring too many unimportant details, and then to examine and discuss these design solutions directly with the users in a collaborative way around the table. Ultimately, the final prototypes ought to be suitable for usability testing both in and out of the lab, within real-life scenarios and under a wide range of usage conditions. As anticipated, the system presented here was conceived for exactly these purposes. It is a tabletop computing system originally named Blended Prototyping [34] that still centers its design process on hand-drawn paper sketches, which can then be converted into digital versions (projected on the table) and testable applications (running on a mobile device). This way, the aim of this design tool is to combine the potential of both Paper-Prototyping and mobile devices to create “an environment that allows to transform sketched ideas into code that can run and can be tested” [35], thus simplifying and accelerating the design process. What is innovative about this system is the central role paper plays: designers can meet at this “multimedia-enhanced table” and sketch interfaces right on regular paper sheets. Thanks to the elementary and effortless use of pen on paper, the design process can be as interdisciplinary and creative as possible, raising collaboration among the group members to a higher level and building the perfect environment for ideation and discussion. However, even if easy to learn and extremely fast, this approach offers “enough complexity to be relevant for the development”: in fact, as described by Bähr, it allows users to “define dynamic


interface behavior through actions on the tabletop” and “program functionality to sketched prototypes in a native programming language” [18]. The prototypes generated this way can then be tested directly on a target mobile device and comfortably evaluated “in the field”. Another advantage of this system is that it makes it easy to replicate and share the digital copies of the prototypes: designers can distribute their design ideas within their team or to external people (like customers), reuse them later or simply store them. The system is composed of two distinct parts: the hardware (Figure 2) and the software. The hardware component consists of a few different elements (the so-called tabletop setup):

● a video projector, mounted perpendicularly in a central position above a normal meeting table, which projects the mobile frames (inside which users sketch) and the already digitized prototypes;

● a webcam, placed below the projector and pointing at the table, used for barcode recognition with the aim of determining and mapping the positions of the screens;

● a DSLR camera, used for shooting high-resolution photographs of the sketches on the tabletop surface, which can then be translated into digital versions;

● a tablet device, which allows the designers to interact with the system and perform a number of different actions on the prototypes.

Figure 2 - System setup with projector, DSLR camera, webcam, paper sheet and tablet device, from [34] and [35].

The software part is instead a Java application (running on a computer) which controls the behavior of the projector and the cameras: it basically identifies the drawings as


they evolve by reading the barcode markers on top of the paper sheets. It can therefore digitize the sketches and convert them into the actual prototypes: these can then be enhanced, edited and displayed on a mobile device, where users can try them out in real-life usage contexts. Hereafter, I am going to describe in detail how this system works and in which way the different components are connected along the whole design process. As introduced before, every paper sheet the designers use is equipped with a barcode marker (Figure 3), which is similar to a QR code but wider and optimized to be seen by the webcam. The webcam is thus able to easily detect the sheet’s current position and rotation on the table.

Figure 3 - Example of barcode marker

At any time, the design team can use a tablet device to digitize the drawings, which is done by taking a picture of the whole table surface. At this point, an algorithm uses the barcode markers as a reference to extract the desired screens, corrects the image frames (by removing the “perspective- and lens-distortion effects”) and crops them to include only the elements of the interface. The hand-drawn sketches can then be removed from the table and substituted by their digital versions, which are now projected onto new blank paper sheets. These, of course, are still associated with the barcode marker and thus updated in real time as soon as they are edited or moved around the table. Moreover, new sketched content can be continuously added onto the projected interface and merged into a new digital version at any time. One of the main features of this system, however, is the possibility to define widgets and semantics: using the tablet, the designers can manually define “hotspots” on the prototype and turn them into different design patterns - such as buttons, textboxes, images, checkboxes and so forth. Furthermore, they can create links between the prototypes in order to replicate the transition from a button to the following screen: once done, the interaction paths are projected on the table’s surface, making it possible to trace the connections and the storyboard along the design process. Finally, all the data generated by the design tool can be easily converted into working code (e.g. Java classes), which reflects the logic just realized on the table. This way, designers can simply program (in a language native to the devices) additional behaviors and functionalities, enhancing the testable prototypes and smoothing the shift towards the development phase.
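To make the extraction step more concrete, here is a minimal sketch of the perspective-correction part only, written against the official OpenCV Java bindings (the actual system uses JavaCV, whose wrapper calls differ slightly, and lens-distortion removal is a separate step not shown). All names are illustrative assumptions, not the thesis code.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    public class ScreenExtractor {
        // Assumes the OpenCV native library was loaded beforehand, e.g.
        // System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Rectify one screen region of the table photograph, given the four
        // corner points of the sheet derived from its barcode marker.
        public static Mat extractScreen(Mat tablePhoto, Point[] sheetCorners,
                                        int outWidth, int outHeight) {
            // Source quadrilateral: the (perspective-distorted) sheet corners.
            MatOfPoint2f src = new MatOfPoint2f(sheetCorners);
            // Destination rectangle: an upright frame of the desired size.
            MatOfPoint2f dst = new MatOfPoint2f(
                    new Point(0, 0), new Point(outWidth, 0),
                    new Point(outWidth, outHeight), new Point(0, outHeight));
            // Homography that removes the perspective distortion.
            Mat h = Imgproc.getPerspectiveTransform(src, dst);
            Mat screen = new Mat();
            Imgproc.warpPerspective(tablePhoto, screen, h,
                    new Size(outWidth, outHeight));
            return screen; // cropped, upright digital version of the sketch
        }
    }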


3.2. Low-tech interaction techniques

The purpose of this study, as specified in the title of this Thesis, is to research and implement possible ways to improve the interaction space of the tabletop system just presented. As described in the Literature Analysis, one of the strengths of Paper-Prototyping is the ability to easily involve people with different backgrounds and skills, “filling the gaps” between the group members and allowing everyone to express their thoughts. For this purpose, a tabletop environment seems a good choice, since it allows users to discuss and collaborate all together within a well-known and familiar environment. However, as stated at the beginning, the drawbacks of this mixed-fidelity approach appear when high-tech solutions come into play: every time the users need to execute a task on a prototype, such as digitizing a screen, editing a component or deleting a feature, they are forced to interrupt this cooperative exercise to use a tablet device alone. Not only is this step perceived as too technical compared to sketching with pen on paper, it is also considered isolating and distracting by the users. While pausing to use the tablet or waiting for a colleague to use it, they might lose focus and - as a consequence - lose the so-called “flow” [36], the ideal state of mind for creative activities. Hence, the main focus is on finding and implementing alternative solutions that avoid the tablet application while keeping the interaction techniques as learnable and usable as possible, without interrupting the design phase. In other words, the system should be sufficiently effective using only low-tech approaches. Hereafter, then, I am going to present and examine the solutions taken into consideration and explain the reasons why each was chosen or discarded. First of all, since the goal is to replace the functionalities provided by the tablet, it is necessary to have a way to define “hotspots” on the prototypes, i.e. let the system know that a certain area is a button, a textbox, etc. To do this, I analyzed the prototyping tools previously described and the literature about the use of physical and tangible objects in tabletop environments. I then generated a number of different solutions and, after a brainstorming session with my colleagues, compiled a list of these techniques, ranked from the least to the most feasible.

● Voice recognition: even if it might seem odd at first, an interesting way of interacting with the system would be via voice. It is quite common during paper-prototyping sessions to discuss and “think aloud”; on the other hand, with multiple people talking it may be hard for the system to recognize many commands and different users. Still, a special marker/pen could be built to tell the system which action is being performed: for instance, while drawing a textbox, a “record” button might be pressed and, by pronouncing “Textbox”, the system should


detect the sketch as that type of component. Of course, this would be an example of multimodal interaction, because the voice channel would need to be integrated with another input source to recognize the location of the object: due to the implementation complexity and the currently low success rate of voice recognition, I discarded this option.

● Object recognition: one of the most obvious approaches would definitely be object recognition, which basically means identifying a specific object inside an image. Unfortunately, there are two problems here as well: firstly, the components are not unambiguous, e.g. a textbox might look like a button when sketched; secondly, exactly like voice recognition, this technique is still far from being reliable, so I decided not to investigate it further.

● Gesture recognition: the same arguments as above apply to gesture recognition. It would be an engaging technique for specifying components, for example by making a “V sign” to mean checkbox or an “OK sign” to identify a radio button. However, gesture recognition is still a work-in-progress technique and, moreover, it would have to be coupled with another method to detect the exact position of the component: thus, I discarded it.

● Special pen: as suggested by a few papers presented in the Literature Analysis, the system could be integrated with a special pen. This would be equipped with a few buttons that, when pressed, would indicate which kind of object is being drawn: as soon as one stops writing on the paper sheet, the object would be created. Again, however, in order to realize this tool, the hardware would first have to be built and tested, which prevented me from exploring it further.

● Special button: the table could be equipped with a special button. As soon as it is pressed, a picture is taken; this is then matched against one taken previously and, by using image analysis and comparison, the resulting difference would be detected as the sketched component and automatically generated. However, this solution would rely on a technique that still requires a huge implementation effort (for uncertain outcomes) and special hardware, hence I looked for other options.

● Colored objects: map pins (or other objects) of different colors might be placed near the sketches in order to indicate what kind of component is being drawn. Yet, this would require recognizing the colors of the objects which, due to their limited sizes and other variables (like inclination, lighting conditions, etc.), might be unfeasible.

● Barcode recognition: the barcode markers, which are already used to project the screens, might be used to specify the components. However, these barcodes have a


fixed layout and cannot be resized excessively or arbitrarily, otherwise they would no longer be detected. Therefore, they are not suitable for this purpose, since component sizes vary continuously with the design and its goal. On the other hand, this technique could be used to replace other functionalities provided by the tablet device (as we will see further on).

● Transparent layers: a transparent sheet (or semi-transparent, such as greaseproof paper - abbreviated to gp-paper from now on) might be placed over the component to be recognized, acting like the draggable hotspot areas of the tablet implementation. This additional layer could then be used to highlight the desired component, by recognizing colors or even textures from a predefined set. This technique evolved into the implementation of color detection, which is presented hereafter.

3.2.1. Color detection

As seen above, unfortunately, most of the envisioned approaches are unfeasible, mainly due to high implementation complexity, the need for hardware that does not yet exist, an unsuitable target environment or, obviously, the tight time constraints of this research project. Color detection, therefore, is probably the easiest option, since it requires neither new specific hardware nor extreme implementation efforts; at the same time, it definitely suits the dynamic and “playful” interaction style of the system. The concept is basically to use different colors to highlight the various components of the prototype: the system should then recognize each color and generate the corresponding object at that exact location. Once this approach was chosen, I searched for useful libraries, examples and implementations of this and similar techniques: after exploring algorithms for generic object recognition and taking inspiration from some code and concepts, I mainly focused on JavaCV13. This is an adaptation for Java platforms of OpenCV14, one of the most widely used libraries in the fields of robotics and computer vision, which allows researchers to “detect, track and understand the surrounding world captured by image sensors”15. Unluckily, JavaCV almost completely lacks documentation on how to use the API, so I referred to examples found on StackOverflow16 and manually converted OpenCV tutorials17 into Java18.

13 Java interface to OpenCV. Retrieved January 16, 2016, from http://github.com/bytedeco/javacv
14 OpenCV. Retrieved January 16, 2016, from http://opencv.org/
15 How to Detect and Track Object with OpenCV. Retrieved January 16, 2016, from http://www.intorobotics.com/how-to-detect-and-track-object-with-opencv/
16 How to get x, y coordinates of extracted objects in JavaCV? Retrieved January 16, 2016, from stackoverflow.com/questions/12106307/how-to-get-x-y-coordinates-of-extracted-objects-in-javacv
17 OpenCV Tutorial C++. Retrieved January 16, 2016, from http://opencv-srf.blogspot.ro/2010/09/object-detection-using-color-seperation.html


Then, the user interaction for recognizing components through color detection was defined: the main purpose was to give the designers as much freedom as possible. For this reason, I did not want a predefined set of colors they had to pick from; rather, I wanted them to use whichever colors they preferred or had at hand. This is how it was finally envisioned: the users are provided with a control card, named “toolbox” from now onwards, in which they paint with a colored marker of their choice; using the same color, they then fill in the chosen component. Once they digitize the sketch, the average color inside the toolbox is calculated and a threshold is generated. This means that if any pixel of a screen falls within this threshold, it will be associated with the color inside the toolbox and, consequently, with the desired UI component. An algorithm checks each pixel’s RGB value and (if not white) verifies whether it belongs to the color threshold, which is generated by creating a range of ±30 around each channel of the detected color. While doing this, the outermost pixels of the toolbox are discarded in favor of the central values, to avoid including possible mistakes by the users, such as overlapping colors. Other color spaces such as HSL or HSV19, which are widely adopted in fields like image analysis and computer vision, were investigated; however, since the goal here was simply to create a numerical range around a reference color, RGB was eventually chosen. Using a threshold was definitely the only viable way to accurately detect colors in such an unstable environment: depending on the lighting conditions - whether it is day or night, sunny or cloudy - the brightness of the colors appears different to the camera, which therefore might not be able to fully recognize a whole component. Once the colors are detected, an algorithm recognizes the contours of the shape and approximates it to a rectangle, where the corresponding component is generated and projected (as can be seen in Figure 4).
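As a rough illustration of this thresholding logic, the following self-contained Java sketch averages the central pixels of the digitized toolbox patch and tests whether a pixel of the screen falls within ±30 of that reference on every channel. Names and the 5-pixel margin are hypothetical; only the ±30 range and the idea of discarding the outermost toolbox pixels come from the description above.

    import java.awt.image.BufferedImage;

    public class ColorThreshold {
        static final int MARGIN = 5;   // assumed border width discarded from the toolbox
        static final int RANGE  = 30;  // +/- range per RGB channel, as described above

        // Average color of the toolbox patch, ignoring its outermost pixels
        // (the patch is assumed to be larger than twice the margin).
        public static int[] averageColor(BufferedImage toolbox) {
            long r = 0, g = 0, b = 0, n = 0;
            for (int y = MARGIN; y < toolbox.getHeight() - MARGIN; y++) {
                for (int x = MARGIN; x < toolbox.getWidth() - MARGIN; x++) {
                    int rgb = toolbox.getRGB(x, y);
                    r += (rgb >> 16) & 0xFF;
                    g += (rgb >> 8) & 0xFF;
                    b += rgb & 0xFF;
                    n++;
                }
            }
            return new int[] { (int) (r / n), (int) (g / n), (int) (b / n) };
        }

        // True if a pixel lies inside the threshold built around the reference.
        public static boolean matches(int rgb, int[] ref) {
            int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
            return Math.abs(r - ref[0]) <= RANGE
                && Math.abs(g - ref[1]) <= RANGE
                && Math.abs(b - ref[2]) <= RANGE;
        }
    }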

18 Hints for Converting OpenCV C/C++ code to JavaCV. Retrieved January 16, 2016, from http://code.google.com/p/javacv/wiki/ConvertingOpenCV
19 HSL and HSV colors. Retrieved January 16, 2016, from http://en.wikipedia.org/wiki/HSL_and_HSV


Figure 4 - The screen is digitized (left), then an algorithm detects colors and recognizes approximate rectangles (right).

Alternative approaches were considered, such as simply painting the outline of the component or using post-its to simulate the colored area: the former was discarded because the algorithm struggled to recognize rectangular shapes when the stroke was too thin, given the rough drawings typical of Paper-Prototyping; the latter mostly because the post-its would have to be continuously cut and resized to match the different sizes of the components. Moreover, owing to the intense light produced by the projector, some colors could not be detected because they appeared too bright: for this reason, instead of using highlighters - which was the first idea - I decided to use normal colored markers. These are painted on a piece of gp-paper which, being transparent, allows users to see through it, using the hand-drawn sketches as a reference without “ruining” them. Of course, a few limitations were introduced for the testing phase: in order to create thresholds working in most situations and lighting conditions, I limited the number of different colors to four, i.e. red, green, blue and black. The first three are easy to detect, since one of the RGB values is clearly higher than the others; for black, instead, all the values are low enough to be identified. In a further implementation, other color spaces and theories could be researched to obtain more distinguishable thresholds.
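The four-color rule just described can be expressed as a tiny classifier; the sketch below is hypothetical, and the two threshold constants are illustrative assumptions rather than the values used in the system.

    public class ColorClassifier {
        enum Ink { RED, GREEN, BLUE, BLACK, UNKNOWN }

        static final int DOMINANCE = 40; // how much a channel must exceed the others (assumed)
        static final int DARK      = 70; // upper bound for all channels of "black" (assumed)

        public static Ink classify(int r, int g, int b) {
            if (r < DARK && g < DARK && b < DARK) return Ink.BLACK; // all channels low
            if (r - Math.max(g, b) > DOMINANCE)   return Ink.RED;   // red clearly dominates
            if (g - Math.max(r, b) > DOMINANCE)   return Ink.GREEN;
            if (b - Math.max(r, g) > DOMINANCE)   return Ink.BLUE;
            return Ink.UNKNOWN; // too ambiguous under the current lighting
        }
    }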


3.2.2. Barcode recognition

While color detection is mostly used to recognize components on the prototypes, there are also a number of possible actions to be performed on the screens: in the initial version of the system, again, these are done using the tablet. The purpose is once more to replace these high-tech interactions with low-tech ones, i.e. to remove the device from the design process. As said above, the most immediate way to do this is probably to reuse the technique the tabletop system is already based on: barcode recognition. At the moment, barcodes are only used to refer to a certain paper sheet, detecting its position and rotation. The idea, then, is to create special barcode markers which, once detected by the webcam, perform specific operations: for example, taking pictures of the devices, connecting screens, recognizing and deleting components, and copying screens. Every barcode, this way, becomes a sort of tool: according to its position on the table and its interaction with other barcodes and screens, it produces the effect it is designed for. An algorithm runs continuously to check whether new barcodes appear on the tabletop: once a new one is identified, its barcode marker is processed and compared with the values stored in the database. If it carries a “special” value, then, instead of projecting a mobile screen (as normal barcodes do), the desired tool is generated and the user can perform the chosen action with it. The algorithm identifies the whole barcode and finds its center. Using this and the two “cornerstones” of the marker as a reference, it is then easy to compute the rotation of the barcode and, consequently, to move and adapt the desired projection. As can be imagined, this technique has several limitations too: in order to be seen by the webcam, the barcode markers need to be wide and clear enough. Moreover, since the barcodes have regular features (like the proportions and locations of the squares) and are linked to a database of corresponding values, it is not possible to create new barcodes by simply merging or appending existing ones, since the behavior of the resulting barcode would then need to be programmed manually. This of course limits the number and variety of interactions that can be realized, because every barcode needs to be associated with a specific output. Moreover, since a multithreaded algorithm is continuously running to detect the barcodes, a threshold was set to prevent repeated refreshes of the devices on the table: a barcode marker needs to be moved by at least a few centimeters before its screen or effect is projected again onto the new location. While this avoids endless deletions and recreations of projections, which would definitely be disturbing, it is frustrating for users trying to move a barcode only slightly.
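The two geometric ingredients of this mechanism - deriving the rotation from the marker’s center and a cornerstone, and suppressing re-projection until the marker has moved far enough - can be sketched as follows (hypothetical names; the pixel threshold is an assumed stand-in for “a few centimeters” on the table).

    import java.awt.geom.Point2D;

    public class MarkerTracker {
        static final double MOVE_THRESHOLD_PX = 40; // assumed equivalent of a few centimeters

        // Marker rotation: angle of the vector from the center to a cornerstone.
        public static double rotation(Point2D center, Point2D cornerstone) {
            return Math.atan2(cornerstone.getY() - center.getY(),
                              cornerstone.getX() - center.getX());
        }

        // Only refresh the projection when the marker has really moved.
        public static boolean shouldReproject(Point2D lastCenter, Point2D newCenter) {
            return lastCenter.distance(newCenter) > MOVE_THRESHOLD_PX;
        }
    }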


3.2.3. Further improvements

In order to provide the users with an alternative approach for some functionalities, a “sidebar” was implemented. This is a rectangular transparent area located at the top of the table and delimited by a thin red line: when barcode markers are placed inside this space, a certain operation is performed. Moreover, a progress indicator (similar to a clock) was added at the center-top area of the tabletop, to give immediate visual feedback to the users: whenever this clock appears, they know they are actually performing an action. The progress indicator runs for 3 seconds and, once completed, the process is executed by the system. This is particularly effective at avoiding accidental mistakes, since it provides users with a time frame in which they can change their mind. In the coming subchapter, finally, I will present how the two approaches (color detection and barcode recognition) and these latest improvements were implemented to substitute the functionalities provided by the tablet application, thus building the enhanced version of the system.
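The 3-second grace period can be realized with a simple cancellable timer; the sketch below shows one possible structure (hypothetical, not the thesis code): the action runs only if it is not cancelled within the window, which is exactly what lets users change their mind.

    import java.util.concurrent.*;

    public class ProgressIndicator {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private ScheduledFuture<?> pending;

        // Start the 3-second countdown; a new gesture replaces any pending one.
        public synchronized void start(Runnable action) {
            cancel();
            pending = scheduler.schedule(action, 3, TimeUnit.SECONDS);
        }

        // Abort the pending action, e.g. when the marker leaves the trigger area.
        public synchronized void cancel() {
            if (pending != null) pending.cancel(false);
        }
    }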

3.3. The Enhanced System

As said before, the new interaction techniques are applied within a specific scenario: the users have already sketched the prototypes on paper sheets and now want to digitize them and perform some operations on them. These operations are the following:

1. digitize a screen by taking a picture;
2. duplicate a whole screen;
3. detect a component (button, image or textbox) on a screen;
4. connect two screens (creating a link from a button to the next screen);
5. remove a component from a screen;
6. remove a connection between two screens;
7. remove a whole screen.

Thus, for each of these actions (excluding the last one), two different approaches were implemented: the first (A) is based on color detection and on the sidebar at the top of the table; the second (B) uses barcode recognition to generate several different tools. Hereafter these two modalities are presented, describing how each accomplishes the tasks.


3.3.1. Modality A

1. The users want to digitize a screen in order to have a digital copy of it. To do this, they need to take a picture of a prototype: the system is implemented so that, when one single barcode is inside the sidebar, its screen is photographed. Thus, they choose a sketch and move its barcode marker inside the sidebar at the top of the table (a sketch of the trigger logic follows Figure 5). Once it is released there, the progress indicator appears; after just 3 seconds, the table is cleared of projections and the picture is taken. It then immediately appears as a projection over the sketch, which can now even be removed.

Figure 5 – User performing TASK 1A (in red the screen placed in the sidebar to be digitized).
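The trigger condition itself reduces to a containment test; a hypothetical sketch (names assumed):

    import java.awt.Point;
    import java.awt.Rectangle;
    import java.util.List;

    public class Sidebar {
        private final Rectangle area; // the thin red-bordered zone at the top of the table

        public Sidebar(Rectangle area) { this.area = area; }

        // Exactly one marker inside the sidebar means: photograph that screen.
        public boolean shouldDigitize(List<Point> markerCenters) {
            long inside = markerCenters.stream().filter(area::contains).count();
            return inside == 1; // with two markers the copy logic of task 2 applies
        }
    }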


2. The users need to duplicate a whole screen and, to do so, they use the sidebar again. First of all, they get a new barcode, which represents an empty screen; then, they choose the screen they want to copy. Once they have both barcodes, they place them together - simultaneously - inside the sidebar. When two screens are inside it, the behavior is implemented as follows (a sketch of this decision logic follows Figure 6):

○ If both screens are empty (i.e. not digitized), nothing happens, since there is no digital image to duplicate;

○ If both screens are digitized, nothing happens either: this was chosen to avoid accidental overwriting mistakes. The alternative would have been to implement a way to tell the system which screen is the one being overwritten;

○ If only one screen is digitized and the other is empty (the intended situation), the progress indicator starts. After 3 seconds, as expected, the digitized screen is copied onto the empty one.

Figure 6 – User performing TASK 2A (in red the two screens involved in the copy process with the sidebar).
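The three-way rule above maps onto a short decision method; this sketch is hypothetical (Screen is an assumed interface) and reuses the ProgressIndicator sketch from Chapter 3.2.3.

    public class CopyLogic {
        interface Screen {
            boolean isDigitized();
            void copyImageFrom(Screen source);
        }

        // Called when exactly two markers are detected inside the sidebar.
        public static void handlePair(Screen a, Screen b, ProgressIndicator timer) {
            if (a.isDigitized() == b.isDigitized()) {
                return; // both empty or both digitized: do nothing (avoids overwrites)
            }
            Screen source = a.isDigitized() ? a : b;
            Screen target = a.isDigitized() ? b : a;
            timer.start(() -> target.copyImageFrom(source)); // executed after 3 seconds
        }
    }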


3. The users want to detect a component inside a screen. They are given a barcode, to be placed on the table, which generates a projection representing a toolbox. It appears as a table with four boxes, inside which some terms are projected: “BUTTON”, “IMAGE”, “TEXTBOX” and so forth. These names indicate the elements the users can add onto the screen. To do this, the users take a piece of gp-paper and place it over the toolbox, painting with a chosen color inside the box indicating the component they want to detect. Then, they take another piece of gp-paper and stick it on top of the element they want to recognize, accurately filling it in with the color they just used in the toolbox. This process can be repeated for every element the users want to detect: they just need to use the same color both inside the toolbox and on the component (in the testing session, indeed, they are asked to detect two components). Once they are done, they only have to digitize the desired screens (keeping the toolbox visible on the table): these are projected on the table again, now displaying a colored square to represent each component. At the same time, this square becomes a coded UI element in the mobile application representing the prototype.

Figure 7 – User performing TASK 3A (left) and digitized screen with components correctly detected (right).


4. The users want to connect two screens. In particular, they need to create a link between a button and a chosen screen, replicating what normally happens in mobile apps. The process is the same as in the previous task, but this time they paint inside the “FROM → TO” box of the toolbox and, with the same color, on a chosen component. After digitizing the screen, this component becomes a “button-connector”, which is linked, through a thin black line, to the screen nearest to the toolbox - obviously excluding the screen starting the connection (a sketch of this nearest-screen rule follows Figure 8). This means that, without other screens on the table, no connection is generated; and that, when there is only one other screen on the table, this is always the destination: this implementation thus aims at accelerating the connection process.

Figure 8 - User performing TASK 4A: after digitizing a connection has been generated (highlighted in red).
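The nearest-screen rule can be sketched as a simple search over the screen positions; again, names are hypothetical assumptions.

    import java.awt.Point;
    import java.util.List;

    public class ConnectionTarget {
        // Destination = the screen closest to the toolbox, excluding the source.
        // Returns null when no other screen is on the table (no connection).
        public static Point nearestScreen(Point toolbox, Point source, List<Point> screens) {
            Point best = null;
            double bestDist = Double.MAX_VALUE;
            for (Point s : screens) {
                if (s.equals(source)) continue; // never link a screen to itself
                double d = toolbox.distance(s);
                if (d < bestDist) { bestDist = d; best = s; }
            }
            return best;
        }
    }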


5. The users now need to delete a component on a certain screen: they basically have to digitize the prototype again, but without the colored square. They have two options: either they physically remove the colored piece of gp-paper, or they cover it with a piece of white paper. After retaking the picture, the screen is shown without the deleted component.

6. The users want to remove a connection between two screens. The process is the same as in the previous task: they delete (by removing or covering) the component generating the connection. After digitizing the screen, the connection is gone.

Figure 9 - User performing TASK 5A/6A, removing the colored paper generating the component/connection.

In order to realize these last two tasks, the digitization process of the original system was changed: while the original continuously expanded the prototypes by adding layers on top, in my approach the screens are digitized from scratch every time. This was done with the purpose of testing and comparing these techniques against the ones presented next in Modality B.


3.3.2. Modality B

1. The users need to digitize a screen by taking a picture of a prototype. They get a special barcode, named “camera tool”, and place its camera icon as close as possible to the barcode marker of the screen they want to photograph. Once the camera tool is detected by the system, the progress indicator appears; after 3 seconds, the table is cleared of any projections and the picture is taken. This is then projected over the sketch.

Figure 10 - User performing TASK 1B, taking a picture by placing the camera tool near the screen.


2. The users want to duplicate a whole screen. Firstly, they are given a new barcode and place it on the table, projecting an empty screen; then, they decide which screen they want to duplicate. Once done, they use a barcode named “copy tool” and place one side of it on the screen they want to duplicate, and the other side on the empty screen. As soon as the copy tool is detected, the 3-second timer starts and, after that, the digitized screen is copied onto the empty one.

Figure 11 - User performing TASK 2B, placing the copy tool on the screens (the progress indicator started).


3. The users want to detect a component within a screen. To do this, they take two arrow-shaped barcodes, named “handles”, and put them on the table. They choose an already digitized screen and move the handles over it. Then, they place the two tips at two opposite corners of the component they want to detect: a thin red rectangular shape is created. Once they stop moving the handles, the progress indicator starts and, after 3 seconds, the rectangle becomes an actual component.

Figure 12 - User performing TASK 3B: the handles are placed and the component is recognized correctly.

Basically, this technique is built on an algorithm which calculates the relative positions of the two handles: the farther apart they are, the wider the rectangle; the closer they come, the smaller the resulting shape (a minimal sketch of this geometry follows). For the testing phase, the handles were limited to generating buttons only: this could be changed in the future by simply using more pairs of handles or adding a variable to indicate which component is being created.
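In essence, the two tips span an axis-aligned rectangle; a minimal, hypothetical sketch:

    import java.awt.Point;
    import java.awt.Rectangle;

    public class Handles {
        // The two arrow tips mark opposite corners of the new component.
        public static Rectangle spannedRect(Point tipA, Point tipB) {
            int x = Math.min(tipA.x, tipB.x);
            int y = Math.min(tipA.y, tipB.y);
            int w = Math.abs(tipA.x - tipB.x);
            int h = Math.abs(tipA.y - tipB.y);
            return new Rectangle(x, y, w, h); // grows as the handles move apart
        }
    }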


4. The users want to connect two screens and, to do this, they get an “arrow tool”, an arrow-shaped barcode marker. They choose an already digitized screen and move the arrow tool onto it: there, they place the tail of the arrow on a detected button (the source of the connection) and move the tip onto the screen they want to connect it to (the destination). Once done, the timer starts and, after 3 seconds, a link between the two screens is created and projected.

Figure 13 - User performing TASK 4B: the tail of the arrow on the button, the tip on the other screen.


5. The users now need to remove a component from a screen. They are given a barcode marker named “rubber tool”; they choose the component they want to delete and place the corner of the rubber (the scratching part) over it. The progress indicator starts running and, after that, the component is deleted.

Figure 14 - User performing TASK 5B/7B by placing the rubber tool on the component/screen to delete.


6. The users want to delete a connection between two screens. They use the “rubber tool” again and place it over the connection they want to remove. The progress indicator appears and, 3 seconds later, the connection is gone. For this method, the whole rubber tool can be placed over the connection, not only its corner; with this implementation, moreover, just the connection disappears, while the button-connector remains untouched.

Figure 15 – User performing TASK 6B by placing the rubber tool on the connection to remove.

7. The users finally want to delete a whole screen and, again, they use the “rubber tool”. They simply select the screen they want to remove and place the corner of the rubber over it - trying not to place it on a component. Once done, the timer starts and, after 3 seconds, the screen is deleted. This technique, although it belongs to Modality B, is the only one for removing a whole screen - for lack of other valid alternatives. For this reason, in the upcoming evaluation, it will not be counted as a task of either modality; however, I decided to test it anyway, to analyze whether users find it usable and, especially, to have an easy way to delete screens when needed.


4. Evaluation

In the previous chapter, the system under study was presented in detail: I analyzed its original implementation and, subsequently, walked through the whole enhancement process, until two new interaction modalities and other general improvements were added to its final version. It is now time to evaluate this new implementation by testing the system: what I want to measure is the overall success of the new interaction techniques, in order to establish whether they provide a quick and easy-to-use tool without excessively distracting users from the design process. In particular, I want to assess to what extent this enhanced version is efficient and offers a satisfactory user experience and, moreover, which of the two modalities I implemented (if any) is the most effective.

4.1. Experiment design

To assess which approach is better and how the users perceive the system, I identified 4 parameters to measure, known as Dependent Variables:

● Quickness: I want to investigate whether Modality A is faster than Modality B, or vice versa;
● Ease-of-use: I want to assess which of the two approaches is easier to use;
● Distraction: I want to determine which modality is perceived as more distracting;
● User-Experience: I want to understand the users’ attitude towards using the system.

Furthermore, I want to examine the first three parameters for each interaction technique. This might be particularly useful in case neither of the two approaches is statistically better than the other: in that situation, I could still conclude that only certain interactions are significantly superior to others. From the considerations stated above, I can specify the research questions:

● Does Modality A take less time than Modality B, or vice versa?
● Is Modality A easier to use than Modality B, or vice versa?
● Is Modality A more distracting than Modality B, or vice versa?
● How is the user-experience with the system?

As said before, the first three questions can then be applied to every single task comparison for the different interaction techniques, for instance by asking: “Is Task 1 faster with Modality A or B?”. I will not explicitly formulate these questions (nor their hypotheses), but I will answer them when analyzing the results.


In order to answer these questions, I have to formulate hypotheses for those belonging to the quantitative part of the study. These are based on the so-called null hypothesis, which in the end needs to be rejected or retained. In this case, the null hypotheses are:

● Modality A is as fast as Modality B;
● Modality A is as easy to use as Modality B;
● Modality A is as distracting as Modality B.

Once the null hypotheses are defined, I can formulate my actual research hypotheses: these respond to the previous research questions and predict which results I expect from the upcoming evaluation. These are my personal hypotheses:

● Modality B is faster than Modality A;
● Modality B is easier to use than Modality A;
● Modality A is more distracting than Modality B.

I have now defined what to measure and which results I expect from this analysis. Hereafter I describe in detail how I plan to measure these Dependent Variables:

● Quickness: time for task completion. Since the evaluation is going to be videotaped, assessing how long it takes users to complete a task is easy. Besides, I decided to calculate another variable, the Effective Time, which measures the duration of the task as if no system errors had occurred. It may happen that users act correctly by performing the expected interaction but, due to a malfunction of the system, their action is not executed: this variable therefore records the time as if they had actually completed the task;

● Ease-of-use: ratio of successfully completed tasks. This is calculated by checking which interactions are executed perfectly: the users make no mistakes, have no significant problems (system errors excluded) and do not need any help or suggestions to complete the task. This variable was again measured at the end of the user study, by reviewing and analyzing the video recordings;

● Distraction: this parameter is composed of two factors. The first is quickness, which we already have, so we can concentrate on the second: the workload index (RTLX). This is an unweighted (Raw) version of the well-known NASA Task Load Index (TLX)20 which, by combining six subscales, tries to quantify the perceived workload of a task. The six parameters are: Mental, Physical and Temporal Demand; Performance; Effort; Frustration.

20 NASA TLX. Retrieved January 16, 2016, from http://humansystems.arc.nasa.gov/groups/tlx/


Thus, I decided to implement my own custom version of the RTLX assessment: I created an online survey21 via Google Forms, which the users filled in after every task. In order to speed up the rating process, I used 7-point linear scales (where 1 means “low” and 7 “high”); the ratings are then averaged, forming the task load index (a minimal computation sketch follows this list). The lower the index, the less demanding (and, supposedly, distracting) the task was; vice versa, a high RTLX value indicates a heavy workload. I decided to adopt the raw arrangement of the TLX index for two reasons: firstly, to avoid the weighting procedure, given the time constraints of the user study; secondly, because it has even been shown that, “in the 29 studies in which RTLX was compared to the original version, it was found to be either more sensitive […], less sensitive […], or equally sensitive […], so it seems you can take your pick” [37];

● User-Experience: this variable is measured through the dimensions of the AttrakDiff Survey. As described on the official website22, this is a questionnaire which helps researchers “understand how users personally rate the usability and design of your interactive product”. It is based on the studies [38] performed by Hassenzahl and colleagues, which “show that hedonic and pragmatic qualities are perceived consistently and independent of one another”, contributing equally to the attractiveness of the system. Again, I created my custom version of this survey23 via Google Forms, which I asked the users to fill in after finishing the whole study. Like the original, it uses 28 seven-point semantic differential scales, i.e. scales showing opposite adjectives at the two poles (like "good - bad" or "human - technical"). These are implicitly divided into 4 dimensions (Pragmatic Quality, Hedonic Quality - Identity, Hedonic Quality - Stimulation, Attractiveness), which will be examined individually to understand and delimit the User-Experience with the system.
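As announced above, the RTLX computation itself is just the unweighted mean of the six subscale ratings; a minimal sketch:

    public class Rtlx {
        // Each rating is on the 7-point scale used in the survey (1 = low, 7 = high).
        public static double index(int mental, int physical, int temporal,
                                   int performance, int effort, int frustration) {
            return (mental + physical + temporal
                    + performance + effort + frustration) / 6.0;
        }
        // e.g. index(2, 1, 2, 3, 2, 1) = 1.83, in the region of the averages
        // reported in Chapter 4.3.
    }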

As we just saw, the Dependent Variables are the ones being measured and tested and are thus “dependent” on the so-called Independent Variables. The latter are changed or controlled by the tester to verify their effects (if any) on the dependent ones, which are examined and recorded. Hence, the Controlled Independent Variables for this study are:

● Task: from 1 to 7 (numeric);
● Modality: A | B (numeric);
● Group: AB | BA (numeric).

21 Task Evaluation Survey. Retrieved January 16, 2016, from http://goo.gl/forms/1GwYNuzGd0
22 AttrakDiff Homepage. Retrieved January 16, 2016, from http://attrakdiff.de/index-en.html
23 Final AttrakDiff Survey. Retrieved January 16, 2016, from http://goo.gl/forms/cXw27vSXUf


A few more pieces of information were collected from the users before finishing the test. The following data can be considered as further Independent Variables, since they might affect the results or create unwanted side effects:

● Area of expertise: HCI | IT | Other (numeric) – users with a background in HCI or IT might understand the interactions faster and perform better;

● Sketching familiarity: 1 to 5 (interval) – users who are familiar with sketching could be quicker and more accurate at drawing;

● Mobile familiarity: 1 to 5 (interval) – people who have a better knowledge of mobile devices might find it easier to understand some concepts of the interface design;

● Tabletop use: Yes | No (numeric) – users who have already tried a tabletop computing system may have an advantage in grasping the interactions with the objects;

● Lighting conditions: Day | Night (numeric) – since the system is based on color detection, the brightness of the environment might affect the results of the study.

Having decided how to measure the Dependent Variables and specified the Independent ones, it is now time to actually design the study itself. As we saw in the previous chapter, the main focus of the enhanced version of the system is precisely on the two different approaches, which exploit different interaction techniques: on one hand Modality A, based on color detection (and the use of the sidebar); on the other, Modality B, which uses barcode recognition to perform operations. Having two totally different design solutions, I decided to adopt A/B Testing24. This is a method commonly used in HCI (but also in marketing and business) which aims at comparing two variants of a design (a website, an application or a full product): these can be either the original version and a variation of it or, as in my study, two distinct experimental versions. These two options then need to be tested by a group of users. For my study I adopted a “within-subject” design where, as the name suggests, the results of each variant are compared within the performance of a specific user. In other words, every user tests both versions. This is only possible because the two approaches have no interactions in common; otherwise, exposure to one variant might influence the user’s performance in the other. The alternative way to avoid this problem would be to assign each group of participants to only one of the two interaction modalities, which is called a “between-subjects” design; however, since the participants are different, the drawback in this case is that they would have distinct behaviors and attitudes towards the system.

24 Nielsen Norman Group. (2005, August 15). Retrieved January 16, 2016, from http://www.nngroup.com/articles/putting-ab-testing-in-its-place/


For this reason, the within-subject design is usually more efficient, and it was the one chosen for this study. Moreover, without any additional effort, I doubled the amount of users testing each interaction. Nevertheless, to detect possible (but unlikely) bias, I divided the users into an AB group (which tested Modality A first and Modality B later) and a BA group (first B, then A): as expected, there were almost no differences in the results between users of group AB and BA, meaning neither modality significantly influenced the other. This was the final layout of the study:

Subject (odd number)   First pass    Second pass
Task 1                 Modality A    Modality B
Task 2                 Modality A    Modality B
Task 3                 Modality A    Modality B
Task 4                 Modality A    Modality B
Task 5                 Modality A    Modality B
Task 6                 Modality A    Modality B
Task 7                 -             Modality B

Subject (even number)   First pass    Second pass
Task 1                  Modality B    Modality A
Task 2                  Modality B    Modality A
Task 3                  Modality B    Modality A
Task 4                  Modality B    Modality A
Task 5                  Modality B    Modality A
Task 6                  Modality B    Modality A
Task 7                  Modality B    -

Finally, after this long introduction, it is now time to present the User Studies.

4.2. User studies

Before starting the testing with the users, the list of tasks was finalized: it is essentially a shorter and less detailed version of the descriptions of the two modalities found in Chapter 3.3, so I am not going to rewrite it here; if needed, it can be found in Appendix A. A few screens of a Paper-Prototype were prepared and placed on the table. While at the beginning I wanted the users to sketch the prototypes they would work on, I then decided to skip this step, both to save time and, especially, to avoid distracting the users: this way, they can concentrate on the interaction techniques alone and forget about the content of the screens. At this point, a walkthrough was undertaken, in order to review the full process and familiarize myself with the system: this was particularly useful for discovering wording errors and helped make the list of instructions (Appendix B) as clear as possible. This list, although not followed literally, was used as a reference when introducing the users to the evaluation. Ultimately, a pilot session was run with one user to simulate the interactions with the participants and verify that the testing lasted less than one hour.


In the end I decided to evaluate the tabletop with only one user at a time: although this tool aims at enhancing communication within the team by building a creative environment, my main purpose is to focus on the interaction techniques and, thus, a single participant is easier to observe and less likely to be biased. The only thing missing now is the users: while the initial goal was to test 12 users, I eventually managed to recruit 24 participants. They were recruited mostly via social networks, especially Facebook (through groups like “Free your stuff Berlin”25) or Yammer26, or through personal connections: the reward was either a tasty gift or, since I was leaving my apartment in Berlin, a few household utensils they needed. The user demographics, in the end, were extremely heterogeneous yet balanced: there were 11 females and 13 males; half of them were aged 18-24, 10 were between 25 and 34 years old and only two were older than that. Although with different roles and occupations, 7 users had a background in HCI and 8 were connected to an IT environment, while the rest had other kinds of expertise. Several users filled in a Doodle27 I sent them, which definitely helped the organization, while others just agreed verbally; in any case, all 24 sessions were completed in the scheduled two weeks. Upon entering the testing room, the users are welcomed by the experimenter: they are invited to sit, relax and have something to drink. In the meantime, they are given a Consent Form (Appendix C), which they need to read and sign. The researcher then explains the whole procedure again, introducing the users to the system and ensuring they understand the instructions. The users now have time, if needed, to ask questions; then a GoPro28 is turned on to film the session and the experiment can start. The researcher begins by reading the instructions and the list of tasks; once the users start performing them, he takes notes and observes how they react to the system. He replies to questions when asked, but suggestions are given only when the users appear to be stuck. As said before, after every task the users move to the nearby table and evaluate the task with the RTLX Survey; before submitting it, they have the possibility to leave a comment or a suggestion about the interaction they just tried. They continue this way, performing a task and evaluating it right after, until the final task; at the end of the session, they fill in the AttrakDiff Survey, provide some information about their background and expertise, get their reward and are free to leave.

25 Free Your Stuff Berlin. Retrieved January 16, 2016, from http://facebook.com/groups/freeyourstuff/
26 EIT Digital. Retrieved January 16, 2016, from http://yammer.com/masterschooleitdigital.eu/
27 User Study Doodle. Retrieved January 16, 2016, from http://doodle.com/poll/2kc83me3unwbap4t
28 GoPro | World's most Versatile Camera. Retrieved January 16, 2016, from http://gopro.com/


4.3. Analysis of the Results

Once the User Studies were completed, the data had to be analyzed: as said above, the videos from the GoPro were reviewed in order to precisely measure how long it took the users to complete a task and, especially, to detect when they had trouble or needed help. This information was then integrated with the survey responses into two spreadsheets: one focusing on the tasks, which includes the video analysis and the RTLX Survey29, the other reporting the results of the AttrakDiff Survey30. To realize an in-depth analysis I used SPSS31 and referred to the book “Human-computer interaction: An empirical research perspective” [39] to decide exactly which test was best suited to determine whether the data were statistically significant. Although in some cases a t-test would have been sufficient (e.g. when comparing only two groups), in the end I chose the one-way ANOVA test32: it “is reasonably robust to violations in the underlying assumptions”, it can be used for any group of users in my study (without additional changes) and, moreover, it is ideal for the types of my Dependent Variables (mainly continuous, such as interval and ratio). The ANOVA test tells us whether an Independent Variable has a significant impact on the Dependent ones. It basically examines the data to estimate the likelihood that the null hypothesis is true or false: in the latter case, the test is considered statistically significant, indicating there is a difference in the means and that this is due to the properties of Modality A versus Modality B. In other words, if the test were repeated, there would be a high chance of a similar result. Therefore, I analyzed the data and looked at the p-value, which is the probability of obtaining the observed data if the null hypothesis were true. By convention, with p < 0.05 the test is considered statistically significant; however, a non-significant ANOVA does not mean that the null hypothesis is true, but only that there is not enough evidence to reject it. Going back to my study, it is now time to confirm or reject my hypotheses. I am going to report the output of the one-way ANOVA following the standards described in the book and referring here33 for SPSS-specific details.
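As a reminder (standard statistics, not specific to this thesis), the one-way ANOVA statistic compares the variance between groups to the variance within them:

    F = \frac{MS_{between}}{MS_{within}} = \frac{SS_{between}/(k-1)}{SS_{within}/(N-k)}

With k = 2 modalities and N = 288 task observations (24 participants × 6 shared tasks × 2 modalities), the degrees of freedom are 1 and 286, which is consistent with the F(1, 286) values reported below.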

29 Task Video Analysis on Google Sheets. Retrieved January 16, 2016, from http://goo.gl/a5MdTE
30 AttrakDiff Analysis on Google Sheets. Retrieved January 16, 2016, from http://goo.gl/3pQZMR
31 IBM SPSS. Retrieved January 16, 2016, from http://www-01.ibm.com/software/analytics/spss/
32 SPSS Tutorials. Retrieved January 16, 2016, from http://spss-tutorials.com/spss-one-way-anova/
33 One-way ANOVA in SPSS Statistics. Retrieved January 16, 2016, from http://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php


● Quickness: Firstly, I supposed Modality B was faster than Modality A, color detection being a rather lengthy process. This hypothesis is confirmed by the User Studies: the mean task completion time for Modality A is 40.1 seconds, while for Modality B it is only 23.25 seconds, almost half the time. The difference is statistically significant (F(1, 286) = 18.982, p < 0.001). Furthermore, this is confirmed once more by the Effective Time (which excludes system errors): 29.24 seconds for Modality A, only 18.38 for B. This result is also statistically significant (F(1, 286) = 10.598, p = 0.001).

Figure 16 - Bar chart for Time and Effective Time with error bars showing 95% Confidence Interval (CI).


● Ease-of-use: My personal hypothesis was that Modality B would be perceived as easier to use by the participants. Surprisingly, it was the opposite! The ratio of perfectly accomplished tasks with Modality A is 84%, statistically significantly higher than Modality B’s 60% (F(1, 286) = 22.542, p < 0.001).

Figure 17 - Bar chart showing the % of successfully completed tasks for both modalities.


● Distraction: My hypothesis was that Modality A was more distracting than B. This is only partially confirmed by the results: A's RTLX index is 1.90, whereas B's is 1.84. On a 7-point scale this is not a large difference and, in fact, it is not statistically significant (F(1, 286) = 0.289, p = .591). The null hypothesis is therefore tenable here.

Figure 18 - Bar chart for workload index with error bars showing 95% Confidence Interval (CI).
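For reference, the Raw TLX (RTLX) index is conventionally the unweighted mean of the six NASA-TLX subscales [37]; the following minimal sketch illustrates that aggregation, assuming the 7-point scale used in this study and purely hypothetical ratings.

```python
# Sketch of an RTLX workload index: the unweighted mean of the six
# NASA-TLX subscales, here assumed to be rated on a 1-7 scale.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def rtlx_index(ratings: dict) -> float:
    """Average the six subscale ratings into a single workload index."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Hypothetical ratings for one participant on one task.
sample = {"mental": 2, "physical": 1, "temporal": 2,
          "performance": 2, "effort": 2, "frustration": 2}
print(f"{rtlx_index(sample):.2f}")  # -> 1.83, in the range observed in the study
```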

Therefore, just by looking at these three variables we can form a clear picture of the two modalities: color detection (A) seems easier to use but takes longer; barcode recognition (B) is definitely faster, but harder to understand (at least at a first attempt). Moreover, both approaches have a similar workload index of around 1.8-1.9 out of 7, which is encouragingly low: the system is not perceived as excessively distracting.


● User-Experience: Finally, by analyzing the data from the AttrakDiff Survey, I want to understand the users' attitude toward the whole system, regardless of the modality. These data provide valuable feedback on how the system is perceived in terms of usability and attractiveness. The eSURVEY tool³⁴ presents the results through a so-called "portfolio-presentation": it shows how both hedonic and pragmatic qualities, plotted on the two axes of the diagram, contribute uniformly to the User-Experience of the product.

Figure 19 - Portfolio-presentation with dimensions and confidence rectangle.

According to the report provided by the tool, the system is considered "rather desired". The confidence rectangle overlaps a few pragmatic character zones: this means that the user feels "assisted by the product", but that usability can still be improved, the value being only average. The same holds for the hedonic qualities, since the confidence interval covers several character zones: again, the user is "stimulated by the product", but the value is about average, so there is room for improvement. On the other hand, the confidence rectangle is small, meaning that the users' ratings of the system were highly consistent, which makes the results reliable and less variable.

³⁴ UID eSURVEY tool. Retrieved January 16, 2016, from http://esurvey.uid.com/project/


Figure 20 - Bar chart for the 4 dimensions with error bars showing 95% Confidence Interval (CI).

Moreover, I analyzed the single dimensions to confirm these indications:

● PQ, the Pragmatic Quality, describes usability: with an average of 5.08 out of 7, it shows that users are only moderately successful in achieving their goals through this system;

● HQ-I is the Hedonic Quality (Identity): an average value of 4.92 means that the product needs to be improved to allow users to better identify with it;

● HQ-S is the Hedonic Quality (Stimulation): 5.18, although above average, indicates that the system is good (but might be better) at stimulating the user in terms of innovation, interest and content;

● ATT, finally, is the Attractiveness, a general value representing the system as it is perceived: with 5.57 out of 7, the system's attractiveness is already above average.
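To illustrate how such dimension scores are obtained, the following sketch averages 7-point semantic-differential item ratings into per-dimension scores. The item values and the assumption of seven items per dimension are hypothetical placeholders, not the study's data or the eSURVEY tool's internals.

```python
# Sketch of AttrakDiff-style aggregation: each dimension score is the mean
# of its 7-point word-pair items (item ratings below are placeholders).
from statistics import mean

responses = {
    "PQ":   [5, 5, 6, 4, 5, 5, 6],   # pragmatic quality items
    "HQ-I": [5, 4, 5, 5, 5, 5, 5],   # hedonic quality - identity
    "HQ-S": [5, 6, 5, 5, 5, 5, 5],   # hedonic quality - stimulation
    "ATT":  [6, 5, 6, 5, 6, 6, 5],   # attractiveness
}

for dim, items in responses.items():
    print(f"{dim}: {mean(items):.2f} / 7")
```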

Furthermore, to obtain more fine-grained feedback, I compared the two modalities on each task of the experiment, to detect which of the two implementations works better and in which situation. A summary of the findings follows; more data can be found at this link³⁵.

³⁵ Statistical Analysis folder on Google Drive. Retrieved January 16, 2016, from http://goo.gl/q6wvvB


Figure 21 - Bar chart for Completion Time for the 6 Tasks with error bars showing 95% Confidence Interval (CI).

Figure 22 - Bar chart for Effective Time for the 6 Tasks with error bars showing 95% Confidence Interval (CI).


Figure 23 - Bar chart for the % of Ease-of-Use for the 6 Tasks with error bars showing 95% Confidence Interval (CI).

Figure 24 - Bar chart for the workload index for the 6 Tasks with error bars showing 95% Confidence Interval (CI).


1. Starting with Task 1 we can see that Modality A, the sidebar, is not only much faster than Modality B, the camera tool (8.50 seconds of effective time against 24.55, statistically significant), but is also perceived as easier to use (79% vs. 63%) and much less distracting (workload of 1.61 versus 2.13). Thus, the sidebar clearly seems the better approach for Task 1. The reason might be the users' uncertainty when first using the barcode of the camera tool: they had no idea how to place it on the table and, especially, whether any criteria had to be followed; in fact, the camera tool was placed over other barcodes, on top of other screens or even upside down.

2. For Task 2 the situation is similar: while Modality A is slower than B in completion time (35.17 vs. 31.40 seconds), it is again statistically significantly faster in effective time (8.45 vs. 28.42), as indicated by the red arrow in Figure 22. This can only mean that the sidebar causes several system errors, a point analyzed further in Section 4.4. The workload index is basically the same (1.94 vs. 1.95), yet the sidebar reaches 96% ease-of-use against the copy tool's 46% (a significant difference): while the users were familiar with the concept of a sidebar, they were unsure how and where to place the copy tool.

3. Task 3 goes to Modality B, the handles. This technique is statistically significantly faster than color detection for creating components: 16 seconds is indeed nothing compared to 1 minute and 27 seconds! Furthermore, the workload index for B is lower too (1.89 vs. 2.35), while ease-of-use again favors approach A (79% vs. 71%), despite the fact that some users were unsure which device to digitize. However, at this point some considerations need to be made: even though both approaches are tested on component detection, they are quite different. While the handles are built for that purpose only, color detection can ideally perform three tasks in one: during Task 3A a sketch is digitized (through any of the methods of Task 1) and two components are detected (which is like doing Task 3B twice). Therefore, with a quick calculation, if we sum the average of Tasks 1A and 1B (24.10) and twice Task 3B (2 × 16.25 = 32.50), we obtain 56.60 seconds, roughly 57, which makes for a more realistic comparison.

4. Task 4 is extremely similar to the previous one. Again, Modality B, the arrow tool, is statistically significantly faster than Modality A (26.17 vs. 55.55 seconds of effective time) and even less distracting (1.85 vs. 2.31). However, color detection (with the FROM → TO box) is clearly easier to use (77% vs. 33%): several users placed the arrow either near a button (instead of on top of it) or on a yet-to-come button, i.e. a button sketched on paper but not digitized yet (even explicitly specifying "already detected button" was not enough to prevent errors).


Anyway, similarly to Task 3, creating a connection is only a small part of what color detection does in this task. While the arrow tool is specifically designed to connect a button with a screen, the color detection approach again performs three tasks in one: during Task 4A a sketch is digitized (Task 1), a button is created (Task 3B) and this is connected to another screen (Task 4B). If we sum Task 1 (24.10), Task 3B (16.25) and Task 4B (26.17), we obtain 66.52 seconds, just over 1 minute and 6 seconds, which is longer than what color detection takes.

5. For Task 5 both approaches to deleting a component yield similar results. Color detection (A) is faster than the rubber tool (B) in completion time (21.35 vs. 26.22) but slower in effective time (14.52 vs. 10.17), as the red arrow in Figure 22 shows. Two reasons can explain this: firstly, color detection is by now a well-known process to the users, so it takes them little time to finish it; secondly, the rubber tool turned out to have a bug, which had never occurred before and could not be replicated, nor fixed during the testing without biasing the results. When the bug is excluded from the effective time, the rubber tool is actually faster. Moreover, the RTLX index is similar too (1.58 vs. 1.72), whereas ease-of-use is unquestionably in favor of color detection (92% vs. 67%, statistically significant).

6. In Task 6 the users are already familiar with the rubber tool: this is why it takes them only 8.32 seconds (7.42 of effective time, a significant difference) to delete a connection, against the 23.50 (13.15) required with color detection. Moreover, Modality B also has a lower workload index (1.49 vs. 1.60), but Modality A is again statistically significantly easier to use (96% vs. 71%): this is due to some users deleting both the connection and the button generating it.

7. Finally, Task 7 had only one modality and, since it came after Tasks 5 and 6, the users were already quite skilled with the rubber tool. Indeed, it took them only 9.13 seconds on average (6.08 of effective time) to delete a whole screen; ease-of-use was 96% and the distraction index just 1.24.

Ultimately, analyzing the data further reveals a few interesting facts. For instance, as can be seen in Figure 25 (left), people with a background in HCI (students or UX/UI designers) assigned an even lower workload index to the system (1.58) and completed the tasks faster than the other groups. This is promising, since they are precisely the main target group of the system. Moreover, referring to the right part of Figure 25, it is interesting to notice how people with good sketching abilities performed better and faster than the other groups.


Figure 25 - Y-axis: Average RTLX index. Left: Area of expertise. Right: Familiarity with sketching

Besides, users who stated they were extremely familiar with sketching on paper (rating 5/5) found the system more attractive than average (5.88 against 5.18 out of 7). The same tendency appears for familiarity with mobile devices: users with better knowledge of the mobile environment found the system less distracting and performed the operations in less time. However, none of these trends is statistically significant, so such effects can only be conjectured; additional research could help determine whether these variables actually affect the results. Finally, users who evaluated the system during the daytime performed better, showed a lower RTLX index, and rated the system higher than those tested at night (5.28 vs. 5.10). This aspect leads to some further reflections, which I explore hereafter.

4.4. Discussion

Before concluding the evaluation and data-analysis part, a few considerations remain to be made in order to add context to the previous statistics. First of all, one parameter that was computed but not included in the analysis is accuracy: it indicates how precise the result of an operation was, for instance the size of a component or its location. I did not include it earlier because I preferred to concentrate on the interaction techniques; however, a brief summary of the findings can be given here. A global accuracy of 86% is certainly a satisfactory result; color detection, on the other hand, is more delicate and error-prone, reaching only 67% accuracy. Indeed, the output of this task was not always as precise as expected. Some examples:

● a few times external elements (such as barcodes or hands) were included in the digitized image. This was most likely caused by a lack of attention and concentration or, more simply, by inexperience with the system;


● if the pieces of gp-paper were slightly wavy, it could happen that, after the picture of the screen was taken, a grey area was projected over it: the brightness of the projector's beam generated a shadow that was later detected, becoming a "ghost shape". Other types of paper could therefore be investigated;

● several times the number of detected components did not correspond to the number of sketched ones. This could happen for three main reasons:

○ the sketch was not painted accurately, leaving too many white spaces, which caused the algorithm to detect multiple components instead of a single one;

○ the colors on the toolbox were mixed or overlapping, preventing the system from matching the desired one and, therefore, from creating the component at all;

○ surprisingly, when the toolbox was located at the sides of the projection (as seen in Figure 26). In this case the outermost areas of the projector's beam appear darker, so the system either detected nothing or recognized a "fake" light grey color and created multiple components.

Figure 26 - Comparison of two toolboxes. Left: correctly detected. Right: shadowing occurred.

Surprisingly, the projector's beam seems to be the cause of most system errors: even though these occurred in "just" 1 task out of 10, a large share of them was provoked by the (probably) excessively bright light beam. Basically, the top-center part of the table (coinciding with the progress indicator and the center of the sidebar), being closest to the projector, received such intense light that the markers could no longer be detected because of reflections on the paper and the table. This was even more evident during night-time testing, when only artificial lighting was used (which probably also influenced the assessment of the system itself, as supposed above). Several options were tested (changing the paper, wrapping the table, moving it, ...), but in the end the only solution was to warn users not to place devices in the top-center area of the sidebar (explaining that it was where the "clock" would appear) and to prefer its sides instead.


This is the reason why so many system errors were detected, since the sidebar is used virtually every time a copy or a digitization needs to be made; focusing mostly on the interaction techniques, however, I introduced the "effective time", which considers a task completed as soon as the interaction is performed correctly. Furthermore, in 5% of the tasks, users experienced trouble because devices kept refreshing continuously, probably due to the same reflections mentioned above. Therefore, if the users had performed the interaction correctly, the effective time was recorded and they were told to slightly move the device until it found a better lighting condition and stopped refreshing.

For the same reason, after the first few testing sessions, users were given a more detailed introduction at the beginning of the study, while the explanations of the tasks became clearer and more concise; adding to this that the examiner improved his wording and grew increasingly familiar with the exercise, it is no surprise that the last users finished the testing in less time than the first ones. For instance, one user did not understand what a "connection" was, expecting it to be called "link": from that moment on, both terms were used when explaining how to connect two screens. Moreover, the first tests started with several prototypes lying on the table; however, since most tasks are performed on only one or at most two of them, their number was later reduced to avoid distracting the users further.

Finally, with the evaluation phase completed and the data exhaustively analyzed, it is time to move to the final chapter, to draw some conclusions and examine future scenarios.


5. Conclusions and Future Work

This Master Thesis presented and evaluated new interaction techniques that were designed for and applied to a tabletop computing system for paper prototyping of mobile applications. After an introduction to tabletop tangible interfaces and the State of the Art of prototyping, the system before the enhancement process was described. In the initial version, users had to pause the creative design session and use a tablet device to digitize the screens and perform operations on them. However, this was found to disturb the design process, lower creativity and distract the users. Therefore, two different interaction modalities exploiting low-tech solutions were applied to the system to enhance it and make the tablet unnecessary: one based on color detection (A) and the other on barcode recognition (B). These two new approaches were finally evaluated through User Studies in order to understand which of the interactions worked better and to assess the User-Experience with the system.

The analysis showed that users appreciated the system, perceiving it as attractive and stimulating above average. Modality A (color detection) was found to be easier to use than B (barcode recognition), whereas the latter was generally faster. In addition, the workload index of both techniques, indicative of distraction, was similar. The single-task comparisons, moreover, show which interactions performed better and were easier to use than the others: thus, I conclude this Thesis by presenting what would be, according to the data analysis and personal experience, the ideal implementation for this system using only the interactions presented above.

To make the digitization process as fast and easy to use as possible, I would use approach A (the sidebar) to take pictures of the screens (once the trouble with the excessive brightness is fixed, of course). I would use the sidebar for duplicating devices as well; however, I would probably also offer the copy tool, since it can be convenient not to have to move devices around the table. For detecting components and creating connections I would definitely implement both approaches: the handles and the arrow tool would certainly be used, when operating on already-digitized devices, to keep expanding them easily and in almost no time, while color detection would be the choice when starting from a paper sketch from scratch, in order to perform multiple tasks at once. Finally, to remove components, connections and whole screens I would use the rubber tool without a doubt: the color detection approach in this case was unnatural and was implemented, by changing the behavior of the system, for testing reasons only. However, I would probably implement separate rubber tools, to avoid possible mistakes and the need for extreme accuracy when using them.


Ultimately, I present hereafter possible ways to further enhance this system, as well as some research directions that might be interesting to explore:

● during the digitization process it would definitely be useful to give visual feedback to the users (such as "Picture is being taken, please do not touch!") when the projections disappear from the tabletop. Inexperienced users especially might think the system has crashed and be tempted to place their hands on the prototypes. However, this was not developed because of the implementation complexity of completely restructuring the original system;

● as suggested by a user, an icon or a label might be displayed near/over the progress indicator to let users easily know which action is being performed at a given moment;

● one of the priorities when continuing this project would be to explore color detection further. This would mean, of course, finding or writing better algorithms to compute the threshold for discerning different colors and to accurately detect shapes from their contours alone. On this topic, it would be useful to investigate further how the color range is built in chroma-keying (a minimal sketch of such a pipeline follows this list);

● the barcodes might be built of different materials (like wood or opaque plastic) and the projector might be centered above the table: both measures could perhaps reduce reflections. For the same purpose, other lighting conditions and room environments might be experimented with;

● in order to avoid unintended movements of the paper sheets while sketching, the barcodes could be built with magnets or similar techniques that stick them to the table but, at the same time, allow users to easily move and replace them;

● it would be interesting to run a User Study comparing the interaction techniques presented in this Thesis with those of the original system;

● in conclusion, it would be extremely beneficial and effective for this system to be tested in a real-life scenario by the researchers and designers of a company or startup.
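To make the color detection direction above more tangible, here is a minimal sketch (my illustration, not the system's actual algorithm) of HSV-based color thresholding and contour detection with OpenCV, in the spirit of the chroma-keying idea. The file name, HSV bounds, kernel size and area threshold are hypothetical and would need calibration to the projector and lighting setup described in Section 4.4.

```python
# Sketch: detect colored sketch components via HSV thresholding + contours.
import cv2
import numpy as np

frame = cv2.imread("tabletop_snapshot.png")          # hypothetical capture
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold mainly on hue, with wide saturation/value bounds so the detection
# tolerates brightness variations caused by the projector's beam.
lower = np.array([40, 60, 60])     # example: a greenish hue range
upper = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Close small white gaps in the painted sketch so one component is not split
# into several (one of the failure modes observed in Section 4.4).
kernel = np.ones((7, 7), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep only sufficiently large outer contours as candidate components.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
components = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
print(f"Detected {len(components)} candidate components: {components}")
```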


6. Bibliography

[1] Snyder, C. (2003). Paper prototyping: The fast and easy way to design and refine user interfaces. The Morgan Kaufmann Series in Interactive Technologies. Morgan Kaufmann.
[2] Ishii, H., & Kobayashi, M. (1992, June). ClearBoard: a seamless medium for shared drawing and conversation with eye contact. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 525-532). ACM.
[3] Ishii, H., Kobayashi, M., & Grudin, J. (1993). Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Transactions on Information Systems (TOIS), 11(4), 349-375.
[4] Fitzmaurice, G. W., Ishii, H., & Buxton, W. A. (1995, May). Bricks: laying the foundations for graspable user interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 442-449). ACM Press/Addison-Wesley Publishing Co.
[5] Ishii, H., & Ullmer, B. (1997, March). Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference on Human factors in computing systems (pp. 234-241). ACM.
[6] Ullmer, B., & Ishii, H. (1997, October). The metaDESK: models and prototypes for tangible user interfaces. In Proceedings of the 10th annual ACM symposium on User interface software and technology (pp. 223-232). ACM.
[7] Kaltenbrunner, M., Bovermann, T., Bencina, R., & Costanza, E. (2005, May). TUIO: A protocol for table-top tangible user interfaces. In Proc. of the 6th Int'l Workshop on Gesture in Human-Computer Interaction and Simulation (pp. 1-5).
[8] Jordà, S., Geiger, G., Alonso, M., & Kaltenbrunner, M. (2007, February). The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces. In Proceedings of the 1st international conference on Tangible and embedded interaction (pp. 139-146). ACM.
[9] Kaltenbrunner, M., & Bencina, R. (2007, February). reacTIVision: a computer-vision framework for table-based tangible interaction. In Proceedings of the 1st international conference on Tangible and embedded interaction (pp. 69-74). ACM.


[10] Steimle, J., Brdiczka, O., & Mühlhäuser, M. (2009). CoScribe: Integrating paper and digital documents for collaborative knowledge work. Learning Technologies, IEEE Transactions on, 2(3), 174-188.
[11] Zufferey, G., Jermann, P., Lucchi, A., & Dillenbourg, P. (2009, February). TinkerSheets: using paper forms to control and visualize tangible simulations. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (pp. 377-384). ACM.
[12] Beaudouin-Lafon, M., & Mackay, W. E. (2003). Prototyping tools and techniques. In Human Computer Interaction - Development Process, 122-142.
[13] Norman, D. (2013). The design of everyday things: Revised and expanded edition. Basic Books.
[14] Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-computer interaction. Addison-Wesley Longman Ltd.
[15] Nielsen, J. (1993). Iterative user-interface design. Computer, 26(11), 32-41.
[16] Rudd, J., Stern, K., & Isensee, S. (1996, January). Low vs. high-fidelity prototyping debate. interactions, 3(1), 76-85.
[17] Landay, J. A. (1996, April). SILK: Sketching Interfaces Like Krazy. In Conference companion on Human factors in computing systems (pp. 398-399). ACM.
[18] Bähr, B., Kratz, S., & Rohs, M. (2010). A Tabletop System for supporting Paper Prototyping of Mobile Interfaces. "PaperComp" Workshop, UbiComp 2010, Copenhagen, Denmark.
[19] Schumann, J., Strothotte, T., Laser, S., & Raab, A. (1996, April). Assessing the effect of non-photorealistic rendered images in CAD. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 35-41). ACM.
[20] Virzi, R. A., Sokolov, J. L., & Karis, D. (1996, April). Usability problem identification using both low- and high-fidelity prototypes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 236-243). ACM.
[21] Catani, M. B., & Biers, D. W. (1998, October). Usability evaluation and prototype fidelity: Users and usability professionals. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 42, No. 19, pp. 1331-1335). SAGE Publications.


[22] Liu, L., & Khooshabeh, P. (2003, April). Paper or interactive? A study of prototyping techniques for ubiquitous computing environments. In CHI'03 extended abstracts on Human factors in computing systems (pp. 1030-1031). ACM.
[23] Lin, J., Newman, M. W., Hong, J. I., & Landay, J. A. (2000, April). DENIM: finding a tighter fit between tools and practice for Web site design. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (pp. 510-517). ACM.
[24] Lin, J., & Landay, J. A. (2002, September). Damask: A tool for early-stage design and prototyping of multi-device user interfaces. In Proceedings of The 8th International Conference on Distributed Multimedia Systems (2002 International Workshop on Visual Computing) (pp. 573-580).
[25] Davis, R. C., Saponas, T. S., Shilman, M., & Landay, J. A. (2007, October). SketchWizard: Wizard of Oz prototyping of pen-based user interfaces. In Proceedings of the 20th annual ACM symposium on User interface software and technology (pp. 119-128). ACM.
[26] de Sá, M., & Carriço, L. (2006, April). Low-fi prototyping for mobile devices. In CHI'06 extended abstracts on Human factors in computing systems (pp. 694-699). ACM.
[27] de Sá, M., Carriço, L., Duarte, L., & Reis, T. (2008, May). A mixed-fidelity prototyping tool for mobile devices. In Proceedings of the working conference on Advanced visual interfaces (pp. 225-232). ACM.
[28] de Sá, M., & Carriço, L. (2009, September). A mobile tool for in-situ prototyping. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (p. 20). ACM.
[29] de Sá, M., & Carriço, L. (2009). An evaluation framework for mobile user interfaces. In Human-Computer Interaction – INTERACT 2009 (pp. 708-721). Springer Berlin Heidelberg.
[30] Duarte, L., de Sá, M., & Carriço, L. (2010, September). Physiological data gathering in mobile environments. In Proceedings of the 12th ACM international conference adjunct papers on Ubiquitous computing - Adjunct (pp. 405-406). ACM.
[31] Pfleging, B., Valderrama Bahamondez, E. D. C., Schmidt, A., Hermes, M., & Nolte, J. (2010, April). MobiDev: a mobile development kit for combined paper-based and in-situ programming on the mobile phone. In CHI'10 Extended Abstracts on Human Factors in Computing Systems (pp. 3733-3738). ACM.


[32] Holzmann, C., & Vogler, M. (2012, August). Building interactive prototypes of mobile user interfaces with a digital pen. In Proceedings of the 10th Asia Pacific conference on Computer human interaction (pp. 159-168). ACM.
[33] Buxton, B. (2007). Sketching user experiences: getting the design right and the right design (interactive technologies). Morgan Kaufmann.
[34] Bähr, B., & Neumann, S. (2013). Blended Prototyping Design for Mobile Applications. In Rethinking Prototyping: Proceedings of the Design Modelling Symposium Berlin 2013. epubli, 68-80.
[35] Bähr, B., & Möller, S. (2015). Blended Prototyping. In Rethink! Prototyping (pp. 129-160). Springer International Publishing.
[36] Csikszentmihalyi, M. (1991). Flow: The psychology of optimal experience. HarperPerennial.
[37] Hart, S. G. (2006, October). NASA-task load index (NASA-TLX); 20 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 50, No. 9, pp. 904-908). Sage Publications.
[38] Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. In Mensch & Computer 2003 (pp. 187-196). Vieweg+Teubner Verlag.
[39] MacKenzie, I. S. (2012). Human-computer interaction: An empirical research perspective. Morgan Kaufmann.


Appendix A – List of tasks

TASK 1 - mod. A
The purpose of this task is to digitize a screen by taking a picture. Take a look at the sketches on the table. Choose a screen you want and move it to the sides of the sidebar on the top of the table until its barcode is contained. Once released there, a timer will start and, after 3 seconds, the screen will disappear and a picture will be taken. Please try this technique.

TASK 1 - mod. B
The purpose of this task is to digitize a screen by taking a picture. Take a look at the sketch of the app on the table. You are given a camera tool by the facilitator. Choose a screen you want and place the camera tool as close as possible to its barcode. Once it is placed on the table, a timer will start and, after 3 seconds, the screen will disappear and a picture will be taken. Please try this technique.

TASK 2 - mod. A
The purpose of this task is to make a copy of a whole screen. In order to do this, you need to simultaneously:
● choose a screen you want and move it to the sidebar on the top of the table until its barcode is contained;
● place a new barcode (for an empty screen) inside the sidebar.
Once both barcodes are inside the sidebar, a timer will start and, after 3 seconds, the digitized screen is copied into the empty one. Please try this technique.


TASK 2 - mod. B
The purpose of this task is to make a copy of a whole screen. In order to do this, you are given a copy tool and a new barcode (for an empty screen) by the facilitator. Choose the screen you want to copy and place the new barcode on the table beside the screen. Then, place one side of the copy tool on the screen you want to duplicate and the other side on the empty screen. Once done, a timer will start and, after 3 seconds, the digitized screen is copied into the empty one. Please try this technique.

TASK 3 - mod. A
The purpose of this task is to detect components inside a screen. You are given a toolbox by the facilitator. Place it on the table. You are then given 2 pieces of gp-paper:
● stick one on top of the desired screen, leaving the barcode visible. Paint (as filled as possible) inside at least 2 of the components, using the same color for the same group of components;
● stick the other on top of the toolbox. Paint inside the boxes according to the colors used on the screen.
With the toolbox on the table, take a picture with the modality you prefer. Please try this technique.

TASK 3 - mod. B
The purpose of this task is to detect components inside a screen. You are given 2 handles by the facilitator. Place them on the table. Choose an already digitized screen you want and move the handles onto it. Place them on opposite corners of the button you want to detect. Once you stop moving them, a timer will start and, after 3 seconds, the button is detected and created. Please try this technique.


TASK 4 - mod. A
The purpose of this task is to connect two screens. You are given 2 pieces of gp-paper:
● stick one on top of the desired screen, leaving the barcode visible. Paint (as filled as possible) inside the button-connector you want to start the connection from;
● stick the other on top of the toolbox. Paint inside the box "FROM → TO" according to the color used for the button.
Then, place the toolbox as close as possible to the barcode of the screen you want to connect. Finally, take a picture with the modality you prefer. Please try this technique.

TASK 4 - mod. B
The purpose of this task is to connect two screens. In order to do this, you are given an arrow tool by the facilitator. Choose an already digitized screen you want and move the arrow tool onto it. Place the tail of the arrow on the desired button and the tip on the screen you want to connect. A timer will start and, after 3 seconds, a connection is created and displayed. Please try this technique.

TASK 5 - mod. A
The purpose of this task is to delete a component. You are given some pieces of white paper. Place them over a painted component of a screen to cover it, or remove the colored component directly. Finally, retake a picture with the modality you prefer. A new screen is generated without the deleted component. Please try this technique.


TASK 5 - mod. B
The purpose of this task is to delete a component. In order to do this, you are given a rubber tool by the facilitator. Choose the component you want to remove and place the corner of the rubber tool over it. A timer will start and, after 3 seconds, the component is deleted. Please try this technique.

TASK 6 - mod. A
The purpose of this task is to delete a connection. You are given some pieces of white paper. Place them over the painted button-connector of that screen or over the box "FROM → TO" to cover them, or simply remove the component. Finally, retake a picture with the modality you prefer. A new screen is generated without the deleted connection and button-connector. Please try this technique.

TASK 6 - mod. B
The purpose of this task is to delete a connection. In order to do this, you are given a rubber tool by the facilitator. Choose the connection you want to remove and place the rubber tool over it, leaving the button-connector intact. A timer will start and, after 3 seconds, the connection is deleted. Please try this technique.

TASK 7
The purpose of this task is to delete a whole screen. You are given a rubber tool by the facilitator. Choose the screen you want to remove and place the corner of the rubber tool over it (not on the components!). A timer will start and, after 3 seconds, the screen is deleted. Please try this technique.


Appendix B – List of instructions

Welcome! Hope you found the place easily. Would you like something to drink? Thank you for coming. I am Francesco Bonadiman, a Master's student at the EIT Digital Master School, writing my thesis about ways to improve user interaction with tabletop computing environments through low-tech solutions. The purpose of today's session is for you to help us figure out how to make such interactions more usable and user-friendly before we finish developing them. I will explain a number of interaction ideas I implemented and ask you to fulfill certain tasks with them.

The session will be videotaped for further review. We may publish our notes from this and other sessions, but all such observations will be confidential and anonymized. Keep in mind that we're testing the system - we're not testing you - so if you run into any problems it's not your fault; it means there's something we need to change. I'll be standing next to you, and I can help you if you want. The system is still incomplete, but we want to get some feedback about how well this design works. Therefore, if you have any suggestions, we'll be glad to receive them. Please tell me what makes sense to you, what's confusing, and any questions that come to mind. Remember that we're testing the system, not you. We'll end promptly at [TIME], but if you need to stop or take a break before then, just let me know. Are you ready to start?

As I mentioned, here's the paper-prototyping system you'll be working with. I will read a task from this list and you will let me know if it makes sense. If so, then whenever you're ready, please show me what you would do. Two more recommendations:

● The most important thing is that the barcodes remain visible, as if you were scanning a product at the supermarket. So if you see that the screen of a device disappears, just move it slightly to a nearby place.

● Try to avoid placing the devices in the top-center area of the table, since a clock will appear there, telling you that something is happening on the table.


Appendix C – Consent Form

CONSENT FORM

This study investigates new interaction techniques for a tabletop computing system. Our goal is to decide which techniques are more usable and user-friendly, in order to integrate them in the new version of our system. Your participation will help us accomplish this goal.

All information we collect concerning your participation in the session will be used for the Master Thesis linked to this study and for other internal research purposes. The session will be videotaped for further review. We may publish our notes from this and other sessions, but all such observations will be confidential and anonymized; entering any of your personal information will be optional.

A session facilitator will be near you to help you if you are stuck or have questions; he will quietly observe the whole session and take notes. This is a test of the tabletop computing system - we are not testing you! You will receive a small - but tasty! - gift at the end of the session, which will last approximately 1 hour. You may take breaks as needed and may stop your participation in the study at any time.

Statement of Informed Consent

I have read the description of the study and of my rights as a participant. I voluntarily agree to participate in the study.

DATE: ………………………………………………………

NAME: ………………………………………………………

SIGNATURE: ………………………………………………………