
Linköpings universitet, SE-581 83 Linköping
+46 13 28 10 00, www.liu.se

Linköping University | Department of Computer Science

Master thesis, 30 ECTS | Datateknik

2016 | LIU-IDA/LITH-EX-A--16/001--SE

Providing visualisation of wood industry data with a user centred design

Daniel Nilsson
Patrick Lindell

Supervisor: Tommy Färnqvist (Linköping University), Urban Ståhl (RemaSawco)

Examiner: Ola Leifler


Copyright (Upphovsrätt)

This document is held available on the Internet – or its possible future replacement – for a period of 25 years from the date of publication barring exceptional circumstances. Access to the document implies permission for anyone to read, to download, to print out single copies for his/her own use and to use it unchanged for non-commercial research and for teaching. Transfer of the copyright at a later date cannot revoke this permission. All other use of the document requires the consent of the author. To guarantee authenticity, security and accessibility, there are solutions of a technical and administrative nature. The author's moral rights include the right to be mentioned as the author to the extent required by good practice when the document is used as described above, as well as protection against the document being altered or presented in such a form or context that is offensive to the author's literary or artistic reputation or distinctive character. For additional information about Linköping University Electronic Press, see the publisher's home page http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Daniel Nilsson, Patrick Lindell


Abstract

When developing a new system, it is a good idea to involve the end users from the start to prevent usability issues. This thesis has evaluated how one can develop a data visualisation system for the sawmill industry with a focus on user experience.

Semi-structured interviews with a snowball sample approach were used to acquire the demands of the end users. From these demands, paper prototypes were developed and then evaluated. Data on these prototypes were collected iteratively with the help of usability tests. This was done to understand how pleased users were when using the product, but also to evaluate how efficiently they used it. Metrics have been used to measure the user experience of the product with both the paper prototypes and a hi-fi prototype, also described as the alpha prototype.

The conclusion answers the two research questions asked in this thesis. It concludes that the interview technique used in this thesis gave a good understanding of what information the users were interested in. Regarding measuring user experience, usability issues have been detected and reduced for each iteration, which indirectly results in higher efficiency since the number of confusions is reduced. The high scores (about 89) generated by the system usability scale tests indicate that the users are pleased. With the different metrics used in this thesis, the conclusion is that the fewer obstacles the users face, the less annoyed they are when using the product, and in turn the faster they reach their goals.


Acknowledgments

We would like to thank everyone at RemaSawco for welcoming us and being supportive, and all those who have contributed their time by participating in our study. Special thanks to our supervisor at RemaSawco, Urban Ståhl, for supporting us and providing us with all the help that we needed. Our work would have been significantly more difficult without you, so thanks again.

Special thanks to our supervisor at LiU, Tommy Färnqvist, for supporting us with precious feedback, quick responses and guidance. Thanks also to our examiner Ola Leifler for support and great feedback on our work. Thank you both for your time.

Thanks also go to Sofia Westerberg, who provided us with thoughts and inspiration regarding the interviews in our work, and to Daniel Sjövall, who provided us with good guidance in our work with the alpha prototype.

Finally, we would like to thank our partners and close friends, who have provided us with support and positive energy during our work with this thesis.

Daniel Nilsson and Patrick Lindell
Linköping, June 2016


Contents

Abstract

Acknowledgments

Contents

List of Figures

List of Tables

1 Introduction
1.1 Motivation
1.2 Aim
1.3 Research Questions
1.4 Delimitations
1.5 Abbreviations

2 Theory
2.1 Research Purposes
2.2 Case Studies
2.3 Qualitative Research Interviews
2.4 Non-Probability Sampling
2.5 Data Visualisation
2.6 User Experience
2.7 Usability
2.8 User-Centred Design
2.9 Usability Testing
2.10 Measure User Experience
2.11 Triangulation

3 Method
3.1 Prestudy
3.2 Case Study
3.3 Interviews
3.4 Prototype
3.5 Measure User Experience

4 Results
4.1 Prestudy Interviews
4.2 Main Study Interviews
4.3 Prototype
4.4 Data Collected From Lo-fi Prototypes
4.5 Data Collected From Alpha Prototype

5 Discussion
5.1 Results
5.2 Method
5.3 Data Ethics

6 Conclusion
6.1 Research Aim
6.2 Future Work

Bibliography

A Interview Guide (Swedish)
B Swedish Quotes
C Tasks
D System Usability Scale (SUS-test)
E Pictures of Prototypes
F Datatypes Mentioned During Interviews


List of Figures

4.1 The sawmill distribution of the interviewees.
4.2 The most frequently mentioned data.
4.3 The demand for visualisation types.
4.4 The demand for platform support.
4.5 The saw house view in the lo-fi prototype used in test iteration two.
4.6 The export data view in the lo-fi prototype used in test iteration one.
4.7 How heuristic evaluation is used to provide feedback.
4.8 The timber sorting view in the alpha prototype.
4.9 The export data view in the alpha prototype.
4.10 Efficiency for each task for both test iterations of the lo-fi prototypes.
4.11 Changes in the paper prototype of the menu from before and after test iteration one.
4.12 Changes in the paper prototype of the report tool from before and after test iteration one.
4.13 Changes in the prototype of the menu from before and after test iteration two.
4.14 Changes in the prototype of the time span from before and after test iteration two.
4.15 Distribution of how critical the unique usability issues are over each test iteration.
4.16 Efficiency for each task for all test iterations.
4.17 Distribution of how critical the unique usability issues are over each test iteration, without including issues regarding task five.
4.18 The average SUS score for each iteration.

5.1 Mean time for each task for all users, presented in minutes, for all test iterations.
5.2 Plot of the test iteration means with 95% confidence intervals.

E.1 The start view in the lo-fi prototype iteration two.
E.2 The saw house view in the lo-fi prototype iteration two.
E.3 The export data view in the lo-fi prototype iteration two.
E.4 Feedback messages in the lo-fi prototype iteration two.
E.5 The first step of the report tool in the lo-fi prototype iteration two.
E.6 The second step of the report tool in the lo-fi prototype iteration two.
E.7 The third and last step of the report tool in the lo-fi prototype iteration two.
E.8 The login view in the alpha prototype.
E.9 The timber sorting view in the alpha prototype.
E.10 The export data view in the alpha prototype.
E.11 The show and hide popup in the alpha prototype.


List of Tables

4.1 Data gathered from both lo-fi prototype test iterations.
4.2 Identified unique usability issues for test iteration one.
4.3 Solutions for unique usability issues for test iteration one.
4.4 Identified unique usability issues for test iteration two.
4.5 Solutions for unique usability issues for test iteration two.
4.6 Results from the SUS form that the users filled in after the tests.
4.7 Data gathered from test iteration three using the alpha prototype.
4.8 Identified unique usability issues for test iteration three.
4.9 Solutions for unique usability issues for test iteration three.
4.10 Results from the SUS form that the users filled in after the test.

5.1 Difference in mean value between each pair of groups.

F.1 Data types mentioned during interviews.


1 Introduction

In this day and age, new computer systems and applications are introduced to the world every day. The amount of data that these systems produce seems to grow exponentially over time, and this has created a critical demand for new data analysis tools.

The sawmill industry is an example of where analysing data is essential to improving production. It is not unusual that multiple isolated systems are used to collect data in the sawmill industry. Each of these systems produces data by itself, and they do not always communicate or assemble all the data in one place. To be able to interpret and see the connections between data spanning different systems, one needs a system that can assist in making these connections. Such a system must show the data that the users find relevant, and it must be easy to use.

To develop a user-friendly system, the users must have a central role during the development of the product from day one. Every aspect of the system must have a focus that embraces the end users' needs, and the developers must be able to establish good communication with them.

1.1 Motivation

It is important for companies to be able to utilise their data since this can help them in their production. For example, analysing data on where interruptions in the production occur might assist in detecting bottlenecks in the production or give a hint about where the production line should be upgraded next. It might be a disadvantage if a company lacks this ability while competing companies have it.

Systems used in the sawmill industry today might contain very large amounts of data, so-called Big Data. This makes it important that the system is able to present the data in a comprehensible way. One way to make data easier to understand is to visualise it well. To be able to do this, we need an understanding of what the end users are interested in having presented to them.


The system designed in this thesis will have a focus on the sawmill industry and workers at sawmills. The primary focus will be customers of RemaSawco and subsystems developed by RemaSawco.

1.1.1 RemaSawco

As stated above, this thesis will focus on companies using systems developed by the company RemaSawco. RemaSawco produces different systems for sawmills, and many of their customers use multiple systems developed by them. Today they do not supply any good way to visualise or interact with the data from these systems simultaneously; however, they would like to be able to do this, with a system that could aid their customers in easily answering questions like the following:

• How efficient is our production today?

• Which products are cost-effective and which are not?

• How effective was our production of this product the last time we produced it?

• How effective was our production the last shift, 24-hour period, week, month?

This, in turn, would help the sawmills decide more precisely what products to produce and more easily discover problems like bottlenecks in the production.

1.1.2 Other Companies

This thesis will focus on the systems created and maintained by RemaSawco, but this does not mean that it is not relevant to other applications. Any other company using systems that need to visualise data in an interactive way might be able to benefit from a similar solution. This includes, but is not limited to, other manufacturing companies and financial companies, to mention some.

1.1.3 Other Products

A product that is similar in design and functionality to the product that we wanted to develop is Qlikview1. This product was mentioned by some of the users that we conducted interviews with in this thesis. Qlikview describes their product as a way to search and explore a vast amount of data and as a tool to help analyse it.

Qlikview is not only developed for the sawmill industry and therefore does not have a complete predefined overview of the production. Instead, the views are created in a way that makes them unique for every customer. These unique views are therefore not evaluated with regard to usability in the same way as the system developed in this thesis. The system in this thesis will be subjected to usability tests on every view, iterating over details that will enable the end users to use the system more efficiently. This thesis will also investigate the demands of different employees at sawmills to establish the needs of end users.

1.2 Aim

The goal of this thesis is to answer the questions in the next section. The first question will be answered by conducting interviews with sawmill personnel. The second question will be answered by evaluating usability tests that we will perform with users during the development process of a visualisation tool.

1 http://global.qlik.com


1.3 Research Questions

• What kind of data are employees in the sawmill industry interested in visualising?

To be able to show users the data they are interested in, the first question to be answered is what that data is. Sweden is one of the biggest countries within the forest industry, and research exploring this area is therefore of interest. According to the forest industry2, in 5 out of Sweden's 21 counties almost 1 in 5 Swedes with a blue-collar job is employed by the forest industry. The users, in this case, are connected to the sawmill industry.

• Can you increase the usability, regarding the attributes efficiency and satisfaction, of a visualisation tool for the sawmill industry by measuring user experience?

The focus of this question is to examine how one can measure user experience to ensure the usability of the product, and how this helps the developers increase the usability of the product during development.

1.4 Delimitations

The following subsections describe the delimitations of this thesis.

1.4.1 Data Limitations

This thesis will only focus on data that is already stored locally at the sawmills. It does not aim to acquire knowledge about what new potential data these systems could collect. Nor will this thesis focus on any data that is not digitalised.

1.4.2 User Limitation

When examining what data employees at sawmills are interested in, the focus will be on some of the different roles at the sawmill. Not every role will be examined, since the system that this thesis covers is directed at certain positions: positions where data analysis was a part of the daily work routine.

1.4.3 Development Process

When trying to establish how a system should be developed to ensure a good quality of user experience, this thesis will discuss different methods according to the literature and then evaluate the results of these methods. It will not discuss things such as how a development team should be formed or how the developers' skills affect the outcome.

1.4.4 User Experience Evaluation

When measuring how the users interact with the prototype, only real-time observations will be made; we will not record the test sessions of the prototypes with the users. With a proper camera setup, one could record video of how the users interact with the design as well as their body language. This thesis will instead primarily focus on the use of easy-to-measure metrics.

2 http://www.skogsindustrierna.org


1.5 Abbreviations

In this thesis the following abbreviations are used:

UX User Experience

UCD User Centred Design

Lo-Fi Low Fidelity

Hi-Fi High Fidelity

UI User Interface

SUS System Usability Scale


2 Theory

This chapter discusses some of the previous research in the area and explores some of the more common terms related to the topic at hand. It explains in more detail concepts such as data visualisation, user experience and methods for data collection.

2.1 Research Purposes

According to Runeson and Höst [23], different research methods are suitable for different kinds of studies. According to Robson [20], studies can be sorted into four major research categories depending on the purpose of the research. He [20] considers that the purpose of a study can be categorised as one of the following:

• To explore

• To describe

• To explain

• To improve

A study is often categorised into one of these categories but can in some cases contain elements of different purposes, even though one of them is often more central. It is also possible that the purpose of the study changes as the research moves forward [20].

2.2 Case Studies

The definition of a case study differs somewhat depending on whom one asks. A common definition is that a case study is an empirical method used to examine a phenomenon in its context [23]. Runeson and Höst [23] define the key characteristics of a case study as follows:

1. "It is of flexible (qualitative) type, coping with the complex and dynamic characteristics of real-world phenomena, like software engineering."


2. "Its conclusions are based on a clear chain of evidence, whether qualitative or quantitative, collected from multiple sources in a planned and consistent manner."

3. "It adds to existing knowledge by being based on previously established theory, if such exist, or by building theory."

According to Runeson and Höst [23], this means that software development is a particularly interesting field of study where the use of case studies is a good approach. They [23] argue that this is the case because software development is performed by individuals and organisations which are affected by different conditions; therefore, the context is important.

Yin [25] reasons that typical questions asked in case studies are either "How"- or "Why"-questions. This is because these kinds of questions are more of the explanatory type, and therefore survey-type methods would probably be a bad choice. Research questions starting with "What" can be either of the exploratory type or about prevalence. Depending on which type of question is asked, different methods are appropriate. Therefore, it is important to understand what kind of questions one has before deciding on what kind of method one plans to use [25].

Runeson and Höst [23] state that a case study process consists of five major steps that the researchers need to walk through when conducting the study. These are:

1. Case study design: The aim of the study is set up and a plan to reach these goals is established.

2. Preparation for data collection: The researchers prepare the procedures and protocols that they will use during their data collection.

3. Collecting evidence: Data is collected with the help of the prepared methods from the previous step.

4. Analysis of collected data: The data collected in the study is analysed.

5. Reporting: The results and conclusions from the study are reported.

These steps are similar to the steps used in most other empirical studies as well [23]. The main difference between the process steps of a case study and those of other empirical studies is that in a case study there is a larger number of iterations over the process steps because of its flexible nature [23].

2.3 Qualitative Research Interviews

The qualitative research interview is an inductive research method that can be used to generate theory [7]. According to Bryman and Bell [7], there are three typical ways to perform a qualitative interview.

The first one is the structured interview. The structured interview is a method where the interviewer asks the questions to the interviewee in an exact pre-defined phrasing. This is done to minimise the risk of interviewees answering different questions, as a slight difference in phrasing can mean that the interviewee perceives the question differently. According to Runeson & Höst [23], this type of interview can be compared to a questionnaire-based survey. This method is used to minimise the risk of errors in survey research and is more often used in quantitative research [7].


The second way is the semi-structured interview. This is an interview that has a set of predefined questions or topics that should be covered during the interview. If the interviewer picks up on anything during the interview, he or she is free to add follow-up questions or let the interviewee talk freely about the subject. However, all the predefined questions should be asked in a similar fashion during all the interviews [7] [23].

The third way is the unstructured interview. This means that there are almost no predefined questions in the interview. Often the interviewer only has one predefined question that he or she asks the interviewee, and then lets the interviewee talk freely around the subject. This kind of interview technique can almost be compared to a normal conversation rather than an interview [7].

Runeson & Höst [23] state that interviews are an important tool for data collection during case studies and that the most common way to conduct these is one interview per interviewee, as opposed to the less common group interview.

2.3.1 Forming Questions

When forming questions for any research interview there are some things to take into account. For example, Runeson & Höst [23] distinguish between open and closed questions in research interviews. These are sometimes referred to as open/closed-ended, fixed or pre-coded questions [7]. Throughout this thesis, the terms open and closed questions will be used.

A closed question is a question with a limited number of possible answers from which the interviewee can select. These types of questions reduce the risk of the interviewer misinterpreting the interviewee's answer [7]. This is opposed to the open question, which allows the interviewee to answer freely [23].

Another thing to always keep in mind is not to ask leading questions. This means that the questions asked should not be formulated in a way that prompts the interviewee to answer in a particular way. An example could be:

"Do you think McDonalds make better burgers than Burger King?"

This might produce a different result than if the question was formulated the following way:

"Do you prefer burgers from McDonalds or Burger King?"

The reason this is a problem is that leading questions might mean that the researchers end up with a biased result [7].

Another thing Bryman & Bell [7] mention is the technique of asking a so-called "catch-all" question at the end of the interview. This means that the question is of a very broad and open nature; it could, for example, be something like:

"If it was up to you, what changes would you make in your firm’s production?"

This gives the interviewee the opportunity to fill in the interviewer's results with something he or she might have overlooked in his or her own questions, and also gives the interviewee a possibility to voice their own opinions [7].

Other rules for forming good questions mentioned in the literature are, for example, not to ask so-called "double-barreled" questions, in other words, two questions in one.


Another rule of thumb mentioned in the literature is to not use negatives in questions, since this can sometimes lead to "yes"- and "no"-answers that are hard to interpret [7].

There are different approaches to the manner in which the interview questions should be asked. Runeson and Höst [23] bring up three general principles:

• The funnel model

• The pyramid model

• The time-glass model

The funnel model is a model where you start the interview by asking very open questions and then move on to asking more specific ones. The pyramid model does it the other way around: it starts off with more concrete questions and then moves over to more open questions as the interview proceeds. The third model is the time-glass model, where the interview begins with open questions, moves on to more specific ones, and then midway through starts to open up the questions again [23].

2.4 Non-Probability Sampling

Non-probability sampling is a common name for different non-probability based sampling methods used to select samples of interviewees. The opposite method, probability sampling, is a way to choose interviewees at random. This is useful to be able to generalise the results of the interviews to the larger group from which the samples were taken [7].

In the case of qualitative research, the goal is often expressed as "words rather than numbers" [7]. Runeson and Höst [23] state that one uses statistics to analyse quantitative data, while one uses categorisation and sorting to analyse qualitative data. This is because the goal of a qualitative study is often not to be able to generalise the result from the study to similar settings, but rather to explore a phenomenon [7].

Because of this, the non-probability based sampling methods are often better to use in association with qualitative research methods than with quantitative research methods, since it is often too hard to make any kind of generalisation from a non-probability based sample [7].

2.4.1 Snowball Sampling

One of the non-probability sampling methods is the snowball method. This is a method where the person or persons conducting the interviews have acquired their interviewees through an initially small group of interviewees [7]. Through these interviewees, they then manage to get in contact with other interviewees [7].

This method of sampling has some similarities with another non-probability sampling method, the convenience sample method. The convenience sample method is, as it sounds, a method where the researchers choose a set of interviewees based on who is convenient to use [7]. An example could be a student who chooses to interview his or her classmates. The result from such interviews would probably not be representative of, say, the students in the entire school.

One difference between a convenience sample and a snowball sample is that the snowball sample method is often used on groups of people who could otherwise be hard to get in touch with. Because of this, it can often be more time-effective to use a snowball sample instead of a probability-based sampling method [7].


2.5 Data Visualisation

Data is worth a lot less if we cannot visualise it. Visualisation of data is the bridge between computers and humans that delivers the message the data contains. Each time a user interacts with the system, there is a high probability that new data, together with old, will be presented. In our scenario, new data is generated constantly as long as the machines are working. Users, in general, do not care about the trade-offs or storage of the data; they just want it presented to them for analytic purposes.

2.5.1 How to Visualise

Visualising data from a database can be done with the correct tools. One way is to use query languages with the database. Query languages provide the developer with the tools to form and set up expressions for displaying data on which analysis can be performed. Normal static visualisation is when the developer decides what information will be presented to the end user and in what format.

A different approach is to use interactive data visualisation, where you allow the end users to perform selection and representation functions via an interface. This means that the end users can choose what data they find relevant to be presented from a complex data set. This allows the outcome to be more accurate with respect to what the user is looking for [8].
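As a small illustration of the difference between the two approaches, the sketch below assumes a hypothetical local SQLite database with an invented production table and column names (not RemaSawco's actual schema). It first builds a fixed, developer-defined query and then exposes the same data through a selection function where the user chooses columns and time span at runtime.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect("sawmill.db")  # hypothetical local database

# Static visualisation: the developer decides what is shown and for which period.
static_query = """
    SELECT date, logs_sawn, downtime_minutes
    FROM production
    WHERE date >= '2016-01-01'
"""
static_view = pd.read_sql_query(static_query, conn)

# Interactive visualisation: the user selects columns and time span at runtime.
def user_selected_view(columns, start_date, end_date):
    """Return only the data the user asked for, ready to be plotted."""
    frame = pd.read_sql_query("SELECT * FROM production", conn,
                              parse_dates=["date"])
    mask = (frame["date"] >= start_date) & (frame["date"] <= end_date)
    return frame.loc[mask, ["date"] + list(columns)]

# Example: a shift leader only wants downtime figures for one week.
week_view = user_selected_view(["downtime_minutes"], "2016-03-07", "2016-03-13")
```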

Understanding what the end users want to see is important to ensure that they are satisfied when working with the product. Users differ in their ability to understand visual relations, and it is important to understand where it is appropriate to use visual encodings such as colour. For example, age would be a bad attribute to encode with colour, since age has no direct connection to colour. The process of visualising data is to structure it, then determine an abstract perceptual structure, and finally produce a drawable visualisation [9].

2.6 User Experience

User experience (UX) is a term used when discussing human-computer interaction (HCI). As new technologies develop at a large scale, in forms ranging from web pages to applications, users' demands on the design and on how to interact with systems increase [14] [24]. The term UX still lacks a common definition within the community since it is so widely used [24]. You can find it used when talking about layout design, functionality or even feelings. Marc Hassenzahl and Noam Tractinsky [11] argue in their report 'User experience - a research agenda' that UX involves task-oriented purpose combined with experiences and emotions when interacting with a system.

2.7 Usability

The quality of a product is defined by various factors. According to ISO 9126, it is divided into functionality, reliability, usability, efficiency, maintainability and portability [1]. The quality usability is, quoted, "a set of attributes that bear on the efforts needed for use, and on the individual assessment of such use, by a stated or implied set of users" [1].

A product is usable when users can perform their tasks in the way they expect, without questions, obstacles and so on. Usability is best noticed in its absence: if something works as expected, you will not notice it, which makes usability in some sense invisible. It can sometimes be hard for developers to see that something they find obvious is the opposite for a user [22].

According to J. Rubin and D. Chisnell [22], for a product or service to be usable, it should have the following attributes:

• Useful: It should allow users to achieve their goals and at the same time encourage the willingness to use the product. Even if a system is easy to use and learn, there is no point if the goal of the user is not fulfilled.

• Efficient: This can be measured as the time it takes for users to achieve their goals correctly and accurately.

• Effective: The product or service is effective if it behaves as users expect it to and they can perform their goals as they intended.

• Learnable: Users can operate the given system with the expected, defined level of competence or training. Can also be the ability to relearn the system after changes.

• Satisfying: Takes into account the way the users feel and their opinion when using the system. Products or services that provide satisfaction tend to make the users perform well compared to systems that do not.

• Accessible: Allows the users to access the product or service whenever needed to accomplish a goal.

J. Nielsen [17] has a similar definition when he talks about usability and defines it by these five qualities:

• Learnability: How easy is it for the user to perform basic tasks when encountering the design for the first time?

• Efficiency: How fast can users perform their tasks after they have learnt how the design works?

• Memorability: How well will a user be able to work with a design after being away from it for a period of time?

• Errors: How many errors does a user make, how severe are these errors, and how well can the user recover?

• Satisfaction: How does the user feel about the design?

2.8 User-Centred Design

User-centred design (UCD) has existed for decades but has had different names. UCD is a combined name for techniques and process methods for designing products and systems with the user at the centre of the development process. User experience is something the design teams must keep in mind while they think about the technology of the products. UCD puts users in focus right from the start of the development. This is done to seek support from the users who actually work with the products, rather than forcing them to change the way they work [22]. The developers need to understand the users' tasks and cognitive activities during the whole development cycle. Involving the users at an early stage in the development may improve the usefulness of the system and meet the expectations and needs of the users at the end of the development. It may also reduce or prevent usability errors that could otherwise occur [18].

It is important to understand how the basics of UCD work to be able to perform usability testing. Usability testing is just one technique to ensure a good UCD; it is not UCD itself [22]. For UCD, there are three basic principles:


• Early focus on users and their tasks.

– Direct contact between users and the design team during the development lifecycle.

– Systematic, structured approach to the collection of information.

• Evaluation and measurement of product usage.

– Measuring the behaviour and usage of the product with actual users.

• Iterated design.

– Performing design iteration during the whole development cycle.

2.9 Usability Testing

To provide the necessary information you need during the development of a project life cycle, testing is required. It is, of course, possible to produce a good product without testing, but it is like writing a document without editing it. There are several techniques, methods and practices that you can use for this purpose. The common ground is that you put a user in front of an existing product or some kind of prototype [10]. Usability testing is a set of techniques to collect data while users perform tasks based on realistic scenarios. In this way, we can to a degree evaluate how well the product fulfils the usability measures [22].

2.9.1 Sketches

Drawings and sketching are powerful tools for quickly making an "analogue" representation of your digital ideas. There are three main goals with sketches [4]:

1. To transform intangible ideas to tangible information for others to understand.

2. To reveal ideas or how things relate, not results.

3. To create discussions about subjects and potential problems.

It often occurs that people try to capture all their ideas in one sketch. By drawing or sketching ideas in variations, designers may think differently about the completeness of an idea and also express their thoughts more effectively [4]. It is important not to discard sketches, since they open up the opportunity to discuss ideas and ask questions. An author should be open-minded about the drawing and not be too protective of his or her ideas. When reflecting on your work you allow yourself to gain a distance to it and, in turn, a renewed perspective on it [4].

2.9.2 Prototyping

Prototypes are models for testing design ideas. These prototypes allow you to gather data early in the development life cycle, where design flaws such as usability problems may occur. In software design, there are various techniques to create prototypes. A prototype that is similar to the finished product is called high-fidelity, and those that are simpler and less similar are called low-fidelity [12].


Low-Fidelity

Low-fidelity (lo-fi) prototyping is possible to do on computers but is most commonly done on paper [12]. Paper prototyping is a lo-fi prototype technique where you sit down with the user, ask questions about a prototype made of paper, and observe how they interact with it. Paper prototypes are fast to create and easy to change [19] [12].

Paper prototyping is a good way to demonstrate interaction with an interface of the product at an early stage in the development process. This allows the developers to find flaws in the design and thereby, through an iterative process, maintain quality before releasing the product [19].

High-Fidelity

The opposite of low-fidelity prototypes are high-fidelity (hi-fi) prototypes. Examples of hi-fi prototypes are demo programs and media tools for testing the interaction [19]. A hi-fi prototype is therefore often made with the same methods as the final product and allows similar interaction techniques. But compared to a lo-fi prototype, it is more expensive and time-consuming to build [12]. Studies show that the number of usability issues found with hi-fi and lo-fi prototypes is almost equal. However, there are more comments about non-usability-related things with computer prototypes than with paper prototypes [12].

A bug, or something that tends to be unclear, in a hi-fi prototype that needs to be changed both takes time and may cause resistance from developers, since the change can be hard to implement [19]. Another aspect is the way you see the prototype. A hi-fi prototype may cause the user to focus on the wrong things, like fonts and button sizes. You want the user to focus on the way they interact and on the general layout of the prototype.

2.10 Measure User Experience

There are different ways to measure UX since there is still no common definition of it. Providing a better user experience means that a developer must first have a good understanding of the users' needs. This is one of the most common reasons for measuring user experience [5]. The main objectives when measuring user experience are [6][2]:

• Optimising users’ performance.

• Optimising users’ satisfaction.

Performance is how well users use the product or design when interacting with it: how well they accomplish a task, how long it takes to perform each task, the amount of effort to perform each task and so on. Satisfaction is about what the users think and feel about the product or design when interacting with it: how easy is it to use, are there any activities that may cause confusion, and does the product or design hold up to the users' expectations [2]?

One way to measure user experience is by using metrics. A metric is used for evaluation or to measure a specific area. Using the same kind of measurements each time you evaluate something will generate results that can be compared [2].

Another technique is heuristic evaluation, where a group of evaluators examine a design to find usability problems based on a usability inspection method. This technique is often applied to an already existing design [16].


2.10.1 Study Goals

There are mainly two ways to use the data generated from measuring UX, depending on the goal of the study [2].

Formative Usability

A formative usability study is recommended when there is a design issue where improvements can be made. If the wrong assumptions are made early in the project, the product or design will often encounter usability problems later [22]. The UX tester may evaluate a product or design during the whole development life cycle to make recommendations and prevent obstacles. If there is no opportunity to make any changes to or have any impact on the design that is tested, then formative usability tests are probably a waste of time and money [2]. A limited budget is often the obstacle for this kind of test.

Summative Usability

Summative usability testing is when you want to test the finished product or design. It focuses on evaluating the product or design against a set of criteria. Besides verifying functionality for a developed product, summative usability testing is good to use when comparing several products to each other [2].

2.10.2 Usability Evaluation

How the user interacts with the system is important, since you want to get the data that is going to be analysed quickly and easily. An important aspect to keep in mind is that when you have identified a usability issue, you need to understand what led to the issue.

Heuristic Evaluation

Heuristic evaluation is a method for identifying usability issues that may occur with any kind of user interface (UI) [16] [15]. One of the more common sets of heuristics in use is Nielsen's heuristics, which defines a list of areas that may be problematic. To reduce complexity, one may keep in mind during the evaluation some of the basic usability principles mentioned in "Improving a human-computer dialogue" [15].

These principles are:

• Simple and Natural Dialogue

• Speak the User’s Language

• Minimise the User’s Memory Load

• Be Consistent

• Provide Feedback

• Provide Clearly Marked Exits

• Provide Shortcuts

• Provide Good Error Messages

• Error Prevention


2.10.3 Performance Metrics

Performance metrics are among the most valuable metrics and are used to evaluate the effectiveness and efficiency of a system. They are one of the best ways to see how users interact with a system. The most common usability metric is task success [2]. If the user takes too long to perform a task, then there is most certainly a way to improve the product's efficiency.

Task Success

To measure success you must first define tasks. A user cannot be browsing around randomly without a goal, since then there is no way to say whether the user has achieved the goal. One way to measure task success is by using a binary measure: either you fail or you succeed with the task. Another approach is to measure the level of success, where you could, for example, take into account whether participants pass, fail or make it with some help [2].
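As a sketch of the difference between the two approaches (the outcome labels and partial-credit weights below are invented for illustration, not taken from this thesis), a level-of-success measure can be encoded by giving partial credit when the participant needed help:

```python
# One possible scoring scheme: a purely binary measure would only use 1.0 and 0.0.
LEVELS = {"pass": 1.0, "pass with help": 0.5, "fail": 0.0}

def task_success(outcome: str) -> float:
    """Map an observed task outcome to a success score."""
    return LEVELS[outcome]

observed = ["pass", "pass with help", "fail", "pass"]
scores = [task_success(outcome) for outcome in observed]
print(sum(scores) / len(scores))   # 0.625 with this made-up data
```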

Time on Task

In most situations, if a user is able to complete a task quickly, then they also have a better experience with the product. The time users take to complete tasks is one parameter for measuring the efficiency of the product or design. Exceptions may occur, for example, on an e-learning page where you do not want the users to rush through the material [2].

Time on task can also be used to see the learnability of a product or design. A user cannot learn how a product works instantly; it takes time. When users perform tasks and gain more experience, they might also perform the tasks more easily and spend less time [2].

Combination of Task Success and Time on Task

A combination of task success and time on task is one of the common formats of usability testing when it comes to measuring efficiency [2]. In other words, you measure task success per mean time, which is average task success per average time unit. It is important to choose a good unit so that the measurements are within a good scale [2].
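A minimal sketch of this calculation, using made-up task results rather than the numbers collected in this thesis, where efficiency is taken as average task success divided by average time on task (successful tasks per minute):

```python
# Hypothetical results for one test iteration: (task success 0/1, time on task in minutes).
results = [(1, 2.5), (1, 3.0), (0, 4.5), (1, 1.5), (1, 2.0)]

mean_success = sum(success for success, _ in results) / len(results)   # average task success
mean_time = sum(time for _, time in results) / len(results)            # average time on task

efficiency = mean_success / mean_time   # task success per minute
print(f"Efficiency: {efficiency:.2f} successful tasks per minute")     # 0.30 with this data
```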

2.10.4 Issue-Based Metrics

Finding usability issues and making design recommendations to avoid them is an iterative process. As a UX moderator, you look at the users' behaviour when they interact with the product to find usability issues. Usability issues can be confusing terminology or misleading navigation in a product. It is not necessary to point directly at the cause of the issue, but you should at least get a hint of where it occurs [2]. The term "issue" is often associated with negative impact, and therefore it is important to also present positive feedback on the design when observing. This is because you can encourage the project team with the positive feedback and also keep these aspects in future design iterations.

Besides observing the behaviour of the participants, listening to them is of equal importance. Letting participants verbalise what they think when working with the tasks is a good way to identify usability issues [2] [22].

Number of Unique-Usability Issues

Counting the number of unique usability issues when using an iterative design process is a great way to see how the usability of the system changes. Hopefully, there will be a decrease in the number of unique cases for each iteration, which takes you closer to the final product. Assigning a severity rating to each unique issue can give an even better picture of how good the usability of each design is, where the severity of the usability issues can be classed as low, medium or high [2].
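A small sketch of how such a tally could be kept across iterations; the issue descriptions and severities below are invented for illustration:

```python
from collections import Counter

# Unique usability issues found per test iteration, each classed low/medium/high.
issues = {
    "iteration one": {"confusing menu label": "high",
                      "hidden export button": "medium",
                      "unclear date format": "low"},
    "iteration two": {"unclear date format": "low"},
}

for iteration, found in issues.items():
    severity_counts = Counter(found.values())
    print(iteration, "->", len(found), "unique issues:", dict(severity_counts))
```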

2.10.5 Self-Reported Metrics

Self-reported metrics help to understand how the user feels about the product or design, simply by letting them tell you about what they just experienced. You might find out that the user had a happy experience while working with the system even if they spent a lot of time finishing their tasks. The most common way to collect self-reported metrics is by using rating scales where the user can provide feedback on how they felt using the product [2].

The System Usability Scale (SUS) is a widely used tool for assessing the usability of a product. It consists of statements, where half are worded positively and the other half negatively, and users rate how much they agree with these statements [2].
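The standard SUS scoring can be expressed compactly. The sketch below assumes the usual ten-item form with answers on a 1-5 scale, where the odd-numbered statements are positively worded and the even-numbered ones negatively worded; the example answers are made up.

```python
def sus_score(answers):
    """Compute a SUS score (0-100) from ten ratings on a 1-5 scale."""
    assert len(answers) == 10
    contributions = []
    for position, answer in enumerate(answers, start=1):
        if position % 2 == 1:          # positively worded statement
            contributions.append(answer - 1)
        else:                          # negatively worded statement
            contributions.append(5 - answer)
    return sum(contributions) * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 4, 2]))   # 85.0
```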

2.10.6 Number of Participants

The number of participants required for usability tests to identify most of the usability issues differs. Most UX professionals have their own opinion, but it can be narrowed down to two sides: one side thinks that 5 participants are enough, while the other says it requires more [16] [2]. When measuring the probability of finding new unique usability issues, 80% of them are found by the first five participants [2].
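The five-participant claim is usually tied to the problem-discovery model associated with Nielsen and Landauer, which is not cited in this thesis and is given here only as background. If a single participant finds a given issue with probability $\lambda$, the expected share of issues found by $n$ participants is $1 - (1 - \lambda)^n$; with the commonly reported $\lambda \approx 0.31$, five participants give $1 - 0.69^5 \approx 0.84$, roughly the 80% figure quoted above.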

2.11 Triangulation

Triangulation means that the researchers use different methods when studying a phenomenon empirically. This gives the researchers the ability to look at the problem from different perspectives, which can help validate the research results. This is extra useful in qualitative research, as data collected with qualitative methods can provide a good overview but lack precision [23].

Runeson and Höst [23] mention four different methods for triangulation:

• Data (Source) triangulation - This means that the researchers collect data from multiple sources and/or at different times.

• Observer triangulation - In the case where the researchers use observation, observer triangulation means that they use multiple observers.

• Methodological triangulation - Methodological triangulation means that the researchers combine different methods when gathering data.

• Theory triangulation - This means that the researchers use more than just one theory or viewpoint as a base for their research.

The conclusion one can draw from these descriptions is that triangulation is good for minimising the risk of the result being biased because of the use of just a single method or theory.


3 Method

This part of the thesis discusses the methods that the authors decided to use to answer the research questions stated in the introduction. With the use of these methods, the goal is to create an alpha prototype that visualises data that the users find relevant and that is easy to interact with.

3.1 Prestudy

A small prestudy was performed prior to this thesis being written. This was done to get better knowledge of the industry and to research what prior work RemaSawco had done in this particular area. It was necessary to get some insight into the industry and from that be able to ask relevant questions during the second round of interviews. This prestudy consisted of two semi-structured interviews with persons involved in the field who were currently working at RemaSawco.

One of the interviewees has been involved in the development of numerous systems in use at sawmills today. The second interviewee also has experience in developing systems for sawmills, in addition to prior experience in developing an automated report system. This system is known as RemaReports.

Our supervisor at RemaSawco recommended these persons because of their background in the company.

3.2 Case Study

To be able to answer the research questions asked in this thesis, a case study has been conducted. The main goal of this case study has been to establish what data types sawmill workers are interested in and to develop a user-friendly system that can visualise these data types to them.

The purpose of this case study has therefore been exploratory, since there are no prior studies on what data types sawmill employees are interested in.


The second part of the case study needed to identify which methods were appropriate when developing a user-friendly system and how one could evaluate the user experience.

To perform this case study, the appropriate methods were chosen with the help of relevant literature, and the different parts of the study were prepared and executed accordingly.

The data collected with the selected methods were compiled and analysed in a systematic way, and from this, conclusions were drawn and presented in this thesis.

3.3 Interviews

To be able to answer the first research question, regarding what kind of information sawmill employees are interested in visualising, this thesis has used semi-structured interviews. Semi-structured interviews are a good method of choice since the goal is to find out what the customers want. Since there is no prior information on this topic, it would be hard to conduct a structured interview, as it is then hard to set up questions prior to the interview. An unstructured interview would also not be the best choice, since some specific questions needed to be answered. With the semi-structured interview, the goal was both to get information regarding some topics known beforehand and to have the ability to get new relevant information on other topics.

The predefined questions were used as a starting point for gathering information on some areas of interest. This, in turn, led to a more open discussion and helped in noticing what the interviewees found relevant.

3.3.1 Sampling Method

To be able to conduct interviews one must have interviewees. In this thesis, the interviewees were selected by using the snowball sampling method. All contact was initiated through RemaSawco, who chose a couple of contacts at different sawmills; these contacts in turn contacted other persons at their sawmills who could participate in the interviews.

This method was chosen because it was a cost-effective approach compared to a probability-based sample, since a large set of every possible interviewee did not have to be established. Another reason was that all interview subjects were contacted by someone they already knew, which meant that the dropout rate was probably lower than if they had been contacted by someone they did not know. A further benefit of the snowball sampling method was that it was easy to find interviewees with relevant positions in the industry, since the people recommending them knew what they worked with. This was considered a benefit since it is of great importance in qualitative research to have good interviewees.

The main drawback of this approach compared to a probability-based approach is that the results from these interviews will not be as generalisable as those of a probability-based sampling method would have been. However, the main goal of the qualitative interviews conducted in this study is most often not to create a generalisable result but rather to explore a phenomenon. Therefore, the snowball sampling method was prioritised over a probability-based sample, since keeping a low dropout rate was considered important.


3.3.2 Questions

When forming the interview guide for the interviews, the first step was to establish a set of goals for what questions these interviews would answer. These questions were the following:

1. What kind of data could ease this person’s work?

2. How could this data be presented in a way that enables this person to comprehend the data?

"Comprehend", as used in the second goal, refers to collecting data regarding how the users wanted data visualised. To be able to answer these questions an interview guide was created. The interview guide contained questions regarding what role in the company the interviewee had and what data he or she was interested in in their day-to-day work. The questions also covered in what places these persons could be located when requiring access to this information.

The questions were formed using the techniques described by Bryman & Bell [7] and Runeson & Höst [23], which have been described in the theory chapter of this thesis. For example, work was put into forming questions that would not be leading questions, "double-barrelled" questions or questions containing negations. The last question in every interview was also a so-called "catch-all" question intended to elicit further information from the interviewee that he or she had not already stated. This question was:

"If you were to create a system like this, what functionality would you add to it?"

The questions asked during the interview generally followed the pyramid model, with more specific questions in the beginning and more open questions towards the end. This order was chosen because the focus in the beginning of the interview was on getting to know the interviewee's background and experience in the industry. Then, as the interview proceeded, the focus shifted to the interviewee's opinions on current systems and their wishes for future systems.

The whole interview guide can be read in appendix A (Swedish).

3.3.3 Extraction of Product Specifications

To be able to extract product specifications from the interviews, the first step was to transcribe the interviews. This was done so that it would be easier to analyse them and make references to them later on. When all interviews had been transcribed, the next step was to extract the parts that were relevant for the purpose of this study. Since the goal of the interviews was to collect data related to how a data visualisation system should be designed, this is what the focus was on. All interview transcripts were read through, and relevant parts were highlighted with short comments on what the essence of the highlighted text was.

After this had been done for all interviews, a short summary spanning 1 to 2 pages was written for each interview. This made it easier to get a good overview of what relevant information had come out of the interviews. From these summaries, it was then easier to extract what opinions the different interviewees had in common.

The opinions in the summaries were then extracted and divided into different statements and requests. This allowed for a way to quantify the data from the interviews, because these statements could be matched with the interviewees in an Excel table.


In this table, the columns represented interviewees and the rows represented different statements or requests, and the corresponding cells between interviewees and statements/requests were marked with an x.

From this table, it was then possible to get an idea of how high the demand was for different features, although it had to be taken with a grain of salt, since there could be other reasons than just demand that had an impact on these results. For example, the more complex propositions might be something that everyone wanted but only one person thought about.

This relates to the fact that the interviews performed were of a qualitative nature instead of a quantitative nature.

3.4 Prototype

The requested features that we acquired from the interviews were the starting point for forming our prototype. Gaining information on how users preferred to see data was something that we hoped would help us avoid some usability issues that might otherwise have occurred. Before we developed the prototypes, however, we made sketches. We did this to share and merge ideas.

3.4.1 Sketching

With a pen and paper, we sketched ideas on how we could present a web page based on the features that we had extracted from the interviews with the users. To avoid getting stuck in the same pattern, we decided to make three different sketches each. Since we come from different backgrounds and have different prior experience, this would result in different-looking sketches. Therefore, it was important that we did not sit together while doing these sketches. Gaining inspiration from each other's drawings could make the ideas for a design similar and miss out on the discussion that it could otherwise have led to.

Comparisons of the sketches led to questions that allowed each drawer to see the potential for improvement. Each drawer could also highlight smart and useful design choices on the other person's sketch. By picking out the best from the sketches and removing the parts that might have appeared confusing, we merged them into one sketch. This sketch was the base for our paper prototype.

3.4.2 Paper Prototype

Since we planned to test our prototypes on two separate test occasions that were close in time to each other, we decided to make a lo-fi prototype. We chose to create a paper prototype because it would be fast to change after feedback. Another reason why we chose to do a paper prototype instead of a hi-fi prototype in the form of a demo program was that our budget in this thesis consisted of time, and a hi-fi prototype takes more time to create.

3.4.3 Alpha Prototype

The data collected from measuring the paper prototypes were used to form our alpha prototype. It is a hi-fi prototype in which we applied our suggested solution for how the design could look. To build the alpha prototype, we chose to use React.js1 for the frontend.

1 https://facebook.github.io/react/


For the graphs, React-D32 was used. The backend was built with Python's Django3 together with a MySQL4 database. In this prototype, the users could choose between different time spans for the visualised data. This was implemented by using dynamic queries that use time as the variable when collecting data from the database. We did one test iteration on this prototype with the same metrics as with the paper prototype.
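To make the idea of such a dynamic, time-parameterised query concrete, a minimal sketch is given below. It assumes a hypothetical Django model called ProductionRecord with a timestamp field; the actual schema used in the prototype may differ.

    # Minimal sketch of a time-parameterised Django query (hypothetical model).
    from datetime import timedelta

    from django.utils import timezone
    from myapp.models import ProductionRecord  # hypothetical app and model

    def records_for_time_span(hours):
        """Return all production records from the last `hours` hours."""
        since = timezone.now() - timedelta(hours=hours)
        return ProductionRecord.objects.filter(timestamp__gte=since)

    # The "last 24 h" option in the prototype could, for example, map to:
    last_day = records_for_time_span(24)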

3.5 Measure User Experience

The users have been in focus from the beginning, from the interviews through to the usability testing. We have considered the three principles stated in theory section 2.8 in our process of performing usability testing:

• We have used a prestudy as well as interviews with real users to collect data.

• We have measured with different metrics to find out how well the users interact with the product and how satisfied they are.

• We have tested the lo-fi prototype design in iterations until the creation of an alpha prototype.

This thesis chose to use the definition of usability by J. Nielsen that is mentioned in chapter 2.7. Compared to J. Rubin's and D. Chisnell's definition, also mentioned in the same chapter, J. Nielsen's definition is more in line with the main objectives of measuring UX, which are performance and satisfaction.

Since we used the users' input to make changes from the beginning, this has been a formative usability study. We chose to use five persons in every iteration. The most important part of the first iteration was to eliminate as many of the found usability issues as possible. For that purpose, we decided to use participants from RemaSawco. Another reason that we chose RemaSawco was that the users at the sawmills either had difficulty setting time aside or were located too far away for us to make several trips. Therefore, the final testing of the prototype was also done with participants from RemaSawco. Only the second iteration of testing the prototype was done with the potential real users of the product.

3.5.1 Heuristic Evaluation

We kept the usability principles of Nielsen's heuristics in mind when we created our prototypes. Even for a paper prototype it is necessary to provide feedback and try to highlight functionality. We used the heuristics to check the following items on the list below to ensure that the prototype covered all of them.

• Simple and Natural Dialogue

• Speak the User’s Language

• Minimise the User’s Memory Load

• Be Consistent

• Provide Feedback

2 http://www.reactd3.org/#page-top
3 https://www.djangoproject.com/
4 https://www.mysql.com/


• Provide Clearly Marked Exits

• Provide Shortcuts

• Provide Good Error Messages

• Error Prevention

3.5.2 Triangulation

As UX mediators, we wanted to gain as much data as possible on how users felt about the design as well as on how efficient they were when interacting with it. We therefore chose to use the concept of triangulation, where more than one method is used while studying a phenomenon, i.e. the methodological triangulation mentioned in theory chapter 2.11. Three of the metrics mentioned in theory chapter 2.10 formed the combined test model used to understand how efficient and satisfying the design was. These three metrics can be found in the following subsections.

3.5.3 Task Success per Time Unit

We defined 6 tasks that the users had to perform. These tasks were based on the functionality that we had added to the prototype. The goal was that the users would try all functionalities of the prototype by performing these tasks. The original tasks can be found in appendix C (Swedish).

For each of these tasks we recorded a binary value indicating whether the user completed the task successfully or not. The reason we chose a binary value over levels of success was that we wanted to step back and let the users perform without them expecting us to help. If they completed the given task as we planned, then they had completed it successfully. An unsuccessful task would have been any of the following:

• If the user thought that he or she was done with the task, but the goal was not reached.

• The user took too long to perform the task.

• The user gave up.

We also measured the time it took for a user to perform each task. This was done to measure how efficient each task was. A user may have completed a task but could at the same time have gone outside the set time frame of what counted as a success. The goal as moderators was to find improvements in the design that would improve the efficiency. As an example, the observer during the prototype tests would make a note on a usability issue that could be improved to ease the workflow. Efficiency was calculated by taking the average success rate divided by the mean time.
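For clarity, the efficiency metric can be written out as a formula in the same style as the other formulas in this chapter; the numeric example uses the figures that later appear for Task 2 in iteration 1 (table 4.1).

Efficiency = TaskSuccessRate / MeanTime

For example, a success rate of 1 and a mean time of 2.62 minutes give an efficiency of 1 / 2.62 ≈ 0.38 tasks per minute.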

3.5.4 Unique-Usability Issues

The observer made notes during the tests on how the user was interacting with the prototype and on comments that came from the user. The observer asked them to verbalise what they were thinking while they tried to complete their tasks. This allowed the observer to listen to the user and identify where potential improvements could be applied. The observer also looked for where the user could get stuck and where they did not know how to proceed. It also allowed the observer, besides watching the body language of the users, to listen to the tone of their voices to capture if they got annoyed or even excited.


After the tests had been performed with all the users, we summarised the observations to identify each unique usability issue and rated the issues based on how serious they were. To rate these issues we used the number of times the same issue occurred for different users, or whether we judged them as severe. Solving these issues would hopefully improve the efficiency and also the satisfaction of the users when working with the prototype. It is important to keep in mind that when changing something one might create new issues. The goal was to have fewer unique-usability issues for the prototype between each iteration.

3.5.5 Self-Reported Metric

When the users had performed the tasks they were given, we gave them a form to fill out, a so-called self-reported metric. This form was a system usability scale (SUS) form with which one can measure how satisfied the users are with the product.

We used a standard SUS form5 that was translated into Swedish and can be found in appendix D. It contained 10 questions that the users would answer on a scale from 1 to 5, ranging from strongly disagree to strongly agree. Half of the questions were worded positively and the other half negatively.

The questions 1, 3, 5, 7 and 9 were calculated by taking the chosen score minus 1.

OddNumberScore = ChosenScore − 1

The questions 2, 4, 6, 8 and 10 were calculated by taking 5 minus the chosen score.

EvenNumberScore = 5 − ChosenScore

The scores from these questions were then added together and multiplied by 2.5 to get the SUS score. An SUS score above 68 is considered above average. Before starting with the form, the user was told the importance of being completely honest so that the measured data could be used for improvements. The users were, on the other hand, not told that every other question was worded in a positive/negative way about the product.

5 Template can be found on the following site: http://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
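The scoring procedure described above can be summarised in a few lines of code. The sketch below implements the standard SUS formula; the example answers are those later reported for User 1 in iteration 1 (table 4.6), which give a SUS score of 72.5.

    # Standard SUS scoring as described above.
    # `answers` holds the chosen scores (1-5) for questions 1-10, in order.
    def sus_score(answers):
        total = 0
        for i, chosen in enumerate(answers, start=1):
            if i % 2 == 1:        # questions 1, 3, 5, 7 and 9
                total += chosen - 1
            else:                 # questions 2, 4, 6, 8 and 10
                total += 5 - chosen
        return total * 2.5

    # Example: the answers of User 1 in iteration 1 (table 4.6).
    print(sus_score([4, 2, 4, 1, 3, 2, 4, 1, 2, 2]))  # 72.5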


4 Results

This section will present the results acquired from implementing the methods presented in the previous section. These results will be presented in an objective way here, and further analysis of the results will be performed in the discussion (section 5).

The quotes in the following subsections are translated from Swedish to English and are in some cases edited for legibility. Where the original quotes could be unclear, our interpretations are presented in square brackets. All original Swedish quotes can be found in appendix B.

4.1 Prestudy Interviews

During the interviews in the prestudy, the main results can be divided into two categories. These are information regarding a previously developed system called RemaReports and information regarding the sawmill industry in general. Therefore, this section will be divided into these two subcategories in addition to the description of the interviewee sample.

4.1.1 Interviewees

The prestudy interviewee sample consisted of two persons with different roles at RemaSawco. One had previously been involved in developing a system used for automatic report generation. The other interviewee was a developer who had been involved in developing some of the systems used at sawmills today.

4.1.2 RemaReports

RemaReports is a system developed by RemaSawco to ease the process of acquiring data from RemaSawco's subsystems. It consisted of two parts:

1. A backend database that collected data from subsystems in one place.

2. A frontend interface that enabled the users to create reports from this data.


This system was developed in ASP 2.0 and Crystal Reports and acted as a web page only reachable from within the sawmill's internal network.

The general idea regarding how this system should have been built was found to be an adjustable interface that the end user can modify according to his or her own data preferences.

"(...) that you want some kind of dashboard with different information where onecan choose what one actually want to look at, what one want to show up. Becausethere is so much information in these databases that is interesting on differentplaces in the saw." (Original Swedish quote B.0.1)

— Interviewee A

And when the user had done this once, the application should remember these preferences. It should also remember in what ways this information should be presented.

"Then it probably bests that it should be able to save where they are placed so thenext time you pick it up you end up on the same place, for example, those are nicedetails." (Original Swedish quote B.0.2)

— Interviewee A

The data that is kept in the RemaReports backend is to be considered confidential, and it is therefore important that it is kept secret, especially if it is reachable from outside of the sawmill.

"(...) it must be safe then. Because many [of the sawmill employees] will be veryworried when you say that it is open and you can reach it both down in Oslo andup in Sokna or stuff like that." (Original Swedish quote B.0.3)

— Interviewee B

"Yes, it is [important]. It reveals their whole production and how they are doingand also because it is very sensitive numbers if one would get a hold of them."(Original Swedish quote B.0.4)

— Interviewee A

One of the opinions expressed during one of the interviews was the problem with developing a reporting tool instead of predefined reports. According to one interviewee, it is harder to sell a tool, because the sawmill then considers it as if they do all the work themselves.

"(...) the problem with selling it as a tool is that you expect it to be very cheap. Imean when we create reports for them, then they can accept that it costs moneybecause they do not want to put an effort into it. So if they make the whole jobthemselves, then they think that the price will be as buying Word or Excel orsomething like that." (Original Swedish quote B.0.19)

— Interviewee A


4.1.3 Sawmill Industry

When developing a computer system for the sawmill industry it is important to take into account the end users' technical capabilities. A system intended for this audience must be user friendly and easily managed by anyone.

"(...) above all make it easy to manage, because if you make the system, then theimportant thing is to make a few things very, very good and simple instead oftrying to make it all." (Original Swedish quote B.0.5)

— Interviewee B

There are different roles at sawmills. The interviewees in this prestudy thought that these roles wanted access to different data accordingly. However, these different needs could probably be divided into a few different roles.

"(...) there are always, for example some production manager or quality manageron each location. I think it is [that person], and then everyone else." (OriginalSwedish quoteB.0.6)

— Interviewee B

At sawmills today, the employees mainly use stationary computers or laptops in their day-to-day work. This can be related to the fact that sawmills can be a harsh place for tablets and smartphones and that the employees often wear gloves and so on. However, a tablet might be an appreciated gadget in the offices of the sawmills.

"(...) it is stationary computer or laptops in place. (...) Unfortunately, it is not theoptimal environment to walk around in because often it is, perhaps cold and onemust wear gloves, or one is dirty, especially for operators and such. Then youcould have an industry tablet. However in the office, I can absolutely imaginethat [a tablet] is good to have on meetings and such." (Original Swedish quoteB.0.7)

— Interviewee B

4.2 Main Study Interviews

This section will present the data gathered through the qualitative interviews performed in the main study, which followed the prestudy. It will be divided into subsections according to some of the main areas of interest found during the interviews. In total, eight interviews were carried out.

4.2.1 Interviewees

The interviewee sample consisted of eight persons of varied age and experience level. Among them were production chiefs, logistics chiefs, production optimisers and similar job titles. The interviewee sample consisted only of men; this was due to the fact that the sawmill industry consists mostly of men. The participants were distributed over four different sawmills.

The 8 persons that were able to participate in our study came out of a total of 11 contacted. This gave us a dropout rate of 27%.


The interviews were carried out in two different ways. Four of them were done over the phone or via Skype without webcam, and the other four were done in person. The method that was chosen depended on the distance to the sawmill in question.

In total, these eight participants worked at four different sawmills. The distribution of participants per sawmill can be seen in figure 4.1.

[Figure: bar chart "Interviewee Sawmill Distribution"; x-axis: Sawmills (Sawmill 1 to Sawmill 4); y-axis: Number of interviewees (0 to 4).]

Figure 4.1: The sawmill distribution of the interviewees.

4.2.2 Important Data

During the interviews the participants mentioned some data that they considered especially important for their roles at the sawmill. All in all, 43 different kinds of data were mentioned a total of 109 times. Out of these 43 different data types, 12 were mentioned by 4 interviewees or more. These 12 data types can be seen in figure 4.2 with respect to the number of interviewees who mentioned each data type. The entire list of mentioned data types and the number of times they were mentioned can be found in the table in appendix F.


[Figure: bar chart "Most Frequently Mentioned Data Types"; y-axis: Number of interviewees (0 to 8); categories: Saw Yield, Logs/Time, Reasons for Interruptions, Logs In, Volume Yield, Feedback on Stop Codes, Production Pace, TAK, Data over Time, Sawed Volume per Minute, Downtime, Goal against Current Value.]

Figure 4.2: The most frequently mentioned data.

4.2.3 Functionality

Most of the interviewees mentioned that they would like to be able to access the data from outside of the sawmill. Six out of the eight interviewees said that they wanted this. Only one person preferred a local system. The main reason for being able to access the data from outside the sawmill seems to be that this would enable the interviewees to work while travelling and to aid the personnel at the sawmill if something unexpected happens.

"Yes, we are that much on the move [and] therefore they can take care of sickchildren from home and still do the job." (Original Swedish quote B.0.8)

— Interviewee 1

One feature that was mentioned by four of the interviewees was a speedometer that showed how the current speed of the production was holding up against the planned production speed. Out of these four, three mentioned that they thought this could be used to encourage, among others, the operators to perform better.

"(...)[if] the speedometer’s seesaw [is] on red, then it will become a carrot, or asmall carrot, to the workers to reach the green. That is how it is." (OriginalSwedish quote B.0.12)

— Interviewee 1

Another feature that most of the interviewees requested was the ability to export data from the system to other programs, the main one being Excel. Seven of the interviewees mentioned that they considered this a useful feature.


"(...)so that you at the same time can take it forth to, among others, Excel to beable to process it further and be able to calculate off of it." (Original Swedishquote B.0.13)

— Interviewee 5

A common feature that the interviewees mentioned was the ability to create their own reports with some kind of report tool. Five different interviewees mentioned this feature. Most of the systems used today give the workers the ability to create predefined reports; however, if they want to create something that is not predefined, there is no good solution. According to the interviewees, this problem seems to be dealt with in these three different ways depending on the system:

1. Pay the system developers to add a predefined report to the system. This method is relatively fast but expensive.

2. Tell the developer to add a predefined report in their next update. This method is slow but inexpensive.

3. Create your own report using programs like Excel. This method is fast and inexpensive but also unnecessarily complicated.

Here is how one of the interviewees describes how he thinks this kind of system should work:

"You would like some graphic report system where you with the help of, I do notknow how it would work but, different blocks, now I want to create a report, Iwant to have the stop time, I want to have this and build reports in an easy andgraphical way." (Original Swedish quote B.0.14)

— Interviewee 6

During the interviews there was a high focus on graphs over time; seven interviewees mentioned that this was something they were interested in. Six out of these seven also mentioned that they wanted the ability to interact with the graphs, for example to look more closely at specific data or at a special time period. Figure 4.3 shows the different visualisation types that were mentioned during the interviews and how many of the interviewees mentioned each type.


[Figure: bar chart "Demand for Visualisation Types"; x-axis: Visualisation Type (Line graphs, Speedometers, Bar charts, Pivot tables); y-axis: Number of interviewees (0 to 6).]

Figure 4.3: The demand for visualisation types.

Some of the interviewees, three of them, would like to have the ability to get a comprehensive picture of the production at the current moment, so that they could just log in and check that everything is up and running and that the production is going according to plan.

"(...) current [information], like what is happening now. I want to go in and havea look at any time and see ’how is it going now’, that is something I think is reallyimportant to me." (Original Swedish quote B.0.18)

— Interviewee 4

4.2.4 Design Features

Three of the interviewees pointed out that the system must be simple and user-friendly. One of them considered this a very important aspect and mentioned that this was something that had been a problem in the past, since a too complicated system can end up with the users not wanting to use it.

"But make it simple, because if you do not make it simple you will lose trust andthe enthusiasm for working with it." (Original Swedish quote B.0.9)

— Interviewee 2

Out of the eight interviewees, four mentioned a system called QlikView. This is a system used to visualise dynamic data sets, and it seems to be used at some sawmills. The attitude towards QlikView seems to be generally positive, and one of the interviewees described why he thought it was good with the following quote:

"I would probably look quite a lot to [QlikView when developing a new system],I have worked in it earlier and thought it was really good. It was flexible andfast and you can crunch the numbers depending on how you want them at themoment so to speak." (Original Swedish quote B.0.15)


— Interviewee 7

One thing that four of the interviewees mentioned, and that they seemed to consider rather important, was the ability to adjust the system to their needs. There are often a lot of data in these kinds of systems, and it can be tedious to sort through it all if the user does not have the ability to sort out irrelevant data.

"It bothers me [when there are] a great lot of global settings, [when] I do not man-age my own view myself, that is something I really want to do. I want to see thethings I want to see, and be able to remove what I do not need, because there is alot of that." (Original Swedish quote B.0.17)

— Interviewee 8

4.2.5 Security

Most of the interviewees do not express any concern about the information being reachable from outside the sawmill. In most cases this already seems to be the case. Only one of the interviewees expresses a negative opinion about this idea, fearing that it would let possible 'bugs' in from the outside.

"(...) I probably want like a pretty [isolated connection to the server], so you canmake sure that you do not acquire any bugs from the outside." (Original Swedishquote B.0.10)

— Interviewee 3

The interviewees, in general, seem to be accustomed to using usernames and passwords to log in to their current systems. This authentication method seems to be the standard in these kinds of systems.

"Axxos has one login, we have one login to our internal intranet, we have onelogin to sawinfo, we have an one login to DVH" (Original Swedish quote B.0.11)

— Interviewee 4

Some of the interviewees mention different user privileges as a good thing to implement, not only as a precaution against external threats but also as a precaution against accidental misuse on the user's behalf. One interviewee gives an example of when this has happened in the past:

"(...) I came in one morning and there were no schedules left. Every [firm name]-sawmills schedules were gone, then there was some guy who had a too high priv-ilege level and thought that he didn’t need to see any more than his own. Hehehe.So he cleared it." (Original Swedish quote B.0.16)

— Interviewee 8

4.2.6 Platforms

When asked which platforms they found relevant for this kind of system, the interviewees mentioned a couple of different platforms. First and foremost, the PC was considered important. Other platforms such as smartphones and tablets were also mentioned. Four of the interviewees also mentioned that they wanted the system to be accessible via other platforms such as control panels and TV screens. The distribution between the requested platforms can be seen in figure 4.4.


[Figure: bar chart "Demand of Platform Support"; x-axis: Platforms (Smart Phone, Tablet, PC, Others); y-axis: Number of interviewees (0 to 8).]

Figure 4.4: The demand for platform support.

4.3 Prototype

This section will present the prototypes and the results of the data gathered from the tests. These prototypes were developed to satisfy the needs identified in the previous section.

4.3.1 Sketching

We generated three different sketches each for the first iteration. Each sketch was unique and contained parts that inspired good design ideas. The result from the sketches was multiple different solutions to similar problems. The best ideas were combined and used to create the lo-fi prototype.

4.3.2 Lo-fi Prototypes

The prototype was created with some refinements of the sketches. The result from the first test iteration gave insight into what caused usability issues in the design. We came up with solutions that we found fitting based on the observations. The same procedure followed after the second test iteration, where we refined the design before making an alpha prototype.

The following two images show the lo-fi prototype from the second iteration. The first image, in figure 4.5, illustrates the view with data from the saw house station.


Figure 4.5: The saw house view in the lo-fi prototype used in test iteration two.

In the image above one can see that the prototype is built with a menu on the left-hand side of the screen and a navigation bar at the top of the screen. These two menus stay in the same places in all the different submenus the user can visit, to maintain consistency.

The top bar contains a settings button and a logout button in addition to the user's username. The menu to the left contains all the different submenus. One half of these are submenus related to exporting data, such as a reporting tool and an export tool. The other half of the menu relates to visualising data from different parts of the sawmill. When one of these menu options is selected, the middle of the page will show graphs related to this menu. In the image in figure 4.5, graphs with data from the saw house can be seen.

The next image, figure 4.6, shows the submenu "export data".


Figure 4.6: The export data view in the lo-fi prototype used in test iteration one.

This image illustrates how the export data submenu is supposed to work. In this menu, the user is supposed to be able to select different databases and tables and then export this data from a selected time period. The system shall be able to export the data to Excel, PDF and txt formats.
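As an illustration of how such an export could be realised in the prototype's Django backend, a minimal sketch of a plain-text export for a selected time period is shown below. The model and field names (ProductionRecord, timestamp, sawn_volume) are hypothetical placeholders, and the export formats actually requested (Excel, PDF, txt) would need dedicated handling.

    # Hedged sketch of a plain-text (CSV) export endpoint for a selected time period.
    import csv
    from datetime import datetime

    from django.http import HttpResponse
    from myapp.models import ProductionRecord  # hypothetical app and model

    def export_csv(request):
        # Time period selected in the export view (ISO date strings assumed).
        start = datetime.fromisoformat(request.GET["start"])
        end = datetime.fromisoformat(request.GET["end"])
        response = HttpResponse(content_type="text/csv")
        response["Content-Disposition"] = 'attachment; filename="export.csv"'
        writer = csv.writer(response)
        writer.writerow(["timestamp", "sawn_volume"])
        for record in ProductionRecord.objects.filter(timestamp__range=(start, end)):
            writer.writerow([record.timestamp, record.sawn_volume])
        return response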

Figure 4.7 below shows an example of how the heuristic evaluation described in section 2.10.2 has been used in this thesis. In this example, one can see some of the pop-ups in paper form used to provide feedback to the user when doing something with the product. The language is simple and consistent with the user's language preferences, and some of the pop-ups are warnings that will prevent users from committing errors. Appendix E contains more images of the lo-fi prototype.


Figure 4.7: How heuristic evaluation is used to provide feedback.

4.3.3 Alpha Prototype

After two iterations with the lo-fi prototype, the development of an alpha prototype started. This prototype was based on the second iteration of the lo-fi prototypes. However, some changes were made due to some identified issues.

The alpha prototype was developed with React.js in the frontend and Django in the backend. To show graphs on the page, React-D3 was used. The end result of the alpha prototype is illustrated in the two pictures below. The first picture corresponds to the image of the lo-fi prototype in figure 4.5.


Figure 4.8: The timber sorting view in the alpha prototype.

Figure 4.8 illustrates the transformation from a lo-fi prototype to an alpha prototype. One of the changes from the lo-fi prototype is that the three graphs at the top of the page in the lo-fi prototype were removed. This was done due to the fact that the test persons expressed that this was a feature they did not want.

Another change is that the title of the page was made larger in the alpha prototype, because some of the testers expressed some confusion about which page they were on at some points.

Figure 4.9: The export data view in the alpha prototype.

In the image above, figure 4.9, the export data view of the alpha prototype is shown. This view does not differ very much from the corresponding view in the lo-fi prototype, which was displayed in figure 4.6. Only some minor graphical changes were made. More images from the alpha prototype can be found in appendix E.


4.4 Data Collected From Lo-fi prototypes

The following figures and tables present data gained from observations and measurements during the tests of the lo-fi prototypes.

4.4.1 Task success per minute

Table 4.1 contains the data gathered from both test iterations of the two different lo-fi prototypes. The mean time is the average time in minutes it took for the users to perform each task. Task success is the metric describing whether the users managed to complete the tasks. Efficiency is the task success rate divided by the mean time for each task.

                   Iteration 1                          Iteration 2
Tasks     Meantime  Task success  Efficiency    Meantime  Task success  Efficiency
Task 1    3.4       1             0.29          3.00      1             0.33
Task 2    2.62      1             0.38          1.54      1             0.65
Task 3    2.72      0.8           0.29          1.46      1             0.68
Task 4    3.22      1             0.31          2.98      1             0.34
Task 5    2.24      1             0.45          1.92      1             0.52
Task 6    3.34      1             0.30          2.96      1             0.34
Average   2.92      0.97          0.34          2.31      1             0.47

Table 4.1: Data gathered from both lo-fi prototype test iterations.

Figure 4.10 shows how the efficiency for each task differs between the two iterations. The average efficiency for each task is presented for each iteration. A higher value in the figure means better efficiency.

[Figure: bar chart "Average task success per minute"; x-axis: Tasks (Task 1 to Task 6); y-axis: Efficiency (task/min, 0 to 0.8); series: Iteration 1, Iteration 2.]

Figure 4.10: Efficiency for each task for both test iterations of the lo-fi prototypes.


4.4.2 Unique-usability issues

Table 4.2 shows all the unique-usability issues we found in the first test iteration by observing the users while they were performing the tasks. The frequency of each issue is used to decide how critical the issue is. The higher the frequency, the more important it is to solve. The table also shows on which task the issue occurred.

However, the calculated critical level was not the only thing taken into account when deciding which issues to solve. An issue with a low critical level could be more important than an issue with a high critical level, given that the consequences of the first issue are more dire than the consequences of the second issue.

Issue  Title             Comment                                                                          Frequency  Task #  Critical
1      Navigate issue    User tried to export data from a sub menu other than "Export Data".              0.6        1       High
2      Confusion issue   User got confused and started clicking buttons at random.                        0.4        1       Medium
3      Button issue      User pressed the wrong sub menu because of a misleading name.                    0.2        1       Low
4      Button issue      User mixed up the sub menu names "TAK" and Sawhouse.                             0.2        2       Low
5      Button issue      The time periods confuse the user, e.g. is today 24 h or from 12 am?             0.2        2       Low
6      Efficiency issue  User adds new graph to home page instead of going to the relevant sub menu.      0.2        2       Low
7      Navigate issue    User tried to hide view in settings instead of in the views.                     0.6        3       High
8      Confusion issue   User got confused and started clicking buttons at random.                        0.4        3       Medium
9      Navigate issue    User tries to hide sub menu in the sub menu itself.                              0.2        3       Low
10     Efficiency issue  User changes the time span instead of interacting with the graphs where the
                         latter would be possible.                                                        0.4        4       Medium
11     Button issue      User gets confused by the ability to choose "other" in the time span menu.       0.2        4       Low
12     Confusion issue   The user does not understand how to compare values.                              0.2        4       Low
13     Efficiency issue  User adds 2 graphs to the home page to compare these to each other.              0.2        4       Low
14     Button issue      The "add" button confused the user. Inconsistent.                                0.4        5       Medium
15     Button issue      The "next" button confused the user. Inconsistent.                               0.2        5       Low
16     Navigate issue    User navigated to the sawhouse sub menu instead of the report tool.              0.2        5       Low
17     Button issue      The user got confused by the ability to choose the y-axis interval.              0.2        5       Low
18     Button issue      The time periods confuse the user, e.g. is today 24 h or from 12 am?             0.2        6       Low
19     Button issue      User got confused regarding the buttons named "Change data" and "Change graph".  0.4        6       Medium
20     Navigate issue    User missed a "back" button when changing graphs.                                0.2        6       Low

Table 4.2: Identified unique-usability issues for test iteration one.


For each issue, a fitting solution was found. Table 4.3 shows the proposed solutions. As can be seen, some solutions solve more than one of the problems. Most of the solutions speak for themselves and how they would solve the issues. The title of the issue in table 4.2 is used for categorising the identified issues.

Issue  Proposed solution
1      Add export button to graphs.
2      Solved with the solution for issue 1 combined with a restructure of the menu order.
3      Restructure of menu buttons.
4      Restructure of menu buttons and describe what TAK is before testing.
5      Change to "last 24 h", "last 7 days" and so on.
6      Describe the pin button before testing.
7      Describe the view button before testing.
8      Same as number 7 and clearer sketches.
9      Same as number 7.
10     Tell beforehand that users can interact with graphs.
11     Change to "Other time span".
12     Same as 10 and clarify the task description.
13     Issue that disappears with learnability.
14     Change to "Add to report". Hide the "next" button after the step is done.
15     Same as 14.
16     Same as 3.
17     Not a necessary functionality; remove it.
18     Same as 5.
19     Change to "Change visualisation method".
20     Add a back button.

Table 4.3: Solutions for unique-usability issues for test iteration one.

An example of a usability issue that is rated high is issue 1. The users got confused and did not know how to continue to approach their task to export data. The solution we came up with was to add an export button to the graphs to fulfil the export functionality.

Issues 2, 3, 4 and 16 regarded the menu and its buttons. Our proposed solutions were to restructure the menu buttons and their names. The changes that were made can be seen in figure 4.11 below.

Issues 14, 15 and 17 regarded the report tool. Issues 14 and 15 were confusions related to button names, which made it unclear how to proceed. Also, some functionality, for example issue 17, was not necessary, so it was removed. The changes can be seen in figure 4.12. It is also possible to see that the time span was changed, which was mentioned in issues 5, 11 and 18.


(a) Menu before fixing issues from test iteration one. (b) Menu after fixing issues from test iteration one.

Figure 4.11: Changes in the paper prototype of the menu from before and after test iteration one.


(a) Report tool before fixing issues from test iteration one.

(b) Report tool after fixing issues from test iteration one.

(c) Report tool after fixing issue 14.

Figure 4.12: Changes in the paper prototype of the report tool from before and after test iteration one.

The rest of the issues from iteration one were fixed by applying the proposed solutions. The second test iteration of the refined lo-fi prototype followed the same approach. As seen in table 4.4, we have some new issues from this iteration. The critical level of each issue is calculated in the same way as previously.


We came up with solutions for the detected issues in the refined lo-fi prototype, as can be seen in table 4.5.

Issue  Title             Comment                                                                          Frequency  Task #  Critical
1      Button issue      User got confused by the sub menu named "stop times".                            0.2        1       Low
2      Confusion issue   User didn't know how to use the export view.                                     0.4        1       Medium
3      Navigate issue    The user tried to "enter" the graph by clicking it.                              0.2        2       Low
4      Navigate issue    User wanted a view button at the relevant sub menu button.                       0.2        3       Low
5      Confusion issue   User didn't know what sub menu he was in.                                        0.4        3       Medium
6      Confusion issue   User mixed up "get information" with "export information".                       0.2        4       Low
7      Navigate issue    User starts creating a report and then considers changing to the Sawhouse view.  0.2        5       Low
8      Efficiency issue  User configures the graph to match the requirements, then forgets to use the pin. 0.2       6       Low
9      Button issue      The user didn't understand when to use the "plus" button.                        0.4        6       Medium
10     Design issue      User got confused by the "Change time span" sub menu when trying to edit a
                         graph on the home page.                                                          0.2        6       Low

Table 4.4: Identified unique-usability issues for test iteration two.

Issue  Proposed solution
1      Remove the sub menu.
2      Add describing texts and more visual buttons.
3      Clearer titles on graphs.
4      This feature will not be implemented.
5      Clearer titles for the pages.
6      Learnability will prevent this.
7      Learnability will prevent this.
8      No changes will be made in regard to this.
9      Clarify the plus button with the design of the button.
10     Recreate the time span sub menu.

Table 4.5: Solutions for unique-usability issues for test iteration two.

These proposed solutions were applied to the alpha prototype. Issues 4 and 8 were issues we did not find necessary to address, since doing so would lead to more confusion. One delimitation we made for the alpha prototype was to not implement the "plus" button, which can be seen in appendix E.1.

Issue 1 regarded the menu once again. There were confusions with the submenus "stop times" and "TAK". Since the information in these submenus also existed in the other submenus, we chose to remove them. Another delimitation for the alpha prototype was to not implement the functionality for the report tool and pre-defined reports. The new alpha menu can be seen in figure 4.13 below, where it is placed beside the menu from the lo-fi prototype in test iteration two.


(a) Menu before fixing issues from test iteration two. (b) Menu after fixing issues from test iteration two.

Figure 4.13: Changes in the prototype of the menu from before and after test iteration two.

Issues 2, 3 and 5 regarded how well titles and buttons were shown. These were solved by making them more visual in the alpha prototype. The fact that the alpha prototype is computer generated made it easier for the users to interpret texts and symbols. Issue 10, regarding the time span, was solved as can be seen in figure 4.14 below.


(a) Time span before fixing issues from test iteration two.

(b) Time span after fixing issues from test iteration two.

Figure 4.14: Changes in the prototype of the time span from before and after test iteration two.

The rest of the issues were solved by applying the proposed solutions. The result from both iterations can be seen in figure 4.15. The graph illustrates the distribution of how critical the unique-usability issues are over each iteration. For test iteration 1, we had a distribution of 13 low, 5 medium and 2 high on the critical scale. For test iteration 2, we had a distribution of 7 low, 3 medium and none on high.

[Figure: bar chart "Unique-usability issues"; x-axis: Design iteration (Iteration 1, Iteration 2); y-axis: Unique-usability issues (0 to 20); series: Low, Medium, High.]

Figure 4.15: Distribution of how critical the unique-usability issues are over each test iteration.


4.4.3 Self-Reported Metric

In table 4.6 the results from the SUS tests performed with the users are presented. A reminder is that half of the questions are worded positively and the other half negatively; this is done to minimise the risk of the result being biased. An SUS score over 68 is considered above average.

                        Iteration 1                                Iteration 2
Question:    User 1  User 2  User 3  User 4  User 5     User 6  User 7  User 8  User 9  User 10
1            4       5       4       4       4          5       5       5       3       5
2            2       1       2       2       1          1       1       1       2       1
3            4       4       4       4       5          4       4       5       3       4
4            1       2       2       3       1          2       1       1       2       2
5            3       4       4       4       4          4       4       5       4       5
6            2       2       1       2       2          1       1       1       2       1
7            4       5       4       4       5          5       5       5       5       5
8            1       1       1       2       1          1       1       1       1       1
9            2       5       5       3       4          5       4       5       3       3
10           2       3       1       3       2          1       1       1       2       1
SUS-Score:   72.5    85      85      67.5    87.5       92.5    92.5    100     72.5    90
Average:     79.5                                        89.5

Table 4.6: Results from the SUS form that the users filled out after the tests.

4.5 Data Collected From Alpha Prototype

The following figures and tables present data gained from observations and measurements during the test of the alpha prototype, together with data from the lo-fi prototypes. This was the third and last iteration.

4.5.1 Task success per minute

In this iteration, only five out of the six tasks performed in previous iterations were carried out. In our alpha prototype we did not implement the functionality that task five, from appendix C, was testing. Therefore, we performed tasks 1 to 4 as well as task 6. As for the results in 4.4.1, table 4.7 presents the average time, task success and efficiency for each task in this iteration. The mean time is, as described previously, the average time over all users.

                Iteration 3
Tasks     Meantime  Task success  Efficiency
Task #1   1.3       0.8           0.62
Task #2   0.74      1             1.35
Task #3   0.52      1             1.92
Task #4   0.56      1             1.79
Task #6   0.58      1             1.72
Average   0.78      0.96          1.48

Table 4.7: Data gathered from test iteration three using the alpha prototype.

Figure 4.16 shows the efficiency for iteration three together with the efficiency for the lo-fi prototypes in iterations one and two. As mentioned above, task five was not performed and is therefore not in the figure.


[Bar chart "Average task success per minute": x-axis Tasks (Task 1-4 and 6), y-axis Efficiency (task/min), bars for Iteration 1, 2 and 3.]

Figure 4.16: Efficiency for each task for all test iterations.

4.5.2 Unique-usability issues

Some unique-usability issues were found in this last iteration as well. Most of the tasks were performed almost without any hesitation. Table 4.8 shows the few issues that occurred. The proposed solutions for these issues can be seen in table 4.9. None of these were seen as severe and they only occurred with a low frequency. Since this was the last iteration for this thesis, the proposed solutions were not implemented due to time restrictions.

Issue   Title             Comment                                                                   Frequency   Task #   Critical
1       Button issue      User misunderstood the button Export Data.                                0.2         1        Low
2       Navigate issue    User wanted to export data from the Sawhouse view with a specific time.   0.2         1        Low
3       Confusion issue   User wanted tooltips.                                                     0.2         4        Low
4       Navigate issue    User thought the view menu was a too obvious choice.                      0.2         4        Low

Table 4.8: Identified unique-usability issues for test iteration three.

Issue   Proposed solution
1       Learnability will prevent this.
2       Implement the "other time span" from the iteration 2 solution.
3       Add tooltips to some of the functionalities.
4       This will probably not be a problem in the future.

Table 4.9: Solutions for unique-usability issues for test iteration three.

As in figure 4.15, we have compared the unique-usability issue results, now including iteration three. In figure 4.17 below we can see the same distribution as before. For iteration three we only had four low issues and none at the other critical levels.


[Bar chart: number of unique-usability issues per design iteration (Iteration 1-3), grouped by criticality (Low, Medium, High).]

Figure 4.17: Distribution of how critical the unique-usability issues are over each test iteration, without including issues regarding task five.

4.5.3 Self-Reported Metric

SUS tests were performed for this iteration as well and the results can be found in table 4.10 below. You can also find the average results from the SUS tests for each test iteration in the bar graph in figure 4.18.

             Iteration 3
Question:    User 1   User 2   User 3   User 4   User 5
1            5        5        4        5        5
2            1        1        1        2        1
3            4        5        5        4        4
4            1        1        2        1        2
5            5        5        4        2        5
6            1        1        2        5        2
7            5        5        4        5        1
8            1        1        1        1        1
9            5        5        4        5        5
10           1        1        2        1        2
SUS-Score:   97.5     100      82.5     77.5     80
Average:     87.5

Table 4.10: Results from the SUS form that the users filled in after the test.

As can be seen in figure 4.18, the SUS score for iteration three stayed about the same as in iteration two. It drops slightly, by two points, to 87.5, which is still a very good SUS score.


[Bar chart "Average SUS scores": x-axis Design iteration, y-axis SUS score; Iteration 1: 79.5, Iteration 2: 89.5, Iteration 3: 87.5.]

Figure 4.18: The average SUS score for each iteration.


5 Discussion

In this part of the thesis, the results from the previous chapter will be discussed and evaluated, both in the sense of what these results mean and whether they can be considered reliable. It will also discuss some of the ethical aspects that this thesis has taken into account and what effects it could have in a wider perspective.

5.1 Results

The following section will discuss and analyse the outcome from the result chapter.

5.1.1 Prestudy and Interviews

During the prestudy and the interviews, some contradictions of sorts were identified. These, along with some other interesting observations, will be discussed in the following subsections.

Developing a Tool or Static Application?

During the prestudy, one of the interviewees expressed a certain scepticism towards developing a tool, since he was worried that the sawmills would not be ready to pay for the service. He advocated a system where the sawmills bought the initial system and then paid RemaSawco to develop new reports. During the interviews with the workers at sawmills it appeared that this is a common solution in systems today.

However, when discussing this topic with the workers at the sawmills, this is something that they are not completely satisfied with. Many of them mention the shortcomings of this approach and would really prefer a tool.

A company that could provide a tool like this might be able to use it to their advantage when competing in this area.


Data Confidentiality

During the prestudy the interviewees pointed out how important it would be to keep this kind of data confidential. One of them also mentioned that many of the people we talked with would be very worried when we told them that the data would be available from outside the sawmill.

Yet this is not the picture one would get from the interviews with the employees at the sawmills. They also considered this data confidential, but only one of the interviewees expressed any concern about this data being accessible from outside the sawmill. In reality, most of these sawmills already had this data accessible over the internet in one way or another.

Adaptable View

One thing that was mentioned during the interviews in the prestudy was the ability for the end users to adjust their views according to their needs. One interviewee pointed out that this was necessary, since the system would contain large amounts of different data and what data the user finds interesting depends on his or her role at the sawmill.

This theory was confirmed during the final interviews with the personnel at the sawmills. Some of the interviewees expressed dissatisfaction with this in the current systems at the sawmills today.

User Friendly

Another thing mentioned both in the prestudy and during the sawmill interviews was the focus on a system that is easy to use. During the prestudy, one of the interviewees pointed this out, and during the sawmill interviews, three persons pointed this out. However, even though some of the interviewees did not point this out, they did express negative feelings towards the interaction with the current systems.

So even though many of the interviewees did not explicitly express that the system should be user friendly, one could still argue that this is important. As one of the interviewees pointed out, if it is not easy to interact with, the users will lose trust in and enthusiasm for working with it, as stated in quote B.0.9.

Roles at Sawmills

During the prestudy, one of the interviewees expressed an idea regarding the roles at sawmills: that the roles could probably be divided into two different kinds of roles that are interested in different kinds of data.

During the interviews with the sawmill workers the picture was a bit different, though. The data types mentioned by each interviewee often differed, but with some data types recurring.

Desired Platforms

Both during the prestudy and the interviews at the sawmills, the opinion was that the most important platform to support was the PC. Everyone who was interviewed, both in the prestudy and in the real study, mentioned the PC as a desired platform. There was some interest in developing the system for other platforms as well.


Important Data

Since the interviews in this thesis are of a qualitative rather than a quantitative nature, the results presented in figure 4.2 cannot be directly translated into a truth about how important different data types are to the workers at sawmills. However, one can argue that they give some indication of a data type's significance. In the case of stop times, for example, which is mentioned by all interviewees at the sawmills, one could probably argue that it is more important than, for example, billed volume per time unit, which was mentioned only once.

Yet if one wants to determine this in a more reliable way, a more quantitative study should be performed in this area.

5.1.2 Iterations of Lo-fi Prototype

As seen in the results chapter, we bring forth the data gained from the two test iterations. What can be seen through the two iterations is an improved result in all the metrics used during the tests. This could be the result of the improvements made based on the solutions we came up with after analysing the data gathered in the first test iteration.

It could also be the result of the users in the second test iteration being better suited to perform the same tasks than the users in the first test iteration. However, we considered it most reliable if the data for the metrics were collected on the same tasks for both iterations, combined with different test users.

Performing a third iteration of the lo-fi prototypes before starting to implement the results in an alpha prototype might have improved the results even further. However, that is always a possibility, and it is up to the developers to decide when the lo-fi prototype is generating a good enough result to continue in the development process.

5.1.3 Iteration of Alpha Prototype

The results from the last test iteration using the alpha prototype follow the same pattern, where improvements were made both for task success per time unit and unique-usability issues. The SUS score was almost the same as in test iteration two but a little lower.

We did expect the results to differ from the paper prototype since it takes time to change "views" when using paper. We will discuss this later on in section 5.1.4. But overall we thought that the suggested improvements from test iteration two did work. This can be seen in figure 4.17 in section 4.5.2, where the number of unique-usability issues is reduced. We will discuss this more in section 5.1.5.

5.1.4 Efficiency per Task

The tasks used for the lo-fi prototypes were six separate tasks that covered most of the different functionality that was identified as relevant to the users during the interviews. We noticed that these tasks took some time to perform. It was not that all the test users got stuck, just that it took some time to perform the different steps in each task. Tasks that contained similar steps became faster to perform for each such task the user completed. An example of this is tasks 2 and 5 in figure 4.10, where one can see that the efficiency is greater in task 5. This is probably because the tasks followed a similar procedure in their navigation.

There will always be a learning curve for a user when encountering new products. Learning to navigate and use the product occurs over time as the user's experience increases. The best scenario is when a user can use previous experience to understand how to solve new or similar tasks. Tasks 1, 4 and 6 are the three tasks that we considered differed the most in terms of what was asked to be done. These tasks were more oriented towards creating and using tools, which involved a couple of steps in the task procedure. For the alpha prototype the same tasks were used, excluding task 5. The same pattern of improvements followed in this last test iteration as well. The times here became a lot shorter compared to the other iterations, but the same pattern between the similar tasks could be observed here as well.

As both tables 4.1 and 4.7 show, almost every test user completed all the tasks. Only two failures occurred in total. The first failure was when a test user assumed that he had performed the task but in fact had not. The other failure was when a test user gave up on a task. Since almost every user had a full success rate for each task, an alternative way to present how well each task was performed could simply be to look at the meantime for each task.

[Bar chart "Meantime for each task": x-axis Tasks (Task 1-4 and 6), y-axis Time in minutes, bars for Iteration 1, 2 and 3.]

Figure 5.1: Meantime for each task for all users, presented in minutes, for all test iterations.

As figure 5.1 shows, the time to perform the same tasks in test iteration two is significantly shorter than in test iteration one. Besides the improvements made since the first iteration, one contributing factor may be that the moderator who acts as the computer during the paper test gets more secure and comfortable with the prototype through the iterations. This could be one of the reasons the time result is better, since the moderator can change "views" slightly faster. For the third test iteration the time improved a lot compared to both test iterations one and two. This could be the result of several contributing factors. The main reason, we suspect, is that we removed the submenus mentioned in figure 4.12 and that the system could change views faster in the alpha prototype. This solution removed the confusion about where to go that existed in the previous test iterations. Another reason could be that the users got more comfortable and dared to press buttons when they had an actual mouse. This also meant that the users did not need to wait for us as moderators to change "views". The testers for the last test iteration were participants from RemaSawco; 4 out of the 5 testers had already participated in iteration one, 8 weeks earlier. We still consider it valid to compare these results because of the time gap and since the system had seen some major updates since the first test iteration.


5.1.5 Unique-Usability Issues

For each design feature that changes, new issues may occur. An example of where this happened is the solution for issues number 5 and 11 in table 4.3. This solution became an issue in the next iteration of the user tests.

If we compare the number of unique-usability issues found in both iterations, as shown in figure 4.15, the second iteration has half as many as the first iteration. For the last iteration we only found four unique-usability issues. This was the goal: to see that we could reduce the total number of issues for the prototype with each iteration. Since the test users were asked to verbalise what they thought, the observer could follow their thoughts rather easily during the prototype tests.

Overall, complaints or confusion did not occur and the workflow went smoothly. Those that did exist mostly concerned the submenus, which were removed after test iteration two. This indicates that our recommended solutions were an improvement to the design.

5.1.6 SUS Tests

The self-reported metric that was chosen was the SUS form. The results from the SUS form turned out to give a score above average (68) in both iterations, except for one of the test users in the first iteration. Since this test was up to the users to do by themselves, they had the opportunity to think about their ratings without having us hanging over their shoulder. We were surprised to see how high our score level was already after the first iteration. Judging from the test users after each test, they seemed pleased, as the SUS form could confirm.

As shown in figure 4.18, we had overall satisfied test users. Since the test users in iteration two were real end users, compared to the test users in iterations one and three who were software developers, we would say that we understood the needs of the users and how they would prefer to have the data visualised.

In the third iteration the number of unique-usability issues was more than halved compared to the second iteration, but nonetheless the SUS score was almost the same. This might indicate that at some level the ability to raise the SUS score will decrease independently of how much better the system gets, since a perfect score is probably very hard to get. But there might also be some other explanation for this that our metrics could not aid us in discovering. Studies show that SUS scores follow a similar pattern over different studies [2], which peaks at a score around 80 with only a few scores over 90. This could be the reason that we did not gain an increased score for test iteration three but instead landed on a score similar to this "max" value.

5.2 Method

The following section will discuss the methods that we used in this thesis and whether they had any consequences for the results.

5.2.1 More Detailed Prestudy

In the beginning of this project, only a brief prestudy was done to get some quick insights into the sawmill industry before doing the real interviews. With the knowledge gathered from the prestudy, the research could continue to the real interviews with enough knowledge to ask the right questions.

If the prestudy had been done more thoroughly, the first part of the real study might have been able to take a more quantitative approach, especially in gathering information regarding what data types the workers at sawmills are interested in. This would have been a better help when answering the first research question in this thesis:

"What kind of data are employees in the sawmill industry interested in visualis-ing?"

This is because the result would then better reflect the views of the whole industry rather than those of these 8 interviewees.

5.2.2 Interviewee Sample

The interviewee sample used in this thesis was, as mentioned in the method chapter, chosen with the help of the snowball method. This helped in quickly gaining contacts in the industry who were willing to participate in the study. However, it also resulted in an interviewee sample with 4 out of 8 interviewees working at the same sawmill. One could argue that this might have resulted in a slightly biased interview result.

5.2.3 Paper Prototype

Paper prototypes were a simple next step after the sketches. They were fast to create and easy to represent functionality on. What we did not expect was the time needed to change the "view" of the prototype as the user interacted with it. We did some practice runs before bringing the prototype to the real test users. We practised a couple of times for the different tasks, did some "browsing" through the views in the prototype and tested buttons until the moderator felt comfortable working with it. Still, the time we measured was not something that the user alone could be responsible for. It could be compared to a website that has a small delay when changing pages before anything shows at all.

Another reason for choosing paper prototypes was that they do not make the user feel like they are interacting with a complete product. That would in turn result in relevant comments about design choices related to usability instead of general design choices such as background colours. We mentioned in the theory chapter, section 2.9.2, that lo-fi prototyping is possible to do on computers as well with certain software tools. These tools try to keep the feeling of an early-stage program but are still easy to create a prototype in.¹

This could have been an alternative method for gathering UX data that might have been more rapid. It would also probably have given a more precise time per task, since there would not have been the delay that the moderator unintentionally created. It would certainly have reduced the time to set up test environments, since we would not have needed to sort all the small, and big, paper components.

5.2.4 Binary Task Success

Task success per time unit was used in this thesis to get a wider understanding of how efficiently users interacted with and performed tasks on the product. Since only two test users failed one task each, the efficiency outcome depended almost only on the time of the task.

¹ Balsamiq Mockups is such a tool, with easy drag-and-drop functions as well as the paper lo-fi feeling.


Instead of measuring task success as binary, we could have used a three-level scale to measure the level of success. We mentioned in the method chapter that we wanted to take a step back and allow the test users to perform their tasks all by themselves. Some users still forgot this and asked us for help. We could see that they were more willing to guess after we reminded them that they were on their own. Some of the tasks were therefore sometimes on the limit of being judged a success. But since they performed their task within our accepted time span, we could not fail them.

An example of a three-level scale could be to judge each task as "success", "problematic" or "failure". This would have given more precise information regarding how each task was performed. Connections between the problematic tasks and unique-usability issues could perhaps be established, and we could have seen if similar problems existed in other tasks.
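As a hypothetical sketch of such a scale, the snippet below scores each observation as success, problematic or failure; the weights 1.0, 0.5 and 0.0 are our own example and were not used in this thesis.

```python
# Hypothetical three-level task success scale; the weights are illustrative only.
LEVELS = {"success": 1.0, "problematic": 0.5, "failure": 0.0}

def graded_success(observations):
    """Average task success when each observation is judged on three levels."""
    return sum(LEVELS[o] for o in observations) / len(observations)

# Five hypothetical test users on one task: three clear successes, one task
# solved only after hesitation, and one failure.
print(graded_success(["success", "success", "success", "problematic", "failure"]))  # 0.7
```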

5.2.5 Prototype Tester Sample

During the 3 iterations with the prototypes, a total of 15 tests were performed. The first paper prototype was tested using five persons working at RemaSawco. These testers did not have the same jobs as the intended end users, and one could argue that this would have a negative impact on the test result. However, the system itself did not require very much prior knowledge about the sawmill industry, since the system was quite simple. The most important things to know were pretty basic sawmill knowledge. The testers who worked at RemaSawco had this knowledge, and therefore this was not a very big problem. The main focus in the early tests was to find usability issues within the product, which can be found with any user.

The second iteration took place at a sawmill with users who worked there. These testers had a varying degree of work experience and different roles at the sawmill. What they had in common was that all of them used data extracted from the different sawmill systems in their day-to-day work.

The third iteration took place internally at RemaSawco once again, and 4 out of the 5 testers had already participated in iteration one. As mentioned earlier, this was considered acceptable since the time between the first iteration and the last iteration was about eight weeks and the system had seen some major updates.

5.2.6 Replicability

To maintain a high standard in this thesis, replicability will be discussed. In the following subsections, reliability and metrics validity are addressed.

Reliability

The different elements in this study have been presented in the method chapter. The method chapter describes the methods that have been used to collect different kinds of data and how to evaluate them. It presents how the interviewees were selected and explains how the questions were formed. These questions can be found in the interview guide in appendix A. This, in combination with how the prototype was generated and evaluated, gives another researcher the tools for doing similar work. The exact same result cannot be guaranteed, because the solutions that were proposed are based on the researchers' own analysis, and that may influence the outcome.

In addition, as presented in section 5.2.2, the fact that the interviewee sample might be slightly biased could also have a negative impact on the reliability.


Metrics Validity

The metrics in the triangulation that we chose for evaluation have all played their part in the goal of understanding the users' experience with the product. Both efficiency in using the product and user satisfaction are things we stated early on as important. Task success per time unit gave us an understanding of how the users navigate and how efficient they are when performing tasks. It also gave us a perspective on how long it would take to perform a task. A modification would be, as mentioned above, to change the binary success rate to a measure of level of success instead.

One of the criteria for a task to be successful was that it was completed within a certain time frame. This, combined with the fact that we had few data points to measure on, gave us few extreme values. By removing some extreme values, we should be able to see how sensitive our method was. To evaluate our task success per time unit metric, we chose to calculate how statistically significant these results were with the use of one-way ANOVA tests (ANOVA stands for analysis of variance). We have used model I ANOVA to find a pattern among the mean values between the groups and find relations in how the data is structured [13].

We used the proposed spreadsheet in 'Handbook of Biological Statistics' by J. H. McDonald [13] for calculating the one-way ANOVA. In our case, we compare the 3 design iterations with 5 tasks each. The calculated P-value was 0.000181, which is lower than 0.05. A P-value lower than 0.05 is considered statistically significant, which means that the result is unlikely to have occurred just by chance. A plot of the different mean values for the iterations can be seen in figure 5.2 below.

Figure 5.2: Plot of the test iteration means with 95% confidence intervals.
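For replication, the same one-way ANOVA can be computed programmatically rather than in a spreadsheet. The sketch below uses scipy.stats.f_oneway over the per-task mean times of the three iterations; only the iteration three values are taken from table 4.7, while the iteration one and two lists are hypothetical placeholders standing in for the corresponding values in tables 4.1 and 4.4.

```python
from scipy.stats import f_oneway

iteration_1 = [2.5, 1.9, 1.6, 1.7, 1.8]      # placeholder values, not from the thesis
iteration_2 = [2.2, 1.8, 1.5, 1.6, 1.7]      # placeholder values, not from the thesis
iteration_3 = [1.3, 0.74, 0.52, 0.56, 0.58]  # task meantimes from table 4.7

# One-way ANOVA over the three design iterations (five tasks each).
f_statistic, p_value = f_oneway(iteration_1, iteration_2, iteration_3)
print(f"F = {f_statistic:.2f}, p = {p_value:.6f}")  # p < 0.05 indicates significance
```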


The P-value calculated involves all iterations. To determine whether two iterations were statistically significantly different from each other, we used the Tukey-Kramer method [13]. Tukey-Kramer calculates the minimum significant difference for each pair of iterations. In table 5.1 below one can see which pairs of design iterations were statistically significantly different, indicated with '*'. This was also calculated with the use of the spreadsheet mentioned above.

Grouping   Difference of Means   Minimum Significant Difference
1 - 2      0.144                 0.54218
1 - 3      1.156*                0.54218
2 - 3      1.012*                0.54218

Table 5.1: Difference between mean values for each pair of groups.
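A comparable pairwise comparison can be done with the Tukey HSD test in statsmodels (equivalent to the Tukey-Kramer procedure when group sizes differ); as above, only the iteration three times come from table 4.7 and the other lists are placeholders.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

iteration_1 = [2.5, 1.9, 1.6, 1.7, 1.8]      # placeholder values, not from the thesis
iteration_2 = [2.2, 1.8, 1.5, 1.6, 1.7]      # placeholder values, not from the thesis
iteration_3 = [1.3, 0.74, 0.52, 0.56, 0.58]  # task meantimes from table 4.7

times = np.concatenate([iteration_1, iteration_2, iteration_3])
groups = ["iteration 1"] * 5 + ["iteration 2"] * 5 + ["iteration 3"] * 5

# Prints, for each pair of iterations, the mean difference and whether it is
# significant at the 5% level, mirroring the structure of table 5.1.
print(pairwise_tukeyhsd(times, groups, alpha=0.05))
```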

Design iteration 3 is statistically significantly different from design iterations 1 and 2. The pair of design iterations 1 and 2 is not statistically significantly different. This means that the method as a whole shows statistical significance, but not between all iterations, where chance might have played a role.

The unique-usability issues method pointed in the direction of where problems occurred, so that we could come up with solutions to remove or reduce them. Removing design flaws that cause confusion increases the satisfaction of the users, where they otherwise could get annoyed. The self-reported metric is the metric most focused on the satisfaction of using the product, giving a picture of whether the users approve or disapprove of the product overall. All these combined have given an evaluation of how good the UX has been.

5.2.7 Source Criticism

The approach we used to gain access to the articles and literature used in the thesis has mainly been through Linköping University's library. For all the references that were chosen, a check on Google Scholar was done as well to ensure that these were well cited. We also tried to use references that were newly released and up to date.

When designing the qualitative interviews used in the beginning of the thesis, the main sources used were the book by Bell & Bryman [7] and the article by Runeson & Höst [23]. A book is, generally speaking, considered a weaker reference than a peer-reviewed published article. However, considering that the book by Bell & Bryman [7] has been cited over 8000 times and that Bryman alone has been cited over 67000 times, one could argue that it is still a pretty reliable source. Regarding Runeson & Höst [23], the article is peer reviewed and well cited (1373 times) and can therefore be considered a reliable source.

The main focus of this thesis has been on a UCD perspective and measuring UX. To understand how to evaluate the data gained from the UX tests, we needed to find suitable metrics. The references 'Handbook of Usability Testing, Second Edition: How to Plan, Design, and Conduct Effective Usability Tests' [22] and 'Measuring the User Experience: Collecting, Analysing, and Presenting Usability Metrics, Second Edition' [2] have been the main literature used regarding UX measurement. Both these books go into detail on different approaches to analysing the data gained from UX tests. We did not find any other articles that could explain in enough detail how to measure SUS, unique-usability issues, task success and the other metrics. One could argue that it would be more correct to use peer-reviewed articles for strengthening the claims and methods that were used.


5.3 Data Ethics

This section will discuss some of the ethical aspects related to this thesis. It will also discuss the results from this thesis in a wider context.

5.3.1 Users' Integrity

During the work with this thesis, a large number of people have participated in one way or another. In the prestudy two persons were interviewed. In the real study another eight were interviewed. In the paper prototype phase the prototype was evaluated in a total of fifteen test runs. During these phases we assured the participants that their names would not appear in the thesis. This was done to encourage the participants to speak freely.

Doing this meant that we had to deal with confidential information and that we had to be more careful about how we handled the data. One of the things that was done to ensure anonymity was, for example, to refer to the interviewees only as numbers or random letters.

This was important out of respect for the participants, since it was under these premises that they had participated.

5.3.2 Responsibility When Accidents Occur

Companies do not want to bring any harm to their customers. When dealing with technology that, as in this case, uses data describing how the customers perform, it is vital to keep it secure. This means that security measures and security thinking need to be present in the services that are provided.

Quoting J. Basart and M. Serra in their article 'Engineering Ethics Beyond Engineers Ethics' [4] when discussing the concept of responsibilities:

"The ethical question is not, “Who is guilty?” but “What has been my contributionto the outcome?”"

When working with data and trying to create a more efficient data management system, there are ethical questions that need to be considered. This thesis has the purpose of understanding how the users want their data presented. The results conclude that the users want to see and access data from anywhere they want. Since we chose a solution that is web based, this also means that with the correct credentials you might get access to data that you are not supposed to get access to. Different sawmills have different strategies for maximising the usage of the sawmill. Therefore it is important that competitors do not gain access to this data. The developer cannot prevent users from leaking information, but can take precautions to avoid known exploits and prevent attacks such as SQL injection.

J. Basart and M. Serra [4] argue that with so many complex technological systems surrounding us daily, a single individual cannot be the only one blamed when an accident occurs. There is often more than one factor causing an accident [3]. The important thing, as we see it, is to establish within what areas we as developers can be held directly responsible. There is a difference between releasing a product without considering the protection of information and being directly attacked by outsiders. For us as engineers to be able to contribute with solutions, we have to establish a relation between us and the users of our products. Regarding software security, as software engineers, we often cannot deny that we have some part in the failures when an accident occurs. Therefore, we need to have good communication with users, as these failures can often be based on more than one element.


5.3.3 Reduced Workforce

Many of the people interviewed in this study work with extracting data from databases and presenting this data in different reports and spreadsheets. D. Gotterbarn, K. Miller and S. Rogerson present, in the paper titled "Software Engineering Code of Ethics is Approved" [21], a list of principles that should be followed by software engineers. "In all these judgements concern for the health, safety, and welfare of the public is primary; that is, the public interest is central to this Code" is a quote from that paper, where the term public interest recurs more than once. Let us say that, if this procedure is made more efficient with the help of the system developed, perhaps in the future that will lead to some jobs being merged. That can result in some services not being needed anymore. In this case the workforce at the sawmills would shrink in size and people would be let go.

One of the principles, named 'Public', states that: "Software engineers shall act consistently with the public interest" [21]. Is it in the public interest that individuals will be let go because of our system? Consider that in this scenario the product's impact will allow the company to make their business more efficient. More efficiency could lead to increased welfare and therefore the resources to expand. If the company expands, chances are that new job opportunities are created, which takes us back to the public interest.

The goal of the product suggested in this thesis is to improve the work for those users who use and analyse data to improve the workflow: to identify downtime at machines and find patterns in order to prevent it. The tool helps them understand the data better. It cannot foresee potential future problems for new products that they may produce. Therefore there will still be a need for the people who analyse the data when problems occur.


6 Conclusion

This chapter will present the conclusions that have been drawn from the results and discussion in this thesis. It will also answer the research questions asked in the introduction.

6.1 Research Aim

The aim of this thesis has been to answer the two research questions asked in the introduction. These two questions were answered with the help of different methods. The first question was answered by conducting semi-structured interviews. The second question was answered by developing a visualisation system for data from the sawmill industry.

6.1.1 Interesting Data for Sawmill Employees

The first research question asked was:

• What kind of data are employees in the sawmill industry interested in visualising?

What data employees at sawmill companies are interested in differs a great deal depending on the role of the employee. A lot of different data types were mentioned during the semi-structured interviews, and even when the same data types were mentioned they could often originate from different parts of the sawmill.

The most frequently mentioned data types are illustrated in figure 4.2 and the other mentioned data types are listed in a table in Appendix F. This data should not be perceived as an absolute ranking of how important the data is, but rather as a list of the most important data. However, the number of mentions could probably give some indication of which data is more important and which data is less important. The data was categorised to make it easier to grasp and, in turn, to create requirements for the product.

In general, the most frequently mentioned data types related to stops in the sawmill process. All interviewees mentioned that they were interested in stop times and seven mentioned that they were interested in reasons for stops. Other popular data types often related to yield and volumes processed.


6.1.2 Developing a System With a Human Centred Design

The second research question was:

• Can you increase the usability, regarding the attributes efficiency and satisfaction, of a visualisation tool for the sawmill industry by measuring user experience?

From the results presented in this thesis, the conclusion drawn is that it is possible to increase the usability, regarding efficiency and satisfaction, of a visualisation tool for the sawmill industry by measuring the user experience. With the help of the triangulation of the three metrics "efficiency", "SUS" and "unique-usability issues", this study shows that the number of unique-usability issues is reduced in every iteration performed and that the efficiency is increased in every iteration. The SUS score increased from the first to the second iteration but stayed around the same value in the third iteration. The conclusion drawn in this thesis is that this has to do with the fact that a SUS score very seldom rises above 90 in any SUS evaluation.

Regarding the efficiency metric, the conclusion is that in future studies the task success part might be better used as a non-binary scale. This conclusion is drawn because some test users had problems solving some tasks but very few failed to complete them altogether. Therefore, the ones who did fail had a large impact on the statistics. If a non-binary scale is not going to be used, one should consider creating an even more well-defined way to tell whether a task is a failure. Also, the time for each task was greatly reduced between the paper prototypes and the alpha prototype. Comparing a prototype on the computer with a paper prototype will result in a time difference simply because of the different platforms.

From the combination of the metrics used in this thesis the final conclusion was that the system improved as the number of unique-usability issues decreased. This, in turn, results in a higher efficiency as each task becomes faster to perform since misleading steps are removed, and the system ends up with more satisfied users and a higher SUS score.

6.2 Future work

Since this thesis to some extent has been of an exploratory character, it would be relevant to try other metric combinations on this study in order to be able to compare the results. Then it would be possible to draw a more valid conclusion on how effective the combination of these methods is and what limitations they have.

It would also be interesting to perform a more quantitative study on what data types sawmill employees are interested in. Then one could establish the validity of how important the different data types are and make the results more generalisable.


Bibliography

[1] B. Behkamal, M. Kahani, M. K. Akbari. “Customizing ISO 9126 quality model for evaluation of B2B applications”. In: Information and Software Technology 51, 599-609 (2009).

[2] T. Tullis, B. Albert. Measuring the User Experience: Collecting, Analysing, and Presenting Usability Metrics, Second Edition. Waltham, USA: Elsevier Inc., 2013.

[3] J. Basart and M. Serra. “Engineering Ethics Beyond Engineers Ethics”. In: Sci Eng Ethics 19:179–187 (2013).

[4] M. Baskinger. “Pencils Before Pixels: A Primer In Hand-Generated Sketching”. In: Interactions 28-36 (2008).

[5] N. Bevan. “UX, usability and ISO standards”. In: The 26th Annual CHI Conference on Human Factors in Computing Systems 1-5 (2008).

[6] N. Bevan. “What is The Difference Between The Purpose of Usability and User Experience Evaluation Methods?” In: Interact 1-4 (2009).

[7] A. Bryman and E. Bell. Business Research Methods. Oxford, United Kingdom: Oxford University Press, 2003.

[8] W. N. Dilla, D. J. Janvrin, R. L. Raschke. “Making sense of complex data using interactive data visualization”. In: Communications of the AA (2014).

[9] M. Dastani. “The Role of Visual Perception in Data Visualization”. In: Communications of the ACM (2002).

[10] K. Goodwin. Designing for the Digital Age: How to Create Human-Centered Products and Services. Indianapolis, Indiana: Wiley Publishing, Inc., 2009.

[11] M. Hassenzahl and N. Tractinsky. “User experience - a research agenda”. In: Behaviour & Information Technology 25:2, 91-97 (2006).

[12] M. Walker, L. Takayama, J. A. Landay. “High-Fidelity or Low-Fidelity, Paper or Computer? Choosing Attributes when Testing Web Prototypes”. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46:5, 661-665 (2002).

[13] J. H. McDonald. Handbook of Biological Statistics, Second Edition. Baltimore, Maryland, U.S.A.: Sparky House Publishing, 2009.

[14] H. Miki. “User Experience Evaluation Framework for Human-Centered Design”. In: HIMI 2014, Part I, LNCS 8521, pp. 602-612 (2014).


[15] R. Molich and J. Nielsen. “Improving a Human-Computer Dialogue”. In: Communications of the ACM 33 (1990).

[16] J. Nielsen. “Finding usability problems through heuristic evaluation”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 373-380 (1992).

[17] J. Nielsen. Usability 101: Introduction to Usability. URL: https://www.nngroup.com/articles/usability-101-introduction-to-usability/.

[18] S. Hassler, N. Leroy, M. Baas, C. Boog, S. Loiseau, M. Favre, S. Pelayo. “Human-centered design strategy applied to the development of a system to support the entry of coded and structured medical information”. In: IRBM 34, 259–262 (2013).

[19] M. Rettig. “Prototyping for tiny fingers”. In: Communications of the ACM 37 (1994).

[20] C. Robson. Real World Research. West Sussex, United Kingdom: John Wiley & Sons, 2011.

[21] D. Gotterbarn, K. Miller, S. Rogerson. “Software Engineering Code of Ethics is Approved”. In: Communications of the ACM Vol. 42, No. 10 (1999).

[22] J. Rubin and D. Chisnell. Handbook of Usability Testing, Second Edition: How to Plan, Design, and Conduct Effective Usability Tests. Indianapolis, Indiana: Wiley Publishing, Inc., 2008.

[23] P. Runeson and M. Höst. “Guidelines for conducting and reporting case study research in software engineering”. In: Empirical Software Engineering 14 (2008).

[24] K. Väänänen-Vainio-Mattila, E. Karapanos, S. Kujala, V. Roto and A. Sinnelä. “UX Curve: A method for evaluating long-term user experience”. In: Interacting with Computers 23, 473-483 (2011).

[25] K. Yin. Case Study Research. London, United Kingdom: SAGE Inc., 2009.


A Interview Guide (Swedish)

Frågor vi vill ha svar på (inte frågor vi skall ställa):

• Vilken data skulle kunna underlätta personens arbete?

• Hur skulle denna data presenteras för att personen ska kunna ta till sig informationen?

• Vilka begränsningar finns hos slutanvändaren till vår applikation?

Frågor:

• Du kan väl börja med att berätta om din roll här på företaget? (ålder, vad är din roll i företaget, hur länge har du arbetat där du arbetar idag, har du arbetat med andra saker inom branschen)

• Hur ser en vanlig dag på jobbet ut för dig? (Vilka olika moment ingår?)

– Tror du att någon form datorsystem för att presentera information skulle kunnaunderlätta någon utav dessa uppgifter?

– Hur skulle du i så fall vilja ha denna information presenterad?

• Vilka olika datorprogram/system kommer du i kontakt med dagligen?

– Hur upplever du att interaktionen med dessa datorprogram/system fungerar?

∗ På vilket sätt då?

– Känner du till några andra datorprogram/system som används på sågverket?

• Vilken information/data är du intresserad av i din tjänst?

– Finns det någon annan information/data du skulle tycka vore intressant att ha?

• Hur skulle du vilja ha information/data presenterad för dig för att enkelt ta in den?

• Vilka plattformar skulle du vilja komma åt informationen/datan på?

• Vilka olika platser kan du befinna dig på när du vill komma åt denna typ av data?

• Ifall du skulle konstruera ett sånthär system, vilka funktionaliteter skulle du då ha medi det?


B Swedish Quotes

B.0.1

"(...) att man vill ha nån slags, eh, dashboard med olika information som man kanvälja vad man egenterligen vill titta på, vad man vill ha uppe, för de, för det finnsju så mycket information i den här databasen som är intressant på olika sätt, påolika ställen i sågen."

B.0.2

"Då är det nog smartaste att det ska kunnas sparas var dom ligger så att man,nästa gång plockar upp det så hamnar man på samma ställe till exempel, det ärtrevliga fineser."

B.0.3

"(...) det måste vara säkert då. För många kommer vara väldigt rädda när mansäger att det är öppet och man kan komma åt det i, både i, nere i Oslo och uppe iSokna eller sånna grejer."

B.0.4

"Ja, det är den. Den avslöjar hela deras produktion och hur bra det går och ävenatt det är mycket känsliga siffror om man skulle komma åt dom."

B.0.5

"(...)framförallt göra det enkelt att hantera, för gör man systemet, alltså, de viktigaär att göra några få grejer väldigt,väldigt bra och enkelt än att försöka göra allt."

B.0.6

"(...)det finns alltid, till exempel någon produktionschef eller kvalitetschef, på , förvarje plats. Jag tror det är han , och sen resten."


B.0.7

"(...)det är stationära datorer eller laptops som står.(...)Det är tyvärr inte riktigtoptimal miljö me att gå runt för oftast är det ju, kanske kallt ute och man måste hahandskar på sig, eller så är man skitig, speciellt för operatörer och så då. Då skulleman ju kanske ha en form av industiplatta, däremot på kontoret kan jag absoluttänka mig att det är bra att ha för dom vid möten och sånt."

B.0.8

"Ja, de, vi är så pass, mycket på resande fot o att man, m, vi kan ju, alltså de ja,kan ju vabba hemifrån o göra jobbet ändå."

B.0.9

"Men gör det enkelt, för, för gör man, har man in.., gör man inte enkelt så tapparman, tilltron o, o entusiasmen för jobba med det."

B.0.10

"(...)jag vill nog gärna liksom ha ett ganska vattentät skott mot maskin va. så attman liksom, kan säkerställa att man inte får in buggar utifrån."

B.0.11

"Axxos har en inloggning, vi har en inloggning till vårat interna intranät, vi haren inloggning till, eh , sawinfo, vi har en inloggning till DVH."

B.0.12

"(...)hastighetsmätaren pendlar, eh, på rött, då blir det ju en, eh, motor, eller enliten morot, till gubbarna för att komma på den gröna då så. Så är de ju."

B.0.13

"(...)så man samtidigt kunna ta det vidare till, bland annat, excell för att kunnabearbeta det vidare och kunna räkna på de o."

B.0.14

"Man skulle vilja ha nått graf, grafiskt rapport system där man med hjälp av, ehjag vet inte hur det skulle vara men, olika block såhär, de här, nu vill jag göra enrapport, jag vill ha med stopptid, jag vill ha med de och liksom bygga rapporterpå ett enkelt och grafiskt sätt."

B.0.15

"Jag skulle nog titta ganska mycket på det, jag har jobbat i det tidigare och tycktedet var väldigt bra. Det är flexibelt och snabbt och du kan vrida och vända påsiffror beroende på hur du vill ha dom för tillfället så att säga."

B.0.16

"(...)jag kom en morron o de fanns inga scheman kvar. Alla, alla moelvensverkschema var borta, då var det en kille som hade, för hög behörighet och tyckte atthan behövde inte se mer än sina egna. Hehehe. Så han rensa."


B.0.17

"Eh, ja stör mig på att jag, inte kan, eh, väldigt mycket globala inställningar, jagstyr inte själv över min egen vy, det vill jag väldigt gärna göra att. Jag vill se dejag vill se, o, och kunna klicka bort det jag inte behöver, för det är väldigt mycketsånt."

B.0.18

"aktuell, liksom vad händer nu. Jag vill kunna gå in och titta när som helst och se“hur går det nu”, det tycker jag är jätteviktigt för mig."

B.0.19

"(...) problemet med att om man säljer det som ett verktyg, det är att, man förvän-tar sig att det ska vara jävligt billigt va. Jag menar när vi skapar rapporter åt dom,då kan man acceptera att det kostar pengar för man vill ju inte lägga ner jobb pådet. Så de gör hela jobbet själva, då tror de att det kommer kosta som att köpaword eller excel eller något sånt där."


C Tasks

1. Logga in på systemet, exportera rådata gällande hur länge sågen stått still de senaste24 timmarna samt stopporsaker under samma period. Exportera denna data till ettexcell-dokument.

2. Du befinner dig på startsidan. Ta reda på hur många stock per minut som i snitt gåttigenom sågen de senaste 24 timmarna. Titta även specifikt på hur den totala sågadevolymen per dag förändrats över de senaste 4 veckorna.

3. Du är inte intresserad av “Exportera data”-vyn och bestämmer dig för att dölja den. Dutycker även att såghus-vyn har för många grafer för din smak, dölj “ärr & bör värde”-grafen.

4. Du befinner dig på startsidan, jämför TAK-värden från i förrgår och i går samt se hurstopptiderna på sågen skiljde sig från varandra.

5. Ta dig från startsidan till rapportverktyget, väl där skapar du en rapport som innehålleren linjegraf över de senaste 3 dagarnas volymutbyte i sågen.

6. Du vill ändra hur din startsida ser ut, byt ut målvärde-grafen mot torkad volym i siffrorunder samma period. Lägg sedan till de senaste tre dagarnas tillgänglighet i torkhuseti en linjegraf.


D System usability scale (SUS-test)


E Pictures of prototypes

Figure E.1: The start view in the lo-fi prototype iteration two.


Figure E.2: The saw house view in the lo-fi prototype iteration two.

Figure E.3: The export data view in the lo-fi prototype iteration two.


Figure E.4: Feedback messages in the lo-fi prototype iteration two.


Figure E.5: The first step of the report tool in the lo-fi prototype iteration two.


Figure E.6: The second step of the report tool in the lo-fi prototype iteration two.


Figure E.7: The third and last step of the report tool in the lo-fi prototype iteration two.

Figure E.8: The login view in the alpha prototype.


Figure E.9: The timber sorting view in the alpha prototype.

Figure E.10: The export data view in the alpha prototype.


Figure E.11: The show and hide popup in the alpha prototype.


F Datatypes mentioned during interviews

In table F.1 the frequencies of data type mentions during the interviews are presented, sorted in descending order.


Data type                                   Frequency
Stop Times                                  8
Reasons for Stops                           7
Saw Yield                                   6
TAK                                         6
Volume Yield                                5
Logs/Time                                   4
Logs in                                     4
Feedback on Stop Codes                      4
Production Pace                             4
Data over Time                              4
Sawed Volume per Minute                     4
Goal- against Current-Value                 4
Feeding Pace                                3
Log Volume                                  3
Raw Data                                    3
Saw Orders                                  3
Dry Sorting Outcome                         3
Dried Volume                                3
Log Gap                                     2
Storage Volume per Day                      2
Log Depth                                   2
Dry Sorting Offcut                          2
Difference in number of in/out              1
Number of empty carriers                    1
Number of Sorted Logs                       1
Dry Sorting length in/out                   1
Number per production order/shift          1
Dry Sorting Volume in/out                   1
Off-loaded Volumes per Day                  1
Queued Volume Dryer                         1
Billed Volume per Day                       1
Quality Outcome per Sorting                 1
Quality Outcome per Dry Sort iteration      1
Quality Outcome per Dimension               1
Quality Outcome After Dryer                 1
Timber Class to Stop Time                   1
How long packages have been in storage      1
Lost wood in storage and dryer              1
Quality                                     1
Thickness                                   1
Breadth                                     1
Moisture Quota                              1

Table F.1: Data types mentioned during interviews.
