Search User Interface Design


Uploaded by: max-wilson

Posted on: 22-May-2015

3,154 views

Category: Technology

4 downloads

DESCRIPTION

Talk given to the University of Glasgow IR group on the 18th June 2012. #HCIR

TRANSCRIPT

Page 1: Search User Interface Design

Dr Max L. Wilson http://cs.nott.ac.uk/~mlw/

Search User Interface Design
Dr Max L. Wilson
Mixed Reality Lab

University of Nottingham, UK

Monday, 2 July 12

Page 2: Search User Interface Design

Search User Interface Design

About Me

My Framework

Brain Response

Information vs Interaction

My Research Areas

Social Media Search

Casual Search

Page 3: Search User Interface Design

Software Engineering MEng
HCI & Information Science PhD
Web Science and Semantic Web

Page 4: Search User Interface Design

Page 5: Search User Interface Design

UIST2008

JCDL2008

Page 6: Search User Interface Design

My PhD

Bates, M. J. (1979a). Idea tactics. Journal of the American Society for Information Science, 30(5):280–289.

Bates, M. J. (1979b). Information search tactics. Journal of the American Society for Information Science, 30(4):205–214.

Belkin, N. J., Marchetti, P. G., and Cool, C. (1993). Braque: design of an interface to support user interaction in information retrieval. Information Processing and Management, 29(3):325–344.

Page 7: Search User Interface Design

My PhD

Wilson, M. L., schraefel, m. c., and White, R. W. (2009). Evaluating advanced search interfaces using established information-seeking models. Journal of the American Society for Information Science and Technology, 60(7):1407–1422.

Page 8: Search User Interface Design

Come and Sii what I’ve built

http://mspace.fm/sii

Best JASIST article 2009

Page 9: Search User Interface Design

Page 10: Search User Interface Design

Page 11: Search User Interface Design

Search User Interface Design

About Me

My Framework

Brain Response

Information vs Interaction

My Research Areas

Social Media Search

Casual Search

Page 12: Search User Interface Design

Social Media Search

were not happy with some pieces of information coming from non-authorities, and being linked to dubious websites. Further, not-useful tweets were often repeated content, or part of a conversation that would only be useful as a whole.

There were also three more subjective factors of not-useful tweets, including users disagreeing with the tweets (e.g. being pro or anti Apple), or not finding them funny.

Analysis by Task

Tables 3 and 4 include counts for how frequently each code was applied to tweet+response pairs for each task.

Temporal Search. For the first task, useful and trusted links, along with specific information, were the main factors in deciding whether a tweet was useful. We also saw that other types of links, including media, were frequent for the first task. The increased popularity of the media link code may have been influenced by the broadcast of the BBC Proms over the Internet. Media links did not account for tweets being regarded as useful in the other tasks.

Subjective Search. For the subjective task, we observed that experience with or of the subject matter was important to the information seekers. We also see two very interesting codes appear in this task, which complement each other: the first being shared sentiment, and the second entertaining. Both of these codes are subjective in nature, which is to be expected in a subjective task. Useful links and experience also played an important role in this task. Many participants found this task frustrating due to the number of non-useful tweets; many of them were marked as SPAM or untrustworthy.

Location Sensitive Search. In the third (location-sensitive) task, we again see a high dependency on specific and useful information; however, for this task, specific information played a more important role. As suspected, we also see location sensitivity as an important factor, dominating this task: 85% of the cases where location sensitivity was a reason a tweet was useful were allocated to this task. In this task, we see that trust, in the form of avatars and authors, played an important role, with 2 tweet+response pairs being coded as useful because the participant trusted the avatar, and a further 6 being coded as trusted author. Further, we see direct recommendation and experience playing a part in why a participant found a tweet useful, perhaps indicating a need for first-hand experience from someone who has been to a lunch venue in London, rather than a commercial entity trying to sell an experience or product.

Relevance Judgments for Tasks

In the post-task interviews, we asked users to informally augment their relevance judgments with scores out of 5. Overall, the mean score for all rated tweets over all three tasks was 2.2, indicating a very low relevancy score. Individually, the first task, which was temporal in nature, scored 2.7. The second task, which involved users searching for information regarding purchasing an iPhone, scored a very low 1.25. The third and final task, which was a…

| Category | Code | Description | T1 | T2 | T3 |
|---|---|---|---|---|---|
| In Tweet Content | Experience | Someone reporting a personal experience, but not necessarily a suggestion/direction. | 15 | 12 | 13 |
| | Direct Recommendation | Someone making a direct recommendation, but not necessarily relaying a personal experience. | 3 | 3 | 20 |
| | Social Knowledge | Containing information that is spreading socially, or becoming general knowledge. | 7 | 6 | 6 |
| | Specific Information | Where facts are listed directly in tweets, e.g. prices, times etc. | 51 | 10 | 47 |
| Reflection on Tweet | Entertaining | The reader finds them amusing. | 1 | 3 | 2 |
| | Shared Sentiment | The reader agrees with the author of the tweet. | 1 | 2 | 1 |
| Relevant | Time | The time is current. | 14 | 0 | 2 |
| | Location | The location is relevant to the query. | 6 | 1 | 40 |
| Trust | Trusted Author | The twitter account has a reputation/following. | 3 | 2 | 6 |
| | Trusted Avatar | The visual appearance cultivates trust. | 2 | 0 | 2 |
| | Trusted Link | A link to a trustworthy, recognizable domain. | 14 | 1 | 7 |
| Links | Actionable Link | The user can perform a transaction by using the link (heavily dependent on trust). | 9 | 0 | 0 |
| | Media Link | The link is to rich multimedia content. | 9 | 0 | 0 |
| | Useful Link | The link provides valuable information content, e.g. authoritative information, educated reviews, and discussions. | 61 | 30 | 43 |
| Meta Tweet | Retweeted Lots | Information that others have passed on a lot. | 4 | 0 | 4 |
| | Conversation | It is part of a series of tweets, and they all need to be useful. | 1 | 4 | 4 |

Table 3. The 16 codes and the 6 categories extracted from responses and tweet pairs from the useful tweets. The T1, T2, and T3 columns show how frequently each code was associated with the temporal (T1), subjective (T2), and location-sensitive (T3) tasks.
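The per-task columns in Table 3 are straightforward frequency counts over coded tweet+response pairs. As a minimal sketch of that tallying step (the example pairs below are invented stand-ins, not the study's data):

```python
from collections import Counter

# Hypothetical coded tweet+response pairs: (task, code) assignments.
# These rows are illustrative stand-ins, not the study's actual data.
coded_pairs = [
    ("T1", "Specific Information"),
    ("T1", "Useful Link"),
    ("T3", "Location"),
    ("T3", "Specific Information"),
    ("T2", "Shared Sentiment"),
]

def counts_by_task(pairs):
    """Count how often each code was applied within each task."""
    counts = {}
    for task, code in pairs:
        counts.setdefault(task, Counter())[code] += 1
    return counts

counts = counts_by_task(coded_pairs)
print(counts["T1"]["Specific Information"])  # 1
```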

ICWSM 2011

Page 13: Search User Interface Design

Social Media Search

INSERT VIDEO

Page 14: Search User Interface Design

Casual Leisure Search

‘explore’, and ‘search’ in their past, present, and future tenses. 12 seed-terms were used to query Twitter each hour, with the 100 newest tweets being stored each time. Our corpus contains information about hundreds of thousands of real human searching scenarios and information needs; some examples are shown in Figure 1.

To investigate the information behaviours described in the corpus, we embarked on a large-scale qualitative, inductive analysis of these tweets using a grounded theory approach. With the aim of building a taxonomy of searching scenarios and their features, we have so far coded 2500 tweets in approx. 40 hrs of manual coding time. Already, we have begun to develop a series of dimensions and learned, ourselves, a great deal about the kinds of search scenarios that people experience in both the physical and digital domains.

To date, we have identified 10 dimensions within our taxonomy, 6 of which were common in the dataset and have become fairly stable. We will present this taxonomy in future work, when more tweets have been coded and the taxonomy is complete. Further, once the taxonomy is stable and has been tested for validity, we will use alternative automatic or crowd-sourcing techniques to gain a better idea of how important the factors are and how they relate. Here, however, we will highlight some of the casual-leisure search behaviours documented so far.

4.1 Need-less Browsing

Much like the desire to pass time at the television, we saw many examples (some shown in Table 3) of people passing time typically associated with the ‘browsing’ keyword.

1) ... I’m not even *doing* anything useful... just browsing eBay aimlessly...

2) to do list today: browse the Internet until fasting break time..

3) ... just got done eating dinner and my family is watching the football. Rather browse on the laptop

4) I’m at the dolphin mall. Just browsing.

Table 3: Example tweets where the browsing activity is need-less.

From the collected tweets it is clear that often the information-need in these situations is not only fuzzy, but typically absent. The aim appears to be focused on the activity, where the measure of success would be in how much they enjoyed the process, or how long they managed to spend ‘wasting time’. If we model these situations by how they manage to make sense of the domain, or how they progress in defining their information-need, then we are likely to provide the wrong types of support, e.g. these users may not want to be supported in defining what they are trying to find on eBay, nor be given help to refine their requirements. We should also point out, however, that time-wasting browsing was not always associated with positive emotions (Table 4).

1) It’s happening again. I’m browsing @Etsy. Crap.

2) browsing ASOS again. tsk.

3) hmmm, just realizd I’ve been browsing ted.com for the last 3 hours.

Table 4: Example tweets where the information-need-less browsing has created negative emotions.

The addictive nature of these activities came through repeatedly, and suggests perhaps that support is needed to curtail exploration when it is not appropriate.

4.2 Exploring for the Experience

Mostly related to the exploration of a novel physical space, we saw many people exploring with family and friends. The aim in these situations (see Table 5) is often not to find specific places, but to spend time with family.

1) exploring the neighbourhood with my baby!

2) What a beautiful day to be outside playing and exploring with the kids:)

3) Into the nineties and exploring dubstep [music] while handling lots of small to-dos

Table 5: Example tweets where the experience outweighs the things found.

In these cases, the goal may be to investigate or learn about the place, but the focus of the activity is less on the specific knowledge gained than on the experience itself. Another point of note is that in these situations people regularly tried to behave in such a way that accidental or serendipitous discoveries were engendered. While examples 1) and 2) are physical-world examples, it is easy to imagine digital-world equivalents, such as exploring the Disney website with your children.

Below we attempt to combine the characteristics we have discovered to create an initial definition of what we refer to as casual search.

5. CASUAL SEARCH

We have seen many examples of casual information behaviours in these recent projects, but here we highlight the factors that make them different from our understanding of Information Retrieval, Information Seeking, Exploratory Search, and Sensemaking. First, we should highlight that it is not specifically their information-need-less nature that breaks the model of exploratory search, although some examples were without an information need entirely. The differentiators are more in the motivation and reasoning for searching, where all of our prior models of search are typically oriented towards finding information, but casual search is typically motivated by more hedonistic reasons. We present the following defining points for casual search tasks:

• In casual search, the information found tends to be of secondary importance to the experience of finding.

• The success of casual search tasks is usually not dependent on actually finding the information being sought.

• Casual search tasks are often motivated by being in or wanting to achieve a particular mood or state. Tasks often relate at a higher level to the quality of life and health of the individual.

• Casual search tasks are frequently associated with very under-defined or absent information needs.

These defining points break our models of searching in several ways. First, our models focus on an information need, where casual search often does not. Second, we measure success in regards to finding the information rather than the experience of searching. Third, the motivating scenarios we use are work-tasks, which is often not appropriate in casual search.

HCIR 2010

Page 15: Search User Interface Design

Casual Leisure Search

Springer Book Chapter - Award: Outstanding Author Contribution

Page 16: Search User Interface Design

Search User Interface Design

About Me

My Framework

Brain Response

Information vs Interaction

My Research Areas

Social Media Search

Casual Search

Page 17: Search User Interface Design

Search User Interface Design

Page 18: Search User Interface Design

Page 19: Search User Interface Design

Page 20: Search User Interface Design

Page 21: Search User Interface Design

Page 22: Search User Interface Design

Page 23: Search User Interface Design

Page 24: Search User Interface Design

Page 25: Search User Interface Design

Page 26: Search User Interface Design

Input Features

Page 27: Search User Interface Design

Control Features

Page 28: Search User Interface Design

Informational Features

Page 29: Search User Interface Design

Personalisable Features

Page 30: Search User Interface Design

SUI Design Taxonomy

Input Features

Control Features

Informational Features

Personalisable Features

Page 31: Search User Interface Design

SUI Design Taxonomy

Input Features

Control Features

Informational Features

Personalisable Features

Search box
Query-by-example
Clusters/Categories
Taxonomies
Facets
Social annotations

Page 32: Search User Interface Design

Auto-complete/suggest

(a) Apple – shows lots of contextual information and multimedia.

(b) Google – prioritising previous searches.

Figure 4.1: Examples of AutoComplete.

The majority of the fields in Google’s advanced search box can be translated to special operators in the normal query box. Consequently, when the results are displayed, the full advanced search form does not also have to be displayed (helping maintain Nielsen’s consistency heuristic for the design of SERPs). Further, expert searchers can use shortcuts (another of Nielsen’s heuristics) by using the operators instead of the advanced search form.
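That field-to-operator translation can be sketched as follows. `site:`, `filetype:`, quoted phrases, and `-term` exclusion are standard query-box operators; the form-field parameter names here are invented for illustration:

```python
def advanced_to_query(all_words="", exact_phrase="", none_words="", site="", filetype=""):
    """Translate hypothetical advanced-search form fields into query operators."""
    parts = []
    if all_words:
        parts.append(all_words)                       # plain terms pass through
    if exact_phrase:
        parts.append('"%s"' % exact_phrase)           # quoted exact phrase
    if none_words:
        parts.extend("-" + w for w in none_words.split())  # excluded terms
    if site:
        parts.append("site:" + site)                  # restrict to a domain
    if filetype:
        parts.append("filetype:" + filetype)          # restrict to a file type
    return " ".join(parts)

q = advanced_to_query(all_words="search interfaces",
                      exact_phrase="faceted browsing",
                      none_words="shopping",
                      site="ac.uk", filetype="pdf")
print(q)  # search interfaces "faceted browsing" -shopping site:ac.uk filetype:pdf
```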

Query-by-Example

There is a range of searching systems that take example results as the Input. One example commonly seen in SERPs is a ‘More Like This’ button, which returns pages that are related to a specific page. Google’s image search also provides a ‘Similar Images’ button, which returns images that appear to be the same in terms of colour and layout. While these could be seen as Control examples (modifying an initial search), the Retrievr prototype SUI (Figure 4.2) lets a searcher sketch a picture and returns similar pictures. Similarly, services like Shazam2 let searchers record audio on their phone and then try to find the song that is being played. Shazam and Retrievr are examples that are explicitly query-by-example Input features, while others can be seen as Input and/or Control.

2http://www.shazam.com/
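As a toy illustration of the query-by-example idea (not Retrievr's or Shazam's actual matching, which use far more robust features), the sketch below ranks stored images by coarse colour-histogram similarity to an example image; all data is invented:

```python
# Naive query-by-example over colour histograms (illustrative only).
def histogram(pixels, bins=4):
    """Bucket (r, g, b) pixels into a coarse, normalised colour histogram."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        i = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[i] += 1
    total = max(1, len(pixels))
    return [h / total for h in hist]

def distance(h1, h2):
    """L1 distance between two normalised histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def query_by_example(example_pixels, corpus):
    """Rank corpus images (name -> pixel list) by similarity to the example."""
    q = histogram(example_pixels)
    return sorted(corpus, key=lambda name: distance(q, histogram(corpus[name])))

# Tiny made-up 'images': mostly-red vs mostly-blue pixels.
red_img = [(250, 10, 10)] * 8
blue_img = [(10, 10, 250)] * 8
ranked = query_by_example([(240, 20, 20)] * 8, {"red": red_img, "blue": blue_img})
print(ranked[0])  # red
```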

Page 33: Search User Interface Design

SUI Design Taxonomy

Input Features

Control Features

Informational Features

Personalisable Features

Query Suggestions
Corrections
Sorting
Filters
Groupings

Page 34: Search User Interface Design

Sorting

(a) Sorting in Amazon (b) Sorting in Walmart (c) Sorting in Yahoo!

(d) Tabular sorting in Scan.co.uk.

(e) Tabular sorting in iTunes

Figure 4.12: Sorting results helps the searcher find more relevant results.
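Tabular sorting of the kind shown in Figure 4.12 is plain multi-key ordering of the result list; a minimal sketch with invented result fields:

```python
# Sort search results by rating (descending), breaking ties by price (ascending).
# The result records are illustrative stand-ins, not real SERP data.
results = [
    {"title": "A", "price": 9.99, "rating": 4.5},
    {"title": "B", "price": 7.49, "rating": 4.5},
    {"title": "C", "price": 12.00, "rating": 3.9},
]

by_rating_then_price = sorted(results, key=lambda r: (-r["rating"], r["price"]))
print([r["title"] for r in by_rating_then_price])  # ['B', 'A', 'C']
```

Because Python's sort is stable, the same effect can also be achieved by sorting on the secondary key first and the primary key second.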

Page 35: Search User Interface Design

SUI Design Taxonomy

Input Features

Control Features

Informational Features

Personalisable Features

Snippets
Usable Info
Thumbnails
Previews
Relevance Info
2D & 3D Viz
Guiding numbers
Zero-click answers
Signposting
Pagination
Social Info

Page 36: Search User Interface Design

Usable Information


Figure 4.17: Snippets in Ciao’s search results can be extended using the ‘more’ link.

Figure 4.18: Results in Sainsbury’s groceries search can be added to the shopping basket without having to leave the search page.

allows searchers to add items to their cart from the SERP, as shown in Figure 4.18. If searchers are unsure if an item is right for them, however, they can view a page with more information about each product, and buy from there too. Ciao!, in Figure 4.17, also has a range of usable links in their results, including links directly to reviews, pricing options, and to the category that an item belongs in. In Google Image Search, there is a usable link that turns any result into a new search for ‘Similar Images,’ as discussed in the Query-by-example section above. Further, searchers may now ‘+1’ a result in a Google SERP, without affecting or interrupting their search. Finally, searching in Spotify23 provides a number of usable links in their search results. While viewing a list of tracks that match a search, as in Figure 4.19, searchers can: use the star to favourite a track, buy the track, and…

23 http://www.spotify.com/

Page 37: Search User Interface Design

Social Information


Recommendation

• Track and reuse information about the behaviour of a system’s searchers.

Figure 4.39: Amazon often provides feedback to tell searchers what people typically end up actually buying.

or even the way they are presented. Further, they can affect the Control features that are provided. For clarification, there has been a lot of work that has focused on algorithmic personalisation for search, which has a whole book of its own [133]. Instead, this section focuses on different types of Personalisable features that appear in a SUI and the impact they can have.

4.4.1 CURRENT-SEARCH PERSONALISATION

The most common type of personalisation found within a single search session is to provide something like a shopping cart to searchers, or a general collection space. In a recently retired34 experimental feature, Yahoo! SearchPad (Figure 4.40) provided searchers with a space to collect search results and make notes about them. When activated, SearchPad logged the searches made and the websites visited. When opened, searchers could remove items from the SearchPad, or add notes for themselves or others to read later; SearchPad entries could be saved and emailed.

34 http://help.yahoo.com/l/ph/yahoo/search/searchpad/spad-23.html
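Purely as a sketch of the SearchPad idea (the class and method names are invented, not Yahoo!'s implementation), a current-search collection space might log queries, collect results with notes, and export the lot for saving or emailing:

```python
class SearchPad:
    """Illustrative session 'search pad': collects results and notes."""

    def __init__(self):
        self.queries = []
        self.items = {}  # url -> note

    def log_query(self, query):
        """Record a search made during the session."""
        self.queries.append(query)

    def collect(self, url, note=""):
        """Add a visited result to the pad, optionally with a note."""
        self.items[url] = note

    def remove(self, url):
        """Let the searcher drop an item from the pad."""
        self.items.pop(url, None)

    def export(self):
        """Flatten the pad into a saveable/emailable summary."""
        lines = ["Searches: " + "; ".join(self.queries)]
        lines += ["%s -- %s" % (url, note) for url, note in self.items.items()]
        return "\n".join(lines)

pad = SearchPad()
pad.log_query("hotels in york")
pad.collect("http://example.com/hotel", note="good reviews")
print(pad.export())
```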

Page 38: Search User Interface Design

SUI Design Taxonomy

Input Features

Control Features

Informational Features

Personalisable Features

Current-search
Persistent
Socialised

Page 39: Search User Interface Design

Search Histories

Recommendation

• Help searchers to return to previously viewed SERPs and results.

(a) History of searches in PubMed. (b) History of searches and results in Amazon.

Figure 4.41: SUIs can help searchers get back to previous searches by keeping a history.

Amazon and eBay, for example, assist searchers by recommending items that were viewed in the last session on the home page. Such features can be very helpful in multi-session search scenarios, like planning a holiday or buying a car [95, 117]. When SearchPad was active, it was designed to support such tasks, by helping account holders to resume previous sessions quickly and easily. Similarly, Google extends the idea of a per-session history of queries by providing their account users with a complete history of their searches and page views (#12 in Figure 1.1). Google also uses this history to modify the Informational view of a single search result, by adding the date that a searcher last viewed a result, or indeed how many times they have viewed it (Figure 4.42). Further, Google tells searchers who in their social networks have shared a link or “+1’d” it, as shown in Figure 4.42. The concept of “+1”-ing a website is Google’s most recent evolution of highlighting results that searchers like, where previous versions included starring a result, as shown in Figure 4.43, or pressing a button that would also show a certain result at the top of a SERP.
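The re-finding support described above (date last viewed and view count added to a result's presentation) can be sketched as a small history store; the names are invented for illustration, not Google's implementation:

```python
import datetime

class ViewHistory:
    """Illustrative per-user history used to annotate search results."""

    def __init__(self):
        self.visits = {}  # url -> (count, last_viewed)

    def record(self, url, when=None):
        """Record a page view, defaulting to today's date."""
        when = when or datetime.date.today()
        count, _ = self.visits.get(url, (0, None))
        self.visits[url] = (count + 1, when)

    def annotate(self, result_url):
        """Return a 'visited N times, last on D' note for a result, if any."""
        if result_url not in self.visits:
            return None
        count, last = self.visits[result_url]
        return "Visited %d time(s), last on %s" % (count, last.isoformat())

h = ViewHistory()
h.record("http://example.com", datetime.date(2012, 6, 18))
h.record("http://example.com", datetime.date(2012, 7, 2))
print(h.annotate("http://example.com"))  # Visited 2 time(s), last on 2012-07-02
```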

Page 40: Search User Interface Design

SUI Design Taxonomy

Input  Control
Informational  Personalisable

Page 41: Search User Interface Design

The Search Box

Input  Control
Informational  Personalisable

sb (query only)

Page 42: Search User Interface Design

The Search Box

Input  Control
Informational  Personalisable

sb (with auto-suggest)

Page 43: Search User Interface Design

The Search Box

Input  Control
Informational  Personalisable

sb (if query is persistent in search box)

Page 44: Search User Interface Design

The Search Box

Input  Control
Informational  Personalisable

sb (with auto-suggest, and query left in place, and if auto-suggest includes search history)

Page 45: Search User Interface Design

The Sweet Spot for SUI Design

Input  Control
Informational  Personalisable

Good SUI features fit into >1 category

Page 46: Search User Interface Design

Search User Interface Design

• The Taxonomy

• Historical context

• Lots of examples

• 20 Design Recommendations

• Future Trends

• Evaluation notes

Page 47: Search User Interface Design

Search User Interface Design

About Me

Brain Response

Information vs Interaction

My Framework

My Research Areas

Social Media Search

Casual Search

Page 48: Search User Interface Design

Search User Interface Design

Does Interaction Matter?

Does interaction provide significant benefits to users?

Or is it just more information and more data?

How should companies prioritise investment in these areas?

Page 49: Search User Interface Design

Information vs Interaction

Page 50: Search User Interface Design

Information vs Interaction

• Kelly et al. (2009) - query suggestions > term suggestions

• Ruthven (2003) - humans not good at choosing useful ones

• Diriye (2009) - slow people down during simple tasks

Useful info - or Efficient interaction?

Page 51: Search User Interface Design

Information vs Interaction

• Hearst & Pedersen (1996) - better task performance

• Pirolli et al (1996) - helped to understand corpus

Useful data? (from good algorithm)

Efficient interaction?

Page 52: Search User Interface Design

Information vs Interaction

• Hearst (2006) - careful metadata is always better than clusters

• Wilson & schraefel (2009) - good for understanding corpus

Powerful interaction?

or lots of useful data?

Page 53: Search User Interface Design

Information vs Interaction

Page 54: Search User Interface Design

Information vs Interaction

Query data

Page 55: Search User Interface Design

Information vs Interaction

Query data    Clustered algorithms

Page 56: Search User Interface Design

Information vs Interaction

Query data    Clustered algorithms    Faceted metadata

Page 57: Search User Interface Design

Information vs Interaction

Query data

Page 58: Search User Interface Design

Information vs Interaction

Query data

- H1: Searchers will be more efficient with more powerful interaction, using the same metadata, when completing search tasks.

- H2: Searchers will enjoy more powerful interaction, despite using the same metadata.

- H3: Searchers will use query recommendations more when they are presented differently.

In order to accept or reject these hypotheses, we designed a 3x2 repeated-measures study using two independent variables: 1) form of interaction, and 2) type of task. There were 3 forms of interaction, described below, covering standard query suggestions, hierarchical clustering, and faceted filtering. There were 2 types of task: simple and exploratory. Below, we describe these factors in more detail, beginning with the three search interfaces.
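A repeated-measures design like this typically counterbalances interface order across participants. As an illustrative sketch only (the rotation scheme is an assumption; the paper does not state its exact ordering), using the study's condition names UIQ/UIC/UIF:

```python
# Rotate interface order across participants: a simple rotation gives a
# Latin square over the three interfaces, reducing simple order effects.
interfaces = ["UIQ", "UIC", "UIF"]
tasks = ["simple", "exploratory"]

def order_for(participant):
    """Rotate the interface list by participant index."""
    k = participant % len(interfaces)
    return interfaces[k:] + interfaces[:k]

def conditions_for(participant):
    """Each participant sees every interface with both task types: 3x2 = 6 cells."""
    return [(ui, task) for ui in order_for(participant) for task in tasks]

print(conditions_for(1))
```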

3.1 Form of Interaction

For this study, we built three search user interfaces (Figure 1) that closely resembled Google, but using the freely available Bing Search API1. Bing’s API was chosen because it was a) free to use, b) easy to process on the server-side2, and c) less limited in terms of number of API

1 http://www.bing.com/toolbox/bingdeveloper/
2 Google’s API uses JavaScript, which means that the data manipulation is restricted to client-side processing.

calls than the alternatives. We chose for all three user interfaces to resemble Google, as it was most likely to be familiar to the majority of study participants. The three user interfaces varied only in the form of IIR interaction to the left of the results, described in turn further below. Otherwise, all three interfaces allowed searchers to search the web as normal, submitting queries and clicking on results. The alternative forms of search, such as image, maps, and YouTube, were disabled. Elements like spelling corrections were also implemented, as was the inclusion of information like number of results and time taken. Finally, however, based upon the results of a pilot study, a design decision was taken to remove paging in order to encourage the use of query refinements. Although paging was relatively infrequent, removing this feature did encourage additional use of refinements, without noticeably affecting user opinion of the design. In fact, some pilot participants did not even notice that paging was missing. This was the only design diversion away from the typical Google search experience.

UIQ: Query Suggestions. Using query suggestions in their most natural form of interaction, UIQ presented query suggestions from the Bing API as a list down the left-hand side of the results page. As per the standard interaction provided by search engines, selecting a query suggestion simply issued an entirely new query, presenting new results and a new set of query suggestions to go with them.

Figure 1: The three interaction conditions in the study. UIQ on the left presents query suggestions in their common form. UIC in the middle presents secondary query suggestions with an interaction model based on hierarchical clustering. UIF on the right, which includes the whole view of the Google UI recreation, provides terms, or facets, that can be applied to or removed from the search in any combination to ‘filter’ the results.
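UIF's behaviour, as described for Figure 1, amounts to conjunctive filtering: a result survives only while it matches every applied facet term. A minimal sketch with invented data:

```python
# Conjunctive faceted filtering: a result is kept only if it carries
# every currently-applied facet term (illustrative data, not the study's).
results = [
    {"title": "Jaguar the car", "facets": {"cars", "reviews"}},
    {"title": "Jaguar the cat", "facets": {"animals"}},
    {"title": "Jaguar prices", "facets": {"cars", "shopping"}},
]

def apply_facets(results, active):
    """Keep results whose facet set contains all active facet terms."""
    return [r for r in results if active <= r["facets"]]

filtered = apply_facets(results, {"cars"})
print([r["title"] for r in filtered])  # ['Jaguar the car', 'Jaguar prices']
```

Removing a facet from `active` and re-applying restores the wider result set, which matches the apply-or-remove-in-any-combination interaction the caption describes.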

Page 59: Search User Interface Design

3 Conditions

- H1: Searchers will be more efficient with more powerful interaction, using the same metadata, when completing search tasks.

- H2: Searchers will enjoy more powerful interaction, despite using the same metadata.

- H3: Searchers will use query recommendations more when they are presented differently.

In order to accept or reject these hypotheses, we designed a 3x2 repeated-measures study using two independent variables: 1) form of interaction, and 2) type of task. There were 3 forms of interaction, described below, covering standard query suggestions, hierarchical clustering, and faceted filtering. There were 2 types of task: simple and exploratory. Below, we describe these factors in more detail, beginning with the three search interfaces.

3.1 Form of Interaction For this study, we built three search user interfaces (Figure 1) that closely resembled Google, but using the freely available Bing Search API1. Bing’s API was chosen because it was a) free to use, b) easy to process on the server-side2, and c) less limited in terms of number of API

1 http://www.bing.com/toolbox/bingdeveloper/ 2 Google’s API uses javascript, which means that the data

manipulation is restricted to client-side processing.

calls than the alternatives. We chose to make all three user interfaces resemble Google, as it was the engine most likely to be familiar to the majority of study participants. The three user interfaces varied only in the form of IIR interaction to the left of the results, described in turn below. Otherwise, all three interfaces allowed searchers to search the web as normal, submitting queries and clicking on results. The alternative forms of search, such as image, maps, and YouTube, were disabled. Elements like spelling corrections were implemented, as was the display of information like the number of results and the time taken.

Finally, however, based upon the results of a pilot study, a design decision was taken to remove paging in order to encourage the use of query refinements. Although paging was relatively infrequent, removing this feature did encourage additional use of refinements, without noticeably affecting user opinion of the design. In fact, some pilot participants did not even notice that paging was missing. This was the only design divergence from the typical Google search experience.

UIQ: Query Suggestions. Using query suggestions in their most natural form of interaction, UIQ presented query suggestions from the Bing API as a list down the left-hand side of the results page. As per the standard interaction provided by search engines, selecting a query suggestion simply issued an entirely new query, presenting new results and a new set of query suggestions to go with them.

Figure 1: The three interaction conditions in the study. UIQ (left) presents query suggestions in their common form. UIC (middle) presents secondary query suggestions with an interaction model based on hierarchical clustering. UIF (right), which includes the whole view of the Google UI recreation, provides terms, or facets, that can be applied to or removed from the search in any combination to 'filter' the results.

Page 60: Search User Interface Design

2 Types of Task

In Google, these are typically found at the end of the search results, and in Bing they are typically found to the left of the search results. Ultimately, however, UIQ was our baseline condition and simulated the typical behaviour of query suggestions.

UIC: Hierarchical Clustering. Our second user interface provided a browsing experience similar to hierarchical clustering interfaces like Clusty.com. In terms of interaction, clustering interfaces use hierarchical clustering techniques to automatically generate a tree-like structure of entities and sub-entities found in the results set. The searcher can then filter all the search results retrieved by the system by either top-level or sub-level entities in the hierarchy. When an entity in the hierarchy is selected, the results are filtered, and any sub-entities are shown in the hierarchy. At all times, the searcher's original query is left in the search box, indicating that the results have been filtered rather than the system submitting a new query.

To recreate the hierarchical clustering experience, standard query suggestions were retrieved from the Bing API for the current query. These were used as the top-level entities in the hierarchy. For each query suggestion, UIC then asked for subsequent query suggestions, which were represented as the sub-level entities in the hierarchy. To create the same sensation of simply filtering and browsing through the results, as opposed to reissuing queries, the searcher's original query was left in the search box. As well as highlighting the item that had been selected in the hierarchy, UIC also used Google's standard terminology to say 'Showing results for [selected item in hierarchy]'. Consequently, although the system was technically issuing more specific queries underneath, the experience appeared to participants as choosing to display different sub-clusters of the initial results returned by the query.

UIF: Faceted Filtering.
In faceted filtering systems, searchers can take any of the items of metadata made available to them and apply them in combination to filter the results. The user is thus able to flexibly combine, add, or remove any number of keyword filters to describe what they are looking for and narrow their results. As with clustering, such systems typically maintain the search query as a constant in the search box, and then apply the selected keywords to filter the results down to portions of the overall result set.

Once again, for UIF, we restricted ourselves to using just the Bing API query suggestions, but aimed to create a search feature that allowed searchers to apply multiple suggested terms in combination. Without carefully constructed metadata, we were unable to create a set of distinct facets, such as sets of prices, colours, brands, and so on, which are commonly seen in online retail stores. Instead, we extracted terms from the query suggestions to display separately as additional query terms that could be applied in any combination to the query. Consequently, we chose an output that appeared much like a tag cloud, so that it would take a form familiar to many users. The tag cloud was displayed in a common style, with popular terms shown in a larger font.

Overall, however, the tag cloud provided the same interaction model as items in facets: users were able to 'turn on' and 'turn off' any term in the list as a filter, where 'on' terms were highlighted with a background colour. This is a different interaction model from providing term suggestions, which would issue a refined query and provide new results and new term suggestions, similar to our baseline condition. Like UIC, however, the faceted filters remained constant until the user changed their query, which was left in the search box. The initial query and filters were displayed together using Google's phrasing: 'Showing results for [query + selected terms]'. Again, this combination made the experience appear as if searchers were applying filters to the results returned by the original query, but in reality the system was still issuing refined queries to the Bing API.
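The three interaction models can be contrasted in code. This is a minimal sketch under stated assumptions: fetch_suggestions is a hypothetical stub standing in for the Bing API suggestion call, and all function and variable names are ours for illustration, not the study's implementation.

```python
# Hypothetical stub: in the study this was a call to the Bing Search API.
def fetch_suggestions(query):
    canned = {
        "ipad 3": ["ipad 3 processor", "ipad 3 release date"],
        "ipad 3 processor": ["ipad 3 processor name"],
        "ipad 3 release date": ["ipad 3 release date uk"],
    }
    return canned.get(query, [])

def uiq_select(suggestion):
    """UIQ: selecting a suggestion issues an entirely new query; the
    search box, results, and suggestion list are all replaced."""
    return {"search_box": suggestion,
            "issued_query": suggestion,
            "suggestions": fetch_suggestions(suggestion)}

def uic_hierarchy(query):
    """UIC: suggestions become top-level entities; each one's own
    suggestions (a second round of API calls) become its sub-entities."""
    return {top: fetch_suggestions(top) for top in fetch_suggestions(query)}

def uif_toggle(active_terms, term):
    """UIF: any suggested term can be turned on or off as a filter
    (the tag-cloud interaction); ^ is set symmetric difference."""
    return active_terms ^ {term}

def filtered_query(original_query, active_terms):
    """UIC/UIF keep the original query in the search box ('Showing
    results for ...'), but a refined query is issued underneath."""
    return " ".join([original_query] + sorted(active_terms))

tree = uic_hierarchy("ipad 3")
terms = uif_toggle(set(), "processor")
filtered_query("ipad 3", terms)   # -> "ipad 3 processor"
```

The key contrast the study tests is visible here: uiq_select replaces all interface state, while uic_hierarchy and uif_toggle/filtered_query hold the original query constant and only appear to filter.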

3.2 Type of Task

Two standard types of user study task were used: 1) a simple lookup task and 2) an exploratory task. All six tasks are shown in Table 1. The simple lookup tasks had a fixed answer, but each task description was phrased so that the most likely query would not find the answer without subsequent queries or refinements. This approach was chosen to intrinsically encourage participants to use the IIR features on the left of each user interface condition.

Table 1: Tasks set to participants in the study. S = Simple, E = Exploratory

ID | S/E | Task Description
1  | S   | What is the population of Ohio?
2  | E   | Find an appropriate review of "Harry Potter and the Deathly Hallows". Compare the rating with the previous film.
3  | S   | Find the first state of America.
4  | E   | Deduce the main problems that Steve Jobs incurred with regards to his health.
5  | S   | What is the iPad 3's proposed processor name?
6  | E   | Explore information related to Apple's next iPhone, the iPhone 5. Note the expected release date. There could well be multiple rumours.

The exploratory search tasks were chosen to have multiple sub-problems, such that searchers would have to perform a series of searches or refinements and combine answers from several websites. The tasks therefore resembled collection-style tasks, without specific dependencies between the sub-elements. These tasks also had no single fixed answer; participants could choose answers subjectively.

Page 61: Search User Interface Design

18 People
Intro + Consent
UI1: 2 tasks
UI2: 2 tasks
UI3: 2 tasks
QA + Debrief

Page 62: Search User Interface Design

18 People
Intro + Consent
UI1: 2 tasks
UI2: 2 tasks
UI3: 2 tasks
QA + Debrief

Measures: Queries, Refinements, Pageviews, Time

Page 63: Search User Interface Design

18 People
Intro + Consent
UI1: 2 tasks
UI2: 2 tasks
UI3: 2 tasks
QA + Debrief

Measures: Queries, Refinements, Pageviews, Time; Ease of Use, Task Satisfaction

Page 64: Search User Interface Design

18 People
Intro + Consent
UI1: 2 tasks
UI2: 2 tasks
UI3: 2 tasks
QA + Debrief

Measures: Queries, Refinements, Pageviews, Time; Ease of Use, Task Satisfaction; Quickest, Most Enjoyable, Best Design

Page 65: Search User Interface Design

Simple vs Exploratory

Measure     | S    | E    | Diff
Time        | 176s | 179s | no
Queries     | 1.75 | 2.33 | p<0.05
Pageviews   | 1.65 | 2.09 | p<0.005
Refinements | 2.42 | 2.45 | no

Page 66: Search User Interface Design

Log Data by UI

Measure     | Simple          | Exploratory
Queries     | UIQ < UIC & UIF | UIQ > UIC & UIF
Refinements | No diff         | UIQ & UIC < UIF
Visits      | No diff         | UIQ > UIC & UIF
Time        | UIQ > UIC < UIF | UIC < UIF < UIQ

Page 67: Search User Interface Design

Subjective Responses

Measure      | Simple
Ease of Use  | UIQ & UIC > UIF
Satisfaction | UIQ & UIC > UIF

Question                   | UIQ | UIC | UIF
Quickest to correct answer | 11  | 5   | 2
Most enjoyed during task   | 4   | 11  | 3
Most appealing design      | 5   | 11  | 2

Page 68: Search User Interface Design

What did we actually learn?

• We did see different behaviour in all 3 conditions

• People were good at simple tasks with original UIQ

• People were faster and more effective with UIC and preferred it

• People used more filters and viewed fewer pages with UIF but did not like it so much

• But is it better or worse behaviour?

Page 69: Search User Interface Design

Information vs Interaction

Query data

Page 70: Search User Interface Design

Information vs Interaction

Query data
Clustered algorithms
Faceted metadata

Page 71: Search User Interface Design

Information vs Interaction

[Chart (hypothetical): performance of Suggestions, Clusters, and Facets]

Page 72: Search User Interface Design


Search User Interface Design

About Me

Brain Response

My Framework

Information vs Interaction

My Research Areas

Social Media Search

Casual Search

Page 73: Search User Interface Design

SUI Design + Brain Response

Page 74: Search User Interface Design

SUI Design + Brain Response: Cognitive Load Theory

Total Mental Capacity

Simple UI

Easy Task

Page 75: Search User Interface Design

SUI Design + Brain Response: Cognitive Load Theory

Total Mental Capacity

Simple UI

Hard Task

Page 76: Search User Interface Design

SUI Design + Brain Response: Cognitive Load Theory

Total Mental Capacity

Complex UI

Hard Task
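The capacity framing in these slides can be made concrete with a toy model. The numbers are invented purely for illustration; Cognitive Load Theory does not assign numeric loads like this. It only shows the additive intuition: UI complexity and task difficulty both draw on one fixed mental capacity.

```python
# Toy illustration (all numbers invented): UI complexity and task
# difficulty both consume a fixed total mental capacity, so a complex
# UI combined with a hard task can overflow it.
CAPACITY = 10

def overloaded(ui_load, task_load):
    """True when the combined load exceeds total mental capacity."""
    return ui_load + task_load > CAPACITY

overloaded(2, 3)   # simple UI, easy task: plenty of spare capacity
overloaded(2, 7)   # simple UI, hard task: still fits
overloaded(6, 7)   # complex UI, hard task: overload
```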

Page 77: Search User Interface Design

Page 78: Search User Interface Design

Page 79: Search User Interface Design

Page 80: Search User Interface Design

SUI Design & Brain Response

Clear design recommendations

Cost vs Gain of adding a feature

Ways to reduce cost of a feature

Page 81: Search User Interface Design

Search User Interface Design

About Me

My Framework

Information vs Interaction

Brain Response

My Research Areas

Social Media Search

Casual Search

Page 82: Search User Interface Design