
Page 1: Contextual Information Elicitation in Travel Recommender Systems

ENTER 2016 Research Track Slide Number 1

Contextual Information Elicitation in Travel Recommender Systems

Matthias Braunhofer and Francesco Ricci

Free University of Bozen - Bolzano, Italy{mbraunhofer,fricci}@unibz.it

http://www.inf.unibz.it

Page 2: Contextual Information Elicitation in Travel Recommender Systems

Agenda

• Introduction
• Related Work
• Selective Context Acquisition
• Experimental Evaluation and Results
• Conclusions

Page 3: Contextual Information Elicitation in Travel Recommender Systems

Context-Aware Recommender Systems (CARSs)

• CARSs provide better recommendations by incorporating contextual information (e.g., time and weather) into the recommendation process

STS (South Tyrol Suggests)

Page 4: Contextual Information Elicitation in Travel Recommender Systems

Context Acquisition Problem of CARSs

• How can we identify and acquire the truly relevant contextual factors, i.e., those that influence the user's preferences and decision-making process?

Page 5: Contextual Information Elicitation in Travel Recommender Systems

STS w/o Selective Context Acquisition

We can’t elicit the conditions for all the available contextual factors when the user rates a POI.

Page 6: Contextual Information Elicitation in Travel Recommender Systems

STS w/ Selective Context Acquisition

Rather, we must elicit the conditions of a small subset of the most important contextual factors.

Page 7: Contextual Information Elicitation in Travel Recommender Systems

Agenda

Page 8: Contextual Information Elicitation in Travel Recommender Systems

Context Acquisition Problem in Commercial Systems

• Numerous commercial systems in the tourism domain face the context acquisition problem

TripAdvisor Foursquare

Page 9: Contextual Information Elicitation in Travel Recommender Systems

A Priori Context Selection

• Web survey in which users evaluate the influence of contextual conditions on POI categories

• Makes it possible to identify the relevant factors before collecting ratings

(Baltrunas et al., 2012)

Page 10: Contextual Information Elicitation in Travel Recommender Systems

A Posteriori Context Selection

• Several statistical methods for detecting the relevant context after collecting ratings

• Results show a significant difference in prediction of ratings in relevant vs. irrelevant context (Odić et al., 2013)

Page 11: Contextual Information Elicitation in Travel Recommender Systems

Agenda

Page 12: Contextual Information Elicitation in Travel Recommender Systems

Parsimonious and Adaptive Context Acquisition

• Main idea: for each user-item pair (u, i), identify the contextual factors that, when acquired together with u's rating for i, most improve the long-term performance of the recommender
– Heuristic: acquire the contextual factors that have the largest impact on rating prediction

• Challenge: how to quantify these impacts?
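The heuristic above reduces to a ranking problem per user-item pair. A minimal sketch, assuming a hypothetical scoring function `impact(u, i, factor)` that quantifies a factor's effect on rating prediction:

```python
def select_factors(u, i, factors, impact, n=2):
    """Return the n contextual factors with the largest estimated impact
    on the rating prediction for the user-item pair (u, i)."""
    scored = sorted(factors, key=lambda f: impact(u, i, f), reverse=True)
    return scored[:n]

# Toy usage: impacts are looked up from a precomputed table (illustrative numbers).
impacts = {"Weather": 0.18, "Season": 0.25, "Mood": 0.05, "Budget": 0.10}
top = select_factors("Alice", "Skiing", list(impacts),
                     lambda u, i, f: impacts[f], n=2)
# top == ["Season", "Weather"]
```

The open question, addressed by the methods on the following slides, is how the `impact` scores themselves are computed.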

Page 13: Contextual Information Elicitation in Travel Recommender Systems

CARS Prediction Model

• We use a new variant of Context-Aware Matrix Factorization (CAMF) (Baltrunas et al., 2011) that treats contextual conditions similarly to either item or user attributes

ȓ_uic = ī_i + b_u + (q_i + Σ_{a ∈ A(i)} x_a)ᵀ (p_u + Σ_{b ∈ B(u)} y_b)

where:
– ī_i: average rating for item i
– b_u: bias of user u
– q_i, p_u: latent vectors of item i and user u
– x_a: latent vectors of conventional (e.g., genre) and contextual (e.g., weather) item attributes
– y_b: latent vectors of conventional (e.g., age) and contextual (e.g., mood) user attributes
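A minimal sketch of this prediction rule, with toy numbers and function/variable names that are assumptions, not the authors' implementation: the predicted rating is the item's average rating plus the user bias plus the dot product of the attribute-augmented item and user latent vectors.

```python
def dot(v, w):
    # Dot product of two equal-length vectors.
    return sum(x * y for x, y in zip(v, w))

def add(*vecs):
    # Elementwise sum of one or more equal-length vectors.
    return [sum(xs) for xs in zip(*vecs)]

def predict(avg_i, b_u, q_i, item_attr_vecs, p_u, user_attr_vecs):
    """CAMF-style prediction: contextual conditions enter the model as
    extra item attributes (e.g., weather) or user attributes (e.g., mood)."""
    item_vec = add(q_i, *item_attr_vecs) if item_attr_vecs else q_i
    user_vec = add(p_u, *user_attr_vecs) if user_attr_vecs else p_u
    return avg_i + b_u + dot(item_vec, user_vec)

# Toy example: one item attribute (genre) and one contextual user attribute (mood).
r = predict(3.5, 0.2, [0.1, 0.3], [[0.0, 0.1]], [1.0, 0.5], [[0.2, 0.0]])
# r ≈ 3.5 + 0.2 + [0.1, 0.4]·[1.2, 0.5] = 4.02
```

Because attributes simply add into the latent vectors, ratings acquired with different (and incomplete) sets of contextual conditions can all be used to train the same model.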

Page 14: Contextual Information Elicitation in Travel Recommender Systems

Largest Deviation

• Given (u, i), it computes a relevance score for each contextual factor C_j by first measuring the "impact" of each contextual condition c_j ∈ C_j:

ŵ_uic_j = f_c_j · |ȓ_uic_j − ȓ_ui|

where f_c_j is the normalized frequency of c_j, ȓ_uic_j is the rating prediction when c_j holds, and ȓ_ui is the predicted context-free rating

• Finally, it computes for each factor the average of these deviation scores, and selects the contextual factors with the largest average scores

Page 15: Contextual Information Elicitation in Travel Recommender Systems

Illustrative Example

• Let ȓ_Alice,Skiing,Sunny = 5, ȓ_Alice,Skiing = 3.5 and f_Sunny = 0.2. Then the impact of the "Sunny" condition is:
– ŵ_Alice,Skiing,Sunny = 0.2 · |5 − 3.5| = 0.3

• Let ŵ_Alice,Skiing,Cloudy = 0.2, ŵ_Alice,Skiing,Rainy = 0.3 and ŵ_Alice,Skiing,Snowy = 0.1 be the impacts of the other weather conditions. Then the overall impact of the "Weather" factor is:
– (0.3 + 0.2 + 0.3 + 0.1) ÷ 5 = 0.18
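A minimal sketch of the Largest Deviation scores, reproducing the slide's numbers (the function names are illustrative, not the authors' code):

```python
def condition_impact(freq, pred_with_condition, pred_context_free):
    """Impact of one condition: its normalized frequency times the absolute
    deviation of the contextual prediction from the context-free prediction."""
    return freq * abs(pred_with_condition - pred_context_free)

# Impact of "Sunny" for (Alice, Skiing): 0.2 * |5 - 3.5| = 0.3
w_sunny = condition_impact(0.2, 5.0, 3.5)

# Factor score: average of the condition impacts over the factor's conditions.
# The slide divides by 5, which assumes the Weather factor has five conditions,
# one of which contributes no impact in this example.
weather_score = (w_sunny + 0.2 + 0.3 + 0.1) / 5   # 0.18
```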

Page 16: Contextual Information Elicitation in Travel Recommender Systems

Agenda

Page 17: Contextual Information Elicitation in Travel Recommender Systems

Datasets

Dataset                                 STS      TripAdvisor
Rating scale                            1-5      1-5
Ratings                                 2,534    4,147
Users                                   325      3,916
Items                                   249      569
Contextual factors                      14       3
Contextual conditions                   57       31
Avg. # of factors known per rating      1.49     3
User attributes                         7        2
Item attributes                         1        12

In STS, when a user rates a POI she commonly specifies at most 4 out of the 14 factors!

Page 18: Contextual Information Elicitation in Travel Recommender Systems

Evaluation Procedure: Overview

• Repeated random sampling (20 times):

– Randomly partition the ratings into 3 subsets: training set (25%), candidate set (50%) and testing set (25%)

– For each user-item pair (u, i) in the candidate set, compute the N most relevant contextual factors and transfer the corresponding rating and context information r_uic from the candidate set to the training set as r_uic′, with c′ ⊆ c containing the conditions for these factors, if any

– Measure prediction accuracy (MAE) and ranking quality (Precision) on the testing set, after training the prediction model on the extended training set

– Repeat
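The partition-and-transfer steps can be sketched as follows; `relevant_factors` is a hypothetical stand-in for whichever factor-selection method is under test, and ratings are (user, item, context_dict, value) tuples:

```python
import random

def split_ratings(ratings, seed):
    """Randomly partition ratings into training (25%), candidate (50%)
    and testing (25%) subsets."""
    shuffled = ratings[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    a, b = n // 4, n // 4 + n // 2
    return shuffled[:a], shuffled[a:b], shuffled[b:]

def transfer(candidate, relevant_factors, n=2):
    """Move each candidate rating to the training set, keeping only the
    conditions of the n most relevant factors (i.e., c' is a subset of c)."""
    transferred = []
    for user, item, context, value in candidate:
        keep = relevant_factors(user, item, n)
        reduced = {f: v for f, v in context.items() if f in keep}
        transferred.append((user, item, reduced, value))
    return transferred

# Toy usage: a single candidate rating with three known conditions.
candidate = [("Alice", "Skiing",
              {"Season": "Winter", "Weather": "Sunny", "Time": "Morning"}, 5)]
moved = transfer(candidate, lambda u, i, n: ["Season", "Weather"][:n])
# moved[0][2] == {"Season": "Winter", "Weather": "Sunny"}
```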

Page 19: Contextual Information Elicitation in Travel Recommender Systems

Evaluation Procedure: Example

For the user-item pair (Alice, Skiing), the top two contextual factors are Season and Weather. The rating in the candidate set,

r_Alice,Skiing,Winter,Sunny,Warm,Morning = 5

is transferred to the training set as

r_Alice,Skiing,Winter,Sunny = 5

keeping only the conditions of the two selected factors.

Page 20: Contextual Information Elicitation in Travel Recommender Systems

Baseline Methods for Evaluation

• Mutual Information: given a user-item pair (u, i), computes the relevance of a contextual factor Cj as the mutual information between Cj and the ratings for items belonging to i's category (Baltrunas et al., 2012)

• Freeman-Halton Test: calculates the relevance of Cj using the Freeman-Halton test (Odić et al., 2013)

• mRMR: ranks each Cj according to its relevance to the rating variable and its redundancy with the other contextual factors (Peng et al., 2005)
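The mutual-information score underlying the first baseline can be estimated from counts of co-occurring conditions and rating values. A minimal sketch (in bits); this only illustrates the quantity itself, whereas the cited method computes it per item category:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (in bits) between two discrete variables,
    estimated from a list of (condition, rating) observations."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts of conditions
    py = Counter(y for _, y in pairs)    # marginal counts of ratings
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A factor that perfectly determines a binary rating carries 1 bit:
mi = mutual_information([("Sunny", 5), ("Rainy", 1)] * 10)
# mi == 1.0
```

A factor whose conditions are independent of the ratings scores close to 0, so ranking factors by this score favors those that actually co-vary with the ratings.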

Page 21: Contextual Information Elicitation in Travel Recommender Systems

Evaluation Results: Prediction Accuracy

Page 22: Contextual Information Elicitation in Travel Recommender Systems

Evaluation Results: Ranking Quality

Page 23: Contextual Information Elicitation in Travel Recommender Systems

Evaluation Results: # of Acquired Conditions

Page 24: Contextual Information Elicitation in Travel Recommender Systems

Agenda

Page 25: Contextual Information Elicitation in Travel Recommender Systems

Conclusions

• Using Largest Deviation, when we ask user u to rate item i we can elicit only the most relevant contextual factors (e.g., C1, C2 and C3)

Page 26: Contextual Information Elicitation in Travel Recommender Systems

Questions?

Thank you.