Enhancement of the Neutrality in Recommendation

DESCRIPTION

Enhancement of the Neutrality in Recommendation
Workshop on Human Decision Making in Recommender Systems, in conjunction with RecSys 2012

Article @ Official Site: http://ceur-ws.org/Vol-893/
Article @ Personal Site: http://www.kamishima.net/archive/2012-ws-recsys-print.pdf
Handnote: http://www.kamishima.net/archive/2012-ws-recsys-HN.pdf
Program codes: http://www.kamishima.net/inrs
Workshop Homepage: http://recex.ist.tugraz.at/RecSysWorkshop2012

Abstract: This paper proposes an algorithm that makes recommendations so that the neutrality toward a viewpoint specified by a user is enhanced. This algorithm is useful for avoiding decisions based on biased information, a problem pointed out as the filter bubble: the biasing of social decisions by personalization technology. To provide such recommendations, we assume that a user specifies the viewpoint toward which the neutrality should be enforced, because a recommendation that is neutral with respect to all information is no longer a recommendation. Given such a target viewpoint, we implemented an information neutral recommendation algorithm by introducing a penalty term that enforces statistical independence between the target viewpoint and a preference score. We empirically show that our algorithm enhances the independence toward the specified viewpoint, and then demonstrate how the sets of recommended items change.

TRANSCRIPT

Page 1: Enhancement of the Neutrality in Recommendation

Enhancement of the Neutrality in Recommendation

Toshihiro Kamishima*, Shotaro Akaho*, Hideki Asoh*, and Jun Sakuma†

*National Institute of Advanced Industrial Science and Technology (AIST), Japan; †University of Tsukuba, Japan, and Japan Science and Technology Agency

Workshop on Human Decision Making in Recommender Systems, in conjunction with RecSys 2012 @ Dublin, Ireland, Sep. 9, 2012

Page 2: Enhancement of the Neutrality in Recommendation

Overview


Decision Making and Neutrality in Recommendation

Providing neutral information is important in recommendation

Information Neutral Recommender System

A system that makes recommendations so as to enhance the neutrality from a viewpoint specified by a user

Absolutely neutral recommendation is intrinsically infeasible, because a recommendation is always biased in the sense that it is tailored to a specific user

Decisions based on biased information bring undesirable results

Page 3: Enhancement of the Neutrality in Recommendation

Outline

Introduction: Importance of the Neutrality and the Filter Bubble

influence of biased recommendation, filter bubble problem, discussion in the RecSys 2011 panel

Neutrality in Recommendation: ugly duckling theorem, information neutral recommendation

Information Neutral Recommender System: latent factor model, viewpoint variable, neutrality function

Experiments

Conclusion

Page 4: Enhancement of the Neutrality in Recommendation

Importance of the Neutrality and the Filter Bubble

Page 5: Enhancement of the Neutrality in Recommendation

Biased Recommendation


Biased Recommendation

exclude a good candidate from a set of options; rate relatively inferior options higher

Inappropriate Decision

The Filter Bubble Problem: Pariser posed a concern that personalization technologies narrow and bias the topics of information provided to people

http://www.thefilterbubble.com/

Page 6: Enhancement of the Neutrality in Recommendation

Filter Bubble


[TED Talk by Eli Pariser]

Friend Recommendation List in Facebook

Users lose opportunities to obtain information about a wide variety of topics. Each user obtains overly personalized information, and this makes it difficult to build consensus in our society

A summary of Pariser’s claim

conservative people were eliminated from Pariser's recommendation list

Page 7: Enhancement of the Neutrality in Recommendation

RecSys 2011 Panel on Filter Bubble


Intrinsic trade-off

providing a diversity of topics

focusing on users' interests

To select something is not to select other things

RecSys 2011 Panel on Filter Bubble

Are there "filter bubbles"?
To what degree is personalized filtering a problem?
What should we as a community do to address the filter bubble issue?
http://acmrecsys.wordpress.com/2011/10/25/panel-on-the-filter-bubble/

[RecSys 2011 Panel on the Filter Bubble]

Page 8: Enhancement of the Neutrality in Recommendation

RecSys 2011 Panel on Filter Bubble


Intrinsic trade-off

providing a diversity of topics

focusing on users' interests

To select something is not to select other things

Personalized filtering is a necessity

Personalized filtering is a very effective tool to find interesting things from the flood of information

[RecSys 2011 Panel on the Filter Bubble]

Page 9: Enhancement of the Neutrality in Recommendation

RecSys 2011 Panel on Filter Bubble


recipes for alleviating the undesirable influence of personalized filtering

capture the users' long-term interests
consider the preference for an item portfolio, not for individual items
follow the changes of users' preference patterns

give users control over the perspective, to see the world through other eyes

our approach

[RecSys 2011 Panel on the Filter Bubble]

Personalized filtering is a necessity

Personalized filtering is a very effective tool to find interesting things from the flood of information

Page 10: Enhancement of the Neutrality in Recommendation

Neutrality in Recommendation


Page 11: Enhancement of the Neutrality in Recommendation

Ugly Duckling Theorem


Ugly Duckling Theorem: a fundamental property of classification [Watanabe 69]

similarity between an ugly duckling and a normal duckling = similarity between any pair of normal ducklings

if the similarity between a pair of ducklings is measured by the number of potential binary classification rules that classify both of them into the same positive class

An ugly duckling and a normal duckling are indistinguishable

We classify them by methods like SVMs every day...

Extremely unintuitive! Why?
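To make this rule-counting similarity concrete, here is a small illustrative sketch (not from the slides), under the assumption that a "classification rule" is any possible choice of a positive class over a finite set of objects; every pair of objects then falls into the same positive class for exactly the same number of rules, so the ugly duckling is as similar to a normal duckling as any two normal ducklings are to each other.

```python
from itertools import combinations

# Hypothetical toy example: three normal ducklings and one ugly duckling.
objects = ["normal_1", "normal_2", "normal_3", "ugly"]
n = len(objects)

# A "classification rule" is any subset of objects labeled as the positive
# class; encode each of the 2^n subsets as a bit mask.
rules = range(2 ** n)

def similarity(a, b):
    """Number of rules that put both a and b into the positive class."""
    ia, ib = objects.index(a), objects.index(b)
    return sum(1 for r in rules if (r >> ia) & 1 and (r >> ib) & 1)

for a, b in combinations(objects, 2):
    # Every pair prints the same count, 2^(n-2) = 4.
    print(a, b, similarity(a, b))
```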

Page 12: Enhancement of the Neutrality in Recommendation

Ugly Duckling Theorem


[Watanabe 69]

When classifying, one must emphasize some features of objects and ignore the other features

Only the number of classification rules is considered, but the properties of the rules are completely ignored

All features are treated equally (e.g., the weight of a body-color feature equals that of a body-length feature)
The complexity of rules is ignored (e.g., the number of features included in a rule)

Extremely unintuitive! Why?

Page 13: Enhancement of the Neutrality in Recommendation

Information Neutral Recommendation


Ugly Duckling Theorem: some aspects must be stressed when classifying objects

It is infeasible to make recommendations that are neutral from every viewpoint

Information Neutral Recommendation

e.g., a recommender system enhances the neutrality in terms of whether friends are conservative or progressive, but it is allowed to make biased recommendations in terms of other viewpoints, for example the birthplace or age of friends

the neutrality from a viewpoint specified by a user; other viewpoints are not considered

Page 14: Enhancement of the Neutrality in Recommendation

Information Neutral Recommender System

Page 15: Enhancement of the Neutrality in Recommendation

Information Neutral Recommender System


v: viewpoint variable, a binary variable representing a viewpoint specified by a user

Information Neutral Recommender System

neutral from a specified viewpoint: maximize statistical independence between a preference score and a viewpoint variable

Information neutral version of a latent factor model

+ high prediction accuracy: minimize an empirical error plus an L2 regularization term


Page 16: Enhancement of the Neutrality in Recommendation

Latent Factor Model


s(x, y) = μ + b_x + c_y + p_x^T q_y

Predicting Ratings Task: predict a preference score of an item y rated by a user x

[Koren 08]

Latent Factor Model: a basic matrix decomposition model

μ: global bias; b_x: user-dependent bias; c_y: item-dependent bias; p_x^T q_y: cross effect of users and items

For a given training data set, model parameters are learned by minimizing the squared loss function with an L2 regularizer
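For concreteness, a minimal sketch of this score function and its regularized squared loss in Python/NumPy follows; the variable names simply mirror the formula and are assumptions of this sketch, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, K = 100, 50, 1             # K = 1 latent factor, as in the experiments

mu = 0.0                                      # global bias
b = np.zeros(n_users)                         # user-dependent bias b_x
c = np.zeros(n_items)                         # item-dependent bias c_y
P = rng.normal(scale=0.1, size=(n_users, K))  # user factors p_x
Q = rng.normal(scale=0.1, size=(n_items, K))  # item factors q_y

def score(x, y):
    """s(x, y) = mu + b_x + c_y + p_x^T q_y"""
    return mu + b[x] + c[y] + P[x] @ Q[y]

def loss(data, lam=0.01):
    """Squared loss over (user, item, rating) triples plus an L2 regularizer."""
    err = sum((s - score(x, y)) ** 2 for x, y, s in data)
    reg = lam * (b @ b + c @ c + np.sum(P * P) + np.sum(Q * Q))
    return err + reg
```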

Page 17: Enhancement of the Neutrality in Recommendation

Information Neutral Latent Factor Model


modifications of a latent factor model

s(x, y, v) = μ^(v) + b_x^(v) + c_y^(v) + (p_x^(v))^T q_y^(v)

adjust scores according to the state of a viewpoint: incorporate dependency on a viewpoint variable
enhance the neutrality of a score from a viewpoint: add a neutrality function as a constraint term

adjust scores according to the state of a viewpoint

Multiple latent factor models are built separately, and each of them corresponds to one value of the viewpoint variable. When predicting scores, a model is selected according to the value of the viewpoint variable, as sketched below

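A minimal sketch of this construction: one set of latent-factor parameters per value of the binary viewpoint variable, selected at prediction time. The class name and layout are assumptions of the sketch, not the authors' implementation.

```python
import numpy as np

class ViewpointLFM:
    """One latent factor model per value of the binary viewpoint variable v."""

    def __init__(self, n_users, n_items, K=1, seed=0):
        rng = np.random.default_rng(seed)
        # params[v] holds (mu, b, c, P, Q) for viewpoint value v in {0, 1}
        self.params = {
            v: dict(mu=0.0,
                    b=np.zeros(n_users),
                    c=np.zeros(n_items),
                    P=rng.normal(scale=0.1, size=(n_users, K)),
                    Q=rng.normal(scale=0.1, size=(n_items, K)))
            for v in (0, 1)
        }

    def score(self, x, y, v):
        """s(x, y, v) = mu^(v) + b_x^(v) + c_y^(v) + (p_x^(v))^T q_y^(v)"""
        p = self.params[v]
        return p["mu"] + p["b"][x] + p["c"][y] + p["P"][x] @ p["Q"][y]
```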

Page 18: Enhancement of the Neutrality in Recommendation

Information Neutral Latent Factor Model


enhance the neutrality of a score from a viewpoint

Σ_{(x_i, y_i, v_i, s_i) ∈ D} (s_i - s(x_i, y_i, v_i))^2 - η neutral(s(x_i, y_i, v_i), v_i) + λ ||Θ||_2^2

Parameters are learned by minimizing this objective function

The first term is the squared loss function, the second is the neutrality function weighted by the neutrality parameter η, which balances between the neutrality and the accuracy, and the third is the L2 regularizer weighted by the regularization parameter λ

neutral(s, v): neutrality function, which quantifies the degree of neutrality

The larger the output of the neutrality function, the higher the degree of neutrality of a predicted score from a viewpoint variable
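A sketch of how this objective might be assembled in code, assuming the neutrality term is evaluated over all predicted scores at once (as a set-level quantity such as mutual information requires); the function and argument names are illustrative, not the authors':

```python
import numpy as np

def objective(theta, data, score_fn, neutral_fn, eta=10.0, lam=0.01):
    """
    Squared prediction error minus eta times the neutrality function,
    plus lambda times the squared L2 norm of the parameters.

    theta      : flat array of all model parameters
    data       : list of (x, y, v, s) tuples
    score_fn   : score_fn(theta, x, y, v) -> predicted score s(x, y, v)
    neutral_fn : neutral_fn(scores, views) -> degree of neutrality (larger = more neutral)
    """
    preds = np.array([score_fn(theta, x, y, v) for x, y, v, _ in data])
    truth = np.array([s for _, _, _, s in data])
    views = np.array([v for _, _, v, _ in data])

    sq_loss = np.sum((truth - preds) ** 2)
    return sq_loss - eta * neutral_fn(preds, views) + lam * np.dot(theta, theta)
```

Because the gradients of the mutual-information term are not derived analytically (next slide), such an objective can be handed to a gradient-free optimizer, for example scipy.optimize.minimize(objective, theta0, args=(...), method='Powell').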

Page 19: Enhancement of the Neutrality in Recommendation

Mutual Information as Neutrality Function


neutrality = scores are not influenced by a viewpoint variable

neutrality function = negative mutual information
We treat the neutrality as statistical independence, and it is quantified by the mutual information between a predicted score and a viewpoint variable

Pr[s|v] is required for computing the mutual information; this distribution is modeled by a histogram model

We failed to derive an analytical form of the gradients of the objective function, so the objective function is minimized by a Powell method, which does not require gradients
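One possible realization of the negative-mutual-information neutrality function, with the score distribution estimated by a histogram, is sketched below; the number of bins and the binning scheme are assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

def neutral_mi(scores, views, n_bins=10):
    """
    Negative mutual information between predicted scores and a binary
    viewpoint variable, with the score distribution estimated by a histogram.
    Larger return values mean the scores are closer to independent of v.
    """
    edges = np.histogram_bin_edges(scores, bins=n_bins)
    s_bin = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)

    # Joint distribution Pr[s_bin, v] from counts
    joint = np.zeros((n_bins, 2))
    for sb, v in zip(s_bin, views):
        joint[sb, int(v)] += 1.0
    joint /= joint.sum()

    p_s = joint.sum(axis=1, keepdims=True)   # marginal Pr[s_bin]
    p_v = joint.sum(axis=0, keepdims=True)   # marginal Pr[v]

    mask = joint > 0
    mi = np.sum(joint[mask] * np.log(joint[mask] / (p_s @ p_v)[mask]))
    return -mi
```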

Page 20: Enhancement of the Neutrality in Recommendation

Experiments


Page 21: Enhancement of the Neutrality in Recommendation

Experimental Conditions


General Conditions
9,409 user-item pairs are sampled from the MovieLens 100k data set (a Powell optimizer is computationally inefficient and cannot be applied to a large data set)
the number of latent factors K = 1
regularization parameter λ = 0.01
evaluation measures are calculated using five-fold cross validation

Evaluation Measures
MAE (mean absolute error): prediction accuracy
NMI (normalized mutual information): the neutrality of a preference score from a specified viewpoint (mutual information between the predicted scores and the values of the viewpoint variable, normalized into the range [0, 1])
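A sketch of the two evaluation measures follows; the binning of the continuous predicted scores and scikit-learn's default normalization of the mutual information are choices made here, since the slides only state that the mutual information is normalized into [0, 1].

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def mae(pred, truth):
    """Mean absolute error of the predicted scores."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(truth))))

def neutrality_nmi(pred, views, n_bins=10):
    """
    Normalized mutual information between (binned) predicted scores and the
    binary viewpoint variable; 0 means the scores carry no information about
    the viewpoint, i.e., the recommendation is neutral from it.
    """
    edges = np.histogram_bin_edges(pred, bins=n_bins)
    s_bin = np.clip(np.digitize(pred, edges) - 1, 0, n_bins - 1)
    return normalized_mutual_info_score(np.asarray(views, dtype=int), s_bin)
```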

Page 22: Enhancement of the Neutrality in Recommendation

The values of viewpoint variables are determined depending on a user and/or an item

Viewpoint Variables


The older movies have a tendency to be rated higher, perhaps because only masterpieces have survived [Koren 2009]

“Year” viewpoint: whether a movie’s release year is newer than 1990 or not

“Gender” viewpoint : a user is male or female

The movie rating would depend on the user’s gender
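For concreteness, the two viewpoint variables could be derived as below; the strictness of the 1990 threshold and the 0/1 encodings are assumptions beyond what the slides state.

```python
def year_viewpoint(release_year):
    """'Year' viewpoint: 1 if the movie's release year is newer than 1990, else 0."""
    return 1 if release_year > 1990 else 0

def gender_viewpoint(gender):
    """'Gender' viewpoint: 1 if the user is female, else 0 (the encoding is arbitrary)."""
    return 1 if gender == "F" else 0

# Each rating record (x, y, s) is augmented to (x, y, v, s), where v comes from
# the rated item's release year or the rating user's gender, depending on the
# viewpoint being studied.
```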

Page 23: Enhancement of the Neutrality in Recommendation

Experimental Results

neutrality parameter η: the larger the value, the more the neutrality is enhanced

[Figure: degree of neutrality (NMI) and prediction accuracy (MAE) plotted against the neutrality parameter η from 0 to 100 for the “Year” and “Gender” viewpoints; the MAE axis spans roughly 0.80 to 0.90 and the NMI axis roughly 0.005 to 0.050]

The INRS successfully improved the neutrality without seriously sacrificing the prediction accuracy

As the neutrality parameter η increased, prediction accuracy worsened slightly, but the neutrality improved drastically

Page 24: Enhancement of the Neutrality in Recommendation

Conclusion


Our Contributions
We formulated the neutrality in recommendation based on the ugly duckling theorem
We developed a recommender system that can enhance the neutrality in recommendation
Our experimental results show that the neutrality is successfully enhanced without seriously sacrificing the prediction accuracy

Future Work
Our current formulation is poor in its scalability
formulation of the objective function whose gradients can be derived analytically
neutrality functions other than mutual information, such as the kurtosis used in ICA

Information neutral version of generative recommendation models, such as a pLSA / LDA model

Page 25: Enhancement of the Neutrality in Recommendation

program codes and data sets: http://www.kamishima.net/inrs

Acknowledgements
We would like to thank the GroupLens research lab for providing a data set
This work is supported by MEXT/JSPS KAKENHI Grant Numbers 16700157, 21500154, 22500142, 23240043, and 24500194, and JST PRESTO 09152492
