SemTech 2012 - Making Your Semantic App Addictive: Incentivizing Users
TRANSCRIPT
Making Your Semantic Application Addictive:
Incentivizing Users
Roberta Cuel, University of Trento (Italy) – KIT (Germany)
Topics of the session
• The role of human contributions in the creation of semantic descriptions of digital artifacts.
• Methods and principles for the design of incentive-compatible semantic-annotation technology.
• Case studies:
• TID: Telefónica R&D corporate knowledge
• “Taste it! Try it!” mobile app
Semantic content authoring
• Rely on human inputs:
• Modeling a domain
• Understanding text and media content
• Integrating data sources originating from different contexts
• …
• Motivating users to contribute is essential for semantic technologies to reach critical mass and ensure sustainable growth.
• Realize incentivized semantic applications.
What is the secret to sustainable success?
• Offer a solution to a real problem: the right solution at the right time accounts for at least 50% of success.
Our approach, ideally: field → desk → lab → field
A procedural ordering of methods to develop incentive-compatible applications
Motivations in the Web 2.0
• Motivation and incentives:
• Reciprocity
• Reputation
• Competition
• Altruism
• Self-esteem
• Fun
• Money
Intrinsic / Extrinsic motivations
Kaufman, Schulze, Veit (Mannheim University)
Theories of motivation (from the Latin movere, “to move”)
Content theories of motivation:
• Need theories
• Herzberg’s “two-factor” theory
• McClelland’s achievement-power-affiliation theory
• Job characteristics approach (skill variety, autonomy, …)
Process theories of motivation:
• Reinforcement theory
• Goal-setting theory
• Expectancy theory
• Organizational justice theory
• …
Incentives → Motivation → Performance
Performance = f(ability × motivation)
Psychological meaning: motivation is an internal mental state pertaining to the initiation, direction, persistence, intensity, and termination of behavior.
The incentive analytical tool
[Diagram] The incentive analytical tool profiles an application along four dimensions, each rated high/medium/low:
• Goal: communication level (about the goal of the tasks), participation level (in the definition of the goal), clarity level, identification with the goal
• Tasks: variety of tasks, specificity of tasks, required skills (from trivial/common to highly specific)
• Social structure: from hierarchy-neutral to hierarchical
• Nature of the good being produced: from public good (non-rival, non-exclusive) to private good (rival, exclusive)
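One way to operationalize the diagram is to record a scenario’s ratings as a simple data structure. A minimal sketch follows; the field names paraphrase the diagram, and the example values are illustrative assumptions, not ratings from the talk.

```python
# Hypothetical profile of an application scenario along the incentive
# analytical tool's dimensions; all values are illustrative.
scenario_profile = {
    "goal": {
        "communication_level": "medium",      # about the goal of the tasks
        "participation_level": "low",         # in the definition of the goal
        "clarity_level": "high",
        "identification_with_goal": "medium",
    },
    "tasks": {
        "variety": "medium",
        "specificity": "high",
        "required_skills": "highly specific",  # vs. trivial/common
    },
    "social_structure": "hierarchical",        # vs. hierarchy-neutral
    "nature_of_good": "public",                # non-rival, non-exclusive
}
```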
TWO CASE STUDIES
TID: Telefónica R&D corporate knowledge
“Taste it! Try it!” mobile app for reviewing restaurants and other PoIs (points of interest)
Enterprise Knowledge Management @ TID - Spain
• Services of the intranet portal:
• Document management
• Corporate directories
• Pilot/product/service catalogues
• News
• Bank of ideas
• Blogs, wikis, forums
• Search engines
• Some info:
• 1,200 employees in 7 cities and 3 countries (and growing)
• ~3,050 visits per day, ~56,000 page views (impressions) per day, average visit time: 20 minutes
Field and domain analysis
Domain analysis:
• Site visit; semi-structured, qualitative interviews (communication processes, existing usage practices, problems, tools/solutions)
• Tape recording, transcription
• Data analysis via ex-post categorization
• Focus group discussion
• Usability lab tests and expert walkthroughs
• Lab experiment: two payment schemes
• Field experiment: natural vs. semantic annotation
We need to design the “game” in a way that achieves the desired annotation outcome without distracting employees too much from their main job.
The incentive analytical tool and TID motivations
The Mechanism design exercise in our case study (I)
Interplay of two alternative games:
• Principal-agent game
• No tools to verify that employees perform at their best
• Management can implement various incentives:
• Piece-rate wages (labour-intensive tasks)
• Performance measurement (all levels of tasks)
• Tournaments (internal labour market)
• Public goods game (see the sketch below)
• Semantic content creation is a public good (non-excludable and non-rival)
• The problem of free riding
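To make the free-riding problem concrete, here is a minimal sketch of the standard linear public-goods game; the endowment and multiplier values are illustrative assumptions, not figures from the talk.

```python
# Linear public-goods game: each player keeps what they do not
# contribute and receives an equal share of the multiplied pool.
def payoffs(contributions, endowment=20.0, multiplier=1.6):
    n = len(contributions)
    pool_share = multiplier * sum(contributions) / n
    return [endowment - c + pool_share for c in contributions]

# With 1 < multiplier < n, full contribution is socially optimal,
# but contributing nothing is individually dominant: the free rider
# always earns the most, which is exactly the problem noted above.
print(payoffs([20, 20, 20, 0]))  # [24.0, 24.0, 24.0, 44.0]
```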
The prototype creation
PD workshops and HCI analysis
Lab experiment
36 students
Individual task: annotation of images
Time: 8 mins
Two reward/incentive systems (a toy comparison follows below):
• Pay per tag (PPT): €0.03 per tag
• Winner-takes-all (WTA): €20
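A minimal sketch of the two payment rules, under the assumption that pay-per-tag pays every subject per annotation while winner-takes-all pays only the top annotator; the tag counts are invented.

```python
# Per-subject annotation counts (invented for illustration).
tags = {"s1": 40, "s2": 55, "s3": 25}

# Pay per tag: EUR 0.03 per annotation, paid to everyone.
ppt_payout = {s: round(0.03 * n, 2) for s, n in tags.items()}

# Winner-takes-all: a single EUR 20 prize for the top annotator.
winner = max(tags, key=tags.get)
wta_payout = {s: 20.0 if s == winner else 0.0 for s in tags}

print(ppt_payout)  # {'s1': 1.2, 's2': 1.65, 's3': 0.75}
print(wta_payout)  # {'s1': 0.0, 's2': 20.0, 's3': 0.0}
```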
Some results: in the WTA treatment, 76% of subjects made more annotations than the average number of annotations in the PPT scenario (see the sketch below).
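A minimal sketch of how that statistic is computed, with invented counts:

```python
# Share of WTA subjects whose annotation count exceeds the PPT mean.
ppt = [30, 42, 25, 38]   # annotations per PPT subject (invented)
wta = [55, 61, 20, 47]   # annotations per WTA subject (invented)

ppt_mean = sum(ppt) / len(ppt)
share = sum(n > ppt_mean for n in wta) / len(wta)
print(f"{share:.0%} of WTA subjects beat the PPT average")  # 75% here
```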
Prototype refinement
Incentivizing the tool… making it fun
… harnessing network and reputation effects
• Competitive environment
• Internal labour market
• Reputation in terms of expertise
• The HR department should be involved
Field experiment
Real users and tasks should have:
– practical usefulness for users (search)
– social implications, providing information about people and their performance
Some results: 2,761 annotations, 82% of which are semantic.

                                 Competition   Social
Number of annotations            1589          1172
% of semantic annotations        88.92%        71.84%
Maximum number of annotations    439           262
Annotations of free text         180           326

Rewards: Competition treatment: €200; Social treatment: recognition as daily contributor on Yammer.
Social rewards are as strong as monetary rewards! (Mann-Whitney test; see the sketch below)
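A minimal sketch of the Mann-Whitney comparison the speaker refers to, using scipy and invented per-user annotation counts:

```python
from scipy.stats import mannwhitneyu

# Annotations per user in each treatment (invented for illustration).
competition = [44, 12, 98, 7, 31, 25, 60]
social      = [39, 15, 80, 9, 28, 22, 51]

u, p = mannwhitneyu(competition, social, alternative="two-sided")
# A large p-value (no significant difference) is consistent with the
# claim that the social reward performs as well as the monetary one.
print(f"U = {u}, p = {p:.3f}")
```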
Taste it! Try it!
Goals of the tool:
• provide semantically-enabled reviews
Features:
• sufficiently easy to create, for end-user acceptance
• keep the user entertained: Facebook integration and badges
• offer a personalized, semantic, context-aware recommendation process
Research context: (ontology-based) collaborative filtering and user clustering; structuring and disambiguation of the reviews by using domain knowledge and incentives (a minimal sketch follows below).
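For readers unfamiliar with the technique, here is a minimal sketch of plain user-based collaborative filtering, the family of methods this research builds on; the users, places, and ratings are invented, and the ontology-based and clustering refinements are omitted.

```python
import math

ratings = {  # user -> {place: rating}
    "ann": {"trattoria": 5, "sushi_bar": 2, "pizzeria": 4},
    "bob": {"trattoria": 4, "sushi_bar": 1},
    "eve": {"sushi_bar": 5, "pizzeria": 1},
}

def similarity(u, v):
    """Cosine similarity over the places both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = (math.sqrt(sum(ratings[u][i] ** 2 for i in common))
           * math.sqrt(sum(ratings[v][i] ** 2 for i in common)))
    return num / den

def predict(user, place):
    """Similarity-weighted average of other users' ratings."""
    pairs = [(similarity(user, v), r[place])
             for v, r in ratings.items() if v != user and place in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None

# Recommend by predicting bob's rating for a place he has not reviewed.
print(predict("bob", "pizzeria"))  # ~2.5: ann's 4 and eve's 1, weighted
```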
The application
Badges
A scenario
Experiment
Hypotheses:
• Points vs. badges
• No information about others vs. information
• No information about oneself vs. information

(6 groups) × (~25 students) = ~150 students
• Group 0: points, piece rate, no info on others, private info, web-based
• Group 1: points, piece rate, median shown, public info
• Group 2: points, piece rate, neighborhood shown, public info
• Group 3: badges, piece rate, no info on others, private info, web-based
• Group 4: badges, piece rate, median shown, public info (treatment)
• Group 5: badges, piece rate, neighborhood shown, public info (treatment)

Points: max. 8 for creating reviews and 2 points for filling in the questionnaire
Results per group:

         Average   Average no.  Average no. of        Average  Average no. of
         score     of reviews   semantic annotations  time     actions (×10)
Group 0  7.4223    11.41        4.41                  6.6      4.85
Group 1  7.4904    12.08        3.76                  5.26     5.71
Group 2  10.3607   15.44        7.26                  4.83     7.14
Group 3  7.6246    12.08        4.98                  4.26     10.42
Group 4  7.7612    12.32        4.48                  6.46     8.24
Group 5  8.1615    12           5.87                  5.58     11.51
As proposed in game mechanics, showing the neighborhood’s performance is more effective than showing the median, which is currently the “top” approach, at least in published economics papers ;-)