Trust and Reputation in Multi-Agent Systems
14th European Agent Systems Summer School (EASSS 2012), Valencia, Spain
Dr. Jordi Sabater-Mir, IIIA – Artificial Intelligence Research Institute, CSIC – Spanish National Research Council
Dr. Javier Carbó
Outline
• Introduction
• Approaches to control the interaction
• Computational reputation models
  – eBay
  – ReGreT
• A cognitive perspective to computational reputation models
  – A cognitive view on Reputation
  – Repage, a computational cognitive reputation model
  – [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture
  – Arguing about reputation concepts
“A complete absence of trust would prevent [one] even getting up in the morning.”
Niklas Luhmann, 1979
Trust
A couple of definitions that I like:
“Trust begins where knowledge [certainty] ends: trust provides a basis for dealing with uncertain, complex, and threatening images of the future.” (Luhmann, 1979)
“Trust is the outcome of observations leading to the belief that the actions of another may be relied upon, without explicit guarantee, to achieve a goal in a risky situation.” (Elofson, 2001)
Trust
“The subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends” [Gambetta]
“An expectation about an uncertain behaviour” [Marsh]
“The decision and the act of relying on, counting on, depending on [the trustee]” [Castelfranchi & Falcone]
Epistemic (trust as an expectation or belief) vs. motivational (trust as a decision and an act)
"After death, a tiger leaves behind his skin, a man his reputation."
Vietnamese proverb
Reputation
“What a social entity says about a target regarding his/her behaviour.”
A social entity: a set of individuals plus a set of social relations among these individuals, or properties that identify them as a group in front of its own members and society at large.
• The social evaluation linked to the reputation is not necessarily a belief of the issuer.
• Reputation cannot exist without communication.
• It is always associated with a specific behaviour/property.
What is reputation good for?
• Reputation is one of the elements that allows us to build trust.
• Reputation also has a social dimension. It is useful not only for the individual but also for society, as a mechanism for social order.
But... why do we need computational models of those concepts?
What we are talking about...
Mr. Yellow: trust based on direct experiences ("two years ago...")
Mr. Yellow: trust based on third-party information (from Mr. Pink, Mr. Green)
Mr. Yellow: trust based on reputation
Characteristics of computational trust and reputation mechanisms
• Each agent is a norm enforcer and is also under surveillance by the others. No central authority is needed.
• Their nature allows them to reach where laws and central authorities cannot.
• Punishment is usually based on ostracism; therefore, exclusion must be a real punishment for the outsider.
• Bootstrap problem.
• Not all kinds of environments are suitable for these mechanisms; a social environment is necessary.
Approaches to control the interaction
Different approaches to control the interaction
• Security approach: agent identity validation; integrity and authenticity of messages; ...
• Institutional approach
• Social approach: trust and reputation mechanisms are at this level.
These approaches are complementary and cover different aspects of interaction.
Computational reputation models
Classification dimensions
• Paradigm type
  • Mathematical approach
  • Cognitive approach
• Information sources
  • Direct experiences
  • Witness information
  • Sociological information
  • Prejudice
• Visibility types
  • Subjective
  • Global
• Model’s granularity
  • Single context
  • Multi context
• Agent behaviour assumptions
  • Cheating is not considered
  • Agents can hide or bias the information but they never lie
• Type of exchanged information
Subjective vs Global
• Global
  • The reputation is maintained as a centralized resource.
  • All the agents in that society have access to the same reputation values.
Advantages:
• Reputation information is available even to a newcomer, and does not depend on how well connected you are or how good your informants are.
• Agents can be simpler because they don’t need to calculate reputation values, just use them.
Disadvantages:
• The particular mental states of the agent, or its singular situation, are not taken into account when reputation is calculated. Therefore, a global view is only possible when we can assume that all the agents think and behave similarly.
• It is not always desirable for an agent to make public the information about its direct experiences, or to submit that information to an external authority.
• Therefore, a high trust in the central institution managing reputation is essential.
Subjective vs Global
• Subjective
  • The reputation is maintained by each agent and is calculated according to its own direct experiences, information from its contacts, its social relations...
Advantages:
• Reputation values can be calculated taking into account the current state of the agent and its individual particularities.
Disadvantages:
• The models are more complex, usually because they can use extra sources of information.
• Each agent has to worry about gathering the information needed to build reputation values.
• Less information is available, so the models have to be more accurate to avoid noise.
A global reputation model: eBay
A model oriented to supporting trust between buyer and seller:
• Completely centralized.
• Buyers and sellers may leave comments about each other after transactions.
• Comment: a line of text + a numeric evaluation (-1, 0, +1).
• Each eBay member has a Feedback score that is the summation of the numerical evaluations.
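The feedback score described above can be sketched in a few lines. This is an illustrative reconstruction of the idea, not eBay's actual code:

```python
def feedback_score(evaluations):
    """Sum of numeric evaluations, each in {-1, 0, +1} (illustrative)."""
    assert all(e in (-1, 0, 1) for e in evaluations)
    return sum(evaluations)

# Example: three positive, one neutral, one negative comment.
print(feedback_score([1, 1, 1, 0, -1]))  # 2
```

A single global number like this only becomes reliable with many opinions, which is exactly the "dilution" argument made below.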
eBay model
Specifically oriented to scenarios with the following characteristics:
• A lot of users (we are talking about millions)
• Few chances of repeating an interaction with the same partner
• Easy to change identity
• Human oriented
• Considers reputation as a global property and uses a single value that is not dependent on the context.
• A great number of opinions that “dilute” false or biased information is the only way to increase the reliability of the reputation value.
A subjective reputation model: ReGreT
What is the ReGreT system?
It is a modular trust and reputation system oriented to complex e-commerce environments where social relations among individuals play an important role.
The ReGreT system (architecture diagram): the Trust module combines Direct Trust with the Reputation model; the Reputation model comprises Witness reputation (filtered through the Credibility module), Neighbourhood reputation and System reputation; the modules draw on the ODB, IDB and SDB databases.
Outcome:
The initial contract
– to take a particular course of action
– to establish the terms and conditions of a transaction
AND the actual result of the contract.
Outcomes and Impressions
Example:
Contract:    Price_c = 2000   Quality_c = A   Quantity_c = 300
Fulfillment: Price_f = 2000   Quality_f = C   Quantity_f = 295
Contract + fulfillment = outcome.
From a single outcome, different impressions can be generated from different points of view (e.g. offers_good_prices, maintains_agreed_quantities): Imp(o, g1), Imp(o, g2), Imp(o, g3).
Impression: the subjective evaluation of an outcome from a specific point of view.
Direct Trust
A trust relationship calculated directly from an agent’s outcomes database (ODB):

DT_{a→b}(g) = Σ_{o_i ∈ ODB_g^{a→b}} ρ(t, t_i) · Imp(o_i, g)

where ρ(t, t_i) = f(t_i, t) / Σ_{o_j ∈ ODB} f(t_j, t) is a normalized recency weight, with f(t_i, t) = t_i / t.

DT reliability. Reliability of the value is based on:
• Number of outcomes (No): No(ODB_g^{a→b}, itm), with itm = 10 the number of outcomes considered enough for a fully reliable value.
• Deviation (Dv): the greater the variability in the rating values, the more volatile the other agent will be in the fulfilment of its agreements.

RL_{a→b}(DT) = No(ODB_g^{a→b}) · (1 − Dv(ODB_g^{a→b}))
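A minimal sketch of the direct-trust computation above. The exact recency function, the saturating shape of No and the scaling of Dv are assumptions for illustration; the ReGreT papers give the precise definitions:

```python
import math

def recency_weights(times, now):
    """rho(t, t_i): normalized recency weights, assuming f(t_i, t) = t_i / t."""
    raw = [t_i / now for t_i in times]
    s = sum(raw)
    return [r / s for r in raw]

def direct_trust(impressions, times, now):
    """DT_{a->b}(g): recency-weighted mean of impression ratings in [-1, 1]."""
    return sum(w * imp for w, imp in zip(recency_weights(times, now), impressions))

def no_outcomes(n, itm=10):
    """No: grows with the number of outcomes and saturates once itm is reached."""
    return math.sin(math.pi * n / (2 * itm)) if n < itm else 1.0

def deviation(impressions, times, now):
    """Dv: recency-weighted mean deviation from DT, scaled into [0, 1]
    (ratings lie in [-1, 1], so the raw deviation is at most 2)."""
    dt = direct_trust(impressions, times, now)
    w = recency_weights(times, now)
    return sum(wi * abs(imp - dt) for wi, imp in zip(w, impressions)) / 2

def dt_reliability(impressions, times, now, itm=10):
    """RL_{a->b}(DT) = No * (1 - Dv)."""
    return no_outcomes(len(impressions), itm) * (1 - deviation(impressions, times, now))
```

With few outcomes No stays small, so even perfectly consistent ratings yield a low reliability, which matches the intuition behind the itm parameter.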
Problems of witness information:
• Can be false.
• Can be incomplete.
• It may suffer from the “correlated evidence” problem.
Witness reputation
The reputation that an agent builds of another agent based on the beliefs gathered from society members (witnesses).
[Sociogram: trade relations among agents u1–u9, labelled with the products they trade (a1, a2, b1, b2, c1, c2, d1, d2)]
[Sociogram: cooperation relations among agents u1–u9]
Cooperation: a big exchange of sincere information and some kind of predisposition to help if it is possible.
[Sociogram: competition relations among agents u1–u9]
Competition: agents tend to use all the available mechanisms to take some advantage over their competitors.
Witness reputation
Step 1: Identifying the witnesses
• Initial set of witnesses: agents that have had a trade relation with the target agent.
[Sociogram: trade relations; the target agent’s trade partners form the initial set of witnesses]
Grouping agents with frequent interactions among them, and considering each of these groups as a single source of reputation values:
• minimizes the correlated evidence problem;
• reduces the number of queries to agents that will probably give us more or less the same information.
To group agents, ReGreT relies on sociograms.
Heuristic to identify groups and the best agents to represent them (applied to the cooperation sociogram):
1. Identify the components of the graph.
2. For each component, find the set of cut-points.
3. For each component that does not have any cut-point, select a central point (the node with the largest degree).
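The three-step heuristic can be sketched over a sociogram represented as an adjacency map. This is an illustrative implementation, not ReGreT's code:

```python
def components(adj):
    """Connected components of an undirected graph {node: set(neighbours)}."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cut_points(adj):
    """Articulation points via the classic DFS low-link algorithm."""
    disc, low, cuts, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1:
            cuts.add(u)

    for n in adj:
        if n not in disc:
            dfs(n, None)
    return cuts

def representatives(adj):
    """Cut-points of each component, or its highest-degree node if none."""
    cuts, reps = cut_points(adj), []
    for comp in components(adj):
        local = cuts & comp
        reps.extend(sorted(local) if local else [max(comp, key=lambda n: len(adj[n]))])
    return reps
```

For a path a–b–c, b is the only cut-point and would be queried on behalf of the whole group.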
Witness reputation
Step 1: Identifying the witnesses
• Initial set of witnesses: agents that have had a trade relation with the target agent.
• Grouping and selecting the most representative witnesses.
Step 2: Who can I trust?
For each selected witness, compute a trust value and its reliability, e.g. Trust_{u2→b2} with RL(Trust_{u2→b2}), and Trust_{u5→b2} with RL(Trust_{u5→b2}).
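One simple way to aggregate the selected witnesses' reports is a credibility-weighted mean of the trust values they communicate. This is an illustrative sketch; ReGreT's actual aggregation also propagates the witnesses' reliabilities:

```python
def witness_reputation(reports):
    """reports: list of (reported_value, credibility) pairs.
    Returns the credibility-weighted mean of the reported values."""
    total = sum(cr for _, cr in reports)
    if total == 0:
        return 0.0  # no credible witnesses: neutral value
    return sum(v * cr for v, cr in reports) / total
```

A witness with zero credibility then contributes nothing, while equally credible witnesses are simply averaged.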
Credibility model
Two methods are used to evaluate the credibility of witnesses (witnessCr): social relations (socialCr) and past history (infoCr).
• socialCr(a,w,b): the credibility that agent a assigns to agent w when w is giving information about b, considering the social structure among w, b and a itself.
[Diagrams: the possible configurations of cooperative and competitive relations among w (witness), b (target agent) and a (source agent)]
Credibility model
ReGreT uses fuzzy rules to calculate how the structure of social relations influences the credibility of the information. Example:
IF coop(w,b) is h THEN socialCr(a,w,b) is vl
[Fuzzy sets over [0,1]: for the antecedents, low (l), moderate (m), high (h); for the consequent socialCr, very_low (vl), low (l), moderate (m), high (h), very_high (vh)]
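A toy version of such a fuzzy rule base. The triangular membership functions, the mirror rule and the centroid-style defuzzification are assumptions for illustration, not ReGreT's exact sets:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fuzzy sets on [0, 1] (the shapes used in ReGreT may differ).
COOP = {"low": (-0.5, 0.0, 0.5), "moderate": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}
CRED = {"very_low": (-0.25, 0.0, 0.25), "moderate": (0.25, 0.5, 0.75),
        "very_high": (0.75, 1.0, 1.25)}

def social_cr(coop_wb):
    """IF coop(w,b) is high THEN socialCr is very_low, plus the mirror rule."""
    rules = [("high", "very_low"), ("low", "very_high")]
    num = den = 0.0
    for ante, cons in rules:
        fire = tri(coop_wb, *COOP[ante])  # degree to which the rule fires
        num += fire * CRED[cons][1]       # crude centroid: peak of consequent
        den += fire
    return num / den if den else 0.5
```

A witness that cooperates strongly with the target (coop = 1.0) thus gets credibility near 0, encoding the suspicion that friends speak well of each other.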
Neighbourhood reputation
The trust in the agents that are in the “neighbourhood” of the target agent, and their relations with it, are the elements used to calculate what we call Neighbourhood reputation. ReGreT uses fuzzy rules to model this reputation, e.g.:

IF DT_{a→n_i}(offers_good_quality) is X AND coop(b, n_i) is low
THEN R_{a→b}(offers_good_quality) is X

IF RL(DT_{a→n_i}(offers_good_quality)) is X’ AND coop(b, n_i) is Y’
THEN RL(R_{a→b}(offers_good_quality)) is T(X’, Y’)
System reputation
The idea behind System reputation is to use the common knowledge about social groups, and the role the agent is playing in the society, as a mechanism to assign reputation values to other agents. The knowledge necessary to calculate a system reputation is usually inherited from the group or groups to which the agent belongs.
Trust
If the agent has a reliable direct trust value, it will use that as a measure of trust. If that value is not reliable enough, it will use reputation.
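That selection policy reduces to a one-line rule; the threshold value here is an illustrative assumption:

```python
def trust(direct_trust, dt_reliability, reputation, threshold=0.5):
    """Prefer the direct-trust value when its reliability is high enough;
    otherwise fall back on reputation (threshold is illustrative)."""
    return direct_trust if dt_reliability >= threshold else reputation
```

In ReGreT the switch could also be gradual (a reliability-weighted blend), but the hard threshold captures the idea stated above.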
A cognitive perspective to computational reputation models
• A cognitive view on Reputation
• Repage, a computational cognitive reputation model
• [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture
• Arguing about reputation concepts
Social evaluation
• A social evaluation, as the name suggests, is the evaluation by a social entity of a property related to a social aspect.
• Social evaluations may concern physical, mental, and social properties of targets.
• A social evaluation involves at least three sets of agents:
  – a set E of agents who share the evaluation (evaluators)
  – a set T of evaluation targets
  – a set B of beneficiaries
The different sets may intersect totally, partially, etc. An evaluator e (e in E) may evaluate a target t (t in T) with regard to a state of the world that is in the interest of a beneficiary b (b in B), even though b is not necessarily aware of it.
Example: the quality of TV programs during children’s time slots.
Image and Reputation
• Both are social evaluations.
• They concern other agents’ (targets’) attitudes toward socially desirable behaviour, but... whereas image consists of a set of evaluative beliefs about the characteristics of a target, reputation concerns the voice that is circulating about that target.
Reputation in artificial societies [Rosaria Conte, Mario Paolucci]
Image
B(φ): the agent has accepted φ as something true, and its decisions from now on will take this into account.
Image is the result of an internal reasoning over different sources of information that leads the agent to create a belief about the behaviour of another agent. It is a social evaluation:
“An evaluative belief; it tells whether the target is good or bad with respect to a given behaviour” [Conte & Paolucci]
Reputation
• A voice is something that “is said”, a piece of information that is being transmitted.
• Reputation: a voice about a social evaluation that is recognised by the members of a group to be circulating among them.
B(S(φ)): the agent believes that the social evaluation φ is communicated (“said”).
• This does not imply that the agent believes that φ is true.
Reputation, implications:
• Because it is not implicit that it believes the associated social evaluation, the agent that spreads a reputation takes no responsibility for that social evaluation (the responsibility associated with the action of spreading it is another matter).
• This fact allows reputation to circulate more easily than image (less/no fear of retaliation).
• Notice that if an agent believes “what people say”, image and reputation collapse.
• This distinction has important advantages from a technical point of view.
Gossip
• For reputation to exist, it has to be transmitted. We cannot have reputation without communication.
• Gossip currently has the meaning of idle talk or rumour, especially about the personal or private affairs of others, and usually has a bad connotation. But in fact it is an essential element of human nature.
• The antecedent of gossip is grooming.
• Studies from evolutionary psychology have found gossip to be very important as a mechanism to spread reputation [Sommerfeld et al. 07, Dunbar 04].
• Gossip and reputation complement social norms: reputation evolves along with implicit norms to encourage socially desirable conduct, such as benevolence or altruism, and discourage socially unacceptable conduct, like cheating.
RepAge
What is the RepAge model?
It is a reputation model that evolved from the cognitive theory of Conte and Paolucci. The model is designed with special attention to the internal representation of the elements used to build images and reputations, as well as the inter-relations of these elements.
RepAge memory
[Diagram: the RepAge memory holds predicates (P) organized in levels; predicates feed into Img and Rep elements, each carrying a value and a strength (e.g. strength: 0.6)]
What do you mean by “properly”?
Current models: the trust & reputation system sits inside the agent as a black box. Inputs arrive through communication, and the system’s output is a value consumed by the planner and the decision mechanism. The integration is purely reactive.
The next generation? The trust & reputation system is fully integrated with the planner, the decision mechanism and the communication module: not only reactive... proactive.
BDI model
• Very popular model in the multiagent community.
• It has its origins in the theory of human practical reasoning [Bratman] and the notion of intentional systems [Dennett].
• The main idea is that we can talk about computer programs as if they have a “mental state”.
• Specifically, the BDI model is based on three mental attitudes: Beliefs, what the agent thinks is true about the world; Desires, world states the agent would like to achieve; Intentions, world states the agent is putting effort into achieving.
BDI model
• The agent is described in terms of these mental attitudes.
• The decision-making model underlying the BDI model is known as practical reasoning.
• In short, practical reasoning is what allows the agent to go from beliefs, desires and intentions to actions.
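A minimal sketch of such a practical-reasoning loop. The class, the rule and all the names are invented for illustration:

```python
class BDIAgent:
    """Minimal BDI-style practical-reasoning loop (illustrative names)."""

    def __init__(self, beliefs):
        self.beliefs = set(beliefs)   # what the agent thinks is true
        self.desires = set()          # world states it would like to achieve
        self.intentions = []          # world states it is committed to achieving

    def generate_desires(self):
        # Domain-specific option generation, e.g. desire to buy goods
        # once a trustworthy seller is known (invented rule).
        if "trusted_seller_known" in self.beliefs:
            self.desires.add("acquire_goods")

    def deliberate(self):
        # Commit to any desire not yet adopted as an intention.
        for d in sorted(self.desires):
            if d not in self.intentions:
                self.intentions.append(d)

    def step(self, percepts):
        self.beliefs |= set(percepts)  # naive belief revision: just add
        self.generate_desires()
        self.deliberate()
        return self.intentions
```

A reputation model such as RepAge would plug in at the belief level, so that images and reputations can generate and filter desires and intentions.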
Multicontext systems
• Units: structural entities representing the main architecture components. Each unit has a single logic associated with it.
• Logics: declarative languages, each with a set of axioms and a number of rules of inference.
• Theories: sets of formulae written in the logic associated with a unit.
• Bridge rules: rules of inference which relate formulae in different units.
Example bridge rule: U1:b, U2:d / U3:a. When formula b is deduced in unit U1 and formula d in unit U2, formula a is added to unit U3.
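The bridge rule above can be sketched as follows, with units encoded simply as sets of formulas (an illustrative encoding):

```python
# Units as sets of formulas; bridge rule U1:b, U2:d / U3:a.
units = {"U1": {"b"}, "U2": {"d"}, "U3": set()}

def apply_bridge_rule(units, premises, conclusion):
    """premises: [(unit, formula), ...]; conclusion: (unit, formula).
    Adds the conclusion formula when every premise holds in its unit."""
    if all(f in units[u] for u, f in premises):
        u, f = conclusion
        units[u].add(f)

apply_bridge_rule(units, [("U1", "b"), ("U2", "d")], ("U3", "a"))
print(units["U3"])  # {'a'}
```

Real multicontext systems deduce premises inside each unit's own logic; here membership in the set stands in for deduction.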
Repage integration in a BDI architecture
• BC-LOGIC
• Grounding Image and Reputation to BC-Logic
• Desire and Intention context
• Generating Realistic Desires
• Generating Intentions
Arguing about Reputation Concepts
Goal: allow agents to participate in argumentation-based dialogs regarding reputation elements in order to:
- Decide on the acceptance of a communicated social evaluation based on its reliability. “Is the argument associated with a communicated social evaluation (and according to my knowledge) strong enough to consider its inclusion in the knowledge base of my reputation model?”
- Help in the process of trust alignment.
What we need:
• A language that allows the exchange of reputation-related information.
• An argumentation framework that fits the requirements imposed by the particular nature of reputation.
• A dialog protocol that allows agents to establish information-seeking dialogs.
The language: LRep
LREP: a first-order sorted language with special predicates representing the typology of social evaluations we use: Img, Rep, ShV, ShE, DE, Comm.
• SF: set of constant formulas. Allows LREP formulas to be nested in communications.
• SV: set of evaluative values. Example: linguistic labels mapped onto the scale {0, 1, 2, 3, 4} by a function f.
The reputation argumentation framework
• Given the nature of social evaluations (their values are graded), we need an argumentation framework that allows us to weight the attacks. Example: we have to be able to differentiate between Img(j,seller,VG) being attacked by Img(j,seller,G) and being attacked by Img(j,seller,VB).
• Specifically, we instantiate the Weighted Abstract Argumentation Framework defined in: P.E. Dunne, A. Hunter, P. McBurney, S. Parsons, and M. Wooldridge, ‘Inconsistency tolerance in weighted argument systems’, AAMAS’09, pp. 851–858, 2009.
• Basically, this framework introduces the notions of strength and inconsistency budgets (defined as the amount of “inconsistency” that the system can tolerate regarding attacks) into a classical Dung framework.
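An illustrative sketch of the inconsistency-budget idea: a set of arguments is tolerated if the total weight of the attacks occurring inside it stays within the budget. The function and the weights are invented for illustration; see the Dunne et al. paper for the formal definitions:

```python
def conflict_free_within_budget(args, attacks, budget):
    """A set of arguments is tolerated when the total weight of the
    attacks occurring inside it does not exceed the inconsistency budget."""
    internal = sum(w for (atk, tgt), w in attacks.items()
                   if atk in args and tgt in args)
    return internal <= budget

# Illustrative weights: the distance on the {0..4} value scale, so
# Img(j,seller,G) attacks Img(j,seller,VG) weakly (1) and
# Img(j,seller,VB) attacks it strongly (4).
attacks = {("Img(j,seller,G)", "Img(j,seller,VG)"): 1.0,
           ("Img(j,seller,VB)", "Img(j,seller,VG)"): 4.0}
```

With a budget of 1.0, the mild disagreement VG-vs-G can coexist, while the strong disagreement VG-vs-VB cannot.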
Building Argumentative Theories
• Reputation theory: the set of ground elements (expressed in LREP) gathered by j through interactions and communications, together with a consequence relation (the reputation model), which is specific to each agent.
• Argumentative theory: built from the reputation theory, using a simple shared consequence relation; this is the level at which argumentation takes place.
• Attack and strength: the strength of an attack is derived from the evaluative values, mapped onto the scale {0, 1, 2, 3, 4} by f.
Example of argumentative dialog
• Agent i: proponent; agent j: opponent.
• Roles: seller; sell(q), quality; sell(dt), delivery time; Inf, informant.
• Each agent is equipped with a Reputation Weighted Argument System.
[Dialog slides: i and j exchange arguments about the seller; each attack carries a strength, and inconsistency budgets determine which attacks can be tolerated]
Outline
PART II: Trust Computing Approaches
• Security
• Institutional
• Social
Evaluation of Trust and Reputation Models
EASSS 2010, Saint-Etienne, France 111
GIAA – Group of Applied Artificial Intelligence Univ. Carlos III de Madrid
Dr. Javier Carbó
Trust in Information Security
Same Word, Different World
The security approach tackles the “hard” problems of trust. It views trust as an objective, universal and verifiable property of agents. Its trust problems have solutions:
• False identity
• Reading/modification of messages by third parties
• Repudiation of messages
• Certificates of accomplishing tasks/services according to standards
An example: Public Key Infrastructure
[Diagram with a Registration Authority, a Certificate Authority and an LDAP directory: 1. Client identity; 2. Private key sent; 3. Public key sent; 4. Publication of certificate; 5. Certificate sent]
Trust in I&S, limitations
Their trust relies on central entities:
– Authorities, Trusted Third Parties
– Partially solved using hierarchies of TTPs
They ignore part of the problem:
– The top authority has to be trusted by some other means
Their scope is far away from real-life trust issues:
– lies, defection, collusions, social norm violations, ...
Institutional approach
Institutions have successfully regulated human societies for a long time:
- created to achieve particular goals while complying with norms;
- responsible for defining the rules of the game (norms), enforcing them and assessing penalties in case of violation.
Examples: auction houses, parliaments, stock exchange markets, ...
The institutional approach is focused on the existence of organizations:
• Providing an execution infrastructure
• Controlling access to resources
• Sanctioning/rewarding agents’ behaviors
An example: e-institutions
Institutional approach, limitations
They view trust as a partially objective, local and verifiable property of agents.
Intrusive control over the agents (modification of the execution resources, process killing, ...).
They require a shared agreement defining what is expected (norm compliance, case law, ...).
They require a central entity and global supervision:
– Repositories and access-control entities have to be centralised
– Low scalability if every agent is observed by the institution
It is assumed that the institution itself is trusted.
Social approach
The social approach consists of the idea of a self-organized society (Adam Smith’s invisible hand):
• Each agent has its own evaluation criteria of what is expected: no social norms, just individual norms.
• Each agent is in charge of rewards and punishments (often in terms of more/fewer future cooperative interactions).
• No central entity at all; it consists of a completely distributed social control of malicious agents.
• Trust as an emergent property.
• Avoids the privacy issues caused by centralized approaches.
Social approach, limitations
Unlimited, but undefined and unexpected, trust scope: trust is viewed as a subjective, local and unverifiable property of agents.
Exclusion/isolation is the typical punishment for malicious agents, but it is difficult to enforce in open and dynamic societies of agents.
Malicious behaviors may still occur; they are supposed to be prevented by the lack of incentives and by punishments.
It is difficult to define which domain and society are appropriate to test this social approach.
Ways to evaluate any system
• Integration in real applications
• Using real data from public datasets
• Using realistic data generated artificially
• Using ad-hoc simulated data with no justification/motivation
• None of the above
Ways to evaluate T&R in agent systems
• Integration of T&R in real agent applications
• Using real T&R data from public datasets
• Using realistic T&R data generated artificially
• Using ad-hoc simulated data with no justification/motivation
• None of the above
Real Applications using T&R in an agent system
• What real application are we looking for?
• Trust and reputation:
– A system that uses (for something) and exchanges subjective opinions about other participants → Recommender Systems
• Agent system:
– A distributed view, where no central entity collects, aggregates and publishes a final valuation → ???
Real Applications using T&R in an agent system
• Desiderata of application domains:
(To be filled by students)
Real data & public datasets
• Assuming real agent applications exist, would the data be publicly available?
– Privacy concerns
– Lack of incentives to save data over time
– Distribution of data. A Heisenberg-style uncertainty principle: if users knew their subjective opinions would be collected by a central entity, they would not behave as if their opinions had just a private (supposed-to-be friendly) reader.
• No agents, no distribution → public datasets from recommender systems
A view on privacy concerns
• Anonymity: use of arbitrary/secure pseudonyms.
• Using concordance: similarity between users within a single context, the mean of differences when rating a set of items; users tend to agree. (Private Collaborative Filtering using estimated concordance measures, N. Lathia, S. Hailes, L. Capra, 2007)
• Secure pair-wise comparison of fuzzy ratings. (Introducing newcomers into a fuzzy reputation agent system, J. Carbo, J.M. Molina, J. Davila, 2002)
Real Data & Public Datasets
• MovieLens, www.grouplens.org: Two datasets:
– 100,000 ratings for 1682 movies by 943 users.
– 1 million ratings for 3900 movies by 6040 users.
• These are the “standard” datasets that many recommendation system papers use in their evaluation
My paper with MovieLens
• I selected users who had rated 70 or more movies, and we also selected the movies that were evaluated more than 35 times, in order to avoid the sparsity problem.
• Finally we had 53 users and 28 movies.
• The average number of votes per user is approximately 18, so the sparsity of the selected set of users and movies is under 35%.
“Agent-based collaborative filtering based on fuzzy recommendations”, J. Carbó, J.M. Molina, IJWET v1 n4, 2004.
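The filtering described above can be sketched as follows. This is illustrative: the thresholds come from the text, and the data shown in the test is synthetic:

```python
from collections import Counter

def dense_subset(ratings, min_user=70, min_movie=36):
    """Keep ratings by users with at least min_user ratings, for movies
    rated at least min_movie times (thresholds taken from the text:
    '70 or more' per user, 'more than 35' per movie)."""
    by_user = Counter(u for u, m, v in ratings)
    by_movie = Counter(m for u, m, v in ratings)
    return [(u, m, v) for u, m, v in ratings
            if by_user[u] >= min_user and by_movie[m] >= min_movie]
```

Note this single pass only approximates the goal: removing sparse users changes the movie counts, so a strict solution would iterate until a fixed point.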
Real Data & Public Datasets
BookCrossing (BX) dataset:
• www.informatik.uni-freiburg.de/~cziegler/BX
• collected by Cai-Nicolas Ziegler in a 4-week crawl (August / September 2004) from the Book-Crossing community.
• It contains 278,858 users providing 1,149,780 ratings (explicit / implicit) about 271,379 books.
Real Data & Public Datasets
Last.fm Dataset
• top artists played by all users:
– contains <user, artist-mbid, artist-name, total-plays>
– tuples for ~360,000 users about 186,642 artists.
• full listening history of 1000 users:
– Tuples of <user-id, timestamp, artist-mbid, artist-name, song-mbid, song-title>
• Collected by Oscar Celma, Univ. Pompeu Fabra
• www.dtic.upf.edu/~ocelma/MusicRecommendationDataset
Real Data & Public Datasets
Jester Joke Data Set:
• Ken Goldberg from UC Berkeley released a dataset from Jester Joke Recommender System.
• 4.1 million continuous ratings (-10.00 to +10.00) of 100 jokes from 73,496 users.
• www.ieor.berkeley.edu/~goldberg/jester-data/
• It differentiates itself from other datasets by having a much smaller number of rateable items.
Real Data & Public Datasets
Epinions dataset, collected by P. Massa:
• in a 5-week crawl (November/December 2003) from the Epinions.com
• Not just ratings about items, also trust statements:
– 49,290 users who rated a total of
– 139,738 different items at least once, writing 664,824 reviews.
– 487,181 issued trust statements.
• It contains only positive trust statements, not negative ones
Real Data & Public Datasets
Advogato: www.trustlet.org
• a weighted dataset. Opinions aggregated (centrally) on a 3 levels base, Apprentice, Journeyer, and Master
• Tuples of: minami -> polo [level="Journeyer"];
• Used to test trust propagation in social networks (asuming trust transitivity).
• Trust metric (by P. Massa) uses this information in order to assign to every user a final certification level aggregating weighted opinions.
Real Data & Public Datasets
MoviePilot dataset: www.moviepilot.com
• This dataset contains information related to concepts from the world of cinema, e.g. single movies, movie universes (such as the world of Harry Potter movies), upcoming details (trailers, teasers, news, etc.)
• RecSysChallenge: a live evaluation session will take place where algorithms trained on offline data are evaluated online, on real users.
Mendeley dataset: www.mendeley.com
• recommendations to users about scientific papers that they might be interested in.
Real Data & Public Datasets
• Public datasets come from recommender systems: no agents, no distribution.
• Authors have to distribute the opinions to participants in some way.
• Ratings about items, not trust statements.
• Ratio of # of ratings to # of items too low.
• Ratio of # of ratings to # of users too low.
• No time-stamps.
• Papers intend to be based on real data, but the required transformation from centralized to distributed aggregation distorts the reality of these data.
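The low ratings-to-items and ratings-to-users ratios can be checked directly using the Epinions figures quoted on an earlier slide:

```python
# Epinions figures from the slides: 49,290 users, 139,738 items, 664,824 reviews.
n_users, n_items, n_ratings = 49_290, 139_738, 664_824

ratings_per_item = n_ratings / n_items        # ≈ 4.8 ratings per item
ratings_per_user = n_ratings / n_users        # ≈ 13.5 ratings per user
density = n_ratings / (n_users * n_items)     # filled fraction of the user-item matrix

print(f"ratings/item  = {ratings_per_item:.1f}")
print(f"ratings/user  = {ratings_per_user:.1f}")
print(f"matrix density = {density:.6%}")
```

At under 0.01% density, most user-item pairs have no rating at all, which is what makes direct evaluation of trust decisions on such data so hard.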
Realistic Data
• We need to generate realistic data to test trust and reputation in agent systems.
• Several technical/design problems arise:
– Which # of users, ratings and items do we need?
– How dynamic should the society of agents be?
• But the hardest part is the psychological/sociological one:
– How do individuals take trust decisions? Which types of individuals are there?
– How does a real society of humans trust? How many individuals of each type belong to a real human society?
Realistic Data
• Large-scale simulation with Netlogo (http://ccl.northwestern.edu/netlogo/)
• Others: MASON (https://mason.dev.java.net/), RePast (http://repast.sourceforge.net/)
• But these are mainly ad hoc simulations which are difficult for third parties to repeat.
• Many of them use unrealistic agents with a binary altruist/egoist behaviour based on game-theoretic views.
Examples of Ad Hoc Simulations
• Convergence of the reputation image to the real behaviour of agents. Static behaviours, no recommendations, agents just consume/provide services. Worst case.
• Maximum influence of cooperation. Free and honest recommendations from every agent based on consumed services. Best case.
• Inclusion of dynamic behaviours, different % of malicious agents in the society, collusions between recommenders and providers, etc. Compare results with the previous ones.
“Avoiding malicious agents using fuzzy recommendations” J. Carbo, J. M. Molina, J. Dávila. Journal of Organizational
Computing & Electronic Commerce, vol. 17, num. 1
Technical/Design Problems to generate simulated data
• Lessons learned from the ART testbed experience.
• http://megatron.iiia.csic.es/art-testbed/
• A testbed would help to compute fair comparisons: “Researchers can perform easily-repeatable experiments in a common environment against accepted benchmarks”
• Relative Success:
– 3 international competitions jointly with AAMAS 06-08.
– Over 15 participants in each competition.
– Several journal and conference publications use it.
Art Domain
the ART testbed
ART Interface
The agent system is displayed as a topology on the left, while two panels on the right show the details of particular agent statistics and of global system statistics.
The ART testbed
• The simulation creates opinions according to an error distribution with zero mean and a standard deviation s:
s = (s* + α / cg) · t
• where s*, unique for each era, is assigned to an appraiser from a uniform distribution.
• t is the true value of the painting to be appraised.
• α is a hidden value, fixed for all appraisers, that balances opinion-generation cost against final accuracy.
• cg is the cost an appraiser decides to pay to generate an opinion. Therefore, the minimum achievable error standard deviation (as cg grows) is s* · t.
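The opinion-generation model above can be sketched in a few lines; the function and parameter names are assumptions, while the formula is the one from the slide:

```python
import random

def opinion_std(s_star, alpha, c_g, t):
    """Std of the opinion error: s = (s* + alpha / c_g) * t."""
    return (s_star + alpha / c_g) * t

def generate_opinion(true_value, s_star, alpha, c_g, rng=random):
    """Sample an appraisal with zero-mean Gaussian error of std s."""
    s = opinion_std(s_star, alpha, c_g, true_value)
    return rng.gauss(true_value, s)

# Spending more on opinion generation (larger c_g) shrinks the error
# towards the floor s* * t stated on the slide.
```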
The ART testbed
• Each appraiser a’s actual client share ra takes into account the appraiser’s client share from the previous timestep:
ra = q · ra′ + (1 − q) · r̃a
• where ra′ is appraiser a’s client share in the previous timestep and r̃a is the share earned in the current timestep.
• q is a value that reflects the influence of the previous client share on the next one (thus the volatility in client share magnitudes due to frequent accuracy oscillations may be reduced).
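The client-share update is plain exponential smoothing, as this minimal sketch (names assumed) shows:

```python
def next_client_share(prev_share, fresh_share, q):
    """Smooth the client share across timesteps:
    r_a = q * r_a' + (1 - q) * r~_a  (the ART testbed update above)."""
    return q * prev_share + (1 - q) * fresh_share

# q close to 1 damps oscillations caused by accuracy swings;
# q = 0 makes the share track the current timestep only.
print(next_client_share(20.0, 10.0, 0.5))  # → 15.0
```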
2006 ART Competition
2006 Competition setup:
• Clients per agent: 20; painting eras: 10; games with 5 agents
• Costs 100/10/1; Sensing-Cost-Accuracy = 0.5; winner: iam, from Southampton Univ.
Post competition discussion notes:
• Larger number of agents required
• Definition of dummy agents
• Relate # of eras to # of agents
• Fairer distribution of expertise (currently just uniform)
• More abrupt change in # of clients (greater q)
• Improving expertise over time?
2006 ART Winner conclusions
“The ART of IAM: The Winning Strategy for the 2006 Competition”, Luke Teacy et al, Trust WS, AAMAS 07.
• It is generally more economical for an agent to purchase opinions from a number of third parties than it is to invest heavily in its own opinion
• There is little apparent advantage to reputation sharing: reputation is most valuable in cases where direct experience is relatively more difficult to acquire.
• The final lesson is that although trust can be viewed as a sociological concept, and inspiration for computational models of trust can be drawn from multiple disciplines, the problem of combining estimates of unknown variables (such as trustee behaviour) is fundamentally a statistical one.
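The last lesson, that combining estimates of unknown variables is fundamentally statistical, can be illustrated with the textbook tool for the job: inverse-variance weighting of independent estimates. This is a sketch of the general technique, not the actual IAM implementation:

```python
def fuse(estimates):
    """Combine independent (mean, variance) estimates by inverse-variance
    weighting: the minimum-variance unbiased linear combination."""
    precisions = [1.0 / var for _, var in estimates]
    total = sum(precisions)
    mean = sum(m / var for m, var in estimates) / total
    return mean, 1.0 / total

# Two third-party opinions about a painting's value: the fused estimate
# leans towards the more certain one, and its variance is lower than either.
print(fuse([(100.0, 4.0), (110.0, 16.0)]))
```

This is also why buying several cheap opinions can beat one expensive self-opinion: fused variance shrinks with every independent estimate added.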
2007 ART Competition
2007 Competition Setup:
• Costs 100/10/0.1; all agents have an equal sum of expertise values; painting eras: static but unknown; expertise assignments may change during the course of the game; include dummy agents; games with 25 agents
2007 Competition Discussion Notes:
• It needs to facilitate reputation exchange
• It doesn't have to produce all changes at the same time; gradual changes
• Studying barriers to entry, i.e. how a new agent joins an existing MAS: cold start vs. hot start (exploration vs. exploitation)
• More competitive dummy agents
• Relationship between opinion generation cost and accuracy
2008 ART Competition
2008 Competition Setup:
• Each agent is limited in the number of certainty and opinion requests it can send.
• Certainty requests have a cost.
• The use of self-opinions is denied.
• Wider range of expertise values
• Every time step, select randomly a number of eras to change and add a given amount of positive change (increase the value). For every positive change, apply also a negative change of the same amount, so that the agent's average expertise is not modified.
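The balanced perturbation scheme above can be sketched as follows; the function name and data layout are assumptions:

```python
import random

def perturb_expertise(expertise, n_changes, delta, rng=random):
    """Apply paired +delta / -delta changes to randomly chosen eras so
    that the agent's average expertise stays unchanged."""
    changed = dict(expertise)
    eras = list(expertise)
    for _ in range(n_changes):
        up, down = rng.sample(eras, 2)  # two distinct eras per paired change
        changed[up] += delta
        changed[down] -= delta
    return changed
```

Because every increase is mirrored by an equal decrease, the sum (and thus the mean) of the expertise values is invariant, exactly as the slide requires.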
Evaluation criteria
• Lack of criteria on which and how the very different trust decisions should be considered
Conte and Paolucci 02:
• epistemic decisions: those about updating and generating trust opinions from received reputations
• pragmatic-strategic decisions: how to behave with partners using these reputation-based trust values
• memetic decisions: how and when to share reputation with others
Main Evaluation Criteria of The ART testbed
• The winning agent is selected as the appraiser with the highest bank account balance in the direct confrontation of appraiser agents repeated X times.
• In other words, the appraiser who is able to:
– estimate the value of its paintings most accurately
– purchase information most prudently.
• Where an ART iteration involves 19 steps (11 decisions, 8 interactions) to be taken by an agent.
Trust decisions in ART testbed
1. How should our agent aggregate reputation information about others?
2. How should the trust weights of providers and recommenders be updated afterwards?
3. How many agents should our agent ask for reputation information about other agents?
4. How many reputation and opinion requests from other agents should our agent answer?
5. How many agents should our agent ask for opinions about our assigned paintings?
6. How much time (economic value) should our agent spend building requested opinions about the paintings of the other agents?
7. How much time (economic value) should our agent spend building the appraisals of its own paintings? (AUTOPROVIDER!)
…
Limitations of Main Evaluation Criteria of ART testbed
From my point of view:
• Evaluates all trust decisions jointly: should participants play provider and consumer roles jointly or just the role of opinion consumers?
• Is the direct confrontation of competitor agents the right scenario to compare them?
Providers vs. Consumers
• Playing games with two participants of the 2007 competition (iam2 and afras) and 8 other dummy agents.
• Dummy agents implemented ad hoc to be the sole opinion providers; they do not request any service from the 2007 participants.
• Neither of the two 2007 participants ever provides opinions/reputations; they are just consumers.
• Differences between the two agents were much smaller than the official competition stated (absolutely and relatively).
“An extension of a fuzzy reputation agent trust model in the ART testbed” Soft Computing v14, issue 8, 2010
Trust Strategies in Evolutive Agent Societies
• An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population of players, cannot be invaded by any alternative strategy
• An evolutionarily stable trust strategy is a strategy which, if it becomes dominant (adopted by a majority of agents), cannot be defeated by any alternative trust strategy.
• Justification: The goal of trust strategies is to establish some kind of social control over malicious/distrustful agents
• Assumption: agents may change their trust strategy. Agents with a failing trust strategy would abandon it and adopt a successful trust strategy in the future.
An evolutive view of ART games
• We consider as a failing trust strategy the one that lost the last ART game (earning less money than the others).
• We consider as the successful trust strategy the one that won the last ART game (earning more money than the others).
• In this way, in consecutive games the participant who lost a game is replaced by the one who won it.
• We have applied this to the 16 participant agents of the 2007 ART competition.
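The replacement dynamics can be sketched as follows; `play_game` stands in for a full ART game, and the toy payoff function is invented purely for illustration:

```python
import random
from collections import Counter

def evolve(population, play_game, n_games):
    """Repeated-game replacement dynamics used above: after each game,
    the strategy of the worst earner is replaced by the best earner's.
    `play_game(population) -> {agent_index: earnings}` is a stub for a
    full ART game."""
    pop = list(population)
    for _ in range(n_games):
        earnings = play_game(pop)
        loser = min(earnings, key=earnings.get)
        winner = max(earnings, key=earnings.get)
        pop[loser] = pop[winner]
    return Counter(pop)

# Toy payoff: strategy "b" always earns strictly more than "a".
def toy_game(pop):
    return {i: (1.0 if s == "b" else 0.0) + random.random() * 0.1
            for i, s in enumerate(pop)}

print(evolve(["a"] * 8 + ["b"] * 8, toy_game, 10))
```

In this toy setting "b" takes over the whole population; in the real ART experiment below, the dynamics instead settle into a mixed equilibrium of artgente and iam2 agents.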
[Diagram: starting from the 16 participants of the 2007 competition, each ART game's loser is replaced by that game's winner in the next game, and so on…]
Game Winner Earnings Loser Earnings
1 iam2 17377 xerxes -8610
2 iam2 14321 lesmes -13700
3 iam2 10360 reneil -14757
4 iam2 10447 blizzard -7093
5 agentevicente 8975 Rex -5495
6 iam2 8512 alatriste -999
7 artgente 8994 agentevicente 2011
8 artgente 10611 agentevicente 1322
9 artgente 8932 novel 424
10 iam2 9017 IMM 1392
11 artgente 7715 marmota 1445
12 artgente 8722 spartan 2083
13 artgente 8966 zecariocales 1324
14 artgente 8372 iam2 2599
15 artgente 7475 iam2 2298
16 artgente 8384 UNO 2719
17 artgente 7639 iam2 2878
18 iam2 6279 JAM 3486
19 iam2 14674 artgente 2811
20 artgente 8035 iam2 3395
Results of repeated games
The 2007 winner is not an Evolutionarily Stable Strategy.
• Although the strategy of the 2007 winner spreads in the society of agents (up to 6 iam2 agents out of 16), it never becomes dominant (no majority of iam2 agents).
• The iam2 strategy is defeated by the artgente strategy, which becomes dominant (11 artgente agents out of 16). Therefore its superiority as winner of the 2007 competition is, at least, relative.
• The equilibrium of trust strategies that forms an evolutionarily stable society is composed of 10-11 artgente agents and 6-5 iam2 agents.
CompetitionRank EvolutionRank Agent ExcludedInGame
6 1 artgente -
1 2 iam2 -
2 3 JAM 18
7 4 UNO 16
4 5 zecariocales 13
5 6 spartan 12
9 7 marmota 11
13 8 IMM 10
10 9 novel 9
15 10 agentevicente 8
11 11 alatriste 6
12 12 rex 5
3 13 Blizzard 4
8 14 reneil 3
14 15 lesmes 2
16 16 xerxes 1
Other Evaluation Criteria of the ART testbed
• The testbed also provides functionality to compute:
– the average accuracy of the appraiser’s final appraisals (final appraisal error mean)
– the consistency of that accuracy (final appraisal error standard deviation)
– the quantities of each type of message passed between appraisers.
• Could we take into account other relevant evaluation criteria?
Evaluation criteria from the agent-based view
Characterization and Evaluation of Multi-agent System, P. Davidsson, S. Johanson, M. Svahnberg In Software
Engineering for Multi-Agent Systems IV, LNCS 3914, 2006.
9 Quality attributes:
1. Reactivity: How fast are opinions re-evaluated when there are changes in expertise?
2. Load balancing: How evenly is the load balanced among the appraisers?
3. Fairness: Are all the providers treated equally?
4. Utilization of resources: Are the available abilities/information utilized as much as is possible?
Evaluation criteria from the agent-based view
5. Responsiveness: How long does it take for an appraiser to get a response to an individual request?
6. Communication overhead: How much extra communication is needed for the appraisals?
7. Robustness: How vulnerable is the agent to the absence of responses?
8. Modifiability: How easy is it to change the behaviour of the agent in very different conditions?
9. Scalability: How good is the system at handling large numbers of providers and consumers?
Evaluation criteria from the agent-based view
Evaluation of Multi-Agent Systems: The Case of Interaction, H. Joumaa, Y. Demazeau, J.M. Vincent, 3rd Int. Conf. on Information & Communication Technologies: from Theory to Applications, IEEE Computer Society, Los Alamitos (2008)
• An evaluation at the interaction level, based on the weight of the information brought by a message.
• A function Φ is defined in order to calculate the weight of pertinent messages.
Evaluation criteria from the agent-based view
• The relation between the received message m and the effects on the agent is studied in order to calculate the Φ(m) value. According to the model, two kinds of functions are considered:
– A function that associates weight to the message according to its type.
– A function that associates weight to the message according to the change provoked on the internal state and the actions triggered by its reception.
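The two kinds of Φ functions can be sketched as follows; the message types, weights, and state representation are all invented for illustration:

```python
# Hypothetical weights per message type for the first Φ variant.
TYPE_WEIGHT = {"inform": 1.0, "request": 0.5, "ack": 0.1}

def phi_by_type(message_type):
    """Weight a message by its type alone."""
    return TYPE_WEIGHT.get(message_type, 0.0)

def phi_by_effect(state_before, state_after, n_actions_triggered):
    """Weight a message by the change it provokes on the internal state
    (here: number of changed beliefs) plus the actions it triggers."""
    changed = sum(1 for k in state_before
                  if state_before[k] != state_after.get(k))
    return changed + n_actions_triggered

print(phi_by_type("inform"))                                 # → 1.0
print(phi_by_effect({"x": 1, "y": 2}, {"x": 1, "y": 3}, 2))  # → 3
```

Summing Φ over all received messages then gives an interaction-level score for an agent, as the evaluation approach above proposes.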
Consciousness Scale
• Too much quantification (AI is not just statistics…)
• Compare agents qualitatively: measure their level of consciousness.
• A scale of 13 consciousness levels according to the cognitive skills of an agent: the agent's "Cognitive Power".
• The higher the level obtained, the more the behavior of the agent resembles humans
• www.consscale.com
Bio-inspired order of Cognitive Skills
• From the point of view of emotions (Damasio, 1999):
“Fake Emotions”
“Feeling of a Feeling”
“Feeling”
“Emotion”
“Imagination”
“Planning”
Bio-inspired order of Cognitive Skills
• From the point of view of perception and action (Perner, 1999):
“Set Shifting”
“Attention”
“Adaptation”
“Perception”
Bio-inspired order of Cognitive Skills
• From the point of view of Theory of Mind (Lewis 2003):
“I Know You Know I Know”
“I Know You Know”
“I Know I Know”
“I Know”
Consciousness Levels
Reactive
Adaptive
Attentional
Executive
Emotional
Self-Conscious
Empathic
Social
Human-like
Super-Conscious
Evaluating agents with ConsScale
Thank you !
EASSS 2010, Saint-Etienne, France