HARM: A Hybrid Rule-based Agent Reputation Model based on Temporal Defeasible Logic
Kalliopi Kravari, Nick Bassiliades
Department of Informatics
Aristotle University of Thessaloniki, Thessaloniki, Greece
RuleML 2012, Montpellier, Aug 27-29
OVERVIEW
Agents in the Semantic Web (SW) interact under uncertain and risky situations.
Whenever they have to interact with partners of whom they know nothing,
they have to make decisions involving risk.
Thus, their success may depend on their ability to choose reliable partners.
Solution: reliable trust and/or reputation models.
[Figure: SW evolution - Intelligent Agents, SW Trust Layer]
OVERVIEW
Trust is the degree of confidence that can be invested in a certain agent.
Reputation is the opinion of the public towards an agent.
Reputation (trust) models provide the means to quantify reputation and trust:
they help agents decide who to trust, encourage trustworthy behavior, and deter dishonest participation.
Current computational reputation models are usually built either on
interaction trust (an agent's direct experience) or
witness reputation (reports provided by others).
APPROACHES' LIMITATIONS
If the reputation estimation is based only on direct experience, it would require a long time for an agent to reach a satisfying estimation level.
Why? Because when an agent enters an environment for the first time,
it has no history of interactions with the other agents in the environment.
If the reputation estimation is based only on witness reports, it cannot guarantee a reliable estimation.
Why? Because self-interested agents could be unwilling or unable
to sacrifice their resources in order to provide reports.
HYBRID MODELS
Hybrid models combine both interaction trust and witness reputation.
We propose HARM, an incremental reputation model that combines:
the advantages of the hybrid reputation models
the benefits of temporal defeasible logic (a rule-based approach)
(Temporal) Defeasible Logic
Temporal defeasible logic (TDL) is an extension of defeasible logic (DL).
DL is a kind of non-monotonic reasoning.
Why defeasible logic?
Rule-based, deterministic (without disjunction)
Enhanced representational capabilities:
Classical negation used in rule heads and bodies
Negation-as-failure can be emulated
Rules may support conflicting conclusions
Skeptical: conflicting rules do not fire
Priorities on rules resolve conflicts among rules
Low computational complexity
Defeasible Logic
Facts: e.g. student(Sofia)
Strict Rules: e.g. student(X) -> person(X)
Defeasible Rules: e.g. r: person(X) => works(X)
r': student(X) => ¬works(X)
Priority relation between rules, e.g. r' > r
Proof theory example: a literal q is defeasibly provable if:
it is supported by a rule whose premises are all defeasibly provable AND
¬q is not definitely provable AND
each attacking rule is non-applicable or defeated by a superior counter-attacking rule.
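The proof condition above can be sketched as a small Python interpreter for this one example. This is a hedged, simplified sketch, not the authors' DL engine: grounding uses the single constant Sofia, and team defeat is reduced to direct superiority between the supporting and attacking rule.

```python
# Minimal sketch of the defeasible proof condition (simplified: single
# constant, direct superiority instead of full team defeat).

facts = {"student(Sofia)"}
strict = [("s1", ["student(X)"], "person(X)")]       # student(X) -> person(X)
defeasible = [
    ("r",       ["person(X)"],  "works(X)"),         # r:  person(X) => works(X)
    ("r_prime", ["student(X)"], "~works(X)"),        # r': student(X) => ~works(X)
]
superior = {("r_prime", "r")}                        # r' > r

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def bind(lit):
    return lit.replace("X", "Sofia")                 # single-constant grounding

def definitely(lit):
    """Provable from facts and strict rules alone."""
    return lit in facts or any(
        bind(h) == lit and all(definitely(bind(b)) for b in body)
        for _, body, h in strict)

def defeasibly(lit):
    """The three-part condition from the slide."""
    if definitely(lit):
        return True
    if definitely(neg(lit)):                         # ¬lit is definitely provable
        return False
    for name, body, head in strict + defeasible:
        if bind(head) != lit or not all(defeasibly(bind(b)) for b in body):
            continue                                 # rule does not support lit
        defeated = False
        for aname, abody, ahead in strict + defeasible:
            if bind(ahead) == neg(lit) and all(defeasibly(bind(b)) for b in abody):
                if (name, aname) not in superior:    # attacker not beaten
                    defeated = True
        if not defeated:
            return True
    return False
```

On this theory, `defeasibly("~works(Sofia)")` holds while `defeasibly("works(Sofia)")` does not, because r' > r lets the attack on works(Sofia) succeed while the counter-attack is defeated.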
Temporal Defeasible Logic
Two types of temporal literals:
expiring temporal literals l:t (a literal l is valid for t time instances)
persistent temporal literals l@t (a literal l is active after t time instances have passed and is valid thereafter)
Temporal rules: a1:d1, ..., an:dn =>d b:db
where d is the delay between the cause a1:d1, ..., an:dn and the effect b:db.
Example:
(r1) => a@1
(r2) a@1 =>7 b:3
Literal a is created due to r1. It becomes active at time offset 1.
It causes the head of r2 to be fired at time 8 (offset 1 plus delay 7). The result b lasts only until time 10.
Thereafter, only the fact a remains.
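The timing arithmetic in the r1/r2 example can be checked with a small sketch. The function names are illustrative, and the inclusive-interval reading is an assumption that matches the slide: b fired at time 8 with duration 3 holds at times 8, 9 and 10.

```python
# Sketch of TDL timing: a rule's delay shifts the conclusion; an expiring
# literal l:duration then holds for `duration` time instances (inclusive).

def fire_time(cause_active_at, rule_delay):
    """Time instance at which a temporal rule's head is concluded."""
    return cause_active_at + rule_delay

def expiring_interval(fired_at, duration):
    """Inclusive (start, end) during which an expiring literal l:duration holds."""
    return (fired_at, fired_at + duration - 1)

# r2 with delay 7 and cause a@1: head fired at time 8; b:3 is valid until time 10
fired = fire_time(1, 7)
valid = expiring_interval(fired, 3)
```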
HARM – Evaluated Abilities
HARM evaluates four agent abilities: validity, completeness, correctness and response time.
An agent is valid if it is both sincere and credible. Sincere: believes what it says. Credible: what it believes is true in the world.
An agent is complete if it is both cooperative and vigilant. Cooperative: says what it believes. Vigilant: believes what is true in the world.
An agent is correct if its provided service is correct with respect to a specification.
Response time is the time that an agent needs to complete the transaction.
HARM – Ratings
Agent A establishes an interaction with agent B:
(A) Truster is the evaluating agent; (B) Trustee is the evaluated agent.
The truster's rating value (r) in HARM has 8 coefficients:
2 IDs: Truster, Trustee
4 abilities: Validity, Completeness, Correctness, Response time
2 weights: Confidence, Transaction value
Confidence: how confident the agent is about the rating
Transaction value: how important the transaction was for the agent
HARM – Experience Types
Direct Experience: the agent's own ratings (PR_AX)
Indirect Experience:
reports provided by strangers (SR_AX)
reports provided by known agents (e.g. friends) due to previous interactions (KR_AX)
Or both.
Final reputation value of an agent X, required by an agent A:
R_AX combines {PR_AX, KR_AX, SR_AX}
HARM – Experience Types
Sometimes one or more rating categories are missing, e.g. a newcomer has no personal experience.
A user is much more likely to believe statements from a trusted acquaintance than from a stranger.
Thus, personal opinion (AX) is more valuable than strangers' opinion (SX), and it is even more valuable than that of previously trusted partners (KX).
Superiority relationship among rating categories:
[Figure: lattice of the rating-category combinations {AX, KX, SX}, {AX, KX}, {AX, SX}, {KX, SX}, {AX}, {KX}, {SX}]
HARM – Final reputation value
R_AX is a function that combines each available category:
personal opinion (AX), strangers' opinion (SX), previously trusted partners (KX).
HARM allows agents to define the weights of the ratings' coefficients (personal preferences).
R_AX = f(PR_AX, KR_AX, SR_AX), where each category value is a weighted log-average over the four coefficients:

PR_AX = ( Σ_{i=1..4} log( AVG(w_i · pr_i,AX) ) ) / ( Σ_{i=1..4} w_i )

and analogously for KR_AX (over kr_i,AX) and SR_AX (over sr_i,AX),
with coefficient_i ∈ {validity, completeness, correctness, response_time}.
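A hedged sketch of computing one category value and the final reputation. Because the slide's equation is partially garbled, the aggregation order (average over ratings first, then a weighted log-combination over the four coefficients), the base-2 logarithm, and the outer combining function f (a plain mean here) are all assumptions.

```python
# Sketch of the reconstructed R_AX computation (aggregation order, log base
# and combining function f are assumptions, not the paper's exact formula).
import math

COEFFS = ["validity", "completeness", "correctness", "response_time"]

def category_value(ratings, weights):
    """One category value (PR, KR or SR) from the counted ratings of that
    category; `ratings` are dicts with the four coefficients, `weights`
    maps each coefficient to its agent-defined w_i."""
    total_w = sum(weights[c] for c in COEFFS)
    s = 0.0
    for c in COEFFS:
        avg = sum(r[c] for r in ratings) / len(ratings)  # AVG over ratings
        s += math.log2(weights[c] * avg)                 # assumed log base 2
    return s / total_w

def reputation(pr, kr, sr, weights, f=lambda *vals: sum(vals) / len(vals)):
    """R_AX = f(PR_AX, KR_AX, SR_AX); f is agent-chosen (plain mean here)."""
    return f(category_value(pr, weights),
             category_value(kr, weights),
             category_value(sr, weights))
```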
HARM: Rule-based Decision Making Mechanism / Facts
Truster's rating (r) (defeasible RuleML / d-POSL syntax):
rating(id→rating's_id, truster→truster's_name, trustee→trustee's_name,
  validity→value1, completeness→value2, correctness→value3,
  response_time→value4, confidence→value5, transaction_value→value6).
e.g. rating(id→1, truster→A, trustee→B, validity→5, completeness→6,
  correctness→6, response_time→8, confidence→0.8, transaction_value→0.9).
HARM: Rule-based Decision Making Mechanism
Confidence and transaction value allow us to decide how much attention we should pay to each rating.
It is important to take into account ratings that were made by confident trusters, since their ratings are more likely to be right.
Confident trusters that were interacting in a transaction important to them are even more likely to report truthful ratings.
HARM: Which ratings "count"?
r1: count_rating(rating→?idx, truster→?a, trustee→?x) :=
    confidence_threshold(?conf), transaction_value_threshold(?tran),
    rating(id→?idx, confidence→?confx, transaction_value→?tranx),
    ?confx >= ?conf, ?tranx >= ?tran.
r2: count_rating(…) := … ?confx >= ?conf.
r3: count_rating(…) := … ?tranx >= ?tran.
r1 > r2 > r3
• If both the truster's confidence and the transaction importance are high, the rating will be counted during the estimation process.
• If the transaction value is lower than the threshold, it does not matter so much whether the truster's confidence is high.
• If there are only ratings with a high transaction value, they should still be taken into account.
• In any other case, the rating should be omitted.
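The four cases can be sketched as follows. The threshold values and the function name are hypothetical; this mimics the effect of the r1 > r2 > r3 ordering in plain Python, not the DR-Device encoding.

```python
# Sketch of the r1 > r2 > r3 selection: return the priority of the
# highest-ranked rule that accepts a rating, or None if it is omitted.
CONF_T, TRAN_T = 0.7, 0.5  # hypothetical thresholds

def count_priority(rating, conf_t=CONF_T, tran_t=TRAN_T):
    hc = rating["confidence"] >= conf_t          # confident truster?
    ht = rating["transaction_value"] >= tran_t   # important transaction?
    if hc and ht:
        return 1   # r1: confident truster AND important transaction
    if hc:
        return 2   # r2: confident truster only
    if ht:
        return 3   # r3: important transaction only
    return None    # neither threshold met: the rating is omitted
```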
HARM: Conflicting Literals
All the previous rules conclude positive literals. These literals conflict with each other for the same pair of agents (truster and trustee).
We want, e.g., in the presence of personal experience to omit strangers' ratings.
That is why there is also a superiority relationship between the rules.
The conflict set is formally determined as follows:
C[count_rating(truster→?a, trustee→?x)] =
  { ¬count_rating(truster→?a, trustee→?x) } ∪
  { count_rating(truster→?a1, trustee→?x1) | ?a = ?a1 ∧ ?x = ?x1 }
HARM: Determining Experience Types
Which agents are considered as known?
r4: known(agent1→?a, agent2→?y) :-
    count_rating(rating→?id, truster→?a, trustee→?y).
Categorization of ratings:
r5: count_prAX(agent→?a, truster→?a, trustee→?x, rating→?id) :-
    count_rating(rating→?id, truster→?a, trustee→?x).
r6: count_krAX(agent→?a, truster→?k, trustee→?x, rating→?id) :-
    known(agent1→?a, agent2→?k),
    count_rating(rating→?id, truster→?k, trustee→?x).
r7: count_srAX(agent→?a, truster→?s, trustee→?x, rating→?id) :-
    count_rating(rating→?id, truster→?s, trustee→?x),
    not(known(agent1→?a, agent2→?s)).
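Rules r4-r7 amount to the following categorization, sketched in Python under the reading that r4 gives: an agent "knows" any partner it has itself filed a counted rating about. The data representation is illustrative.

```python
# Sketch of r4–r7: split the counted ratings about `trustee` into personal
# (pr), known agents' (kr) and strangers' (sr) categories from `agent`'s view.

def categorize(agent, trustee, ratings):
    # r4: agent knows anyone it has itself rated before
    known = {r["trustee"] for r in ratings if r["truster"] == agent}
    pr, kr, sr = [], [], []
    for r in ratings:
        if r["trustee"] != trustee:
            continue
        if r["truster"] == agent:        # r5: personal experience
            pr.append(r)
        elif r["truster"] in known:      # r6: report from a known agent
            kr.append(r)
        else:                            # r7: report from a stranger
            sr.append(r)
    return pr, kr, sr
```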
HARM: Rule-based Decision Making Mechanism
The final step is to decide whose experience will "count": direct, indirect (witness), or both.
The decision for R_AX is based on a relationship theory.
e.g. Theory #1: All categories count equally.
r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) :=
    count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) :=
    count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) :=
    count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
HARM: Theory #1 - All categories count equally
[Figure: lattice of rating-category combinations; no superiority links under Theory #1]
HARM: Rule-based Decision Making Mechanism
e.g. Theory #2: An agent relies on its own experience if it believes it is sufficient. If not, it acquires the opinions of others.
r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) :=
    count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) :=
    count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) :=
    count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
r8 > r9 > r10
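The effect of the three relationship theories on which ratings participate can be sketched as a single illustrative function (Theory #1 from the earlier slide and Theory #3 from the following slide are included for comparison; the function name and list encoding are assumptions):

```python
# Sketch of how each theory's rule priorities pick the participating ratings.
# pr/kr/sr are the categorized rating lists for one trustee.

def participating(pr, kr, sr, theory):
    if theory == 1:               # Theory #1: all categories count equally
        return pr + kr + sr
    if theory == 2:               # Theory #2: own > known > strangers (r8 > r9 > r10)
        return pr or kr or sr     # first non-empty category wins
    if theory == 3:               # Theory #3: pr combined with kr (r8 > r10, r9 > r10);
        return (pr + kr) if (pr or kr) else sr  # else pure witness system
    raise ValueError("unknown theory")
```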
HARM: Theory #2 - Personal experience is preferred to friends' opinion, which is preferred to strangers' opinion
[Figure: lattice of rating-category combinations with superiority links AX > KX > SX]
HARM: Rule-based Decision Making Mechanism
e.g. Theory #3: If direct experience is available (PR_AX), it is preferred to be combined with ratings from known agents (KR_AX). If not, HARM acts as a pure witness system.
r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) :=
    count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) :=
    count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) :=
    count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
r8 > r10, r9 > r10
HARM: Theory #3 - Personal experience and friends' opinion are preferred to strangers' opinion
[Figure: lattice of rating-category combinations with superiority links {AX, KX} > SX]
HARM: Temporal Defeasible Logic Extension
Agents may change their objectives at any time.
The evolution of trust over time should be taken into account: only the latest ratings participate in the reputation estimation.
In the temporal extension of HARM:
each rating is a persistent temporal literal of TDL
each rule conclusion is an expiring temporal literal of TDL
The truster's rating (r) is active after time_offset time instances have passed and is valid thereafter:
rating(id→value1, truster→value2, trustee→value3, validity→value4,
  completeness→value5, correctness→value6, response_time→value7,
  confidence→value8, transaction_value→value9)@time_offset.
HARM: Temporal Defeasible Logic Extension
Rules are modified accordingly:
each rating is active after t time instances have passed ("@t")
each conclusion has a duration (":duration")
each rule has a delay, which models the delay between the cause and the effect.
e.g.
r1: count_rating(rating→?idx, truster→?a, trustee→?x):duration :=delay
    confidence_threshold(?conf), transaction_value_threshold(?tran),
    rating(id→?idx, confidence→?confx, transaction_value→?tranx)@t,
    ?confx >= ?conf, ?tranx >= ?tran.
HARM Evaluation
We implemented the model in EMERALD:
a framework for interoperating knowledge-based intelligent agents in the SW,
built on the JADE multi-agent platform.
EMERALD uses Reasoners (agents offering reasoning services) and supports the DR-Device defeasible logic system.
We used temporal predicates to simulate the temporal semantics, since no temporal defeasible logic reasoner is available.
HARM - Evaluation
All agents provide the same service, so performance is service-independent.
The consumer agent selects the provider with the highest reputation value.
Interaction protocol (Consumer agent, HARM Agent, Provider agents x and y):
1. Request reputations of the provider agents
2. Inform about the provider with the highest reputation
3. Service request
4. Service providing
5. Report rating
HARM - Evaluation
The performance of providers (e.g. quality of service) is the utility that a consumer gains from each interaction: Utility Gain (UG), UG ∈ [-10, 10].
Four models are used: HARM (rule-based / temporal), T-REX (temporal degradation), SERM (uses all history), NONE (no trust mechanism).
From a previous study: CR, SPORAS (well-known in the literature, distributed).

Average UG per interaction:
HARM    5.73
T-REX   5.57
SERM    2.41
NONE    0.16
CR      5.48
SPORAS  4.65

Simulation settings: 500 simulations, 100 providers
(Good: 10, Ordinary: 40, Intermittent: 5, Bad: 45)
Conclusions
We proposed HARM, which combines:
the hybrid approach (interaction trust and witness reputation)
the benefits of temporal defeasible logic (rule-based approach)
It overcomes the difficulty of locating witness reports (centralized administration authority).
It is the first reputation model that explicitly uses knowledge, in the form of defeasible logic, to predict an agent's future behavior; this makes it easy to simulate human decision making.
Future Work
Fully implement HARM with temporal defeasible logic
Compare HARM's performance with other centralized and decentralized models from the literature
Combine HARM and T-REX
Develop a distributed version of HARM
Verify its performance in real-world e-commerce applications
Combine it with Semantic Web metadata for trust
Thank you! Any questions?