T9. Trust and Reputation in Multi-Agent Systems

DESCRIPTION

14th European Agent Systems Summer School

TRANSCRIPT

Page 1: T9. Trust and reputation in multi-agent systems

Trust & Reputation in Multi-Agent Systems

Dr. Jordi Sabater Mir

[email protected]

EASSS 2012, Valencia, Spain

Dr. Javier Carbó

[email protected]

Page 2: T9. Trust and reputation in multi-agent systems

IIIA – Artificial Intelligence Research Institute CSIC – Spanish National Research Council

Dr. Jordi Sabater-Mir

Page 3: T9. Trust and reputation in multi-agent systems

Outline

• Introduction
• Approaches to control the interaction
• Computational reputation models
  – eBay
  – ReGreT
• A cognitive perspective to computational reputation models
  – A cognitive view on Reputation
  – Repage, a computational cognitive reputation model
  – [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture
  – Arguing about reputation concepts

Page 4: T9. Trust and reputation in multi-agent systems

“A complete absence of trust

would prevent [one] even getting

up in the morning.”

Niklas Luhmann, 1979

Trust

Page 5: T9. Trust and reputation in multi-agent systems

Trust

A couple of definitions that I like:

"Trust begins where knowledge [certainty] ends: trust provides a basis for dealing with uncertain, complex, and threatening images of the future." (Luhmann, 1979)

"Trust is the outcome of observations leading to the belief that the actions of another may be relied upon, without explicit guarantee, to achieve a goal in a risky situation." (Elofson, 2001)

Page 6: T9. Trust and reputation in multi-agent systems


Trust

Epistemic views:

"The subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends." [Gambetta]

"An expectation about an uncertain behaviour." [Marsh]

Motivational view:

"The decision and the act of relying on, counting on, depending on [the trustee]." [Castelfranchi & Falcone]

Page 7: T9. Trust and reputation in multi-agent systems

"After death, a tiger leaves behind

his skin, a man his reputation"

Vietnamese proverb

Reputation

Page 8: T9. Trust and reputation in multi-agent systems

Reputation

"What a social entity says about a target regarding his/her behavior."

Social entity: a set of individuals plus a set of social relations among these individuals, or properties that identify them as a group in front of its own members and society at large.

• The social evaluation linked to the reputation is not necessarily a belief of the issuer.
• Reputation cannot exist without communication.
• It is always associated with a specific behaviour/property.

Page 9: T9. Trust and reputation in multi-agent systems

What is reputation good for?

• Reputation is one of the elements that allow us to build trust.

• Reputation also has a social dimension. It is not only useful for the individual but also for society, as a mechanism for social order.

Page 10: T9. Trust and reputation in multi-agent systems

But... why do we need computational models of these concepts?

Page 11: T9. Trust and reputation in multi-agent systems

What we are talking about...

Mr. Yellow

Page 12: T9. Trust and reputation in multi-agent systems

What we are talking about...

Mr. Yellow

Trust based on... direct experiences ("Two years ago...")

Page 13: T9. Trust and reputation in multi-agent systems

What we are talking about...

Mr. Yellow

Trust based on... third-party information

Mr. Pink

Page 14: T9. Trust and reputation in multi-agent systems

What we are talking about...

Mr. Yellow

Trust based on... third-party information

Mr. Pink Mr. Green

Page 15: T9. Trust and reputation in multi-agent systems

What we are talking about...

Mr. Yellow

Trust based on... reputation

Page 16: T9. Trust and reputation in multi-agent systems

What we are talking about...

Mr. Yellow

Page 17: T9. Trust and reputation in multi-agent systems

What we are talking about...

?

Page 18: T9. Trust and reputation in multi-agent systems

Characteristics of computational trust and reputation mechanisms

• Each agent is a norm enforcer and is also under surveillance by the others. No central authority is needed.

• Their nature allows them to reach where laws and central authorities cannot.

• Punishment is usually based on ostracism; therefore, exclusion must be a real punishment for the outsider.

Page 19: T9. Trust and reputation in multi-agent systems

• Bootstrap problem.

• Not all kinds of environments are suitable for these mechanisms: a social environment is necessary.

Characteristics of computational trust and reputation mechanisms

Page 20: T9. Trust and reputation in multi-agent systems

Approaches to control the interaction

Page 21: T9. Trust and reputation in multi-agent systems

Different approaches to control the interaction

Security approach

Page 22: T9. Trust and reputation in multi-agent systems

• Security approach

Different approaches to control the interaction

Agent identity validation. Integrity, authenticity of messages. ...

Page 23: T9. Trust and reputation in multi-agent systems

Different approaches to control the interaction

Security approach

Institutional approach

Page 24: T9. Trust and reputation in multi-agent systems

• Institutional approach

Different approaches to control the interaction

Page 25: T9. Trust and reputation in multi-agent systems

Different approaches to control the interaction

Security approach

Institutional approach

Social approach: trust and reputation mechanisms are at this level.

The three approaches are complementary and cover different aspects of interaction.

Page 26: T9. Trust and reputation in multi-agent systems

Computational reputation models

Page 27: T9. Trust and reputation in multi-agent systems

Classification dimensions

• Paradigm type
  – Mathematical approach
  – Cognitive approach
• Information sources
  – Direct experiences
  – Witness information
  – Sociological information
  – Prejudice
• Visibility types
  – Subjective
  – Global
• Model's granularity
  – Single context
  – Multi context
• Agent behaviour assumptions
  – Cheating is not considered
  – Agents can hide or bias the information, but they never lie
• Type of exchanged information

Page 28: T9. Trust and reputation in multi-agent systems

Subjective vs Global

• Global
  – The reputation is maintained as a centralized resource.
  – All the agents in that society have access to the same reputation values.

Advantages:
• Reputation information is available even to a newcomer, and it does not depend on how well connected you are or how good your informants are.
• Agents can be simpler because they do not need to calculate reputation values, just use them.

Disadvantages:
• Particular mental states of the agent, or its singular situation, are not taken into account when reputation is calculated. Therefore, a global view is only possible when we can assume that all the agents think and behave similarly.
• It is not always desirable for an agent to make information about its direct experiences public, or to submit that information to an external authority.
• Therefore, high trust in the central institution managing reputation is essential.

Page 29: T9. Trust and reputation in multi-agent systems

Subjective vs Global

• Subjective
  – The reputation is maintained by each agent and is calculated according to its own direct experiences, information from its contacts, its social relations...

Advantages:
• Reputation values can be calculated taking into account the current state of the agent and its individual particularities.

Disadvantages:
• The models are more complex, usually because they can use extra sources of information.
• Each agent has to worry about gathering the information needed to build reputation values.
• Less information is available, so the models have to be more accurate to avoid noise.

Page 30: T9. Trust and reputation in multi-agent systems

A global reputation model: eBay

Model oriented to support trust between buyer and seller.

• Completely centralized.
• Buyers and sellers may leave comments about each other after transactions.
• Comment: a line of text + a numeric evaluation (-1, 0, +1).
• Each eBay member has a Feedback score that is the sum of the numerical evaluations received.
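As a minimal sketch of this centralized scheme (all names here are illustrative; only the -1/0/+1 sum comes from the slide):

```python
# Minimal sketch of an eBay-style centralized feedback score.
# Each comment carries a numeric evaluation in {-1, 0, +1}; a member's
# Feedback score is simply the sum of all evaluations received.

from collections import defaultdict

class FeedbackStore:
    def __init__(self):
        self.comments = defaultdict(list)  # member id -> [(evaluation, text)]

    def leave_comment(self, member, evaluation, text=""):
        assert evaluation in (-1, 0, 1)
        self.comments[member].append((evaluation, text))

    def feedback_score(self, member):
        # The one-line comment itself is kept, but only the numbers count.
        return sum(e for e, _ in self.comments[member])

store = FeedbackStore()
store.leave_comment("seller42", 1, "fast shipping")
store.leave_comment("seller42", 1, "as described")
store.leave_comment("seller42", -1, "item damaged")
print(store.feedback_score("seller42"))  # -> 1
```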

Page 31: T9. Trust and reputation in multi-agent systems

eBay model

Page 32: T9. Trust and reputation in multi-agent systems

Specifically oriented to scenarios with the following characteristics:

• A lot of users (we are talking about millions).
• Few chances of repeating interactions with the same partner.
• Easy to change identity.
• Human oriented.

• It considers reputation as a global property and uses a single value that is not dependent on the context.
• A great number of opinions that "dilute" false or biased information is the only way to increase the reliability of the reputation value.

eBay model

Page 33: T9. Trust and reputation in multi-agent systems

A subjective reputation model: ReGreT

What is the ReGreT system?

It is a modular trust and reputation system oriented to complex e-commerce environments where social relations among individuals play an important role.

Page 34: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram: Direct Trust plus the reputation model (Witness, Neighbourhood and System reputation) and the Credibility module, drawing on the outcomes (ODB), information (IDB) and sociograms (SDB) databases, all feeding the final Trust value.]

Page 35: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram repeated, with the Direct Trust module highlighted for the next slides.]

Page 36: T9. Trust and reputation in multi-agent systems

Outcomes and Impressions

Outcome:

The initial contract
– to take a particular course of action
– to establish the terms and conditions of a transaction
AND the actual result of the contract.

Example:

Contract: Price_c = 2000, Quality_c = A, Quantity_c = 300
Fulfillment: Price_f = 2000, Quality_f = C, Quantity_f = 295

Page 37: T9. Trust and reputation in multi-agent systems

Outcomes and Impressions

Outcome: Price_c = 2000, Quality_c = A, Quantity_c = 300 / Price_f = 2000, Quality_f = C, Quantity_f = 295.

From a single outcome, several behavioural aspects of the partner can be evaluated, e.g. offers_good_prices and maintains_agreed_quantities.

Page 38: T9. Trust and reputation in multi-agent systems

Outcomes and Impressions

Impression: the subjective evaluation of an outcome from a specific point of view.

From the same outcome, the agent builds one impression per aspect: $\mathrm{Imp}(o,\varphi_1)$, $\mathrm{Imp}(o,\varphi_2)$, $\mathrm{Imp}(o,\varphi_3)$.

Page 39: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram repeated, with the Direct Trust module highlighted.]

Reliability of the direct trust value is based on:
• Number of outcomes.
• Deviation: the greater the variability in the rating values, the more volatile the other agent will be in the fulfillment of its agreements.

Page 40: T9. Trust and reputation in multi-agent systems

Direct Trust

Trust relationship calculated directly from an agent's outcomes database:

$$\mathrm{DT}_{a\to b}(\varphi) = \sum_{o_i \in \mathrm{ODB}^{gr}_{a,b}} \rho(t, t_i) \cdot \mathrm{Imp}(o_i, \varphi)$$

where the recency weight is

$$\rho(t, t_i) = \frac{f(t_i, t)}{\sum_{o_j \in \mathrm{ODB}^{gr}_{a,b}} f(t_j, t)}, \qquad f(t_i, t) = \frac{t_i}{t}$$

Page 41: T9. Trust and reputation in multi-agent systems

Direct Trust

DT reliability:

$$\mathrm{RL}(\mathrm{DT}_{a\to b}(\varphi)) = \mathrm{No}(\mathrm{ODB}^{gr}_{a,b}) \cdot \bigl(1 - \mathrm{Dv}(\mathrm{ODB}^{gr}_{a,b})\bigr)$$

Number of outcomes (No): a factor $\mathrm{No}(\mathrm{ODB}^{gr}_{a,b}, itm) \in [0,1]$ that grows with the number of outcomes and saturates at the intimacy level itm.

Deviation (Dv): the greater the variability in the rating values, the more volatile the other agent will be in the fulfillment of its agreements.
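A minimal Python sketch of the two formulas above, assuming impressions are numbers in [-1, 1], the recency weighting f(t_i, t) = t_i/t shown above, a sine-shaped No factor saturating at itm, and a simple weighted deviation for Dv (the exact ReGreT definitions of No and Dv differ in detail):

```python
import math

def direct_trust(impressions, t_now):
    """impressions: list of (t_i, imp_i) pairs with imp_i in [-1, 1].
    Recency weight rho(t, t_i) uses f(t_i, t) = t_i / t, normalized over the ODB."""
    weights = [t_i / t_now for t_i, _ in impressions]
    total = sum(weights)
    return sum(w * imp for w, (_, imp) in zip(weights, impressions)) / total

def reliability(impressions, t_now, itm=10):
    """RL = No * (1 - Dv): more outcomes and less deviation -> more reliable.
    No saturates at the intimacy level itm (the sine ramp is an assumption)."""
    n = len(impressions)
    no = math.sin(math.pi * n / (2 * itm)) if n < itm else 1.0
    dt = direct_trust(impressions, t_now)
    weights = [t_i / t_now for t_i, _ in impressions]
    total = sum(weights)
    # Weighted mean absolute deviation, rescaled from [0, 2] to [0, 1].
    dv = sum(w * abs(imp - dt) for w, (_, imp) in zip(weights, impressions)) / total
    return no * (1.0 - dv / 2.0)

odb = [(1, 0.9), (3, 0.8), (7, -0.2), (9, 0.7)]  # (time, impression) pairs
print(direct_trust(odb, t_now=10), reliability(odb, t_now=10))
```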

Page 42: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram repeated, with the Witness reputation module highlighted.]

Page 43: T9. Trust and reputation in multi-agent systems

Witness reputation

Reputation that an agent builds on another agent based on the beliefs gathered from society members (witnesses).

Problems of witness information:

• It can be false.
• It can be incomplete.
• It may suffer from the "correlated evidence" problem.

Page 44: T9. Trust and reputation in multi-agent systems

[Sociogram of the "trade" relation among agents (a1, a2, b1, b2, c1, c2, d1, d2 and u1-u9); the marks o, +, #, ^ indicate what each agent trades.]

Page 45: T9. Trust and reputation in multi-agent systems

[Sociogram of the "cooperation" relation among the same agents.]

Cooperation: a big exchange of sincere information and some kind of predisposition to help if possible.

Page 46: T9. Trust and reputation in multi-agent systems

[Sociogram of the "competition" relation among the same agents.]

Competition: agents tend to use all the available mechanisms to take some advantage over their competitors.

Page 47: T9. Trust and reputation in multi-agent systems

Witness reputation

Step 1: Identifying the witnesses

• Initial set of witnesses: agents that have had a trade relation with the target agent (here, b2).

[Trade sociogram with the candidate witnesses of the target highlighted.]

Page 48: T9. Trust and reputation in multi-agent systems

Witness reputation

Step 1: Identifying the witnesses

• Initial set of witnesses: agents that have had a trade relation with the target agent.

Grouping agents with frequent interactions among them, and considering each of these groups as a single source of reputation values:

• Minimizes the correlated evidence problem.
• Reduces the number of queries to agents that will probably give us more or less the same information.

To group agents, ReGreT relies on sociograms.

Page 49: T9. Trust and reputation in multi-agent systems

Witness reputation

Heuristic to identify groups and the best agents to represent them (a sketch follows below):

1. Identify the components of the graph.
2. For each component, find the set of cut-points.
3. For each component that does not have any cut-point, select a central point (the node with the largest degree).

[Cooperation sociogram with cut-points and central points highlighted.]
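A sketch of the heuristic using networkx (the library choice is mine; ReGreT is not tied to it). Articulation points are exactly the graph-theoretic cut-points, and for components without one we pick the highest-degree node:

```python
import networkx as nx

def representative_witnesses(sociogram):
    """Per connected component: use its cut-points if any; otherwise use a
    central point (the node with the largest degree), as in the heuristic."""
    reps = []
    for nodes in nx.connected_components(sociogram):
        comp = sociogram.subgraph(nodes)
        cut_points = list(nx.articulation_points(comp))
        if cut_points:
            reps.extend(cut_points)
        else:
            reps.append(max(comp.degree, key=lambda nd: nd[1])[0])
    return reps

g = nx.Graph([("u2", "u4"), ("u4", "u5"), ("u2", "u5"),   # a clique component
              ("u3", "u6"), ("u6", "u7"), ("u6", "u8")])  # u6 is a cut-point
print(representative_witnesses(g))  # e.g. ['u2', 'u6']
```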

Page 50: T9. Trust and reputation in multi-agent systems

Witness reputation

Step 1: Identifying the witnesses

• Initial set of witnesses: agents that have had a trade relation with the target agent.
• Grouping and selecting the most representative witnesses.

[Trade sociogram with the groups collapsed to their representatives.]

Page 51: T9. Trust and reputation in multi-agent systems

Witness reputation

Step 1: Identifying the witnesses

• Initial set of witnesses: agents that have had a trade relation with the target agent.
• Grouping and selecting the most representative witnesses: here u2, u3 and u5.

Page 52: T9. Trust and reputation in multi-agent systems

Witness reputation

Step 1 selected the witnesses u2, u3 and u5.

Step 2: Who can I trust? Each witness reports its trust in the target together with its reliability, e.g. $\mathrm{Trust}_{u_2 \to b_2}$ with $\mathrm{RL}(\mathrm{Trust}_{u_2 \to b_2})$, and $\mathrm{Trust}_{u_5 \to b_2}$ with $\mathrm{RL}(\mathrm{Trust}_{u_5 \to b_2})$. A minimal aggregation sketch follows below.
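A minimal aggregation sketch (illustrative only: ReGreT's actual formula weights witness reports by their credibility and also propagates a reliability for the result):

```python
def witness_reputation(reports):
    """reports: list of (trust_value, credibility) pairs, one per selected
    witness. Aggregate as a credibility-weighted average."""
    total = sum(cr for _, cr in reports)
    if total == 0:
        return 0.0
    return sum(tr * cr for tr, cr in reports) / total

# u2 and u5 report their trust in b2, weighted by how credible we find them:
print(witness_reputation([(0.8, 0.9), (-0.3, 0.4)]))  # -> ~0.46
```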

Page 53: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram repeated, with the Credibility module highlighted.]

Page 54: T9. Trust and reputation in multi-agent systems

Credibility model

Two methods are used to evaluate the credibility of witnesses (witnessCr):

• Social relations (socialCr)
• Past history (infoCr)

Page 55: T9. Trust and reputation in multi-agent systems

Credibility model

• socialCr(a,w,b): the credibility that agent a (the source agent) assigns to agent w (the witness) when w is giving information about b (the target agent), considering the social structure among w, b and itself.

[Diagram: the possible configurations of cooperative and competitive relations among a, w and b.]

Page 56: T9. Trust and reputation in multi-agent systems

Credibility model

ReGreT uses fuzzy rules to calculate how the structure of social relations influences the credibility of the information, e.g.:

IF coop(w,b) is high THEN socialCr(a,w,b) is very_low

[Fuzzy membership functions over [0,1]: low (l), moderate (m), high (h) for the relations; very_low (vl), low (l), moderate (m), high (h), very_high (vh) for the credibility.]

A minimal sketch of evaluating such a rule follows below.
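The membership-function shapes and the Mamdani-style evaluation below are assumptions for illustration; ReGreT's concrete definitions may differ:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def high(x):          # antecedent label "h" over coop(w,b) in [0,1]
    return tri(x, 0.5, 1.0, 1.5)

def very_low(y):      # consequent label "vl" over socialCr in [0,1]
    return tri(y, -0.25, 0.0, 0.25)

def rule_socialcr(coop_wb):
    """IF coop(w,b) is high THEN socialCr(a,w,b) is very_low,
    via Mamdani min-clipping and centroid defuzzification."""
    fire = high(coop_wb)                    # degree of rule activation
    ys = [i / 100 for i in range(101)]      # discretized output domain
    clipped = [min(fire, very_low(y)) for y in ys]
    area = sum(clipped)
    return sum(y * m for y, m in zip(ys, clipped)) / area if area else None

print(rule_socialcr(0.9))  # strong cooperation -> credibility near very_low
```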

Page 57: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram repeated, with the Neighbourhood reputation module highlighted.]

Page 58: T9. Trust and reputation in multi-agent systems

Neighbourhood reputation

The trust on the agents that are in the "neighbourhood" of the target agent, and their relation with it, are the elements used to calculate what we call the Neighbourhood reputation.

ReGreT uses fuzzy rules to model this reputation. For a neighbour $n_i$ of the target $b$:

IF $\mathrm{DT}_{a\to n_i}(\mathit{offers\_good\_quality})$ is X AND coop($b$, $n_i$) is low THEN $R^{n_i}_{a\to b}(\mathit{offers\_good\_quality})$ is X

IF $\mathrm{RL}(\mathrm{DT}_{a\to n_i}(\mathit{offers\_good\_quality}))$ is X' AND coop($b$, $n_i$) is Y' THEN $\mathrm{RL}(R^{n_i}_{a\to b}(\mathit{offers\_good\_quality}))$ is T(X', Y')

Page 59: T9. Trust and reputation in multi-agent systems

The ReGreT system

[Architecture diagram repeated, with the System reputation module highlighted.]

Page 60: T9. Trust and reputation in multi-agent systems

The idea behind System reputation is to use common knowledge about social groups, and the role that the agent is playing in the society, as a mechanism to assign reputation values to other agents. The knowledge necessary to calculate a system reputation is usually inherited from the group or groups to which the agent belongs.

System reputation

Page 61: T9. Trust and reputation in multi-agent systems

Trust

[Diagram: Direct Trust and the reputation model (Witness, System and Neighbourhood reputation) combined into the final Trust value.]

If the agent has a reliable direct trust value, it will use it as the measure of trust. If that value is not reliable enough, it will use reputation instead.

Page 62: T9. Trust and reputation in multi-agent systems

A cognitive perspective to computational reputation models

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

• [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture

• Arguing about reputation concepts

Page 63: T9. Trust and reputation in multi-agent systems

Social evaluation

• A social evaluation, as the name suggests, is the evaluation by a social entity of a property related to a social aspect.

• Social evaluations may concern physical, mental, and social properties of targets.

• A social evaluation involves at least three sets of agents:
  – a set E of agents who share the evaluation (evaluators)
  – a set T of evaluation targets
  – a set B of beneficiaries

We can find examples where the different sets intersect totally, partially, etc. An evaluator e (e in E) may evaluate t (t in T) with regard to a state of the world that is in the interest of b (b in B), but of which b is not necessarily aware.

Example: the quality of TV programs during children's time slots.

Page 64: T9. Trust and reputation in multi-agent systems

Image and Reputation

• Both are social evaluations.

• They concern other agents' (targets') attitudes toward socially desirable behaviour, but... whereas image consists of a set of evaluative beliefs about the characteristics of a target, reputation concerns the voice that is circulating about the same target.

Reputation in artificial societies [Rosaria Conte, Mario Paolucci]

Page 65: T9. Trust and reputation in multi-agent systems

Beliefs

B(φ): the agent has accepted φ as true, and its decisions from now on will take this into account.

Image

A social evaluation: "an evaluative belief; it tells whether the target is good or bad with respect to a given behaviour" [Conte & Paolucci].

Image is the result of internal reasoning over different sources of information that leads the agent to create a belief about the behaviour of another agent.

Page 66: T9. Trust and reputation in multi-agent systems

Reputation

• A voice is something that "is said", a piece of information that is being transmitted.

• Reputation: a voice about a social evaluation that is recognised by the members of a group to be circulating among them.

Beliefs

B(S(φ)): the agent believes that the social evaluation φ is being communicated ("said").

• This does not imply that the agent believes that φ is true.

Page 67: T9. Trust and reputation in multi-agent systems

Reputation: implications

• Because it is not implicit that it believes the associated social evaluation, the agent that spreads a reputation takes no responsibility for that social evaluation (the responsibility associated with the act of spreading it is another matter).

• This fact allows reputation to circulate more easily than image (less/no fear of retaliation).

• Notice that if an agent believes "what people say", image and reputation collapse into one.

• This distinction has important advantages from a technical point of view.

Page 68: T9. Trust and reputation in multi-agent systems

Gossip

• In order for reputation to exist, it has to be transmitted. We cannot have reputation without communication.

• Gossip currently has the meaning of idle talk or rumour, especially about the personal or private affairs of others, and usually has a bad connotation. But in fact it is an essential element of human nature.

• The antecedent of gossip is grooming.

• Studies in evolutionary psychology have found gossip to be very important as a mechanism to spread reputation [Sommerfeld et al. 07, Dunbar 04].

• Gossip and reputation complement social norms: reputation evolves along with implicit norms to encourage socially desirable conducts, such as benevolence or altruism, and to discourage socially unacceptable ones, like cheating.

Page 69: T9. Trust and reputation in multi-agent systems

Outline

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

• [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture

• Arguing about reputation concepts

Page 70: T9. Trust and reputation in multi-agent systems

RepAge

What is the RepAge model?

It is a reputation model evolved from the cognitive theory by Conte and Paolucci. The model is designed with special attention to the internal representation of the elements used to build images and reputations, as well as the inter-relations of these elements.

Page 71: T9. Trust and reputation in multi-agent systems

RepAge memory

[Diagram: the RepAge memory as a network of predicates (P) supporting Image (Img) and Reputation (Rep) predicates; each predicate carries a strength (e.g. 0.6) and an evaluative value.]

Page 72: T9. Trust and reputation in multi-agent systems

RepAge memory

Page 73: T9. Trust and reputation in multi-agent systems
Page 74: T9. Trust and reputation in multi-agent systems

Outline

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

• [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture

• Arguing about reputation concepts

Page 75: T9. Trust and reputation in multi-agent systems

What do you mean by "properly"?

Current models:

[Diagram: the Trust & Reputation system sits beside the agent "black box" (inputs, planner, decision mechanism, communication); it is consulted reactively, and how its output enters the agent's decision making is left unspecified ("?").]

Page 76: T9. Trust and reputation in multi-agent systems

Trust & Reputation system

Inputs

Current models

Planner

Decision mechanism

Comm

Agent Black box

Reactive

Value

What do you mean by “properly”?

Page 77: T9. Trust and reputation in multi-agent systems

Trust & Reputation system

Inputs

Planner

Decision mechanism

Comm

Agent

The next generation?

What do you mean by “properly”?

Page 78: T9. Trust and reputation in multi-agent systems

What do you mean by "properly"?

The next generation?

[Same integrated diagram.] Not only reactive... but proactive.

Page 79: T9. Trust and reputation in multi-agent systems

BDI model

• A very popular model in the multiagent community.

• It has its origins in the theory of human practical reasoning [Bratman] and the notion of intentional systems [Dennett].

• The main idea is that we can talk about computer programs as if they had a "mental state".

• Specifically, the BDI model is based on three mental attitudes:
  – Beliefs: what the agent thinks is true about the world.
  – Desires: world states the agent would like to achieve.
  – Intentions: world states the agent is putting effort into achieving.
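As a purely illustrative sketch (not the specific architecture discussed later in these slides), a minimal practical-reasoning loop over these three attitudes might look as follows; every name and rule here is invented for the example:

```python
# Minimal illustrative BDI-style deliberation loop: beliefs are revised from
# percepts, desires generate options, and filtering commits to intentions.

class BDIAgent:
    def __init__(self):
        self.beliefs = {}      # what the agent thinks is true
        self.desires = set()   # world states it would like to achieve
        self.intentions = []   # world states it is putting effort into

    def revise_beliefs(self, percept):
        self.beliefs.update(percept)

    def options(self):
        if self.beliefs.get("wine_available"):
            self.desires.add("buy_good_wine")

    def filter(self):
        # Commit only to desires believed achievable with a trusted partner.
        for d in self.desires:
            if self.beliefs.get("trusted_seller") and d not in self.intentions:
                self.intentions.append(d)

    def step(self, percept):
        self.revise_beliefs(percept)
        self.options()
        self.filter()
        return self.intentions

agent = BDIAgent()
print(agent.step({"wine_available": True, "trusted_seller": True}))
```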

Page 80: T9. Trust and reputation in multi-agent systems

BDI model

• The agent is described in terms of these mental attitudes.

• The decision-making model underlying the BDI model is known as practical reasoning.

• In short, practical reasoning is what allows the agent to go from beliefs, desires and intentions to actions.

Page 81: T9. Trust and reputation in multi-agent systems

Multicontext systems

• Units: structural entities representing the main architecture components. Each unit has a single logic associated with it.

• Logics: declarative languages, each with a set of axioms and a number of rules of inference.

• Theories: sets of formulae written in the logic associated with a unit.

• Bridge rules: rules of inference which relate formulae in different units.

Page 82: T9. Trust and reputation in multi-agent systems

[Example: units U1, U2, U3 and the bridge rule U1:b, U2:d ⊢ U3:a. At this step, d holds in U2.]

Page 83: T9. Trust and reputation in multi-agent systems

[Same example: b is now derived in U1, so both premises U1:b and U2:d hold.]

Page 84: T9. Trust and reputation in multi-agent systems

[Same example: the bridge rule detects that both of its premises hold.]

Page 85: T9. Trust and reputation in multi-agent systems

[Same example: the bridge rule fires and a is added to U3.]
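The example above can be transcribed almost directly into code. A minimal sketch, with the unit theories as plain sets and the bridge rule U1:b, U2:d ⊢ U3:a applied to a fixpoint:

```python
# Three units with their theories, and one bridge rule "U1:b, U2:d => U3:a"
# that fires once both premises hold in their respective units.

theories = {"U1": set(), "U2": {"d"}, "U3": set()}
bridge_rules = [((("U1", "b"), ("U2", "d")), ("U3", "a"))]

def fire_bridge_rules(theories, bridge_rules):
    changed = True
    while changed:  # iterate until no rule adds anything new (fixpoint)
        changed = False
        for premises, (unit, formula) in bridge_rules:
            if all(f in theories[u] for u, f in premises) and formula not in theories[unit]:
                theories[unit].add(formula)
                changed = True

theories["U1"].add("b")          # b is derived inside U1
fire_bridge_rules(theories, bridge_rules)
print(theories["U3"])            # -> {'a'}
```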

Page 86: T9. Trust and reputation in multi-agent systems

Multicontext

Page 87: T9. Trust and reputation in multi-agent systems

Repage integration in a BDI architecture

Page 88: T9. Trust and reputation in multi-agent systems

BC-LOGIC

Page 89: T9. Trust and reputation in multi-agent systems

Grounding Image and Reputation to BC-Logic

Page 90: T9. Trust and reputation in multi-agent systems
Page 91: T9. Trust and reputation in multi-agent systems

Repage integration in a BDI architecture

Page 92: T9. Trust and reputation in multi-agent systems

Desire and Intention context

Page 93: T9. Trust and reputation in multi-agent systems

Generating Realistic Desires

Page 94: T9. Trust and reputation in multi-agent systems

Generating Intentions

Page 95: T9. Trust and reputation in multi-agent systems
Page 96: T9. Trust and reputation in multi-agent systems

Repage integration in a BDI architecture

Page 97: T9. Trust and reputation in multi-agent systems

Outline

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

• [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture

• Arguing about reputation concepts

Page 98: T9. Trust and reputation in multi-agent systems

Arguing about Reputation Concepts

Goal: allow agents to participate in argumentation-based dialogs regarding reputation elements in order to:

• Decide on the acceptance of a communicated social evaluation based on its reliability: "Is the argument associated with a communicated social evaluation (and according to my knowledge) strong enough to consider its inclusion in the knowledge base of my reputation model?"

• Help in the process of trust alignment.

What we need:

• A language that allows the exchange of reputation-related information.
• An argumentation framework that fits the requirements imposed by the particular nature of reputation.
• A dialog protocol that allows agents to establish information-seeking dialogs.

Page 99: T9. Trust and reputation in multi-agent systems

The language: LRep

LREP: a first-order sorted language with special predicates representing the typology of social evaluations we use: Img, Rep, ShV, ShE, DE, Comm.

• SF: set of constant formulas. It allows LREP formulas to be nested in communications.

• SV: set of evaluative values, e.g. linguistic labels over the discrete scale {0, 1, 2, 3, 4} (via a mapping f).

Page 100: T9. Trust and reputation in multi-agent systems

The reputation argumentation framework

• Given the nature of social evaluations (their values are graded), we need an argumentation framework that allows us to weight the attacks. Example: we have to be able to differentiate between Img(j,seller,VG) being attacked by Img(j,seller,G) and being attacked by Img(j,seller,VB).

• Specifically, we instantiate the Weighted Abstract Argumentation Framework defined in P.E. Dunne, A. Hunter, P. McBurney, S. Parsons, and M. Wooldridge, 'Inconsistency tolerance in weighted argument systems', AAMAS'09, pp. 851-858, 2009.

• Basically, this framework introduces the notions of strength and inconsistency budgets (the amount of "inconsistency" the system can tolerate regarding attacks) into a classical Dung framework. A simplified sketch follows below.
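A deliberately simplified sketch of the budget idea (not the cited paper's full semantics): an argument can be kept if the attacks we must disregard to keep it fit within the inconsistency budget β. Arguments and strengths here are invented for the example:

```python
# Weighted attack relation: (attacker, attacked) -> strength of the attack.
attacks = {
    ("Img(j,seller,G)", "Img(j,seller,VG)"): 1.0,   # mild disagreement
    ("Img(j,seller,VB)", "Img(j,seller,VG)"): 4.0,  # strong disagreement
}

def survives(argument, attacks, beta):
    """Can 'argument' be kept by disregarding incoming attacks whose total
    weight does not exceed the inconsistency budget beta? (Simplified.)"""
    incoming = [w for (_, target), w in attacks.items() if target == argument]
    return sum(incoming) <= beta

print(survives("Img(j,seller,VG)", attacks, beta=1.5))  # False: 5.0 > 1.5
print(survives("Img(j,seller,VG)", attacks, beta=6.0))  # True
```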

Page 101: T9. Trust and reputation in multi-agent systems

Building Argumentative Theories

Reputation-related information level: the reputation theory is the set of ground elements (expressed in LREP) gathered by agent j through interactions and communications; its consequence relation is the reputation model itself, specific to each agent.

Argumentation level: the argumentative theory is built from the reputation theory, with a simple, shared consequence relation.

Page 102: T9. Trust and reputation in multi-agent systems

Attack and Strength

[Diagram: the strength of an attack between evaluations over the scale {0, 1, 2, 3, 4}, via the mapping f.]

Page 103: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

• Agent i: proponent; agent j: opponent.

• Roles involved: seller; sell(q), quality; sell(dt), delivery time; Inf, informant.

• Each agent is equipped with a Reputation Weighted Argument System.

Page 104: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

i j

Page 105: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

Strength of the attack

i j

Page 106: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

i j

Page 107: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

i j

Page 108: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

i j

Page 109: T9. Trust and reputation in multi-agent systems

Example of argumentative dialog

i j

Page 110: T9. Trust and reputation in multi-agent systems

i j

Using Inconsistency Budgets

Page 111: T9. Trust and reputation in multi-agent systems

Outline

PART II: Trust Computing Approaches

• Security
• Institutional
• Social

Evaluation of Trust and Reputation Models


Page 112: T9. Trust and reputation in multi-agent systems

GIAA – Group of Applied Artificial Intelligence Univ. Carlos III de Madrid

Dr. Javier Carbó

Page 113: T9. Trust and reputation in multi-agent systems

Trust in Information Security

Same Word, Different World

The security approach tackles the "hard" problems of trust. It views trust as an objective, universal and verifiable property of agents. Its trust problems have solutions:

• False identity
• Reading/modification of messages by third parties
• Repudiation of messages
• Certificates of accomplishing tasks/services according to standards


Page 114: T9. Trust and reputation in multi-agent systems

An example: Public Key Infrastructure

[Diagram: client, Registration authority, Certificate authority and LDAP directory. Flow: 1. client identity presented to the Registration authority; 2. private key sent; 3. public key sent; 4. publication of the certificate in the LDAP directory; 5. certificate sent.]

Page 115: T9. Trust and reputation in multi-agent systems

Trust in I&S, limitations

Their trust relies on central entities:

– Authorities, Trust Third Parties

– Partially solved using hierarchies of TTPs.

They ignore part of the problem:

- Top authority should be trusted by any other way

Their scope is far away from Real Life Trust issues:

– lies, defection, collusions, social norm violations, …


Page 116: T9. Trust and reputation in multi-agent systems

Institutional approach

Institutions have successfully regulated human societies for a long time:

– created to achieve particular goals while complying with norms;
– responsible for defining the rules of the game (norms), enforcing them, and assessing penalties in case of violation.

Examples: auction houses, parliaments, stock exchange markets, ...

The institutional approach is focused on the existence of organizations:

• providing an execution infrastructure
• controlling access to resources
• sanctioning/rewarding agents' behaviors


Page 117: T9. Trust and reputation in multi-agent systems

An example: e-institutions


Page 118: T9. Trust and reputation in multi-agent systems

Institutional approach: limitations

• They view trust as a partially objective, local and verifiable property of agents.
• Intrusive control over the agents (modification of execution resources, process killing, ...).
• They require a shared agreement on what is expected (norm compliance, case law, ...).
• They require a central entity and global supervision:
  – repositories and access-control entities must be centralised;
  – low scalability if every agent is observed by the institution.
• It assumes that the institution itself is trusted.


Page 119: T9. Trust and reputation in multi-agent systems

Social approach

The social approach builds on the idea of a self-organized society (Adam Smith's invisible hand):

• Each agent has its own evaluation criteria of what is expected: no social norms, just individual norms.
• Each agent is in charge of rewards and punishments (often in terms of more or fewer future cooperative interactions).
• No central entity at all: a completely distributed social control of malicious agents.
• Trust as an emergent property.
• It avoids the privacy issues caused by centralized approaches.


Page 120: T9. Trust and reputation in multi-agent systems

Social approach: limitations

• Unlimited, but undefined and unpredictable trust scope: trust is viewed as a subjective, local and unverifiable property of agents.
• Exclusion/isolation is the typical punishment for malicious agents, but it is difficult to enforce in open and dynamic agent societies.
• Malicious behaviors may still occur; they are supposed to be prevented by the lack of incentives and by punishments.
• It is difficult to define which domain and society is appropriate for testing this social approach.


Page 121: T9. Trust and reputation in multi-agent systems

Ways to evaluate any system

1. Integration in real applications
2. Using real data from public datasets
3. Using realistic data generated artificially
4. Using ad-hoc simulated data with no justification/motivation
5. None of the above

Page 122: T9. Trust and reputation in multi-agent systems

Ways to evaluate T&R in agent systems

Integration of T&R on real agent applications

Using real T&R data from public datasets

Using realistic T&R data generated artificially

Using ad-hoc simulated data with no justification/motivation

None of above

Page 123: T9. Trust and reputation in multi-agent systems

Real Applications using T&R in an agent system

• What real application are we looking for?

• Trust and reputation: a system that uses (for something) and exchanges subjective opinions about other participants → recommender systems.

• Agent system: a distributed view, with no central entity that collects, aggregates and publishes a final valuation → ???

Page 124: T9. Trust and reputation in multi-agent systems

Real Applications using T&R in an agent system

• Desiderata of application domains:

(To be filled by students)

Page 125: T9. Trust and reputation in multi-agent systems

Real data & public datasets

• Assuming real agent applications exist, would the data be publicly available?
  – Privacy concerns.
  – Lack of incentives to store data over time.
  – Distribution of data. A Heisenberg-style uncertainty principle: if users knew their subjective opinions would be collected by a central entity, they would not behave as if their opinions had just a private (supposed-to-be friendly) reader.

• No agents, no distribution → the public datasets come from recommender systems.

Page 126: T9. Trust and reputation in multi-agent systems

A view on privacy concerns

• Anonymity: use of arbitrary/secure pseudonyms.

• Using concordance: similarity between users within a single context, as the mean of differences in rating a set of items; users tend to agree. (Private Collaborative Filtering using estimated concordance measures, N. Lathia, S. Hailes, L. Capra, 2007)

• Secure pair-wise comparison of fuzzy ratings. (Introducing newcomers into a fuzzy reputation agent system, J. Carbo, J.M. Molina, J. Davila, 2002)

Page 127: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

• MovieLens, www.grouplens.org: Two datasets:

– 100,000 ratings for 1682 movies by 943 users.

– 1 million ratings for 3900 movies by 6040 users.

• These are the “standard” datasets that many recommendation system papers use in their evaluation

Page 128: T9. Trust and reputation in multi-agent systems

My paper with MovieLens

• I selected users who had rated 70 or more movies, and also the movies that were evaluated more than 35 times, in order to avoid the sparsity problem.

• Finally we had 53 users and 28 movies.

• The average number of votes per user is approximately 18, so the sparsity of the selected set of users and movies is under 35%. A hypothetical reconstruction of this filtering step is sketched below.

"Agent-based collaborative filtering based on fuzzy recommendations", J. Carbó, J.M. Molina, IJWET v1 n4, 2004.
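A hypothetical reconstruction of that filtering in pandas, assuming the MovieLens 100k u.data tab-separated layout (user, movie, rating, timestamp; the column names are mine):

```python
import pandas as pd

# Keep users with >= 70 ratings, then movies rated more than 35 times,
# to reduce sparsity, then report the density of the resulting subset.
ratings = pd.read_csv("u.data", sep="\t",
                      names=["user", "movie", "rating", "timestamp"])

active_users = ratings["user"].value_counts()
ratings = ratings[ratings["user"].isin(active_users[active_users >= 70].index)]

popular = ratings["movie"].value_counts()
dense = ratings[ratings["movie"].isin(popular[popular > 35].index)]

n_users, n_movies = dense["user"].nunique(), dense["movie"].nunique()
sparsity = 1 - len(dense) / (n_users * n_movies)
print(n_users, n_movies, round(sparsity, 2))
```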

Page 129: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

BookCrossing (BX) dataset:

• www.informatik.uni-freiburg.de/~cziegler/BX

• collected by Cai-Nicolas Ziegler in a 4-week crawl (August / September 2004) from the Book-Crossing community.

• It contains 278,858 users providing 1,149,780 ratings (explicit / implicit) about 271,379 books.

Page 130: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

Last.fm Dataset

• top artists played by all users:

– contains <user, artist-mbid, artist-name, total-plays>

– tuples for ~360,000 users about 186,642 artists.

• full listening history of 1000 users:

– Tuples of <user-id, timestamp, artist-mbid, artist-name, song-mbid, song-title>

• Collected by Oscar Celma, Univ. Pompeu Fabra

• www.dtic.upf.edu/~ocelma/MusicRecommendationDataset

Page 131: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

Jester Joke Data Set:

• Ken Goldberg from UC Berkeley released a dataset from Jester Joke Recommender System.

• 4.1 million continuous ratings (-10.00 to +10.00) of 100 jokes from 73,496 users.

• www.ieor.berkeley.edu/~goldberg/jester-data/

• It differentiates itself from other datasets by having a much smaller number of rateable items.

Page 132: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

Epinions dataset, collected by P. Massa:

• In a 5-week crawl (November/December 2003) of Epinions.com.

• Not just ratings about items, but also trust statements:
  – 49,290 users who rated a total of 139,738 different items at least once, writing 664,824 reviews;
  – 487,181 issued trust statements.

• Only positive trust statements, not negative ones.

Page 133: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

Advogato: www.trustlet.org

• A weighted dataset: opinions aggregated (centrally) on a 3-level basis: Apprentice, Journeyer, and Master.

• Tuples of the form: minami -> polo [level="Journeyer"];

• Used to test trust propagation in social networks (assuming trust transitivity).

• A trust metric (by P. Massa) uses this information to assign every user a final certification level, aggregating weighted opinions.

Page 134: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

MoviePilot dataset: www.moviepilot.com

• Contains information related to concepts from the world of cinema, e.g. single movies, movie universes (such as the world of Harry Potter movies), upcoming details (trailers, teasers, news, etc.).

• RecSysChallenge: a live evaluation session where algorithms trained on offline data are evaluated online, on real users.

Mendeley dataset: www.mendeley.com

• Recommendations to users about scientific papers that they might be interested in.

Page 135: T9. Trust and reputation in multi-agent systems

Real Data & Public Datasets

• No agents, no distribution → the public datasets come from recommender systems.

• Authors have to distribute the opinions among participants in some way.

• Ratings are about items, not trust statements.

• The ratio of # of ratings to # of items is too low.

• The ratio of # of ratings to # of users is too low.

• No time-stamps.

• Papers intend to be based on real data, but the required transformation from centralized to distributed aggregation distorts the reality of these data.

Page 136: T9. Trust and reputation in multi-agent systems

Realistic Data

• We need to generate realistic data to test trust and reputation in agent systems.

• Several technical/design problems arise:
  – Which # of users, ratings and items do we need?
  – How dynamic should the society of agents be?

• But the hardest part is the psychological/sociological one:
  – How do individuals take trust decisions? Which types of individuals are there?
  – How does a real society of humans trust? How many of each individual type belong to a real human society?

Page 137: T9. Trust and reputation in multi-agent systems

Realistic Data

• Large-scale simulation with NetLogo (http://ccl.northwestern.edu/netlogo/).

• Others: MASON (https://mason.dev.java.net/), RePast (http://repast.sourceforge.net/).

• But these are mainly ad-hoc simulations which are difficult for third parties to repeat.

• Many of them use unrealistic agents with a binary altruist/egoist behaviour based on game-theoretic views.

Page 138: T9. Trust and reputation in multi-agent systems

Examples of ad-hoc simulations

• Convergence of the reputation image to the real behaviour of agents. Static behaviours, no recommendations, agents just consume/provide services. Worst case.

• Maximum influence of cooperation. Free and honest recommendations from every agent, based on consumed services. Best case.

• Inclusion of dynamic behaviours, different percentages of malicious agents in the society, collusions between recommenders and providers, etc. Compare results with the previous ones.

"Avoiding malicious agents using fuzzy recommendations", J. Carbo, J.M. Molina, J. Dávila, Journal of Organizational Computing & Electronic Commerce, vol. 17, num. 1.

Page 139: T9. Trust and reputation in multi-agent systems

Technical/design problems in generating simulated data

• Lessons learned from the ART testbed experience.

• http://megatron.iiia.csic.es/art-testbed/

• A testbed helps to compute fair comparisons: "Researchers can perform easily-repeatable experiments in a common environment against accepted benchmarks."

• Relative success:
  – 3 international competitions jointly with AAMAS 06-08.
  – Over 15 participants in each competition.
  – Several journal and conference publications use it.

Page 140: T9. Trust and reputation in multi-agent systems

Art Domain

Page 141: T9. Trust and reputation in multi-agent systems

The ART testbed

Page 142: T9. Trust and reputation in multi-agent systems
Page 143: T9. Trust and reputation in multi-agent systems

ART Interface

The agent system is displayed as a topology on the left, while two panels on the right show the details of particular agent statistics and of global system statistics.

Page 144: T9. Trust and reputation in multi-agent systems

The ART testbed

• The simulation creates opinions according to an error distribution with zero mean and standard deviation s:

s = (s* + α / c_g) · t

• where s*, unique for each era, is assigned to an appraiser from a uniform distribution;
• t is the true value of the painting to be appraised;
• α is a hidden value, fixed for all appraisers, that balances opinion-generation cost against final accuracy;
• c_g is the cost an appraiser decides to pay to generate an opinion. Therefore, the minimum achievable standard deviation of the error distribution is s* · t.

Page 145: T9. Trust and reputation in multi-agent systems

The ART testbed

• Each appraiser a's actual client share r_a takes into account the appraiser's client share from the previous timestep:

r_a = q · r_a' + (1 − q) · r̃_a

• where r_a' is appraiser a's client share in the previous timestep and r̃_a is the freshly earned share;
• q reflects the influence of the previous client share on the next one (thus damping the volatility in client share magnitudes caused by frequent accuracy oscillations). A small sketch of both equations follows below.
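A small Python sketch of the two update equations above; the concrete values of s*, α, c_g and q are invented for the example:

```python
import random

def opinion(true_value, s_star, alpha, cost_g):
    """Noisy appraisal: error ~ N(0, s) with s = (s* + alpha / c_g) * t,
    so paying a higher generation cost c_g tightens the opinion."""
    s = (s_star + alpha / cost_g) * true_value
    return random.gauss(true_value, s)

def next_client_share(prev_share, fresh_share, q):
    """r_a = q * r_a' + (1 - q) * r~_a : q damps client-share oscillations."""
    return q * prev_share + (1 - q) * fresh_share

random.seed(0)
t = 1000.0                                            # true painting value
print(opinion(t, s_star=0.1, alpha=50, cost_g=100))   # s = (0.1 + 0.5) * t
print(next_client_share(0.30, 0.50, q=0.8))           # -> 0.34
```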

Page 146: T9. Trust and reputation in multi-agent systems

2006 ART Competition

2006 competition setup:

• Clients per agent: 20; painting eras: 10; games with 5 agents.
• Costs 100/10/1; Sensing-Cost-Accuracy = 0.5. Winner: iam, from Southampton Univ.

Post-competition discussion notes:

• Larger number of agents required; definition of dummy agents; relate the # of eras to the # of agents; a fairer distribution of expertise (currently just uniform); more abrupt changes in the # of clients (greater q); improving expertise over time?

Page 147: T9. Trust and reputation in multi-agent systems

2006 ART Winner conclusions

"The ART of IAM: The Winning Strategy for the 2006 Competition", Luke Teacy et al., Trust WS, AAMAS 07.

• It is generally more economical for an agent to purchase opinions from a number of third parties than to invest heavily in its own opinion.

• There is little apparent advantage to reputation sharing: reputation is most valuable in cases where direct experience is relatively more difficult to acquire.

• The final lesson is that although trust can be viewed as a sociological concept, and inspiration for computational models of trust can be drawn from multiple disciplines, the problem of combining estimates of unknown variables (such as trustee behaviour) is fundamentally a statistical one.

Page 148: T9. Trust and reputation in multi-agent systems

2007 ART Competition

2007 competition setup:

• Costs 100/10/0.1; all agents have an equal sum of expertise values; painting eras: static but unknown; expertise assignments may change during the course of the game; dummy agents included; games with 25 agents.

2007 competition discussion notes:

• It needs to facilitate reputation exchange.
• It does not have to produce all changes at the same time: gradual changes.
• Studying barriers to entry, i.e. how a new agent joins an existing MAS: cold start vs. hot start (exploration vs. exploitation).
• More competitive dummy agents.
• The relationship between opinion generation cost and accuracy.

Page 149: T9. Trust and reputation in multi-agent systems

2008 ART Competition

2008 competition setup:

• Agents are limited in the number of certainty and opinion requests they can send.
• Certainty requests have a cost.
• The use of self-opinions is denied.
• Wider range of expertise values.
• Every time step, a number of eras is selected at random and a given amount of positive change (increased value) is applied; for every positive change, a negative change of the same amount is also applied, so that the average expertise of the agent is not modified.

Page 150: T9. Trust and reputation in multi-agent systems

Evaluation criteria

• There is a lack of criteria on which of the very different trust decisions should be considered, and how.

Conte and Paolucci 02:

• Epistemic decisions: those about updating and generating trust opinions from received reputations.
• Pragmatic-strategic decisions: how to behave with partners using this reputation-based trust.
• Memetic decisions: how and when to share reputation with others.

Page 151: T9. Trust and reputation in multi-agent systems

Main Evaluation Criteria of The ART testbed

• The winning agent is the appraiser with the highest bank account balance in the direct confrontation of appraiser agents, repeated X times.

• In other words, the appraiser who is able to:
  – estimate the value of its paintings most accurately;
  – purchase information most prudently.

• An ART iteration involves 19 steps (11 decisions, 8 interactions) to be taken by an agent.

Page 152: T9. Trust and reputation in multi-agent systems

Trust decisions in ART testbed

1. How should our agent aggregate reputation information about others?

2. How should the trust weights of providers and recommenders be updated afterwards?

3. How many agents should our agent ask for reputation information about other agents?

4. How many reputation and opinion requests from other agents should our agent answer?

5. How many agents should our agent ask for opinions about our assigned paintings?

6. How much time (economic value) should our agent spend building requested opinions about the paintings of other agents?

7. How much time (economic value) should our agent spend building the appraisals of its own paintings? (AUTOPROVIDER!)

Page 153: T9. Trust and reputation in multi-agent systems

Limitations of Main Evaluation Criteria of ART testbed

From my point of view:

• It evaluates all trust decisions jointly: should participants play the provider and consumer roles jointly, or just the role of opinion consumers?

• Is the direct confrontation of competitor agents the right scenario in which to compare them?

Page 154: T9. Trust and reputation in multi-agent systems

Providers vs. Consumers

• Playing games with two participants of the 2007 competition (iam2 and afras) and 8 other dummy agents.

• The dummy agents were implemented ad hoc to be the sole opinion providers; they do not ask the 2007 participants for any service.

• Neither of the two 2007 participants ever provides opinions/reputations; they are just consumers.

• The differences between the two agents were much smaller than the official competition stated (both absolutely and relatively).

"An extension of a fuzzy reputation agent trust model in the ART testbed", Soft Computing, v14, issue 8, 2010.

Page 155: T9. Trust and reputation in multi-agent systems

Trust Strategies in Evolutive Agent Societies

• An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population of players, cannot be invaded by any alternative strategy.

• An evolutionarily stable trust strategy is a strategy which, if it becomes dominant (adopted by a majority of agents), cannot be defeated by any alternative trust strategy.

• Justification: the goal of trust strategies is to establish some kind of social control over malicious/distrustful agents.

• Assumption: agents may change their trust strategy. Agents with a failing trust strategy would get rid of it and adopt a successful trust strategy in the future.

Page 156: T9. Trust and reputation in multi-agent systems

An evolutive view of ART games

• We consider a failing trust strategy to be the one that lost the last ART game (earning less money than the others).

• We consider the successful trust strategy to be the one that won the last ART game (earning more money than the others).

• In this way, in consecutive games the participant who lost the game is replaced by the one who won it (a minimal sketch of this replacement loop follows below).

• We applied this to the 16 participant agents of the 2007 ART competition.
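A minimal sketch of the replacement dynamics; the per-game earnings here are a random stand-in, whereas in the study they come from actual ART testbed runs:

```python
import random
from collections import Counter

def play_art_game(population):
    """Stand-in for one ART game: returns (winner_strategy, loser_index)."""
    earnings = [(random.random(), i) for i in range(len(population))]
    _, loser = min(earnings)
    _, winner = max(earnings)
    return population[winner], loser

random.seed(1)
population = [f"strategy_{k}" for k in range(16)]  # the 2007 participants
for game in range(20):
    winner_strategy, loser = play_art_game(population)
    population[loser] = winner_strategy  # loser adopts the winning strategy
print(Counter(population).most_common(3))
```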

Page 157: T9. Trust and reputation in multi-agent systems

[Diagram: starting from the 16 participants of the 2007 competition, each ART game produces a winner and a loser; the loser is replaced by a copy of the winner, the next ART game is played, and so on...]

Page 158: T9. Trust and reputation in multi-agent systems

Game  Winner         Earnings  Loser          Earnings
1     iam2              17377  xerxes            -8610
2     iam2              14321  lesmes           -13700
3     iam2              10360  reneil           -14757
4     iam2              10447  blizzard          -7093
5     agentevicente      8975  Rex               -5495
6     iam2               8512  alatriste          -999
7     artgente           8994  agentevicente      2011
8     artgente          10611  agentevicente      1322
9     artgente           8932  novel               424
10    iam2               9017  IMM                1392
11    artgente           7715  marmota            1445
12    artgente           8722  spartan            2083
13    artgente           8966  zecariocales       1324
14    artgente           8372  iam2               2599
15    artgente           7475  iam2               2298
16    artgente           8384  UNO                2719
17    artgente           7639  iam2               2878
18    iam2               6279  JAM                3486
19    iam2              14674  artgente           2811
20    artgente           8035  iam2               3395

Page 159: T9. Trust and reputation in multi-agent systems

Results of repeated games

The 2007 winner is not an evolutionarily stable strategy.

• Although the strategy of the 2007 winner spreads through the society of agents (up to 6 iam2 agents out of 16), it never becomes dominant (no majority of iam2 agents).

• The iam2 strategy is defeated by the artgente strategy, which does become dominant (11 artgente agents out of 16). Therefore its superiority as winner of the 2007 competition is, at least, relative.

• The equilibrium of trust strategies that forms an evolutionarily stable society is composed of 10-11 artgente agents and 5-6 iam2 agents.

Page 160: T9. Trust and reputation in multi-agent systems

CompetitionRank  EvolutionRank  Agent          ExcludedInGame
6                1              artgente       -
1                2              iam2           -
2                3              JAM            18
7                4              UNO            16
4                5              zecariocales   13
5                6              spartan        12
9                7              marmota        11
13               8              IMM            10
10               9              novel          9
15               10             agentevicente  8
11               11             alatriste      6
12               12             rex            5
3                13             Blizzard       4
8                14             reneil         3
14               15             lesmes         2
16               16             xerxes         1

Page 161: T9. Trust and reputation in multi-agent systems

Other Evaluation Criteria of the ART testbed

• The testbed also provides functionality to compute:
  – the average accuracy of the appraisers' final appraisals (final appraisal error mean);
  – the consistency of that accuracy (final appraisal error standard deviation);
  – the quantities of each type of message passed between appraisers.

• Could we take other relevant evaluation criteria into account?

Page 162: T9. Trust and reputation in multi-agent systems

Evaluation criteria from the agent-based view

Characterization and Evaluation of Multi-Agent Systems, P. Davidsson, S. Johanson, M. Svahnberg, in Software Engineering for Multi-Agent Systems IV, LNCS 3914, 2006.

9 quality attributes:

1. Reactivity: how fast are opinions re-evaluated when there are changes in expertise?

2. Load balancing: how evenly is the load balanced between the appraisers?

3. Fairness: are all the providers treated equally?

4. Utilization of resources: are the available abilities/information utilized as much as possible?

Page 163: T9. Trust and reputation in multi-agent systems

Evaluation criteria from the agent-based view

5. Responsiveness: how long does it take for an appraiser to get a response to an individual request?

6. Communication overhead: how much extra communication is needed for the appraisals?

7. Robustness: how vulnerable is the agent to the absence of responses?

8. Modifiability: how easy is it to change the behaviour of the agent under very different conditions?

9. Scalability: how good is the system at handling large numbers of providers and consumers?

Page 164: T9. Trust and reputation in multi-agent systems

Evaluation criteria from the agent-based view

Evaluation of Multi-Agent Systems: The Case of Interaction, H. Joumaa, Y. Demazeau, J.M. Vincent, 3rd Int. Conf. on Information & Communication Technologies: from Theory to Applications, IEEE Computer Society, Los Alamitos (2008).

• An evaluation at the interaction level, based on the weight of the information brought by a message.

• A function Φ is defined in order to calculate the weight of pertinent messages.

Page 165: T9. Trust and reputation in multi-agent systems

Evaluation criteria from the agent-based view

• The relation between a received message m and its effects on the agent is studied in order to calculate the value Φ(m). According to the model, two kinds of functions are considered:
  – a function that assigns a weight to the message according to its type;
  – a function that assigns a weight to the message according to the change provoked in the internal state and the actions triggered by its reception.

Page 166: T9. Trust and reputation in multi-agent systems

Consciousness Scale

• Too much quantification (AI is not just statistics...).

• Compare agents qualitatively: measure their level of consciousness.

• A scale of 13 consciousness levels according to the cognitive skills of an agent: the "Cognitive Power" of an agent.

• The higher the level obtained, the more the behavior of the agent resembles humans.

• www.consscale.com

Page 167: T9. Trust and reputation in multi-agent systems

Bio-inspired order of Cognitive Skills

• From the point of view of emotions (Damasio, 1999), in ascending order:
  "Emotion" → "Feeling" → "Feeling of a Feeling" → "Fake Emotions"

Page 168: T9. Trust and reputation in multi-agent systems

Bio-inspired order of Cognitive Skills

• From the point of view of perception and action (Perner, 1999), in ascending order:
  "Perception" → "Adaptation" → "Attention" → "Set Shifting" → "Planning" → "Imagination"

Page 169: T9. Trust and reputation in multi-agent systems

Bio-inspired order of Cognitive Skills

• From the point of view of Theory of Mind (Lewis, 2003), in ascending order:
  "I Know" → "I Know I Know" → "I Know You Know" → "I Know You Know I Know"

Page 170: T9. Trust and reputation in multi-agent systems

Consciousness Levels

From lowest to highest:

1. Reactive
2. Adaptive
3. Attentional
4. Executive
5. Emotional
6. Self-Conscious
7. Empathic
8. Social
9. Human-like
10. Super-Conscious

Page 171: T9. Trust and reputation in multi-agent systems

Evaluating agents with ConsScale

Page 172: T9. Trust and reputation in multi-agent systems

Thank you !
