
Page 1: Auctions

1

Auctions

Page 2: Auctions

2

Auction Protocols

English auctions
First-price sealed-bid auctions
Second-price sealed-bid auctions (Vickrey auctions)
Dutch auctions
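As an illustration (not part of the original slides), a minimal sketch of how the two sealed-bid rules differ in the price the winner pays; the bidder names and amounts are invented:

```python
# Illustrative sketch: resolving a sealed-bid auction under the first-price
# and second-price (Vickrey) rules. Bidders and amounts are made up.

def sealed_bid_outcome(bids, rule="second-price"):
    """bids: dict mapping bidder name -> bid amount."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    if rule == "first-price":
        price = top_bid                                        # winner pays its own bid
    else:
        price = ranked[1][1] if len(ranked) > 1 else top_bid   # winner pays the second-highest bid
    return winner, price

bids = {"a": 10, "b": 8, "c": 5}
print(sealed_bid_outcome(bids, "first-price"))   # ('a', 10)
print(sealed_bid_outcome(bids, "second-price"))  # ('a', 8)
```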

Page 3: Auctions

3

Page 4: Auctions

4

The Contract Net

R. G. Smith and R. Davis

Page 5: Auctions

5

DPS System Characteristics and Consequences

Communication is slower than computation:
— loose coupling
— efficient protocol
— modular problems
— problems with large grain size

Page 6: Auctions

6

More DPS System Characteristics and Consequences

Any unique node is a potential bottleneck:
— distribute data
— distribute control
— organized behavior is hard to guarantee (since no one node has the complete picture)

Page 7: Auctions

7

The Contract Net

An approach to distributed problem solving, focusing on task distribution

Task distribution viewed as a kind of contract negotiation

“Protocol” specifies content of communication, not just form

Two-way transfer of information is a natural extension of transfer-of-control mechanisms

Page 8: Auctions

8

Four Phases to Solution, as Seen in Contract Net

1. Problem Decomposition

2. Sub-problem distribution

3. Sub-problem solution

4. Answer synthesis

The contract net protocol deals with phase 2.

Page 9: Auctions

9

Contract Net

The collection of nodes is the “contract net”. Each node on the network can, at different times or for different tasks, be a manager or a contractor.

When a node gets a composite task (or for any reason can’t solve its present task), it breaks it into subtasks (if possible) and announces them (acting as a manager), receives bids from potential contractors, then awards the job (example domain: network resource management, printers, …)
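To make the message flow concrete, here is a small illustrative sketch (not Smith and Davis's implementation) of one announce, bid, award round; the node names, task, and cost estimates are invented:

```python
# Illustrative sketch of one Contract Net announce/bid/award round.
# Node, task, and bid details are invented for the example.

class Node:
    def __init__(self, name, can_do):
        self.name = name
        self.can_do = can_do          # set of task types this node can handle

    def bid(self, task):
        """Return a bid (lower is better here), or None if not eligible."""
        if task["type"] not in self.can_do:
            return None
        return task["size"] / 10.0    # toy cost estimate

def contract_net_round(manager_name, task, nodes):
    # 1. Manager broadcasts a task announcement; eligible nodes reply with bids.
    bids = {}
    for n in nodes:
        b = n.bid(task)
        if b is not None:
            bids[n.name] = b
    if not bids:
        return None                   # no suitable contractor at present
    # 2. Manager evaluates the bids and awards the task to the best bidder.
    winner = min(bids, key=bids.get)
    return {"manager": manager_name, "contractor": winner, "task": task}

nodes = [Node("n1", {"fft"}), Node("n2", {"fft", "print"}), Node("n3", {"print"})]
print(contract_net_round("n0", {"type": "fft", "size": 1024}, nodes))
```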

Page 10: Auctions

10

Node Issues Task Announcement

Manager

Task Announcement

Page 11: Auctions

11

Idle Node Listening to Task Announcements

Manager

Manager

Manager

Potential Contractor

Page 12: Auctions

12

Node Submitting a Bid

Manager

Potential Contractor

Bid

Page 13: Auctions

13

Manager Listening to Bids

Manager

Potential Contractor

Potential Contractor

Bids

Page 14: Auctions

14

Manager Making an Award

Manager

Contractor

Award

Page 15: Auctions

15

Contract Established

Manager

Contractor

Contract

Page 16: Auctions

16

Domain-Specific Evaluation

The task announcement message prompts potential contractors to use domain-specific task evaluation procedures; there is deliberation going on, not just selection. Perhaps no tasks are suitable at present.

The manager considers submitted bids using a domain-specific bid evaluation procedure.

Page 17: Auctions

17

Types of Messages

Task announcement
Bid
Award
Interim report (on progress)
Final report (including result description)
Termination message (if manager wants to terminate contract)

Page 18: Auctions

18

Efficiency Modifications

Focused addressing — when general broadcast isn’t required

Directed contracts — when manager already knows which node is appropriate

Request-response mechanism — for simple transfer of information without overhead of contracting

Node-available message — reverses initiative of negotiation process

Page 19: Auctions

19

Message Format

Task Announcement Slots:
— Eligibility specification
— Task abstraction
— Bid specification
— Expiration time

Page 20: Auctions

20

Task Announcement Example (common internode language)

To: *
From: 25
Type: Task Announcement
Contract: 43-6
Eligibility Specification: Must-Have FFTBOX
Task Abstraction:
  Task Type: Fourier Transform
  Number-Points: 1024
  Node Name: 25
  Position: LAT 64N LONG 10W
Bid Specification: Completion-Time
Expiration Time: 29 1645Z NOV 1980
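For illustration only (not the original system's representation), the same announcement expressed as a structured message; the field names follow the example above, everything else is an assumption:

```python
# Illustrative structured form of the task announcement above.
from dataclasses import dataclass

@dataclass
class TaskAnnouncement:
    to: str                    # "*" means broadcast to all nodes
    sender: int
    contract: str
    eligibility: str           # e.g. "Must-Have FFTBOX"
    task_abstraction: dict
    bid_specification: str     # what a bid must report, e.g. "Completion-Time"
    expiration: str

announcement = TaskAnnouncement(
    to="*", sender=25, contract="43-6",
    eligibility="Must-Have FFTBOX",
    task_abstraction={"Task Type": "Fourier Transform", "Number-Points": 1024,
                      "Node Name": 25, "Position": "LAT 64N LONG 10W"},
    bid_specification="Completion-Time",
    expiration="29 1645Z NOV 1980",
)
```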

Page 21: Auctions

21

The existence of a common internode language allows new nodes to be added to the system modularly, without the need for explicit linking to others in the network (e.g., as needed in standard procedure calling).

Page 22: Auctions

22

Applications of the Contract Net

Sensing task allocation (Malone)
Delivery companies (Sandholm)
Market-oriented programming (Wellman)

Page 23: Auctions

23

Bidding Mechanisms for Data Allocation

A user sends its query directly to the server where the needed document is stored.

Page 24: Auctions

24

Environment Description

(Diagram: server i in area i and server j in area j, some distance apart; a client in an area sends a query for a document to the server that stores it.)

Page 25: Auctions

25

Utility Function

Each server is concerned only with whether a dataset is stored locally or remotely; it is indifferent among the different remote locations of the dataset.

Page 26: Auctions

26

The Trading Mechanism

Bidding sessions are carried out during predefined time periods.

In each bidding session, the location of the new datasets is determined and the location of each old dataset can be changed.

Until a decision is reached, the new datasets are stored in a temporary buffer.

Page 27: Auctions

27

The Trading Mechanism - cont.

Each dataset has an initial owner (called contractor(ds)), according to the static allocation:
For an old dataset - the server which stores it.
For a new dataset - the server with the nearest topics (defined according to the topics of the datasets stored by this server).

Page 28: Auctions

28

The Bidding Steps

Each server broadcasts an announcement for each new dataset it owns, and also for some of its old local datasets.

For each such announcement, each server sends the price it is willing to pay in order to store the dataset locally.

The winner of each dataset is determined by its contractor, which broadcasts a message including the winner, the price it has to pay, and the server that bid this price.

Page 29: Auctions

29

Cost of Reallocating Old Datasets

move_cost(ds, bidder): the cost for contractor(ds) of moving ds from its current location to bidder (for new datasets, move_cost = 0).

obtain_cost(ds, bidder): the cost for bidder of moving ds from its current location to bidder.

Page 30: Auctions

30

Protocol Details

winner(ds) denotes the winner of dataset ds.

winner(ds) =
  argmax over bidders of { price_suggested(bidder, ds) - move_cost(ds, bidder) }, if move(ds) = true;
  none, otherwise.

Page 31: Auctions

31

Protocol Details - cont.

price(ds) denotes the price paid by the winner for dataset ds.

price(ds)= second_max bidder־SERVERS

{ price_suggested(s,ds) -move_cost(ds,bidder) }+ move_cost(ds,winner).
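A minimal sketch (not from the paper) of how the winner and the second-price payment could be computed from the suggested bids and the move costs; the server names and numbers are invented:

```python
# Illustrative computation of winner(ds) and price(ds) following the formulas
# above; the data is invented.

def resolve_dataset_auction(price_suggested, move_cost, keep_utility=0.0):
    """price_suggested: {server: bid}; move_cost: {server: contractor's cost of
    moving ds to that server}; keep_utility: contractor's utility of keeping ds."""
    net = {s: price_suggested[s] - move_cost[s] for s in price_suggested}
    ranked = sorted(net, key=net.get, reverse=True)
    if not ranked:
        return None, None
    second = net[ranked[1]] if len(ranked) > 1 else net[ranked[0]]
    # move(ds) = true only if the second-highest net offer beats keeping ds locally.
    if second <= keep_utility:
        return None, None
    winner = ranked[0]
    price = second + move_cost[winner]   # second-price rule, compensating the move cost
    return winner, price

bids = {"s1": 10.0, "s2": 7.0, "s3": 9.0}
costs = {"s1": 1.0, "s2": 0.5, "s3": 2.0}
print(resolve_dataset_auction(bids, costs))   # ('s1', 8.0): second-max net 7.0 plus move_cost 1.0
```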

Page 32: Auctions

32

Bidding Strategies

Attribute: move(ds) = true if the second-highest value of { price_suggested(bidder, ds) - move_cost(ds, bidder) } over bidders in SERVERS exceeds U_contractor(ds)(ds, contractor(ds)).

Page 33: Auctions

33

Bidding Strategies - cont.

Lemma: If the winning server has bid its true value of storing the dataset locally, then it will have a nonnegative utility from obtaining it.

Lemma: Each server will bid its utility from obtaining the dataset: price_suggested(bidder, ds) = U_bidder(ds, bidder) - obtain_cost(ds, bidder).

Page 34: Auctions

34

Bidding Strategies - cont.

Theorem: If announcing and bidding are free, then the allocation reached by the bidding protocol leads to better or equal utility for each server than does the static policy.

The utility function is evaluated according to the expected profits of the server from the allocation.

Page 35: Auctions

35

Usage Estimation

Each server knows only the usage of datasets stored locally.

For new datasets and remote datasets, the server has no information about past usage.

It estimates the future usage of new and remote datasets, using the past usage of local datasets, which contain similar topics.

Page 36: Auctions

36

Queries Structure

We assume that a query sent to a server contains a list of required documents.

This is the situation if the search mechanism to find the required documents is installed locally by the client.

In this situation, the server has to learn, from the queries about its local documents, the expected usage of other documents, in order to decide whether it needs them or not.

Page 37: Auctions

37

Usage Prediction

We assume that a dataset contains several keywords (k1..kn).

For each local dataset ds, and each server d, the server saves the past usage of ds by d in the last period.

Then, it has to predict the future usage of ds by d. It assumes the same behavior as in the past.

Page 38: Auctions

38

Usage Prediction - cont.

It is assumed that the users are interested in keywords, so the usage of a dataset is a function of the keywords it contains.

The simplest model is that a dataset's usage is the sum of the usage of each of its keywords. However, the relationship between the keywords and the dataset usage may be different.
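For illustration only (not from the slides), the additive model amounts to summing per-keyword usage estimates; the keyword names and values are invented:

```python
# Illustrative additive usage model: a dataset's predicted usage is the sum of
# the estimated usage of its keywords (all values invented).
keyword_usage = {"mars": 12.0, "fft": 3.5, "auction": 8.0}

def additive_usage(dataset_keywords, keyword_usage):
    return sum(keyword_usage.get(k, 0.0) for k in dataset_keywords)

print(additive_usage({"mars", "auction"}, keyword_usage))  # 20.0
```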

Page 39: Auctions

39

Usage Prediction - cont.

The server has to learn about the usage of datasets not stored locally:

We suggest that it build a neural network for learning the usage template of each area.

Page 40: Auctions

40

What is a Neural Network?

•A neural network is composed of a number of nodes, or units, connected by links.

•Each link has a numeric weight associated with it.

•The weights are modified so as to try to bring the network’s input/output behavior more into line with that of the environment providing the input.

Page 41: Auctions

41

Neural Network - Cont.

(Diagram: a feed-forward network with an input layer of input units, a hidden layer, and an output layer with an output unit.)

Page 42: Auctions

42

Structure of the Neural Network

For each area d, we build a neural network. Each dataset stored by the server in area d, is

one example for the neural network of d. The inputs of the examples contain, for each

possible keyword, whether it exist in this dataset, or not.

Page 43: Auctions

43

Structure of the Neural Network - cont.

The output unit of the neural network for area d is its past usage of this dataset.

In order to find the expected usage of another dataset, ds2, by d, we provide the network with the keywords of ds2.

The output of the network is its predicted usage of ds2 by area d.

Page 44: Auctions

44

Structure of the NN

(Diagram:) For a certain dataset, for each keyword k there is an input unit: 1 if the dataset contains k, 0 otherwise. A hidden layer. Output unit: the usage of the dataset by a certain area.
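A minimal sketch of such a per-area network (not the authors' implementation); the keyword vocabulary, training data, and network size are invented:

```python
# Illustrative per-area usage-prediction network: inputs are 0/1 keyword-presence
# flags, the output is the dataset's usage by that area. Data is invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

vocabulary = ["mars", "fft", "auction", "robot"]

def encode(keywords):
    return [1.0 if k in keywords else 0.0 for k in vocabulary]

# Training examples: local datasets and their observed usage by area d.
X = np.array([encode({"mars", "robot"}), encode({"fft"}), encode({"auction"})])
y = np.array([25.0, 4.0, 11.0])

net_for_area_d = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net_for_area_d.fit(X, y)

# Predict the expected usage by area d of a dataset not stored locally.
print(net_for_area_d.predict([encode({"mars", "auction"})]))
```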

Page 45: Auctions

45

Experimental Evaluation - Results Measurement

vcosts(alloc) - the variable cost of an allocation, which consists of the transmission costs due to the flow of queries.

vcost_ratio: the ratio of the variable costs when using the bidding mechanism and the variable costs of the static allocation.

Page 46: Auctions

46

Experimental Evaluation

Complete information concerning previous queries (still uncertainty):

The bidding mechanism reaches results close to those of the optimal allocation (reached by a central decision maker).

The bidding mechanism yields a lower standard deviation of the servers' utilities than the optimal allocation.

Incomplete information:

The results of the bidding mechanism are better than those of the static allocation.

Page 47: Auctions

47

Influence of Parameters (Complete Information, no movements of old datasets)

As the standard deviation of the distances increases, vcost_ratio decreases.

(Chart "auction results": vcost ratio as a function of the distance standard deviation, for uniform, 1000, 3000, and 5000.)

Page 48: Auctions

48

Influence of Parameters - cont.

When increasing the number of servers and the number of datasets, vcost_ratio is not influenced.

query_price, answer_cost, storage_cost, dataset_size and retrieve_cost do not influence vcost_ratio.

usage, std. usage, distance do not influence vcost_ratio.

Page 49: Auctions

49

Influence of Learning on the System

As epsilon decreases, vcost ratio increases: the system behaves better.

(Chart: vcost ratio as a function of epsilon, from 0.02 to 0.18.)

Page 50: Auctions

50

Conclusion

We have considered the data allocation problem in a distributed environment.

We have presented the utility function of the servers, which expresses their preferences over the data allocation.

We have proposed using a bidding protocol for solving the problem.

Page 51: Auctions

51

Conclusion - cont.

We have considered complete as well as incomplete information.

For the complete information case, we have proved that the results obtained by the bidding mechanism are better than those of the static allocation, and close to the optimal results.

Page 52: Auctions

52

Conclusion - cont.

For the incomplete information environment, we have developed a neural-network-based learning mechanism.

For each area d, we build a neural network, trained by the server of d.

Using this network, we find the expected usage of other datasets, not currently stored by d.

We found, by simulation, that the results obtained are still significantly better than those of the static allocation.

Page 53: Auctions

53

Future Work

Future Work:
Datasets can be stored in more than one server.
Bounded rationality.
Repeated game.

Page 54: Auctions

54

Reaching Agreements Through Argumentation

Collaborators: Katia Sycara, Madhura Nirkhe, Amir Evenchik, and Ariel Stolman

Page 55: Auctions

55

Introduction

Argumentation--an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions.

A logical model of the mental states of the agents: beliefs, desires, intentions, goals.

The logic is used to specify argument formulation and serves as a basis for the Automated Negotiation Agent.

Page 56: Auctions

56

Agents as Belief, Desire, Intention systems

Belief: information about the current world state; subjective.

Desire: preferences over future world states; can be inconsistent (in contrast to goals).

Intentions: the set of goals the agent is committed to achieve; the agent’s “runtime stack”.

Formal models: mostly modal logics with possible-worlds semantics.

Page 57: Auctions

57

Logic Background

Modal logics; Kripke structures. Syntactic approaches. Bayesian networks.

Page 58: Auctions

58

Modal Logics

Language: there is a set of n agents.
Kia --- agent i knows a
Bia --- agent i believes a
P -- a set of primitive propositions: P, Q
Example: K1a

Semantics: a Kripke structure consists of:
A set of possible worlds.
For each world, an assignment of truth values to the primitive propositions: P --> {True, False}.
n binary accessibility relations on the worlds: ~1, ~2, ..., ~n.

Page 59: Auctions

59

Example of a Kripke Structure

W = {w1, w2, w3},   M, w1 |= p & K1 p

(Diagram: worlds w1, w2, w3 with their propositions (P, Q at w1; Q at w2; P at w3) and accessibility edges labeled 1 and 2.)
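For illustration only (not in the slides), a small sketch of evaluating a knowledge formula in such a structure; the worlds, valuation, and accessibility relations are assumptions loosely based on the example above:

```python
# Illustrative Kripke-structure check: agent i knows p at world w iff p holds in
# every world that i considers possible from w. The structure is an assumption.

worlds = {"w1": {"P", "Q"}, "w2": {"Q"}, "w3": {"P"}}
# accessibility[agent][world] = set of worlds the agent considers possible there
accessibility = {
    1: {"w1": {"w1", "w3"}, "w2": {"w2"}, "w3": {"w1", "w3"}},
    2: {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}},
}

def holds(world, prop):
    return prop in worlds[world]

def knows(agent, world, prop):
    return all(holds(v, prop) for v in accessibility[agent][world])

print(holds("w1", "P"), knows(1, "w1", "P"))  # True True  (w1 |= P & K1 P)
print(knows(2, "w1", "P"))                    # False: agent 2 considers w2 possible, where P fails
```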

Page 60: Auctions

60

Axioms

Kia & Ki(a --> b) --> Kib

If |- a then |- Kia

Kia --> a

Kia --> KiKia

~Kia --> Ki~Kia

~Bi false

Each axiom can be associated with a condition on the binary relation.
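Rendered in standard notation with their usual names (the names are standard in the modal-logic literature, not given on the slide):

```latex
\begin{align*}
&\text{K (distribution):} && K_i\alpha \land K_i(\alpha \rightarrow \beta) \rightarrow K_i\beta\\
&\text{Necessitation:} && \text{if } \vdash \alpha \text{ then } \vdash K_i\alpha\\
&\text{T (knowledge):} && K_i\alpha \rightarrow \alpha\\
&\text{4 (positive introspection):} && K_i\alpha \rightarrow K_iK_i\alpha\\
&\text{5 (negative introspection):} && \lnot K_i\alpha \rightarrow K_i\lnot K_i\alpha\\
&\text{D (consistency, for belief):} && \lnot B_i\,\mathit{false}
\end{align*}
```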

Page 61: Auctions

61

Problems in using Possible Worlds Semantics

Logical omniscience: the agent believes all the logical consequences of its beliefs.

The agent believes in all tautologies.

Philosophers: possible worlds do not exist.

Page 62: Auctions

62

Minimal Models: partial solution

The intension of a sentence: the set of possible worlds in which the sentence is satisfied

Note: if two sentences have the same intensions then they are semantically equivalent.

A sentence is a belief at a given world if its intension is belief-accessible.

According to this definition, the agent's beliefs are not closed under inferences; the agent may even believe in contradictions.

Page 63: Auctions

63

Minimal model: example

(Diagram: worlds labeled P Q, P ~Q, P ~Q, and ~P ~Q.)

Page 64: Auctions

64

Beliefs, Desires, Goals and Intentions

We use time lines rather than possible worlds.

An agent's belief set includes beliefs concerning the world and beliefs concerning mental states of other agents. An agent may be mistaken in both kinds of beliefs, and beliefs may be inconsistent. The beliefs are used to generate arguments in the negotiations.

Desires: may be inconsistent.

Goals: a consistent subset of the set of desires.

Intentions: serve to contribute to one or more of the agent's desires.

Page 65: Auctions

65

Intentions

Two types: Intention-To and Intention-That.

Intention-to: refers to actions that are within the direct control of the agent.

Intention-that: refers to propositions that are not directly within the agent's realm of control, which it must rely on other agents to satisfy; these can be achieved through argumentation.

Page 66: Auctions

66

Argumentation Types

A promise of a future reward.
A threat.
An appeal to past promise.
Appeal to precedents as “counter example.”
Appeal to “prevailing practice.”
Appeal to self-interests.

Page 67: Auctions

67

Example: 2 Robots

Two mobile robots on Mars each built to maximize its own utility.

R1 requests R2 to dig for a mineral. R2 refuses. R1 responds with a threat: ``If you do not dig for me, I will break your antenna''. R2 needs to evaluate this threat.

Another possibility: R1 promises a reward: ``If you dig for me today, I will help you move your equipment tomorrow.'' R2 needs to evaluate the promise of future reward.

Page 68: Auctions

68

Usage of the logic

Specification for agent design: the model constrains certain planning and negotiation processes. Axioms for argumentation types.

The logic is used by the agents themselves: ANA (Automated Negotiation Agent)

Page 69: Auctions

69

ANA

Complies with the definition of an Agent Oriented Programming (AOP) system (Shoham):
The agent is represented using notions of mental states;
The agent's actions depend on these mental states;
The agent's mental state may change over time;
Mental state changes are driven by inference rules.

Page 70: Auctions

70

The Block World Environment

(Diagram: the block world, with positions 1 through 5 and numbered blocks.)

Page 71: Auctions

71

Mental State Model

Beliefs:
b(agent1, world_state([blockE / 5 / 1, blockD / 4 / 1, blockC / 3 / 1, blockB / 2 / 1, blockA / 1 / 1]), [0,2,t]).

Desires:
desire(desire1, 0, agent1, [blockB / 6 / 2], 39, 0).
desire(desire2, 0, agent1, [blockB / 1 / 1], 29, 0).
desire(desire3, 0, agent1, [blockB / 6 / 1], 35, 0).
desire(desire4, 0, agent1, [blockE / 2 / 1], 38, 0).

Goals:
goal(agent1, 0, [blockB / 6 / 2 / [desire1], blockE / 2 / 1 / [desire4]]).

Page 72: Auctions

72

Mental State Model

Desired World:
desired_world([blockC / 3 / 1 / [unused_block], blockA / 1 / 1 / [unused_block], blockD / 6 / 1 / [supporting], blockB / 6 / 2 / [desire1], blockE / 2 / 1 / [desire4]]).

Intentions:
intention(1, agent1, 0, that, intention_is_done(agent1, 0), [blockB / 2 / 1 / 7 / 1], 0, [towards_goals]).
intention(2, agent1, 0, to, intention_is_done(agent1, 1), [blockD / 4 / 1 / 6 / 1], 0, [supporting]).
intention(3, agent1, 0, that, intention_is_done(agent1, 2), [blockB / 7 / 1 / 6 / 2], 0, [desire1]).
intention(4, agent1, 0, to, intention_is_done(agent1, 3), [blockE / 5 / 1 / 2 / 1], 0, [desire4]).

Page 73: Auctions

73

Agent Infrastructure: Agent Life Cycle

First Plan
Reading Messages
Planning next steps
Performing next intentions
Dealing with the agent’s own threats

Page 74: Auctions

74

The Agent Life Cycle: Reading Messages

Types of messages
Queue
Waiting for answers
Negotiation and world change aspects
Inconsistency recovery

Figure 3.2 - Reading Messages Stage (Read Message; Negotiation Aspect; World Change Aspect)

Page 75: Auctions

75

The Agent Life Cycle: Dealing with the agent’s own threats

Detection
Make abstract threats concrete
Execute evaluation

(Diagram: dealing with the agent’s own threats; Regular Threat vs. Abstract Threat.)

Page 76: Auctions

76

The Agent Life Cycle: Planning next step

Mental states usage
Backtracking
Better than current state
New state or dead end
Achievable plan

(Diagram: Goal Selection; Desired World Selection; Intentions Generator; Planning next step.)

Page 77: Auctions

77

The Agent Life Cycle: Performing next intention

Intention-to vs. intention-that
Other agent listening?
One argument per cycle.

Figure 3.5 - Perform next intention (Perform Intention-to; Generate and send an argument)

Page 78: Auctions

78

Agent Definition Examples

Agent Type:
agent_type(robot_name, memory-less).

Agent Capability:
capable(robot, blockC / 3 / 1 / 4 / 1).

Agent Beliefs:
b(first_robot, capable(second_robot, AnyAction), [[0, t]]).

Agent Desires:
desire(first_desire, 0, robot, [blockA / 3 / 1], 15, 1).

Page 79: Auctions

79

Agent Infrastructure: Agent Parameters List

Cooperativeness

Reliability (promises keeping)

Assertiveness

Performance threshold (Asynchronous action)

Usage of first argument

Argument direction

Knowledge about other desires

Knowledge about other capabilities

Measurement of other agent promises keeping

Execution of threats by another agent

Page 80: Auctions

80

A promise of a future reward

Application conditions:
Opponent agent can perform the requested action.
The reward action will help the opponent achieve a goal (requires knowledge of opponent desires).
Argument not used in the near past.

Implementation:
Generate opponent’s expected intentions.
Offer one of the intentions as a reward:

– Mutual intention which opponent cannot perform by itself (requires knowledge of opponent capabilities).

– Opponent’s intention which it cannot perform.

– Any mutual intention.

– Any Opponent’s intention.

Page 81: Auctions

81

A threat

Application conditions:
Opponent agent can perform the requested action.
The threat action will interfere with the opponent’s achieving some goals (requires knowledge of opponent desires).
Argument not used in the near past.

Implementation:
Agent chooses the best cube (requires knowledge of opponent capabilities).
Agent chooses the best desire.
Agent chooses a threatening action:

– Moving out.

– Blocking.

– Interfering.

Page 82: Auctions

82

Request Evaluation Mechanism - Parameters List

DL (Doing Length)

NDL (Not Doing Length)

DTL (Doing That Length)

NDTL (Not Doing That Length)

PL (Punish Length)

PTL (Punish That Length)

DP (Doing Preference)

NDP (Not Doing Preference)

Page 83: Auctions

83

Request Evaluation Mechanism - Agent Parameters

CP: The agent’s cooperativeness.

AS: The agent’s assertiveness.

RL: The agent’s reliability.

ORL: The Other agent’s reliability for keeping promises.

OTE: The Other agent’s percentage of threat executing.

Page 84: Auctions

84

Request Evaluation Mechanism - The Formulas

(Formulas, not reproduced here: the acceptance value for a simple request combines NDL, DL, NDTL, DTL, DP, NDP, and CP; for an appeal to a past promise, the agent's reliability RL is added; for a promise of a future reward, the other agent's reliability ORL and a reward indicator RD are added. When an action is considered to be a reward, RD (Reward) is equal to 1, and 0 if not.)

Page 85: Auctions

85

Request Evaluation Mechanism - The Formulas

(Formulas, not reproduced here: the acceptance value for a threat combines NDL, DL, NDTL, DTL, DP, NDP, PL, PTL, OTE, and AS; the acceptance value for an abstract threat combines NDL, DL, NDTL, DTL, DP, NDP, and AS.)

Page 86: Auctions

86

Experiments Results

Negotiating is better than not negotiating only where each agent has particular expertise.

Negotiating is better than not negotiating only where the agents have complete information.

Negotiating is better than not negotiating only for mutually cooperative agents or for an aggressive agent with a cooperative opponent.

Environment (game time, resources) affects the negotiation results.

Page 87: Auctions

87

Negotiations vs. no negotiations

Whereas agents that do not negotiate succeed in obtaining only 29.8% of their desires' preference values, the negotiating agents succeed in obtaining 40.4% on average (F=5.047, p<0.024, df=79).

(Chart: success rates, Negotiation vs. No negotiation.)

Page 88: Auctions

88

Complete information vs. no information

Agents that had no information succeed in obtaining a success rate of only 30.8%, while agents that had full information succeed in obtaining 40.4% on the average. (F=4.326,p<0.04,df=38).

(Chart: success rates, Full information vs. No information.)

Page 89: Auctions

89

Using the first argument vs. using the best found

Agents that used the first argument succeed in obtaining a success rate of only 34.8%, while agents that used the best argument succeed in obtaining 40.4% on the average, but this result is not significant. (F=2.28,p<0.138,df=38).

(Chart: success rates, Uses best argument vs. Uses first argument.)

Page 90: Auctions

90

Cooperative vs. Aggressive agent

23.5% : 41.4% (F=10.78,p<0.001,df=63).

(Chart: success rates in negotiations between cooperative and aggressive agents.)

Page 91: Auctions

91

Cooperative and Aggressive Agents vs. No Negotiations

38.3% : 29.8% (F=6.01, p<0.019,df=35).

(Chart: success rates, Cooperative agents vs. Non-negotiating agents.)

Page 92: Auctions

92

No Negotiations vs. Aggressive Negotiation

(Chart: success rates, No negotiations vs. Aggressive negotiations.)

38.3% : 20.8% (F=10.03, p<0.002,df=50).

Page 93: Auctions

93

Cooperative vs. aggressive

Aggressive vs. cooperative: 41.4%
Two cooperatives: 38.3%
No negotiation: 29.8%
Cooperative vs. aggressive: 23.5%
Two aggressive: 20.8%

Page 94: Auctions

94

Environment Constraints

Number of desires

(Chart: success rates for 3, 6, and 9 desires.)

Page 95: Auctions

95

Environment Constraints

Time for game

16.7% : 21.7% (F=2.41, p<0.122, df=139).

(Chart: success rates for 60-second vs. 120-second games.)

Page 96: Auctions

96

Is it worth it to use formal methods for Multi-Agent Systems in general, and Negotiations in particular?

Page 97: Auctions

97

Game-theory Based Frameworks (Non-cooperative Models)

Strategic-negotiation model, based on the alternating-offers model of Rubinstein.
Applications:
Data allocation (Schwartz & Kraus AAAI97),
Resource allocation, task distribution (Kraus, Wilkenfeld & Zlotkin AIJ95; Kraus AMAI97),
Hostage crisis (Kraus & Wilkenfeld TSMC93).

Page 98: Auctions

98

Advantages and Difficulties: Negotiation on Data Allocation

Beneficial results; proved to be better than current methods; simple strategies.

Problems:
Need to develop utility functions;
Finding possible actions: identifying optimal allocations is NP-complete;
Incomplete information: game theory provides limited solutions.

Page 99: Auctions

99

Game-theory Based Frameworks (Non-cooperative Models)

Auctions. Applications:
Data allocation (Schwartz & Kraus ATAL97), electronic commerce.

Subcontracting, based on principal-agent models. Applications:
Task allocation (Kraus, AIJ96).

Page 100: Auctions

100

Advantages and Difficulties: Auctions for Data Allocation

Beneficial results; proved to be better than current methods.

Problems:
Utility functions;
Applicable only when a server is concerned only about the data stored locally;
Difficult to find bidding strategies when there is incomplete information and the evaluations are dependent on each other: no procedures.

Page 101: Auctions

101

Game-theory Based Frameworks (Cooperative Models)

Coalition theories. Applications:
Group and team formation (Shehory & Kraus CI99).

Benefits: well-defined concepts of stability; mechanisms to divide benefits.

Difficulties: utility functions; no procedures for coalition formation; exponential problems.

DPS model: combinatorial theories & operations research (Shehory & Kraus AIJ98).

Page 102: Auctions

102

Decision-theory Based Frameworks

Multi-attribute decision making. Application: intention reconciliation in SharedPlans (Grosz & Kraus, 98).

Benefits: using results of MADM, e.g., the specific method is not so important; standardization techniques.

Problems: choosing attributes; assigning values; choosing weights.

Page 103: Auctions

103

Logical Models

Modal logic: BDI models. Applications:
Automated argumentation (Kraus, Sycara & Evenchik AIJ99).
Specification of SharedPlans (Grosz & Kraus AIJ96).
Bounded agents (Nirkhe, Kraus & Perlis JLC97).
Agents reasoning about other agents (Kraus & Lehmann TCT88; Kraus & Subrahmanian IJIS95).

Page 104: Auctions

104

Advantages and Difficulties: Logical Models

Formal models with well-studied properties: excellent for specification.

Problems:
Some assumptions are not valid (e.g., omniscience).
Complexity problems.
There are no procedures for actions: a lot of programming, decision making, and developing of preferences is required.

Page 105: Auctions

105

Physics Based Models

Physical models of particle dynamics.

Applications: cooperation in large-scale multi-agent systems; freight deliveries within a metropolitan area (Shehory & Kraus ECAI96; Shehory, Kraus & Yadgar ATAL98).

Benefits: efficient; inherits the physics properties.

Problems: adjustments; potential functions.

Page 106: Auctions

106

Summary

Benefits: formal models which have already been studied; lead to efficient results. No need to reinvent the wheel.

Problems:
Restrictions and assumptions made by game theory are not valid in real-world MAS situations: extensions are needed.
It is difficult to develop utility functions.
Complexity problems.