
From Consensus to Social Learning

Ali Jadbabaie

Department of Electrical and Systems Engineering and GRASP Laboratory

Alvaro Sandroni

Penn Econ. and Kellogg School of Management, Northwestern University

Block Island Workshop on Swarming, June 2009

With Alireza Tahbaz-Salehi and Victor Preciado

Emergence of Consensus, synchronization, flocking

Opinion dynamics, crowd control, synchronization and flocking

Flocking and opinion dynamics

Bounded confidence opinion model (Krause, 2000)

Nodes update their opinions as a weighted average of the opinion values of their friends

Friends are those whose opinion is already close

When will there be fragmentation and when will there be convergence of opinions?

Dynamics changes topology
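As a rough illustration (not from the slides), here is a minimal Python sketch of this bounded-confidence update: each agent averages the opinions of the agents currently within a confidence radius of its own, so the neighborhood structure itself evolves with the opinions. The number of agents, the radius eps, and the initial opinions are illustrative choices.

import numpy as np

def hk_step(x, eps):
    # One synchronous bounded-confidence update: each agent averages the
    # opinions of its "friends", i.e. agents within eps of its own opinion.
    new_x = np.empty_like(x)
    for i, xi in enumerate(x):
        friends = np.abs(x - xi) <= eps   # neighborhood depends on current opinions
        new_x[i] = x[friends].mean()      # equal-weight average over friends
    return new_x

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 50)             # 50 initial opinions in [0, 1]
for _ in range(30):
    x = hk_step(x, eps=0.2)
print(np.unique(np.round(x, 3)))          # surviving opinion clusters: consensus vs. fragmentation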

Conditions for reaching consensus

Theorem (Jadbabaie et al. 2003, Tsitsiklis 1984): If there is a sequence of bounded, non-overlapping time intervals T_k such that over any interval of length T_k the network of agents is “jointly connected”, then all agents will reach consensus on their velocity vectors.

Convergence time (Olshevsky, Tsitsiklis): T(ε) = O(n³ log(n/ε))

Similar result when network changes randomly.
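A small Python sketch (illustrative, not from the talk) of the consensus iteration x(t+1) = W(t) x(t) under joint connectivity: at each step only one edge of a ring is active, so no single snapshot is connected, but the union of graphs over any window of n steps is, and the values still converge to a common limit. The pairwise-averaging weights and the horizon are assumptions made for the example.

import numpy as np

n = 8
rng = np.random.default_rng(1)
x = rng.uniform(size=n)                    # initial values

def pairwise_average_matrix(n, i, j, alpha=0.5):
    # Doubly stochastic averaging matrix that mixes only agents i and j.
    W = np.eye(n)
    W[i, i] = W[j, j] = 1 - alpha
    W[i, j] = W[j, i] = alpha
    return W

for t in range(400):
    i = t % n                              # cycle through the ring edges (i, i+1)
    x = pairwise_average_matrix(n, i, (i + 1) % n) @ x
print(x.max() - x.min())                   # spread shrinks toward 0: consensus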

Random Networks

The graphs could be correlated so long as they are stationary-ergodic.

Variance of consensus value for ER graphs

New results for finite random graphs: explicit expression for the variance of x*. The variance is a function of c, n, and the initial conditions x(0) (although the explicit expression is messy).

Plots of Var(x*) for initial conditions uniformly distributed in [0,1]

The average weight matrix is symmetric!!

[Plot: Var(x*) versus edge probability p, for n = 3, 6, 9, 12, 15]

[Equation: closed-form expression for Var(x*) in terms of p, n, the initial conditions x_k(0), and a factor r(p,n)]

where r(p,n) is a non-trivial (although closed-form) function that goes to 1 as n goes to infinity
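The closed-form expression can be sanity-checked by simulation. Below is a hedged Monte Carlo sketch (not the derivation from the talk): the network is redrawn as an independent Erdős-Rényi graph G(n, p) at every step, each agent averages itself with its current neighbors, and Var(x*) is estimated over repeated runs from the same initial condition x(0). The particular weight matrix, horizon, and sample size are illustrative assumptions.

import numpy as np

def er_averaging_matrix(n, p, rng):
    # Row-stochastic matrix: each agent puts equal weight on itself and its
    # current neighbors in a freshly drawn G(n, p) graph.
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                    # undirected adjacency
    return (A + np.eye(n)) / (A.sum(axis=1, keepdims=True) + 1.0)

def consensus_value(x0, p, steps, rng):
    x = x0.copy()
    for _ in range(steps):
        x = er_averaging_matrix(len(x0), p, rng) @ x
    return x.mean()                                # entries are (nearly) equal by now

rng = np.random.default_rng(2)
n, p = 9, 0.3
x0 = rng.uniform(size=n)                           # one fixed initial condition
samples = [consensus_value(x0, p, 200, rng) for _ in range(2000)]
print(np.var(samples))                             # empirical Var(x*) for this x(0)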

Consensus and Naïve Social learning

When is consensus a good thing?

Need to make sure the update converges to the correct value

Naïve vs. Rational Decision Making

Naïve learning: just average!

Rational learning: fuse information with Bayes' rule

Social learning

There is a true state of the world, among countably many

We start from a prior distribution and would like to update the distribution (our belief about the true state) as more observations arrive

Ideally, we would use Bayes' rule to do the information aggregation

Works well when there is one agent (Blackwell and Dubins, 1962), but becomes impossible with more than two!
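For a single agent the Bayesian update is straightforward. A minimal Python sketch over a finite set of candidate states, with a two-state, two-signal structure chosen purely for illustration:

import numpy as np

likelihood = np.array([[0.7, 0.3],        # P(signal = 0 | theta_1), P(signal = 1 | theta_1)
                       [0.4, 0.6]])       # P(signal = 0 | theta_2), P(signal = 1 | theta_2)
belief = np.array([0.5, 0.5])             # prior over {theta_1, theta_2}
rng = np.random.default_rng(3)
true_state = 0                            # signals are drawn under theta_1

for _ in range(50):
    s = rng.choice(2, p=likelihood[true_state])    # observe a private signal
    belief = belief * likelihood[:, s]             # Bayes' rule: prior times likelihood,
    belief /= belief.sum()                         # then normalize
print(belief.round(3))                             # mass concentrates on the true state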

Locally Rational, Globally Naïve: Bayesian learning under peer pressure

Model Description

Belief Update Rule

Why this update?

Eventually correct forecasts

Eventually-correct estimation of the output!
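A sketch of an update of this general form, under the assumption (in the spirit of "locally rational, globally naïve") that each agent computes the Bayesian posterior given only its own private signal and then takes a convex combination of that posterior with its neighbors' current beliefs. The network, weights, and signal structure below are illustrative choices, not the talk's exact model.

import numpy as np

n_agents, n_states = 3, 2
A = np.array([[0.6, 0.4, 0.0],            # row-stochastic influence weights
              [0.3, 0.4, 0.3],            # (a strongly connected chain network)
              [0.0, 0.4, 0.6]])
likelihood = [np.array([[0.7, 0.3], [0.4, 0.6]]),  # agent 0: informative signals
              np.array([[0.5, 0.5], [0.5, 0.5]]),  # agent 1: uninformative signals
              np.array([[0.5, 0.5], [0.5, 0.5]])]  # agent 2: uninformative signals
beliefs = np.full((n_agents, n_states), 0.5)       # common uniform prior
rng = np.random.default_rng(4)
true_state = 0

for _ in range(200):
    posteriors = np.empty_like(beliefs)
    for i in range(n_agents):
        s = rng.choice(2, p=likelihood[i][true_state])
        post = beliefs[i] * likelihood[i][:, s]    # Bayesian update on own signal only
        posteriors[i] = post / post.sum()
    # Convex combination: own posterior gets weight A[i, i], neighbor j's belief gets A[i, j].
    beliefs = np.diag(A)[:, None] * posteriors + (A - np.diag(np.diag(A))) @ beliefs
print(beliefs.round(3))                            # every agent's belief concentrates on the true state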

Why strong connectivity?

No convergence if different people interpret signals differently

N is misled by listening to the less informed agent B

Example

One can actually learn from others

Convergence of beliefs and consensus on correct value!

Learning from others

Summary
