Distributed Optimization in Multiagent Systems


Page 1:

Distributed Optimization in Multiagent Systems

Page 2:

The Consensus Problem

[Figure: a network of four agents (1-4), each holding a local copy of the variable x.]

Page 3:

The Consensus Problem

[Figure: the four local cost functions of x sum to the global cost function.]

No single agent knows the target function to optimize.


Page 5:

Formally

$$\min_{x \in \mathcal{X}} \sum_{n=1}^{N} f_n(x)$$

- $N$ = number of nodes / agents
- $\mathcal{X} = \mathbb{R}^d$
- $f_n$ is the cost function of agent $n$
- Two agents $n$ and $m$ can exchange messages if $n \sim m$

Numerous works address this problem; an early one is Tsitsiklis '84.

Page 6:

Example #1: Wireless Sensor Networks

$Y_n$ = random observation of sensor $n$; $x$ = unknown parameter to be estimated.

$$p(Y_1, \dots, Y_N; x) = p_1(Y_1; x) \cdots p_N(Y_N; x)$$

The maximum likelihood estimate is

$$\hat{x} = \arg\max_x \sum_n \ln p_n(Y_n; x)$$

[Schizas’08, Moura’11]


Page 7:

Example #2: Machine Learning

Data set formed by $T$ samples $(X_i, Y_i)$, $i = 1, \dots, T$:

- $Y_i$ = variable to be explained
- $X_i$ = explanatory features

$$\min_x \sum_{i=1}^{T} \ell(x^T X_i, Y_i) + r(x)$$

Split the data into $N$ batches:

$$\min_x \sum_{n=1}^{N} \sum_i \ell(x^T X_{i,n}, Y_{i,n}) + r(x)$$

N.B.: some problems are more involved (I. Colin '16), e.g. with pairwise terms:

$$\min_x \sum_i \sum_j f(x; X_i, Y_i, X_j, Y_j) + r(x)$$

Page 8:

Example #3: Resource Allocation

Let $x_n$ be the resource used by agent $n$.

- Agents share a resource $b$: $\sum_n x_n \le b$
- Agent $n$ gets reward $R_n(x_n)$ for using resource $x_n$
- Maximize the global reward:

$$\max_{x:\, \sum_n x_n \le b} \; \sum_{n=1}^{N} R_n(x_n)$$

The dual of a sharing problem is a consensus problem; a dual-decomposition sketch follows below.
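To make the duality remark concrete, here is a minimal dual-decomposition sketch in Python. The rewards $R_n(x) = w_n \ln(1+x)$, the data and the step sizes are illustrative assumptions, not from the slides: each agent best-responds to a price $\lambda$ on the shared resource, and a subgradient step adjusts the price.

```python
import numpy as np

# Hypothetical concave rewards R_n(x) = w_n * log(1 + x), x >= 0.
w = np.array([1.0, 2.0, 3.0, 4.0])
b = 5.0            # shared budget: sum(x) <= b
lam = 1.0          # price (dual variable) on the resource

for k in range(1, 2001):
    x = np.maximum(w / lam - 1.0, 0.0)                # argmax_x R_n(x) - lam*x, closed form
    lam = max(lam + (1.0 / k) * (x.sum() - b), 1e-9)  # projected subgradient step on the dual
print(x, x.sum())  # the allocation approximately exhausts the budget b
```

All agents face the same price $\lambda$: agreeing on the optimal price is exactly a consensus problem on the dual variable.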

Page 9:

Networks

Parallel: [figure: all agents communicate with a central coordinator]

Distributed: [figure: agents communicate over a general graph, without a coordinator]

Page 10:

Outline

Distributed gradient descent

Distributed Alternating Direction Method of Multipliers (D-ADMM)

Total Variation Regularization on Graphs


Page 12:

Adapt-and-combine (Tsitsiklis’84)

- [Local step] Each agent $n$ generates a temporary update

$$\tilde{x}_n^{k+1} = x_n^k - \gamma_k \nabla f_n(x_n^k)$$

- [Agreement step] Connected agents merge their temporary estimates

$$x_n^{k+1} = \sum_{m \sim n} A(n, m)\, \tilde{x}_m^{k+1}$$

where $A$ satisfies technical constraints (it must be doubly stochastic).

Convergence rates (e.g. [Nedic '09], [Duchi '12]):

- A decreasing step size $\gamma_k \to 0$ is needed in general
- Sublinear convergence rates

A runnable sketch of this scheme follows below.

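Here is a minimal, self-contained sketch of adapt-and-combine in Python, under illustrative assumptions (a ring of agents, quadratic local costs $f_n(x) = \frac{1}{2}(x - a_n)^2$, synthetic data): the doubly stochastic matrix $A$ averages each agent with its two neighbors.

```python
import numpy as np

N, iters = 8, 500
rng = np.random.default_rng(0)
a = rng.normal(size=N)        # hypothetical data; the global minimizer is mean(a)

# Doubly stochastic mixing matrix on a ring: average with both neighbors.
A = np.zeros((N, N))
for n in range(N):
    for m in (n, (n - 1) % N, (n + 1) % N):
        A[n, m] = 1.0 / 3.0

x = np.zeros(N)               # x[n] = estimate held by agent n
for k in range(1, iters + 1):
    gamma = 1.0 / k                   # decreasing step size
    x_tmp = x - gamma * (x - a)       # local step: gradient of 0.5*(x - a_n)^2
    x = A @ x_tmp                     # agreement step: mix with the neighbors
print(x, a.mean())            # every agent's estimate approaches mean(a)
```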

Page 14:

More problems

1. Asynchronism: some agents are active at a given time, others are not; random link failures
2. Noise: gradients may only be observed up to a random noise (online algorithms)
3. Constraints: minimize $\sum_{n=1}^{N} f_n(x)$ subject to $x \in C$

The projected, noisy variant of the iteration becomes

$$\tilde{x}_n^{k+1} = \mathrm{proj}_C\!\left[x_n^k - \gamma_k\big(\nabla f_n(x_n^k) + \text{noise}\big)\right]$$
$$x_n^{k+1} = \sum_{m \sim n} A^{k+1}(n, m)\, \tilde{x}_m^{k+1}$$

Page 15:

Distributed stochastic gradient algorithm

Under technical conditions,

Convergence (Bianchi et al. '12): $x_n^k$ tends to a KKT point $x^\star$.

Convergence rate (Morral et al. '12): if $x^\star \in \mathrm{int}(C)$,

$$\gamma_k^{-1/2}\,(x_n^k - x^\star) \xrightarrow{\;\mathcal{L}\;} \mathcal{N}(0, \Sigma_{\mathrm{OPT}} + \Sigma_{\mathrm{NET}})$$

- $\Sigma_{\mathrm{OPT}}$ is the covariance corresponding to the centralized setting
- $\Sigma_{\mathrm{NET}}$ is the excess variance due to the distributed setting

Remark: $\Sigma_{\mathrm{NET}} = 0$ for some protocols, which can be characterized.

Page 16:

Outline

Distributed gradient descent

Distributed Alternating Direction Method of Multipliers (D-ADMM)

Total Variation Regularization on Graphs

Page 17:

Alternating Direction Method of Multipliers

Consider the generic problem

$$\min_x F(x) + G(Mx)$$

where $F, G$ are convex. Rewrite it as a constrained problem:

$$\min_{z = Mx} F(x) + G(z)$$

The augmented Lagrangian is

$$L_\rho(x, z; \lambda) = F(x) + G(z) + \langle \lambda, Mx - z \rangle + \frac{\rho}{2}\|Mx - z\|^2$$

ADMM:

$$x^{k+1} = \arg\min_x L_\rho(x, z^k; \lambda^k) \qquad \text{(only $F$ needed)}$$
$$z^{k+1} = \arg\min_z L_\rho(x^{k+1}, z; \lambda^k) \qquad \text{(only $G$ needed)}$$
$$\lambda^{k+1} = \lambda^k + \rho\,(Mx^{k+1} - z^{k+1})$$
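As a concrete instance, here is a minimal ADMM sketch in Python for a toy problem of this form, with $F(x) = \frac{1}{2}\|x - a\|^2$, $M = I$ and $G = \tau\|\cdot\|_1$ (an illustrative choice, not from the slides); both subproblems then have closed forms.

```python
import numpy as np

rng = np.random.default_rng(0)
a, tau, rho = rng.normal(size=20), 0.5, 1.0

def soft(v, t):
    # prox of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, lam = (np.zeros_like(a) for _ in range(3))
for k in range(200):
    x = (a + rho * z - lam) / (1.0 + rho)   # argmin_x L_rho(x, z; lam): only F used
    z = soft(x + lam / rho, tau / rho)      # argmin_z L_rho(x, z; lam): only G used
    lam = lam + rho * (x - z)               # dual ascent on the constraint x = z
print(np.allclose(x, z, atol=1e-3))         # primal feasibility at convergence
```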

Page 18:

Back to our problem

All functions $f_n : \mathcal{X} \to \mathbb{R}$ are assumed convex. Consider the problem:

$$\min_{u \in \mathcal{X}} \sum_{n=1}^{N} f_n(u)$$

Main trick: define

$$F : x = (x_1, \dots, x_N) \mapsto \sum_n f_n(x_n)$$

Equivalent problem:

$$\min_{x \in \mathcal{X}^N} F(x) + \iota_{\mathrm{sp}(\mathbf{1})}(x)$$

where $\iota_{\mathrm{sp}(\mathbf{1})}(x) = 0$ if $x_1 = \cdots = x_N$, and $+\infty$ otherwise.

- $F$ is separable in $x_1, \dots, x_N$
- $G = \iota_{\mathrm{sp}(\mathbf{1})}$ couples the variables but is simple

Page 19:

ADMM illustrated

Set $\bar{x}^k = \frac{1}{N}\sum_n x_n^k$.

Algorithm (see e.g. [Boyd '11]): for all $n$,

$$\lambda_n^k = \lambda_n^{k-1} + \rho\,(x_n^k - \bar{x}^k)$$
$$x_n^{k+1} = \arg\min_y\; f_n(y) + \frac{\rho}{2}\big\|\bar{x}^k - \rho^{-1}\lambda_n^k - y\big\|^2$$

One iteration proceeds in four steps:

1. Agents transmit their current estimates $x_n^k$
2. Compute the average $\bar{x}^k$
3. Transmit $\bar{x}^k$ back to all agents
4. Each agent computes $\lambda_n^k$ and $x_n^{k+1}$

The algorithm is parallel but not distributed on the graph. (A runnable sketch follows below.)

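Here is a minimal sketch of this parallel consensus-ADMM iteration in Python, under illustrative assumptions (quadratic local costs $f_n(y) = \frac{1}{2}(y - a_n)^2$ with synthetic data, for which the local $\arg\min$ has a closed form).

```python
import numpy as np

rng = np.random.default_rng(1)
N, rho = 5, 1.0
a = rng.normal(size=N)        # hypothetical data; the global minimizer is mean(a)

x, lam = np.zeros(N), np.zeros(N)
for k in range(100):
    xbar = x.mean()                           # steps 1-3: gather, average, broadcast
    lam = lam + rho * (x - xbar)              # step 4a: dual update at each agent
    x = (a + rho * xbar - lam) / (1 + rho)    # step 4b: local argmin, closed form here
print(x, a.mean())            # every agent's estimate approaches mean(a)
```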

Page 24:

Subgraph consensus

Let $A_1, A_2, \dots, A_L$ be subsets of agents.

[Figure: a graph with five agents, numbered 1 to 5.]

Example: $A_1 = \{1, 3\}$, $A_2 = \{2, 3\}$, $A_3 = \{3, 4, 5\}$, with the constraints

$$\begin{pmatrix} x_1 \\ x_3 \end{pmatrix} \in \mathrm{sp}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} x_2 \\ x_3 \end{pmatrix} \in \mathrm{sp}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} x_3 \\ x_4 \\ x_5 \end{pmatrix} \in \mathrm{sp}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

Consensus within the subgraphs $\Leftrightarrow$ global consensus.


Page 29:

Example (Cont.)

The initial problem is

$$\min_{x \in \mathcal{X}^N} F(x) + G(Mx)$$

where $Mx = (x_1, x_3, \; x_2, x_3, \; x_3, x_4, x_5)^T$, that is,

$$M = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$

and where $G$ is the indicator function of the subspace of vectors of the form $(\alpha, \alpha, \; \beta, \beta, \; \delta, \delta, \delta)^T$.
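For concreteness, the selection matrix $M$ for these subsets can be built programmatically; a small sketch (the subset encoding is only illustrative):

```python
import numpy as np

subsets = [[1, 3], [2, 3], [3, 4, 5]]   # A_1, A_2, A_3, 1-indexed as on the slide
N = 5
# Each subset contributes one row per member, selecting that agent's variable.
M = np.stack([np.eye(N)[n - 1] for A in subsets for n in A])
print(M.astype(int))                    # reproduces the 7x5 matrix above
```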

Page 30:

Distributed ADMM illustrated

Distributed ADMM (early works by [Schizas '08]): for all $n$,

$$\Lambda_n^k = \Lambda_n^{k-1} + \rho\,(x_n^k - \chi_n^k)$$
$$x_n^{k+1} = \arg\min_y\; f_n(y) + \frac{\rho\,|\sigma_n|}{2}\big\|\chi_n^k - \rho^{-1}\Lambda_n^k - y\big\|^2$$

where $|\sigma_n|$ is the number of "neighbors" of $n$.

One iteration proceeds in three steps:

1. For each subgraph, compute the average $\bar{x}_{A_\ell}^k$
2. For each $n$, compute $\chi_n^k = \mathrm{Average}\big(\bar{x}_{A_\ell}^k : \ell \text{ s.t. } n \in A_\ell\big)$
3. For each $n$, compute $\Lambda_n^k$ and $x_n^{k+1}$

(A runnable sketch follows below.)

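A minimal sketch of this iteration on the five-agent example above, assuming quadratic local costs $f_n(y) = \frac{1}{2}(y - a_n)^2$ with synthetic data (the local $\arg\min$ is then closed-form):

```python
import numpy as np

subsets = [[0, 2], [1, 2], [2, 3, 4]]    # A_1, A_2, A_3, 0-indexed
N, rho = 5, 1.0
sigma = np.array([sum(n in A for A in subsets) for n in range(N)])  # |sigma_n|
rng = np.random.default_rng(2)
a = rng.normal(size=N)                   # hypothetical data

x, Lam = np.zeros(N), np.zeros(N)
for k in range(300):
    avg = [x[A].mean() for A in subsets]                    # 1. per-subgraph averages
    chi = np.array([np.mean([avg[l] for l, A in enumerate(subsets) if n in A])
                    for n in range(N)])                     # 2. chi_n
    Lam = Lam + rho * (x - chi)                             # 3a. dual update
    x = (a + rho * sigma * (chi - Lam / rho)) / (1 + rho * sigma)  # 3b. local argmin
print(x, a.mean())   # compare the agents' estimates with the centralized minimizer
```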

Page 35:

Linear convergence of the Distributed ADMM

Assumption: $H^\star := \sum_n \nabla^2 f_n(x^\star) > 0$ at the minimizer $x^\star$. Then

$$\|x_n^k - x^\star\| \sim \alpha^k \quad \text{as } k \to \infty$$

- [Shi et al. '13]: non-asymptotic bound, but pessimistic
- [Iutzeler et al. '16]: asymptotic, but tight

[Figure: $\frac{1}{k}\log\|x^k - \mathbf{1}\otimes x^\star\|$ as a function of the number of iterations $k$. The simulated slope gives $\log\alpha = -0.19$, whereas the bound of Shi et al. predicts only $-5\cdot 10^{-4}$.]

Page 36:

Example: ring network.

[Figure: a ring network of $N$ agents.]

Define $\alpha = \lim_{k \to \infty} \|x_n^k - x^\star\|^{1/k}$. Set $f_n : \mathbb{R} \to \mathbb{R}$ and $f_n''(x^\star) = \sigma^2$. Then

$$\alpha \ge \sqrt{\frac{1 + \cos\frac{2\pi}{N}}{2\left(1 + \sin\frac{2\pi}{N}\right)}}$$

with equality when $\rho = \dfrac{\sigma^2}{2\sin\frac{2\pi}{N}}$.

(A numerical evaluation of this bound follows below.)
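A quick numerical check of how this lower bound behaves as the ring grows (a sketch; the values follow directly from the formula above):

```python
import numpy as np

def rate_bound(N):
    # asymptotic rate lower bound for a ring of N agents
    t = 2.0 * np.pi / N
    return np.sqrt((1.0 + np.cos(t)) / (2.0 * (1.0 + np.sin(t))))

for N in (5, 10, 50, 100):
    print(N, rate_bound(N))   # the bound approaches 1: large rings converge slowly
```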

Page 37:

Asynchronous D-ADMM

- All agents must complete their $\arg\min$ computation before combining
- The network waits for the slowest agents

Our objective: allow for asynchronism.

Page 38:

Revisiting ADMM as a fixed point algorithm

Set $\zeta^k = \lambda^k + \rho z^k$. Fact: $\lambda^k = P(\zeta^k)$, where $P$ is a projection.

ADMM can be written as a fixed point algorithm [Gabay '83], [Eckstein '92]:

$$\zeta^{k+1} = J(\zeta^k)$$

where $J$ is firmly non-expansive, i.e.

$$\|J(x) - J(y)\|^2 \le \|x - y\|^2 - \|(I - J)(x) - (I - J)(y)\|^2$$

[Figure: two points $x$, $y$ and their images $J(x)$, $J(y)$, illustrating the contraction property.]

Page 39:

Random coordinate descent

Introducing the block components of $\zeta^{k+1} = J(\zeta^k)$:

$$\begin{pmatrix} \zeta_1^{k+1} \\ \vdots \\ \zeta_\ell^{k+1} \\ \vdots \\ \zeta_L^{k+1} \end{pmatrix} = \begin{pmatrix} J_1(\zeta^k) \\ \vdots \\ J_\ell(\zeta^k) \\ \vdots \\ J_L(\zeta^k) \end{pmatrix}$$

Page 40:

Random coordinate descent

If only one block $\ell = \ell(k+1)$ is active at time $k+1$:

$$\begin{pmatrix} \zeta_1^{k+1} \\ \vdots \\ \zeta_\ell^{k+1} \\ \vdots \\ \zeta_L^{k+1} \end{pmatrix} = \begin{pmatrix} \zeta_1^k \\ \vdots \\ J_\ell(\zeta^k) \\ \vdots \\ \zeta_L^k \end{pmatrix}$$

Convergence of the asynchronous ADMM [Iutzeler '13]: this algorithm still converges if the active components are chosen at random.

Main idea: for a well-chosen norm $|||\cdot|||$ and a fixed point $\zeta^\star$ of $J$, prove

$$\mathbb{E}\left(|||\zeta^{k+1} - \zeta^\star|||^2 \,\middle|\, \mathcal{F}_k\right) \le |||\zeta^k - \zeta^\star|||^2$$

$\Rightarrow$ $\zeta^k$ gets "stochastically" closer to $\zeta^\star$. (A toy sketch follows below.)

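A toy sketch of randomized block fixed-point iteration in Python, under illustrative assumptions: here $J$ is the proximal map of $\frac{1}{2}\|z - a\|^2$ (proximal maps are firmly non-expansive), split into $L$ scalar blocks.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 6
a = rng.normal(size=L)
J = lambda z: (z + a) / 2.0   # prox of 0.5*||z - a||^2; its fixed point is z = a

zeta = np.zeros(L)
for k in range(500):
    l = rng.integers(L)       # activate one block at random
    zeta[l] = J(zeta)[l]      # update that block, freeze the others
print(np.allclose(zeta, a))   # zeta converges to the fixed point
```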

Page 42:

Asynchronous ADMM made explicit

[Figure: an edge connecting two neighboring agents $m$ and $n$.]

Activate two nodes, $A_\ell = \{m, n\}$:

- Agent $n$ computes

$$x_n^{k+1} = \arg\min_x\; f_n(x) + \sum_{j \sim n}\left(\langle x, \lambda_{j,n}^k\rangle + \frac{\rho}{2}\big\|x - \bar{x}_{j,n}^k\big\|^2\right)$$

and similarly for agent $m$.

- They exchange $x_m^{k+1}$ and $x_n^{k+1}$.

- Agent $n$ computes

$$\bar{x}_{m,n}^{k+1} = \frac{1}{2}\big(x_m^{k+1} + x_n^{k+1}\big), \qquad \lambda_{m,n}^{k+1} = \lambda_{m,n}^k + \rho\,\frac{x_n^{k+1} - x_m^{k+1}}{2}$$

and similarly for agent $m$. (A runnable sketch follows below.)

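A minimal sketch of this edge-activated ADMM on a ring, under illustrative assumptions (quadratic costs $f_n(x) = \frac{1}{2}(x - a_n)^2$, synthetic data; `xbar` and `lam` hold the per-edge averages and multipliers, with $\lambda_{n,m} = -\lambda_{m,n}$ since both start at zero and receive opposite updates):

```python
import numpy as np

rng = np.random.default_rng(4)
N, rho = 6, 1.0
a = rng.normal(size=N)                              # hypothetical data
edges = [(n, (n + 1) % N) for n in range(N)]        # ring topology
neigh = {n: [(n - 1) % N, (n + 1) % N] for n in range(N)}
xbar = {frozenset(e): 0.0 for e in edges}           # pairwise averages
lam = {(m, n): 0.0 for m, n in edges} | {(n, m): 0.0 for m, n in edges}

def local_argmin(n):
    # closed form of argmin_x f_n(x) + sum_j (<x, lam_{j,n}> + rho/2 (x - xbar_{j,n})^2)
    s_lam = sum(lam[(j, n)] for j in neigh[n])
    s_bar = sum(xbar[frozenset((j, n))] for j in neigh[n])
    return (a[n] - s_lam + rho * s_bar) / (1.0 + rho * len(neigh[n]))

for k in range(5000):
    m, n = edges[rng.integers(len(edges))]          # activate one edge at random
    xm, xn = local_argmin(m), local_argmin(n)       # both endpoints solve locally
    xbar[frozenset((m, n))] = (xm + xn) / 2.0       # exchange and average
    lam[(m, n)] += rho * (xn - xm) / 2.0            # multiplier seen by n ...
    lam[(n, m)] -= rho * (xn - xm) / 2.0            # ... and its opposite, seen by m
print([local_argmin(n) for n in range(N)], a.mean())
```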

Page 45:

Generalization: Distributed Vu-Condat algorithm

- The Vu-Condat algorithm generalizes ADMM (it allows gradient evaluations)
- A distributed Vu-Condat algorithm is applicable using the same principle
- Bianchi '16 and Fercoq '17 provide a random coordinate descent version
- The resulting algorithm is asynchronous at the node level, not at the edge level

Page 46:

Stochastic Optimization

$$\min_{x \in \mathcal{X}} \sum_{n=1}^{N} \mathbb{E}\big(f_n(x, \xi_n)\big)$$

- The law of $\xi_n$ is unknown, but revealed online through random copies $\xi_n^1, \xi_n^2, \dots$
- Stochastic approximation: at time $k$, replace the unknown function $\mathbb{E}(f_n(\cdot, \xi_n))$ by its random version $f_n(\cdot, \xi_n^k)$

Example: stochastic gradient descent (a one-dimensional sketch follows below).

- Thesis of A. Salim: stochastic versions of generic optimization algorithms (Forward-Backward, Douglas-Rachford, ADMM, Vu-Condat, etc.)
- Byproduct: distributed stochastic algorithms
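The stochastic approximation idea in its simplest form, as a sketch (the Gaussian stream and the step sizes are illustrative): SGD on $f(x) = \mathbb{E}\big[\frac{1}{2}(x - \xi)^2\big]$, whose minimizer is $\mathbb{E}[\xi]$.

```python
import numpy as np

rng = np.random.default_rng(5)
x = 0.0
for k in range(1, 10001):
    xi = rng.normal(loc=3.0)        # fresh sample revealed at time k
    x -= (1.0 / k) * (x - xi)       # gradient of 0.5*(x - xi)^2 at the sample
print(x)                            # approaches E[xi] = 3
```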

Page 47:

Outline

Distributed gradient descent

Distributed Alternating Direction Method of Multipliers (D-ADMM)

Total Variation Regularization on Graphs

Page 48:

Total variation regularization (1/2)

Notation: on a graph $G = (V, E)$, the total variation of $x \in \mathbb{R}^V$ is

$$\mathrm{TV}(x) = \sum_{\{i,j\} \in E} |x_i - x_j|$$

General problem:

$$\min_{x \in \mathbb{R}^V} F(x) + \mathrm{TV}(x)$$

- Trend filtering: $F(x) = \frac{1}{2}\|x - m\|^2$, where $m \in \mathbb{R}^V$ are noisy measurements
- Graph inpainting: complete possibly missing measurements on the nodes

Proximal gradient algorithm:

$$x^{n+1} = \mathrm{prox}_{\gamma \mathrm{TV}}\big(x^n - \gamma \nabla F(x^n)\big)$$

- Computing $\mathrm{prox}_{\mathrm{TV}}$ is difficult over large unstructured graphs
- But efficient algorithms exist for 1D graphs (Mammen '97) (Condat '13)

Page 49:

Total variation regularization (2/2)

Write TV as an expectation: let $\xi$ be a simple random walk in $G$ of fixed length. Then

$$\mathrm{TV}_G(x) \propto \mathbb{E}\big(\mathrm{TV}_\xi(x)\big)$$

Algorithm (Salim '16): at time $n$,

- Draw a random walk $\xi^{n+1}$
- Compute $x^{n+1} = \mathrm{prox}_{\gamma_n \mathrm{TV}_{\xi^{n+1}}}\big(x^n - \gamma_n \nabla F(x^n)\big)$ → easy, since the walk is 1D

Hidden difficulty: one should avoid loops when choosing the walk...

[Figure: trend filtering example, cost function vs. time (s), on a stochastic block model with $10^5$ nodes and $25 \cdot 10^6$ edges. Blue: stochastic proximal gradient; green: dual proximal gradient; red: dual L-BFGS-B.]
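A sketch of one iteration of this scheme for trend filtering, assuming a helper `prox_tv_1d` (e.g. Condat '13's direct 1D algorithm) is available; the function names and the walk sampling are illustrative, not from the slides.

```python
import numpy as np

def stochastic_prox_grad_step(x, m, neighbors, prox_tv_1d, gamma, walk_len, rng):
    # Draw a random walk of fixed length (ideally avoiding loops, cf. above).
    walk = [int(rng.integers(len(x)))]
    while len(walk) < walk_len:
        walk.append(int(rng.choice(neighbors[walk[-1]])))
    y = x - gamma * (x - m)                # gradient step on F(x) = 0.5*||x - m||^2
    y[walk] = prox_tv_1d(y[walk], gamma)   # exact 1D TV prox along the sampled path
    return y
```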