
Page 1:

Deep Multi-Agent Reinforcement Learning

Chongjie Zhang

Institute of Interdisciplinary Information Sciences

Tsinghua University

August 6, 2020

Reinforcement Learning China Summer School

Page 2:

Recent AI Breakthrough

2

Page 3:

What’s Next for AI

▪ Shifting from pattern recognition to decision-making/control

▪ Shifting from single-agent to multi-agent settings

3

Drone Delivery

Autonomous Vehicles

Home Robots

Video Games

Smart Grids

Multi-robot assembly

Page 4:

Types of Multi-Agent Systems

▪ Cooperative

▪ Working together and coordinating their actions

▪ Maximizing a shared team reward

▪ Competitive

▪ Self-interested: maximizing an individual reward

▪ Opposite rewards

▪ Zero-sum games

▪ Mixed

▪ Self-interested with different individual rewards (not opposite)

▪ General-sum games

Page 5:

Cooperative Multi-Agent Systems (MAS)

▪ A group of agents work together to optimize team performance

▪ Model: decentralized partially observable Markov decision process (Dec-POMDP)

▪ Multi-agent sequential decision-making under uncertainty

▪ Extension of MDPs and POMDPs

▪ At each step, each agent i takes an action and receives:

▪ A local observation oi

▪ A joint immediate reward r

[Figure: agents 1 and 2 each send actions a1, a2 to the environment and receive local observations (e.g., o2) and the shared reward r.]

Page 6:

Dec-POMDP

▪ Model

▪ Agents: 𝑖 ∈ 𝐼 = {1, 2, … , 𝑁}

▪ State: 𝑠 ∈ 𝑆

▪ Actions: 𝑎_𝑖 ∈ 𝐴, joint action 𝒂 ∈ 𝐴^𝑁

▪ Transition function: 𝑃(𝑠′ | 𝑠, 𝒂)

▪ Reward: 𝑅(𝑠, 𝒂)

▪ Observations: 𝑜_𝑖 ∈ Ω

▪ Observation function: 𝑜_𝑖 ~ 𝑂(𝑠, 𝑖)
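For concreteness, here is a minimal Python sketch (not from the slides) of the Dec-POMDP interaction loop: at every step each agent acts on its own local observation and history, and all agents share one team reward. The env.reset/env.step interface and the RandomAgent policy are hypothetical placeholders.

```python
# Minimal sketch of a Dec-POMDP interaction loop (hypothetical env/policy interfaces).
import random

class RandomAgent:
    """Placeholder decentralized policy: maps a local action-observation history to an action."""
    def __init__(self, n_actions):
        self.n_actions = n_actions
        self.history = []                      # tau_i = (o_0, a_0, o_1, a_1, ...)

    def act(self, obs):
        self.history.append(obs)
        action = random.randrange(self.n_actions)   # replace with pi_i(tau_i)
        self.history.append(action)
        return action

def run_episode(env, agents, max_steps=100):
    """Each step: every agent i picks a_i from its local o_i; all receive one shared reward r."""
    observations = env.reset()                 # list of local observations, one per agent
    episode_return = 0.0
    for _ in range(max_steps):
        joint_action = [agent.act(o) for agent, o in zip(agents, observations)]
        observations, reward, done = env.step(joint_action)   # single team reward
        episode_return += reward
        if done:
            break
    return episode_return
```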

Page 7:

Dec-POMDP

▪ Objective: to find policies for agents to jointly maximize the expected cumulative reward

▪ A local policy 𝜋_𝑖 for each agent 𝑖: mapping its observation-action history 𝜏_𝑖 to its action

▪ Action-observation history: 𝜏_𝑖 ∈ 𝑇 ≡ (Ω × 𝐴)*

▪ The state is unknown, so it is beneficial to remember the history

▪ Joint policy 𝝅 = ⟨𝜋_1, … , 𝜋_𝑛⟩

▪ Value function: 𝑄_tot^𝝅(𝝉, 𝒂) = 𝔼[ Σ_{𝑡=0}^∞ 𝛾^𝑡 𝑟_𝑡 ∣ 𝑠_0 = 𝑠, 𝒂_0 = 𝒂, 𝝅 ]

▪ Policy: 𝝅(𝝉) = argmax_𝒂 𝑄_tot^𝝅(𝝉, 𝒂)

Page 8:

Multi-Agent Reinforcement Learning (MARL)

▪ MARL is promising for solving Dec-POMDP problems

▪ The environment model is often unknown

▪ MARL: learning policies for multiple agents

▪ Where agents are interacting

▪ Learning by interacting with other agents and the environment

8

Page 9:

Outline

▪ Value-Based Methods

▪ Paradigm: Centralized Training and Decentralized Execution

▪ Basic methods: VDN, QMIX, QPLEX

▪ Theoretical analysis

▪ Extensions

▪ Policy Gradient Methods

▪ Paradigm: Centralized Critic and Decentralized Actors

▪ Method: Decomposed Off-Policy Policy Gradient (DOP)

▪ Goals:

▪ To give a brief introduction to (cooperative) MARL

▪ To excite you about MARL

Page 10:

Multi-Agent Reinforcement Learning (MARL)

▪ How to learn the joint policy 𝝅 or the value function 𝑄_tot?

[Figure: three architectures. Centralized value function: a joint network maps (𝜏_1, 𝑎_1), …, (𝜏_𝑛, 𝑎_𝑛) to 𝑄_tot. Decentralized value functions: each agent 𝑖 learns its own 𝑄_𝑖(𝜏_𝑖, 𝑎_𝑖). Factorized value functions: per-agent 𝑄_𝑖's are combined by a mixing network into 𝑄_tot.]

▪ Challenges: scalability, credit assignment, non-stationarity

▪ Paradigm: centralized training, decentralized execution

Page 11:

Factorized Value Function Learning

▪ Paradigm: centralized training with decentralized execution

[Figure: individual 𝑄_1(𝜏_1, 𝑎_1), …, 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) feed a mixing network that outputs 𝑄_tot(𝝉, 𝒂), trained with a TD loss.]

▪ Individual-Global Maximization (IGM) Principle

▪ Consistent action selection between the joint and individual value functions:

▪ argmax_𝒂 𝑄_tot(𝝉, 𝒂) = ( argmax_{𝑎_1} 𝑄_1(𝜏_1, 𝑎_1), … , argmax_{𝑎_𝑛} 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) )

Page 12:

Value Decomposition Networks (VDN)

[Sunehag et al., 2017]

▪ VDN: 𝑄_tot(𝝉, 𝒂) = Σ_𝑖 𝑄_𝑖(𝜏_𝑖, 𝑎_𝑖)

▪ Sufficient for the IGM constraint:

▪ argmax_𝒂 𝑄_tot(𝝉, 𝒂) = ( argmax_{𝑎_1} 𝑄_1(𝜏_1, 𝑎_1), … , argmax_{𝑎_𝑛} 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) )

▪ No specific reward for each agent

▪ Implicit credit assignment through gradient backpropagation
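As an illustration (my own toy example, not from the slides), the sketch below factorizes a two-agent payoff with VDN-style additive utilities and checks that decentralized greedy action selection matches the joint greedy action, i.e., the IGM property:

```python
import numpy as np

# Toy per-agent utilities Q_i(tau_i, a_i) for two agents with 3 actions each (made-up numbers).
Q1 = np.array([1.0, 0.5, -0.2])
Q2 = np.array([0.3, 2.0, 0.1])

# VDN: Q_tot(tau, a) = Q_1(tau_1, a_1) + Q_2(tau_2, a_2)
Q_tot = Q1[:, None] + Q2[None, :]

# Decentralized greedy actions...
a1, a2 = int(Q1.argmax()), int(Q2.argmax())

# ...coincide with the joint greedy action (the IGM property of additive factorization).
joint = np.unravel_index(Q_tot.argmax(), Q_tot.shape)
assert (a1, a2) == tuple(int(x) for x in joint)
print("greedy joint action:", (a1, a2), "Q_tot:", Q_tot[a1, a2])
```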

Page 13:

Learned Kiting Strategy in Starcraft II

Page 14:

Why Does VDN Work?

▪ Scalable maximization operator for action selection

▪ Because of the consistency of individual-global maximization

▪ Parameter sharing among agents

▪ Implicit credit assignment

▪ Cons:

▪ Additive factorization is sufficient but not necessary for IGM

▪ Limited representational capacity

Page 15:

QMIX: A Monotonic Mixing Network

▪ Monotonicity constraint: ∂𝑄_tot / ∂𝑄_𝑖 ≥ 0

▪ Weights of the mixing network are restricted to be non-negative

[Rashid et al., 2018]
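Below is a minimal PyTorch sketch of a QMIX-style monotonic mixing network, not the authors' implementation: state-conditioned hypernetworks generate the mixing weights, and taking absolute values keeps ∂𝑄_tot/∂𝑄_𝑖 ≥ 0. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MonotonicMixer(nn.Module):
    """QMIX-style mixer: Q_tot is monotonic in each Q_i because mixing weights are non-negative."""
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)  # state-conditioned weights
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), state: (batch, state_dim)
        w1 = torch.abs(self.hyper_w1(state)).view(-1, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)   # (batch, 1, embed)
        w2 = torch.abs(self.hyper_w2(state)).view(-1, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(-1, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(-1)                     # Q_tot: (batch,)
```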

Page 16:

Representational Complexity

▪ QMIX’s representation is also limited

▪ Monotonic condition is sufficient, but not necessary for IGM

▪ Sketch illustration

[Figure: sketch comparing the ground-truth joint action-value function with the function classes representable by VDN and QMIX.]

Page 17:

Empirical Results on Matrix Games

17

Page 18:

QPLEX: Duplex Dueling Multi-Agent Q-Learning

[Wang et al., 2020]

▪ Three modules of QPLEX:

▪ Individual action-value function

▪ Transformation: 𝑄_𝑖(𝝉, 𝑎_𝑖) = 𝑤_𝑖(𝝉) 𝑄_𝑖(𝜏_𝑖, 𝑎_𝑖) + 𝑏_𝑖(𝝉)

▪ Duplex dueling

Page 19:

QPLEX: Duplex Dueling Mixing Network

▪ Input: {𝑄_1(𝝉, ·), … , 𝑄_𝑛(𝝉, ·)}

▪ Individual dueling:

▪ 𝑉_𝑖(𝝉) = max_{𝑎_𝑖} 𝑄_𝑖(𝝉, 𝑎_𝑖)

▪ 𝐴_𝑖(𝝉, 𝑎_𝑖) = 𝑄_𝑖(𝝉, 𝑎_𝑖) − 𝑉_𝑖(𝝉)

▪ Joint dueling:

▪ 𝑉_tot(𝝉) = Σ_{𝑖=1}^𝑛 𝑉_𝑖(𝝉)

▪ 𝐴_tot(𝝉, 𝒂) = Σ_{𝑖=1}^𝑛 𝜆_𝑖(𝝉, 𝒂) 𝐴_𝑖(𝝉, 𝑎_𝑖), where 𝜆_𝑖(𝝉, 𝒂) > 0

▪ 𝑄_tot(𝝉, 𝒂) = 𝑉_tot(𝝉) + 𝐴_tot(𝝉, 𝒂) = Σ_𝑖 𝑉_𝑖(𝝉) + Σ_{𝑖=1}^𝑛 𝜆_𝑖(𝝉, 𝒂) 𝐴_𝑖(𝝉, 𝑎_𝑖)
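The duplex-dueling combination can be written out directly. The following numpy sketch (toy values, assumed shapes, not the official QPLEX code) turns per-agent utilities into 𝑄_tot with positive 𝜆 weights:

```python
import numpy as np

def qplex_mix(agent_q, chosen_actions, lam):
    """Duplex dueling: Q_tot = sum_i V_i + sum_i lambda_i * A_i(a_i), with lambda_i > 0.
    agent_q: (n_agents, n_actions) transformed utilities Q_i(tau, .)
    chosen_actions: (n_agents,) the joint action a
    lam: (n_agents,) positive importance weights lambda_i(tau, a)"""
    V = agent_q.max(axis=1)                                   # V_i(tau) = max_a Q_i(tau, a)
    A = agent_q[np.arange(len(agent_q)), chosen_actions] - V  # A_i(tau, a_i) <= 0
    return V.sum() + (lam * A).sum()

agent_q = np.array([[1.0, 0.2, 0.5],
                    [0.1, 0.9, 0.4]])          # made-up Q_i values
lam = np.array([0.7, 1.3])                      # any positive weights
print(qplex_mix(agent_q, np.array([0, 1]), lam))   # greedy joint action: A_i = 0, so Q_tot = sum V_i
```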

Page 20:

Representation Capacity of QPLEX

Theorem: The joint action-value function class that QPLEX can realize is equivalent to the class induced by the IGM principle.

Individual-Global Maximization (IGM) Principle:

argmax_𝒂 𝑄_tot(𝝉, 𝒂) = ( argmax_{𝑎_1} 𝑄_1(𝜏_1, 𝑎_1), … , argmax_{𝑎_𝑛} 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) )

Page 21:

QPLEX Realizes the IGM Constraint

▪ Sufficient condition for IGM:

argmax_𝒂 𝑄_tot(𝝉, 𝒂)
= argmax_𝒂 [ Σ_𝑖 𝑉_𝑖(𝝉) + Σ_{𝑖=1}^𝑛 𝜆_𝑖(𝝉, 𝒂) 𝐴_𝑖(𝝉, 𝑎_𝑖) ]      (QPLEX definition)
= argmax_𝒂 Σ_{𝑖=1}^𝑛 𝜆_𝑖(𝝉, 𝒂) 𝐴_𝑖(𝝉, 𝑎_𝑖)
= ( argmax_{𝑎_1} 𝐴_1(𝝉, 𝑎_1), … , argmax_{𝑎_𝑛} 𝐴_𝑛(𝝉, 𝑎_𝑛) )      (∀𝑖: 𝜆_𝑖(𝝉, 𝒂) > 0, 𝐴_𝑖(𝝉, 𝑎_𝑖) ≤ 0, and max_{𝑎_𝑖} 𝐴_𝑖(𝝉, 𝑎_𝑖) = 0)
= ( argmax_{𝑎_1} 𝑄_1(𝝉, 𝑎_1), … , argmax_{𝑎_𝑛} 𝑄_𝑛(𝝉, 𝑎_𝑛) )      (∀𝑖: 𝑄_𝑖(𝝉, 𝑎_𝑖) = 𝑉_𝑖(𝝉) + 𝐴_𝑖(𝝉, 𝑎_𝑖))
= ( argmax_{𝑎_1} 𝑤_1(𝝉) 𝑄_1(𝜏_1, 𝑎_1) + 𝑏_1(𝝉), … , argmax_{𝑎_𝑛} 𝑤_𝑛(𝝉) 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) + 𝑏_𝑛(𝝉) )      (transformation)
= ( argmax_{𝑎_1} 𝑄_1(𝜏_1, 𝑎_1), … , argmax_{𝑎_𝑛} 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) )      (∀𝑖: 𝑤_𝑖(𝝉) > 0)

Page 22:

QPLEX Realizes the IGM Constraint

▪ Necessary condition for IGM: for any 𝑄_tot(𝝉, 𝒂) satisfying IGM, i.e.,

∃ {𝑄_1(𝜏_1, 𝑎_1), … , 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛)} s.t. argmax_𝒂 𝑄_tot(𝝉, 𝒂) = ( argmax_{𝑎_1} 𝑄_1(𝜏_1, 𝑎_1), … , argmax_{𝑎_𝑛} 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) ),

QPLEX can represent it:

𝑄_tot(𝝉, 𝒂) = 𝑉_tot(𝝉) + 𝐴_tot(𝝉, 𝒂) = Σ_𝑖 𝑉_𝑖(𝝉) + Σ_{𝑖=1}^𝑛 𝜆_𝑖(𝝉, 𝒂) 𝐴_𝑖(𝝉, 𝑎_𝑖)

for some {𝑄_1(𝝉, 𝑎_1), … , 𝑄_𝑛(𝝉, 𝑎_𝑛)} and 𝜆_𝑖(𝝉, 𝒂) > 0, with 𝑉_𝑖(𝝉) = max_{𝑎_𝑖} 𝑄_𝑖(𝝉, 𝑎_𝑖) and 𝐴_𝑖(𝝉, 𝑎_𝑖) = 𝑄_𝑖(𝝉, 𝑎_𝑖) − 𝑉_𝑖(𝝉).

Moreover, ∀𝑄_𝑖(𝝉, 𝑎_𝑖), ∃ 𝑤_𝑖(𝝉) > 0 and 𝑏_𝑖(𝝉) s.t. 𝑄_𝑖(𝝉, 𝑎_𝑖) = 𝑤_𝑖(𝝉) 𝑄_𝑖(𝜏_𝑖, 𝑎_𝑖) + 𝑏_𝑖(𝝉).

In QPLEX, 𝜆_𝑖(𝝉, 𝒂) > 0, 𝑤_𝑖(𝝉) > 0, and 𝑏_𝑖(𝝉) are represented by neural networks, and thus 𝑄_tot(𝝉, 𝒂) can be realized by QPLEX with {𝑄_1(𝜏_1, 𝑎_1), … , 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛)}.

Page 23:

Representational Complexity

▪ Sketch illustration

[Figure: sketch comparing the ground-truth joint action-value function with the function classes representable by VDN, QMIX, and QPLEX.]

Page 24:

Empirical Results on Matrix Games

24

Page 25:

25

StarCraft II Benchmark: Online Learning

Page 26:

26

Data collected by a behavior policy learned by QMIX

StarCraft II Benchmark: Offline Learning

Page 27:

Outline

▪ Value-Based Methods

▪ Paradigm: Centralized Training and Decentralized Execution

▪ Basic methods: VDN, QMIX, QPLEX

▪ Theoretical analysis

▪ Extensions

▪ Policy Gradient Methods

▪ Paradigm: Centralized Critic and Decentralized Actors

▪ Method: Decomposed Off-Policy Policy Gradient (DOP)

27

Page 28:

Theoretical Analysis on Linear Value Factorization

▪ Linear Value Factorization

▪ Example methods: VDN, Qatten, …

▪ A theoretical analysis framework

▪ Multi-agent fitted Q-iteration

▪ Analysis

▪ Implicit credit assignment

▪ Convergence

[Figure: VDN-style architecture: individual 𝑄_1(𝜏_1, 𝑎_1), …, 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) combined by summation into 𝑄_tot(𝝉, 𝒂), trained with a TD loss.]

[Wang et al., 2020]

Page 29:

Fitted Q-Iteration (FQI) Framework

29

▪ Multi-Agent MDP (MMDP): assume full observability

▪ Fitted Q-Iteration (FQI) for MMDPs

▪ Bellman optimality operator 𝒯:

▪ (𝒯𝑄_tot)(𝑠, 𝐚) = 𝑟(𝑠, 𝐚) + 𝛾 𝔼_{𝑠′}[ max_{𝐚′} 𝑄_tot(𝑠′, 𝐚′) ]

▪ Given a dataset 𝐷 = {(𝑠, 𝐚, 𝑟, 𝑠′)}

▪ Iterative optimization (empirical Bellman error minimization):

▪ 𝑄^{(𝑡+1)} ← argmin_𝑄 𝔼_{(𝑠,𝐚,𝑟,𝑠′)∼𝐷}[ ( 𝑟 + 𝛾 max_{𝐚′} 𝑄_tot^{(𝑡)}(𝑠′, 𝐚′) − 𝑄_tot(𝑠, 𝐚) )² ]
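For intuition, here is a tiny tabular sketch (my own toy example) of one FQI sweep on a small MMDP where the joint-action space can be enumerated; with an unrestricted tabular class, the empirical Bellman-error minimizer is just the mean target:

```python
import numpy as np
from itertools import product

def fqi_iteration(Q, dataset, joint_actions, gamma=0.99):
    """One fitted Q-iteration sweep for a small MMDP with a tabular joint Q.
    Q: dict {(s, a): value} over joint actions a; dataset: list of (s, a, r, s_next)."""
    targets = {}
    for s, a, r, s_next in dataset:
        y = r + gamma * max(Q[(s_next, a2)] for a2 in joint_actions)  # Bellman target
        targets.setdefault((s, a), []).append(y)
    # With an unrestricted tabular class, minimizing the empirical Bellman error
    # simply sets Q(s, a) to the mean target over the data.
    new_Q = dict(Q)
    new_Q.update({sa: float(np.mean(ys)) for sa, ys in targets.items()})
    return new_Q

# toy MMDP: 2 states, 2 agents with 2 actions each (joint action space A^2)
joint_actions = list(product([0, 1], repeat=2))
Q = {(s, a): 0.0 for s in [0, 1] for a in joint_actions}
dataset = [(0, (0, 0), 1.0, 1), (0, (1, 1), -1.0, 1), (1, (0, 1), 0.5, 0)]
Q = fqi_iteration(Q, dataset, joint_actions)
print(Q[(0, (0, 0))])
```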

Page 30:

Basic Assumptions

▪ Assumption 1: deterministic dynamics

▪ 𝑃(⋅ | 𝑠, 𝐚) is deterministic

▪ Assumption 2: adequate and factorizable dataset

▪ The empirical data distribution is factorizable:

▪ 𝑝_𝐷(𝐚 | 𝑠) = Π_𝑖 𝑝_𝐷(𝑎_𝑖 | 𝑠), with Σ_{𝑎_𝑖} 𝑝_𝐷(𝑎_𝑖 | 𝑠) = 1 and 𝑝_𝐷(𝑎_𝑖 | 𝑠) > 0

30

Page 31:

Fitted Q-Iteration with Linear Value Factorization (FQI-LVD)

▪ The action-value function class 𝒬_LVD:

▪ 𝒬_LVD = { 𝑄 : 𝑄_tot(⋅, 𝐚) = Σ_{𝑖=1}^𝑛 𝑄_𝑖(⋅, 𝑎_𝑖), ∀𝐚, for some {𝑄_𝑖}_{𝑖=1}^𝑛 }

▪ Given an adequate and factorizable dataset 𝐷

▪ Iterative optimization:

▪ 𝑄^{(𝑡+1)} ← argmin_{𝑄∈𝒬_LVD} Σ_{(𝑠,𝐚)} 𝑝_𝐷(𝐚 | 𝑠) [ 𝑦^{(𝑡)}(𝑠, 𝐚) − Σ_{𝑖=1}^𝑛 𝑄_𝑖(𝑠, 𝑎_𝑖) ]²

▪ where 𝑦^{(𝑡)}(𝑠, 𝐚) = 𝑟 + 𝛾 max_{𝐚′} 𝑄_tot^{(𝑡)}(𝑠′, 𝐚′)
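The projection onto 𝒬_LVD is an ordinary weighted least-squares problem over the stacked per-agent tables 𝑄_𝑖(𝑠, 𝑎_𝑖). A small numpy sketch for a single state (toy targets, uniform data distribution) is:

```python
import numpy as np
from itertools import product

n_agents, n_actions = 2, 3
actions = list(product(range(n_actions), repeat=n_agents))     # all joint actions for one state
y = np.random.randn(len(actions))                              # toy targets y^(t)(s, a)
p = np.full(len(actions), 1.0 / len(actions))                  # factorizable (uniform) data distribution

# Design matrix: Q_tot(s, a) = sum_i Q_i(s, a_i) is linear in the stacked per-agent tables.
X = np.zeros((len(actions), n_agents * n_actions))
for row, a in enumerate(actions):
    for i, a_i in enumerate(a):
        X[row, i * n_actions + a_i] = 1.0

# Weighted least squares: argmin_Q sum_a p(a|s) * (y(s,a) - sum_i Q_i(s,a_i))^2
W = np.sqrt(p)[:, None]
theta, *_ = np.linalg.lstsq(W * X, np.sqrt(p) * y, rcond=None)
Q_i = theta.reshape(n_agents, n_actions)                        # fitted per-agent utilities
print(Q_i)
```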

Page 32:

Theoretical Analysis of Linear Value Factorization

Theorem 1 (Closed-form solution of FQI-LVD).

▪ A single iteration of the empirical Bellman operator, 𝑄^{(𝑡+1)} = 𝒯_𝐷^LVD 𝑄^{(𝑡)}, yields

▪ 𝑄_𝑖^{(𝑡+1)}(𝑠, 𝑎_𝑖) = 𝔼_{𝒂_{−𝑖}′}[ 𝑦^{(𝑡)}(𝑠, 𝑎_𝑖 ⊕ 𝒂_{−𝑖}′) ] − (𝑛−1)/𝑛 · 𝔼_{𝐚′}[ 𝑦^{(𝑡)}(𝑠, 𝐚′) ] + 𝑤_𝑖(𝑠)

▪ where 𝑎_𝑖 ⊕ 𝒂_{−𝑖}′ = 𝐚′ = (𝑎_1′, … , 𝑎_{𝑖−1}′, 𝑎_𝑖, 𝑎_{𝑖+1}′, … , 𝑎_𝑛′)

▪ and 𝐰 = {𝑤_𝑖}_{𝑖=1}^𝑛 is any function satisfying Σ_{𝑖=1}^𝑛 𝑤_𝑖(𝑠) = 0

Proof sketch:

▪ The update can be regarded as a weighted linear least-squares problem with 𝑛|𝑆||𝐴| variables and |𝑆||𝐴|^𝑛 data points:

▪ 𝑄^{(𝑡+1)} ← argmin_{𝑄∈𝒬_LVD} Σ_{(𝑠,𝐚)} 𝑝_𝐷(𝐚 | 𝑠) [ 𝑦^{(𝑡)}(𝑠, 𝐚) − Σ_{𝑖=1}^𝑛 𝑄_𝑖(𝑠, 𝑎_𝑖) ]²

▪ Construct a candidate solution and verify it with the pseudo-inverse.

Page 33:

Theoretical Analysis of Linear Value Factorization

▪ Theorem (closed-form solution of FQI-LVD), restated: a single iteration of the empirical Bellman operator 𝑄^{(𝑡+1)} = 𝒯_𝐷^LVD 𝑄^{(𝑡)} gives

▪ 𝑄_𝑖^{(𝑡+1)}(𝑠, 𝑎_𝑖) = 𝔼_{𝒂_{−𝑖}′}[ 𝑦^{(𝑡)}(𝑠, 𝑎_𝑖 ⊕ 𝒂_{−𝑖}′) ] − (𝑛−1)/𝑛 · 𝔼_{𝐚′}[ 𝑦^{(𝑡)}(𝑠, 𝐚′) ] + 𝑤_𝑖(𝑠), with Σ_{𝑖=1}^𝑛 𝑤_𝑖(𝑠) = 0

▪ Note 1: the 𝑤_𝑖(𝑠) terms sum to zero, so they are eliminated when considering the joint action-value function.

▪ Note 2: the baseline term −(𝑛−1)/𝑛 · 𝔼_{𝐚′}[ 𝑦^{(𝑡)}(𝑠, 𝐚′) ] is independent of 𝑎_𝑖, so it does not change greedy action selection.

Page 34:

Theoretical Analysis of Linear Value Factorization

▪ Implicit counterfactual credit assignment mechanism of FQI-LVD

▪ A single iteration of the empirical Bellman operator 𝑄^{(𝑡+1)} = 𝒯_𝐷^LVD 𝑄^{(𝑡)}:

▪ 𝑄_𝑖^{(𝑡+1)}(𝑠, 𝑎_𝑖) = 𝔼_{𝒂_{−𝑖}′}[ 𝑦^{(𝑡)}(𝑠, 𝑎_𝑖 ⊕ 𝒂_{−𝑖}′) ] − (𝑛−1)/𝑛 · 𝔼_{𝐚′}[ 𝑦^{(𝑡)}(𝑠, 𝐚′) ]

▪ The first term is an evaluation of 𝑎_𝑖 (averaging over the other agents' actions); the second term is a counterfactual baseline.
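To see this structure numerically, the sketch below (a made-up single-state, two-agent example with a uniform dataset) evaluates the closed-form update: the first term averages the target over the other agent's actions, and the second subtracts the scaled average over all joint actions:

```python
import numpy as np

# y[a1, a2]: Bellman targets y^(t)(s, a) for one state, two agents, 3 actions each (toy values)
y = np.array([[8., 0., 1.],
              [0., 2., 1.],
              [1., 1., 2.]])
n = 2

# Closed-form FQI-LVD update under a uniform (factorizable) data distribution:
# Q_i^(t+1)(s, a_i) = E_{a'_-i}[ y(s, a_i ⊕ a'_-i) ] - (n-1)/n * E_{a'}[ y(s, a') ]
baseline = (n - 1) / n * y.mean()
Q1 = y.mean(axis=1) - baseline      # evaluation of a_1 averaged over agent 2's actions, minus baseline
Q2 = y.mean(axis=0) - baseline      # same for agent 2

Q_tot = Q1[:, None] + Q2[None, :]   # reconstructed joint values sum_i Q_i
print(Q1, Q2)
print("greedy joint action:", np.unravel_index(Q_tot.argmax(), Q_tot.shape))
```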

Page 35:

35

▪ Theorem 2. With a uniform data distribution, there exist MMDPs for which FQI-LVD diverges to infinity from any initialization.

▪ Theorem 3. With 𝜖-greedy exploration, FQI-LVD converges locally when 𝜖 is sufficiently small.

▪ Multi-agent Q-learning with a linear value decomposition structure requires on-policy samples to maintain numerical stability.

Convergence Analysis of Linear Value Factorization

Page 36:

Offline Learning on Starcraft Benchmark

36

Data collected by a behavior policy learned by QMIX

Page 37:

Outline

▪ Value-Based Methods

▪ Paradigm: Centralized Training and Decentralized Execution

▪ Basic methods: VDN, QMIX, QPLEX

▪ Theoretical analysis

▪ Extensions

▪ Policy Gradient Methods

▪ Paradigm: Centralized Critic and Decentralized Actors

▪ Method: Decomposed Off-Policy Policy Gradient (DOP)

37

Page 38:

Challenges of Value Factorization Learning

▪ Uncertainty

▪ Full value factorization ➔ miscoordination

38

[Figure: fully factorized architecture: individual 𝑄_1(𝜏_1, 𝑎_1), …, 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) combined by a mixing network into 𝑄_tot(𝝉, 𝒂).]

Page 39:

Limitations of Full Value Factorization

▪ Can cause miscoordination during execution

▪ Need communication!

Page 40:

Nearly Decomposable Q-Value Learning (NDQ)

▪ Allow communication, but keep it minimal

▪ Learn when, what, and with whom to communicate

Page 41:

NDQ Framework: Communication Optimization

[Wang et al., 2020]

[Figure: NDQ architecture. Each agent's utility 𝑄_𝑖(𝜏_𝑖, 𝑎_𝑖, 𝑚_𝑖^in) takes incoming messages as an extra input; the utilities are combined by a mixing network into 𝑄_tot(𝝉, 𝒂), trained with a TD loss.]

▪ Messages 𝑀_𝑗𝑖 (sent from agent 𝑗 to agent 𝑖) are optimized for:

▪ Expressiveness: maximize the mutual information 𝐼(𝐴_𝑖; 𝑀_𝑗𝑖) between the message and the receiver's action selection

▪ Succinctness: minimize the message entropy 𝐻(𝑀_𝑗𝑖)

Page 42:

Learning Optimal Communication Protocol

[Figure: an episode at 𝑡 = 1, 2, 3, 4; the learned protocol triggers communication only at some of the steps.]

Page 43:

Experiments: Micro-Management in Starcraft II

https://sites.google.com/view/ndq

Page 44:

SC2 benchmark: without Message Drop

44

[Figure: win-rate curves on six maps: 3b vs 1h1m, 3s vs 5z, 1o2r vs 4r, 5z vs 1ul, 1o10b vs 1r, MMM.]

Page 45:

SC2 benchmark: 80% Messages Dropped

45

[Figure: win-rate curves on the same six maps (3b vs 1h1m, 3s vs 5z, 1o2r vs 4r, 5z vs 1ul, 1o10b vs 1r, MMM) with 80% of messages dropped.]

Page 46:

Challenges of Value Factorization Learning

▪ Uncertainty

▪ Full value factorization ➔ miscoordination

▪ Complex tasks require diverse or heterogeneous agents

▪ Shared value network ➔ ineffective learning

[Figure: fully factorized architecture: individual 𝑄_1(𝜏_1, 𝑎_1), …, 𝑄_𝑛(𝜏_𝑛, 𝑎_𝑛) combined by a mixing network into 𝑄_tot(𝝉, 𝒂).]

Page 47:

Why Dynamic Shared Learning?

▪ Complex cooperative tasks require diverse behaviors among agents

▪ Learning a single shared policy network for all agents [1-4]

▪ Lacks diversity and requires a high-capacity neural network

▪ May result in slow, ineffective learning

▪ Learning independent policy networks is not efficient

▪ Some agents perform similar sub-tasks, especially in large systems

[1] Rashid, et al. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning. (ICML 2018)
[2] Vinyals, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. (Nature 2019)
[3] Baker, et al. Emergent tool use from multi-agent autocurricula. (ICLR 2020)
[4] Lowe, et al. Multi-agent actor-critic for mixed cooperative-competitive environments. (NeurIPS 2017)

Page 48:

ROMA: Multi-Agent Reinforcement Learning with Emergent Roles

[Wang et al., 2020]

▪ Agents with similar roles have similar policies and share their learning

▪ Similar roles → similar subtasks → similar behaviors

▪ An agent's role is inferred from its local observations and execution trajectories

▪ Agents' policies are conditioned on their roles

▪ An agent can change its role in different situations

Page 49:

ROMA Framework

[Wang et al., 2020]

[Figure: ROMA architecture. A role encoder maps agent 𝑖's observation 𝑜_𝑖 and trajectory 𝜏_𝑖^{𝑡−1} to a role distribution 𝜌_𝑖; a role decoder produces the parameters 𝜃_𝑖 of agent 𝑖's local utility network 𝑄_𝑖(𝜏_𝑖, ·). The local utilities feed a mixing network that outputs 𝑄_tot(𝝉, 𝒂). Training combines a TD loss (Q-learning) with a mutual-information (MI) loss and a differentiation loss for role learning; data flow and gradient flow are shown separately.]

Page 50:

The SMAC Challenge in Starcraft II

50

https://sites.google.com/view/romarl

Page 51:

Starcraft II: 27 Marines vs 30 Marines

51

Page 52:

Dynamic Roles

52

[Figure: screenshots at 𝑡 = 1, 8, 19, 27 showing agents' roles changing over an episode.]

Page 53:

Specialized Roles

53

Page 54:

[Figure: value-factorization methods (VDN, QMIX, QTRAN, QPLEX, NDQ, ROMA) placed on a spectrum from fully decentralized (less expressive, easier to learn) to fully centralized (more expressive, more difficult to learn).]

Page 55:

Outline

▪ Value-Based Methods

▪ Paradigm: Centralized Training and Decentralized Execution

▪ Basic methods: VDN, QMIX, QPLEX

▪ Theoretical analysis

▪ Extensions

▪ Policy Gradient Methods

▪ Paradigm: Centralized Critic and Decentralized Actors

▪ Method: Decomposed Off-Policy Policy Gradient (DOP)

55

Page 56:

Value-Based Methods

▪ Significantly contribute to the recent progress of MARL.

▪ VDN, QMIX, QPLEX, QTRAN, Qatten, NDQ, ROMA, …

▪ Drawbacks:

▪ Lack of stability

▪ Limited to discrete action spaces

▪ Policy gradient methods hold promise

▪ Paradigm: centralized critic with decentralized actors

56

Page 57:

Centralized Critic with Decentralized Actors

[Figure: centralized training with a centralized critic. Each actor 𝑖 maps its local observation 𝑜_𝑖^𝑡 to an action 𝑎_𝑖^𝑡; the critic observes the state 𝑠^𝑡, the joint action (𝑎_1^𝑡, 𝑎_2^𝑡), and the reward 𝑟^𝑡, and provides gradients 𝑔_1^𝑡, 𝑔_2^𝑡 to the actors.]

Page 58:

Centralized Critic with Decentralized Actors

[Figure: decentralized execution. Each actor 𝑖 selects 𝑎_𝑖^𝑡 from its local observation 𝑜_𝑖^𝑡 alone; the centralized critic is no longer needed.]

Page 59:

Centralized Critic with Decentralized Actors

▪ Single-agent

𝑔 = 𝔼𝜋 [𝑄𝜋(𝑠, 𝑎)∇𝜃 log 𝜋𝜃(𝑎|𝑠)]

▪ Multi-agent (centralized critic with decentralized actors)

𝑔 = 𝔼_𝝅[ Σ_𝑖 𝑄^𝝅(𝑠, 𝒂) ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) ]

▪ Add a baseline to reduce variance

𝑔 = 𝔼_𝝅[ Σ_𝑖 ( 𝑄^𝝅(𝑠, 𝒂) − 𝑏(𝑠, 𝒂_{−𝑖}) ) ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) ]

▪ The baseline 𝑏(𝑠, 𝒂_{−𝑖}) is independent of 𝑎_𝑖

Page 60:

Counterfactual Baseline to Assign Credit

▪ Gradient: 𝑔 = 𝔼_𝝅[ Σ_𝑖 ( 𝑄^𝝅(𝑠, 𝒂) − 𝑏(𝑠, 𝒂_{−𝑖}) ) ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) ]

▪ A problem:

▪ 𝑄^𝝅(𝑠, 𝒂) is an estimate of the global return

▪ Not tailored to a specific agent

▪ COMA: the counterfactual baseline

▪ 𝑏(𝑠, 𝒂_{−𝑖}) = Σ_{𝑎_𝑖} 𝜋_𝑖(𝑎_𝑖 | 𝑜_𝑖) 𝑄^𝝅(𝑠, (𝒂_{−𝑖}, 𝑎_𝑖))

▪ Simultaneously achieves

▪ Variance reduction

▪ Credit assignment

[Foerster et al., 2018]
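A small numerical sketch of the counterfactual baseline and the resulting advantage for one agent (toy values, not the COMA implementation):

```python
import numpy as np

# Q^pi(s, (a_-i, a_i)) for agent i's candidate actions, with the other agents' actions fixed (toy values)
q_values = np.array([1.0, 3.0, 0.5])      # one entry per action a_i
pi_i = np.array([0.2, 0.7, 0.1])          # agent i's current policy pi_i(. | o_i)
a_i = 1                                   # the action agent i actually took

# COMA counterfactual baseline: b(s, a_-i) = sum_{a_i} pi_i(a_i | o_i) * Q^pi(s, (a_-i, a_i))
baseline = float(pi_i @ q_values)

# Counterfactual advantage used in the policy gradient for agent i
advantage = q_values[a_i] - baseline
print(baseline, advantage)
```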

Page 61:

Policy-Based Methods

▪ Centralized Critic with Decentralized Actors

▪ COMA[6] (stochastic policy gradients)

▪ MADDPG[7] (deterministic policy gradients)

▪ Some extensions

▪ MAAC[8]

61

[6] Foerster, J.N., Farquhar, G., Afouras, T., Nardelli, N., and Whiteson, S. Counterfactual multi-agent policy gradients. (AAAI 2018)

[7] Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments. (NIPS 2017)

[8] Iqbal, S. and Sha, F. Actor-attention-critic for multi-agent reinforcement learning. (ICML 2019)

Page 62:

Unsatisfactory Performance

62

Page 63:

Why?

▪ On-policy learning of stochastic PG

▪ E.g., COMA

▪ Lack of a credit assignment mechanism in deterministic PG

▪ E.g., MADDPG, MAAC

▪ Centralized-Decentralized Mismatch (CDM)

▪ Centralized critics introduce the influence of other agents.

63

Page 64:

Centralized-Decentralized Mismatch (CDM)

𝑔 = 𝔼_𝝅[ Σ_𝑖 𝑄_tot^𝝅(𝝉, 𝑎_1, 𝑎_2, ⋯, 𝑎_𝑛) ∇_{𝜃_𝑖} log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) ]

▪ The joint critic introduces the influence of the other agents:

▪ Assume the optimal joint action under 𝝉 is 𝒂* = (𝑎_1*, ⋯, 𝑎_𝑛*).

▪ It is possible that 𝔼_{𝝅_{−𝑖}}[ 𝑄_tot^𝝅(𝝉, (𝒂_{−𝑖}, 𝑎_𝑖*)) ] < 0 (e.g., when 𝝅_{−𝑖} is suboptimal), leading to a decrease of 𝜋_𝑖(𝑎_𝑖* | 𝜏_𝑖).

▪ Suboptimal behaviors reinforce each other by propagating through the joint critic.

▪ This also leads to large variance.

Page 65:

Centralized-Decentralized Mismatch

▪ Example: 10 agents, 10 actions each; the team is rewarded +10 if all agents take the first action, and −10 otherwise.

[Figure: learning curves of COMA and DOP on this matrix game.]
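The matrix game is easy to reproduce; a minimal sketch of the environment (hypothetical interface, my own code) is:

```python
import numpy as np

class AllOrNothingGame:
    """One-step matrix game: 10 agents, 10 actions each.
    Team reward is +10 only if every agent picks action 0, otherwise -10."""
    def __init__(self, n_agents=10, n_actions=10):
        self.n_agents, self.n_actions = n_agents, n_actions

    def step(self, joint_action):
        reward = 10.0 if all(a == 0 for a in joint_action) else -10.0
        return reward

env = AllOrNothingGame()
print(env.step([0] * 10))                                  # +10
print(env.step(list(np.random.randint(0, 10, size=10))))   # almost surely -10
```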

Page 66:

DOP: Off-Policy Decomposed Policy Gradient

▪ Introduce a linearly decomposed critic:

▪ 𝑄_tot^𝝅(𝝉, 𝒂) = Σ_𝑖 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) + 𝑏(𝝉), where 𝑘_𝑖(𝝉) > 0

▪ Benefits:

▪ Simple policy update rules

▪ Tractable off-policy learning

▪ Convergence guarantees

▪ Addresses centralized-decentralized mismatch (CDM)

Page 67:

DOP Policy Gradient Theorem

Policy gradient theorem:

∇J(𝜃) = 𝔼_𝝅[ Σ_𝑖 ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) ]

Proof. Define the per-agent counterfactual advantage

𝑈_𝑖(𝝉, 𝑎_𝑖) = 𝑄_tot^𝝅(𝝉, 𝒂) − Σ_𝑥 𝜋_𝑖(𝑥 | 𝜏_𝑖) 𝑄_tot^𝝅(𝝉, (𝑥, 𝒂_{−𝑖}))
= Σ_𝑗 𝑘_𝑗(𝝉) 𝑄_𝑗(𝝉, 𝑎_𝑗) − Σ_𝑥 𝜋_𝑖(𝑥 | 𝜏_𝑖) [ Σ_{𝑗≠𝑖} 𝑘_𝑗(𝝉) 𝑄_𝑗(𝝉, 𝑎_𝑗) + 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑥) ]
= 𝑘_𝑖(𝝉) [ 𝑄_𝑖(𝝉, 𝑎_𝑖) − Σ_𝑥 𝜋_𝑖(𝑥 | 𝜏_𝑖) 𝑄_𝑖(𝝉, 𝑥) ]

Then

∇J(𝜃) = 𝔼_𝝅[ Σ_𝑖 ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) 𝑈_𝑖(𝝉, 𝑎_𝑖) ]
= 𝔼_𝝅[ Σ_𝑖 ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) 𝑘_𝑖(𝝉) ( 𝑄_𝑖(𝝉, 𝑎_𝑖) − Σ_𝑥 𝜋_𝑖(𝑥 | 𝜏_𝑖) 𝑄_𝑖(𝝉, 𝑥) ) ]
= 𝔼_𝝅[ Σ_𝑖 ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) ]

(the last step holds because the subtracted term is independent of 𝑎_𝑖 and 𝔼_{𝑎_𝑖∼𝜋_𝑖}[ ∇_𝜃 log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) ] = 0)
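A sampled version of this gradient is straightforward to compute with a decomposed critic. The sketch below (PyTorch, illustrative shapes and made-up inputs, not the authors' code) forms the per-agent term 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) log 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) and lets autograd differentiate the actor:

```python
import torch

def dop_policy_loss(logits, actions, k, q_i):
    """DOP-style stochastic policy-gradient loss for one batch.
    logits: (batch, n_agents, n_actions) actor outputs; actions: (batch, n_agents) sampled a_i
    k: (batch, n_agents) positive mixing coefficients k_i(tau); q_i: (batch, n_agents) critic values Q_i(tau, a_i)"""
    log_pi = torch.log_softmax(logits, dim=-1)
    log_pi_a = log_pi.gather(-1, actions.unsqueeze(-1)).squeeze(-1)   # log pi_i(a_i | tau_i)
    # grad J ≈ E[ sum_i k_i Q_i * grad log pi_i ]; critic terms are treated as constants
    loss = -(k.detach() * q_i.detach() * log_pi_a).sum(dim=-1).mean()
    return loss

# toy shapes: batch of 4, 3 agents, 5 actions
logits = torch.randn(4, 3, 5, requires_grad=True)
actions = torch.randint(0, 5, (4, 3))
loss = dop_policy_loss(logits, actions, k=torch.rand(4, 3) + 0.1, q_i=torch.randn(4, 3))
loss.backward()
```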

Page 68:

Off-Policy Learning

▪ Evaluate value functions using off-policy data:

▪ Estimate 𝑄𝑡𝑜𝑡𝝅 (𝝉, 𝒂) using data collected by a behavior policy 𝛽.

▪ Popular techniques in single-agent settings:

▪ Importance sampling:

▪ Requires computing the product of ratios Π_𝑖 [ 𝜋_𝑖(𝑎_𝑖 | 𝜏_𝑖) / 𝛽_𝑖(𝑎_𝑖 | 𝜏_𝑖) ] in multi-agent settings.

▪ The variance grows exponentially with the number of agents.

68

Page 69:

Off-Policy Evaluation with Linear Decomposition

▪ Evaluate value functions using off-policy data:

▪ Estimate 𝑄𝑡𝑜𝑡𝝅 (𝝉, 𝒂) using data collected by a behavior policy 𝛽.

▪ Popular technique in single-agent settings:

▪ Tree backup (Sutton & Barto, 2018 edition, Section 7.5):

▪ Requires computing 𝔼_𝝅[ 𝑄_tot^𝝅(𝝉, ·) ] in multi-agent settings

▪ A naive computation sums over every joint action, so the complexity is exponential

▪ Fortunately, with a linearly decomposed critic this expectation can be computed in linear time:

𝔼_𝝅[ 𝑄_tot^𝝅(𝝉, ·) ] = Σ_𝑖 𝑘_𝑖(𝝉) 𝔼_{𝜋_𝑖}[ 𝑄_𝑖(𝝉, ·) ] + 𝑏(𝝉)
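The linear-time identity is easy to verify numerically; the following numpy check (toy values) compares the decomposed computation with a brute-force sum over all joint actions:

```python
import numpy as np
from itertools import product

n_agents, n_actions = 3, 4
rng = np.random.default_rng(0)
k = rng.uniform(0.1, 1.0, n_agents)                     # k_i(tau) > 0
b = 0.5                                                 # b(tau)
Q = rng.normal(size=(n_agents, n_actions))              # Q_i(tau, a_i)
pi = rng.dirichlet(np.ones(n_actions), size=n_agents)   # pi_i(. | tau_i)

# Linear time: E_pi[Q_tot] = sum_i k_i * E_{pi_i}[Q_i] + b
fast = float(np.sum(k * np.sum(pi * Q, axis=1)) + b)

# Exponential time: enumerate every joint action
slow = 0.0
for a in product(range(n_actions), repeat=n_agents):
    p = np.prod([pi[i, a[i]] for i in range(n_agents)])
    q_tot = np.sum([k[i] * Q[i, a[i]] for i in range(n_agents)]) + b
    slow += p * q_tot

assert np.isclose(fast, slow)
print(fast)
```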

Page 70:

Other Properties of DOP

▪ Policy Improvement Theorem

▪ Attenuating Centralized-Decentralized Mismatch

70

Page 71:

DOP Deterministic Policy Gradient

▪ A similar theorem can be derived for the deterministic case:

∇J(𝜃) = 𝔼_{𝝉∼𝒟}[ Σ_𝑖 ∇_𝜃 𝜋_𝑖(𝜏_𝑖) ∇_{𝑎_𝑖} 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) |_{𝑎_𝑖=𝜋_𝑖(𝜏_𝑖)} ]

Proof. Drawing inspiration from the single-agent case [Silver et al., 2014]:

∇J(𝜃) = 𝔼_{𝝉∼𝒟}[ ∇_𝜃 𝑄_tot(𝝉, 𝒂) ]
= 𝔼_{𝝉∼𝒟}[ Σ_𝑖 ∇_𝜃 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) |_{𝑎_𝑖=𝜋_𝑖(𝜏_𝑖)} ]
= 𝔼_{𝝉∼𝒟}[ Σ_𝑖 ∇_𝜃 𝜋_𝑖(𝜏_𝑖) ∇_{𝑎_𝑖} 𝑘_𝑖(𝝉) 𝑄_𝑖(𝝉, 𝑎_𝑖) |_{𝑎_𝑖=𝜋_𝑖(𝜏_𝑖)} ]

Page 72:

DOP Deterministic Policy Gradient

▪ Sufficient Representational Capability

▪ Attenuating Centralized-Decentralized Mismatch

72

Page 73:

State of the art on SC2 Benchmark

73

Page 74:

Continuous Action Spaces

74

Multi-Agent Particle Environment (MPE)

Page 75:

Summary

▪ Value-Based Methods

▪ Paradigm: centralized training with decentralized execution

▪ Methods: VDN, QMIX, QPLEX, NDQ, ROMA

▪ Policy Gradient Methods

▪ Paradigm: centralized critic and decentralized actors

▪ DOP: off-policy decomposed multi-agent policy gradient

▪ Take-away

▪ Value factorization or decomposition is very useful

▪ Dynamic shared learning + communication for complex tasks

▪ MARL plays a critical role in AI, but is still at an early stage

Page 76:

Challenges in MARL

▪ Exploration (e.g., sparse interaction)

▪ Scalability (e.g., number of agents and large action spaces)

▪ Hierarchical learning (e.g., long horizon problems)

▪ Decentralized or semi-centralized training

▪ Non-stationary environments

▪ Adversarial environments

▪ Mixed environments (e.g., social dilemma)

▪ Communication emergence

▪ Theoretical analysis

▪ … 76

Page 77:

References

▪ Sunehag, P., et al. Value-decomposition networks for cooperative multi-agent learning. (AAMAS 2018)

▪ Rashid, T., et al. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning. (ICML 2018)

▪ Wang, J., Ren, Z., Liu, T., Yu, Y., and Zhang, C. QPLEX: Duplex Dueling Multi-Agent Q-Learning. arXiv:2008.01062 (2020)

▪ Wang, J., Ren, Z., Han, B., and Zhang, C. Towards Understanding Linear Value Decomposition in Cooperative Multi-Agent Q-Learning. arXiv:2006.00587 (2020)

▪ Wang, T., Wang, J., Zheng, C., and Zhang, C. Learning Nearly Decomposable Value Functions via Communication Minimization. (ICLR 2020)

▪ Wang, T., Dong, H., Lesser, V., and Zhang, C. ROMA: Multi-Agent Reinforcement Learning with Emergent Roles. (ICML 2020)

77

Page 78:

References

▪ Son, K., Kim, D., Kang, W.J., Hostallero, D.E., and Yi, Y. QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning. arXiv:1905.05408 (2019)

▪ Yang, Y., Hao, J., Liao, B., Shao, K., Chen, G., Liu, W., and Tang, H. Qatten: A General Framework for Cooperative Multiagent Reinforcement Learning. arXiv:2002.03939 (2020)

▪ Foerster, J.N., Farquhar, G., Afouras, T., Nardelli, N., and Whiteson, S. Counterfactual multi-agent policy gradients. (AAAI 2018)

▪ Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments. (NIPS 2017)

▪ Iqbal, S. and Sha, F. Actor-attention-critic for multi-agent reinforcement learning. (ICML 2019)

▪ Wang, Y., Han, B., Wang, T., Dong, H., and Zhang, C. Off-Policy Multi-Agent Decomposed Policy Gradients. arXiv:2007.12322 (2020)

78