
Distance Optimal Target Assignment in Robotic Networks under Communication and Sensing Constraints

Jingjin Yu (CSAIL @ MIT / MechE @ BU), Soon-Jo Chung, Petros G. Voulgaris (AE @ University of Illinois)


The Stochastic Target Assignment Problem

Workspace: $Q = [0,1] \times [0,1]$

Robots: $X = \{x_1, \ldots, x_n\}$

Targets: $Y = \{y_1, \ldots, y_n\}$

Control: $\dot{x}_i = u_i$, $\|u_i\| = 1$

$\sigma$: permutation that pairs $x_i$ with $y_{\sigma(i)}$

Objective:

$$\min_{\sigma, \{u_i\}} D_n = \sum_i \int \|\dot{x}_i(t)\|\, dt$$
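Since each robot moves at unit speed along a straight line to its target, the optimization over $\sigma$ reduces to a minimum-cost perfect matching on the pairwise Euclidean distances. As a minimal illustration (not part of the slides), the optimal permutation and cost $D_n$ for a sampled instance can be computed with SciPy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def optimal_assignment_cost(n=100, seed=0):
    """Sample n robots X and n targets Y i.i.d. uniformly in [0,1]^2 and
    return the distance-optimal assignment sigma and its total cost D_n."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, 2))            # robot initial positions
    Y = rng.random((n, 2))            # target positions
    D = cdist(X, Y)                   # D[i, j] = ||x_i - y_j||
    rows, sigma = linear_sum_assignment(D)   # Hungarian method
    return sigma, D[rows, sigma].sum()

sigma, Dn = optimal_assignment_cost()
print(f"optimal D_n = {Dn:.3f}")
```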

The Stochastic Target Assignment Problem, cont.

$r_{sense}$: sensing radius; $r_{comm}$: communication radius; $G(t)$: communication graph at time $t$

Given $r_{sense}$ and $r_{comm}$, how can we guarantee distance optimality? Performance of decentralized, hierarchical strategies (algorithms)?

Related Work

Smith and Bullo, "Monotonic target assignment for robotic networks," IEEE Trans. Automat. Control, vol. 54, no. 9, pp. 2042–2057, 2009.

Treleaven, Pavone, and Frazzoli, "Asymptotically optimal algorithms for one-to-one pickup and delivery problems with applications to transportation systems," IEEE Trans. Automat. Control, vol. 59, no. 9, pp. 2261–2276, 2013.

Penrose, "The longest edge of the random minimal spanning tree," Annals of Applied Probability, vol. 7, pp. 340–361, 1997.

Penrose, Random Geometric Graphs, 2003.

Erdős and Rényi, "On a classical problem of probability theory," Publ. Math. Inst. Hung. Acad. Sci., vol. Ser. A 6, pp. 215–220, 1961.

Karaman and Frazzoli, "Sampling-based algorithms for optimal motion planning," Int. Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, 2011.

Main Result

Distance optimality guarantee:
- Necessary and sufficient condition for distance optimality (non-stochastic)
- Non-asymptotic $1 - \epsilon$ probabilistic guarantee for $0 < \epsilon < 1$:

$$n \ge \begin{cases} \left(\dfrac{\sqrt{2}}{r_{sense}}\right)^2 \log\left(\dfrac{1}{\epsilon}\left(\dfrac{\sqrt{2}}{r_{sense}}\right)^2\right), & r_{sense} < \dfrac{\sqrt{10}\, r_{comm}}{5} \\[3mm] \left(\dfrac{\sqrt{5}}{r_{comm}}\right)^2 \log\left(\dfrac{1}{\epsilon}\left(\dfrac{\sqrt{5}}{r_{comm}}\right)^2\right), & r_{sense} \ge \dfrac{\sqrt{10}\, r_{comm}}{5} \end{cases}$$

- Tight asymptotic bounds for high-probability guarantee

Performance of decentralized, hierarchical strategies:
- Upper bound on the distance cost for arbitrary robot/target distribution
- $O(1)$ asymptotic optimality guarantee under the uniform distribution

($n$: number of robots)

Distance Optimality Guarantee

Theorem (Necessary and Sufficient Conditions for Distance Optimality). Under sensing and communication constraints, distance optimality can be guaranteed if and only if at $t = 0$:
1. Every robot can communicate with every other robot,
2. Each target is observable by some robot.

[Figure: robots with communication radius $r_{comm}$ and sensing radius $r_{sense}$, illustrating the two conditions]
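Both conditions are easy to test on a given initial configuration. A small helper (my illustration, not from the slides) that checks them for sampled positions:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def optimality_conditions_hold(X, Y, r_comm, r_sense):
    """Check the two conditions at t = 0:
    (1) the communication graph over robot positions X is connected;
    (2) every target in Y is within r_sense of some robot."""
    n = len(X)
    tree = cKDTree(X)
    pairs = tree.query_pairs(r_comm, output_type="ndarray")
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    ncomp, _ = connected_components(adj, directed=False)
    dists, _ = tree.query(Y, k=1)            # nearest robot for each target
    return ncomp == 1 and bool(np.all(dists <= r_sense))

rng = np.random.default_rng(2)
X, Y = rng.random((200, 2)), rng.random((200, 2))
print(optimality_conditions_hold(X, Y, r_comm=0.25, r_sense=0.15))
```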

Non-Asymptotic Optimality Guarantee

Lemma. Given $r_{comm}$ and fixing $0 < \epsilon < 1$, $G(0)$ is connected with probability at least $1 - \epsilon$ if

$$n \ge \left(\frac{\sqrt{5}}{r_{comm}}\right)^2 \log\left(\frac{1}{\epsilon}\left(\frac{\sqrt{5}}{r_{comm}}\right)^2\right).$$

Proof sketch: partition the unit square into a $\lceil \sqrt{5}/r_{comm} \rceil \times \lceil \sqrt{5}/r_{comm} \rceil$ grid of $m$ equal cells $q_1, \ldots, q_m$, each of side at most $r_{comm}/\sqrt{5}$, so that robots in edge-adjacent cells can always communicate. Let $n_i$ be the number of robots in cell $q_i$. Under the uniform distribution,

$$P(n_i = 0) = \left(1 - \frac{1}{m}\right)^n < e^{-n/m},$$

and by the union bound

$$P\left(\bigcup_{i=1}^{m} \{n_i = 0\}\right) \le \sum_{i=1}^{m} P(n_i = 0) < m\, e^{-n/m} = \epsilon,$$

which yields the stated bound on $n$.

Theorem (Random Geometric Graphs [Penrose '97]). For $n$ uniformly distributed nodes in the unit square, let $G(0)$ be the communication graph for a given $r_{comm}$ at $t = 0$. Then for any real number $c$, as $n \to \infty$ (i.e., $r_{comm} \to 0$),

$$P\left(G \text{ is connected} \mid \pi n r_{comm}^2 - \log n \le c\right) = e^{-e^{-c}}.$$

Theorem [Xue & Kumar '04]. For $n$ uniformly distributed nodes in the unit square, the network is asymptotically connected if and only if each node has $\Theta(\log n)$ neighbors.
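A quick Monte Carlo sanity check (my addition, not from the slides): with $n$ set to the lemma's bound, the communication graph built on uniform samples should come out connected in at least a $1 - \epsilon$ fraction of trials.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def connectivity_rate(r_comm=0.2, eps=0.1, trials=200, seed=0):
    """Empirical probability that G(0) is connected when n is set to the
    lemma's bound n >= (sqrt(5)/r)^2 * log((sqrt(5)/r)^2 / eps)."""
    m = (np.sqrt(5) / r_comm) ** 2
    n = int(np.ceil(m * np.log(m / eps)))
    rng = np.random.default_rng(seed)
    connected = 0
    for _ in range(trials):
        pts = rng.random((n, 2))                      # robots uniform in [0,1]^2
        pairs = cKDTree(pts).query_pairs(r_comm, output_type="ndarray")
        adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                         shape=(n, n))
        ncomp, _ = connected_components(adj, directed=False)
        connected += (ncomp == 1)
    return n, connected / trials

n, rate = connectivity_rate()
print(f"n = {n}, empirical P(connected) = {rate:.2f}  (target >= 0.90)")
```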

Non-Asymptotic Optimality Guarantee, cont.

Theorem (Non-Asymptotic Bounds). Fixing $0 < \epsilon < 1$, all robots can communicate with each other and all targets are observable at $t = 0$ with probability at least $1 - \epsilon$ when

$$n \ge \begin{cases} \left(\dfrac{\sqrt{2}}{r_{sense}}\right)^2 \log\left(\dfrac{1}{\epsilon}\left(\dfrac{\sqrt{2}}{r_{sense}}\right)^2\right), & r_{sense} < \dfrac{\sqrt{10}\, r_{comm}}{5} \\[3mm] \left(\dfrac{\sqrt{5}}{r_{comm}}\right)^2 \log\left(\dfrac{1}{\epsilon}\left(\dfrac{\sqrt{5}}{r_{comm}}\right)^2\right), & r_{sense} \ge \dfrac{\sqrt{10}\, r_{comm}}{5} \end{cases}$$

$n = \Theta\!\left(-\dfrac{\log r_{comm}}{r_{comm}^2}\right)$ is sufficient and necessary for a high-probability asymptotic guarantee on the connectivity of $G(0)$.
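As a reading aid, a small helper (an illustration, not from the slides) that picks the regime from the $r_{sense}$ versus $\sqrt{10}\,r_{comm}/5$ comparison and returns the number of robots the bound asks for:

```python
import math

def robots_needed(r_sense: float, r_comm: float, eps: float) -> int:
    """Number of robots n sufficient for the 1 - eps guarantee of the
    non-asymptotic theorem (connectivity plus target observability)."""
    if r_sense < math.sqrt(10) * r_comm / 5:    # sensing constraint dominates
        k = (math.sqrt(2) / r_sense) ** 2
    else:                                       # communication constraint dominates
        k = (math.sqrt(5) / r_comm) ** 2
    return math.ceil(k * math.log(k / eps))

# e.g. r_sense = 0.1, r_comm = 0.3, eps = 0.05
print(robots_needed(0.1, 0.3, 0.05))
```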

An Ideal Hierarchical Strategy

Ideal: $r_{comm}$, $r_{sense}$ as large as needed.

Hierarchical: the unit square is partitioned into $m$ small squares of side $1/\sqrt{m}$ (here, $m = 4$).
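To make the idea concrete, here is a minimal two-level sketch (my illustration with hypothetical helper names, assuming the ideal setting where any leftover robot can be matched to any remaining target at the higher level): solve an optimal matching inside each of the $m$ cells first, then match the leftover robots to the leftover targets globally.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def two_level_assignment(X, Y, m_side=2):
    """Two-level hierarchical assignment on the unit square.
    Level 1: optimally match robots and targets that fall in the same cell
             (m = m_side^2 equal cells).
    Level 2: optimally match the leftover robots to the leftover targets."""
    n = len(X)
    cell_of = lambda P: np.minimum((P * m_side).astype(int), m_side - 1) @ [1, m_side]
    cx, cy = cell_of(X), cell_of(Y)
    sigma = np.full(n, -1)
    for c in range(m_side * m_side):
        R, T = np.where(cx == c)[0], np.where(cy == c)[0]
        if len(R) and len(T):
            r_idx, t_idx = linear_sum_assignment(cdist(X[R], Y[T]))
            sigma[R[r_idx]] = T[t_idx]                 # local optimal matching
    free_r = np.where(sigma == -1)[0]
    free_t = np.setdiff1d(np.arange(n), sigma[sigma >= 0])
    if len(free_r):
        r_idx, t_idx = linear_sum_assignment(cdist(X[free_r], Y[free_t]))
        sigma[free_r[r_idx]] = free_t[t_idx]           # higher-level matching of surplus
    return sigma, np.linalg.norm(X - Y[sigma], axis=1).sum()

rng = np.random.default_rng(1)
X, Y = rng.random((500, 2)), rng.random((500, 2))
print("two-level cost:", round(two_level_assignment(X, Y)[1], 2))
```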

Bounding Distance Cost at Lower Hierarchy

Theorem [Talagrand '92]. Let $X = \{x_1, \ldots, x_n\}$, $Y = \{y_1, \ldots, y_n\}$ be two sets sampled i.i.d. from the same arbitrary distribution on $[0,1]^2$. Then

$$E\left[\min_\sigma \sum_{i=1}^{n} \|x_i - y_{\sigma(i)}\|\right] \le C \sqrt{n \log n},$$

in which $C$ is a universal constant.

Within a cell $q_i$ of side $1/\sqrt{m}$, let $D_i$ be the cost of the locally optimal matching over the $n_i$ robot-target pairs matched inside it. By scaling,

$$E[D_i] \le \frac{C}{\sqrt{m}} \sqrt{n_i \log n_i}.$$

Summing over the $m$ cells and using concavity of $x \mapsto \sqrt{x \log x}$ (Jensen's inequality), with $\sum_i n_i \le n$,

$$\sum_{i=1}^{m} E[D_i] \le C\sqrt{m} \sum_{i=1}^{m} \frac{1}{m}\sqrt{n_i \log n_i} \le C\sqrt{m}\,\sqrt{\frac{\sum_i n_i}{m}\,\log\frac{\sum_i n_i}{m}} \le C\sqrt{n \log n}.$$
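A small numerical check (my addition, not from the slides) that the optimal matching cost for uniform samples indeed grows like $\sqrt{n \log n}$, the scaling the bound above predicts:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def matching_cost(n, rng):
    """Optimal matching cost for n uniform robot/target samples in [0,1]^2."""
    X, Y = rng.random((n, 2)), rng.random((n, 2))
    D = cdist(X, Y)
    r, c = linear_sum_assignment(D)
    return D[r, c].sum()

rng = np.random.default_rng(0)
for n in (100, 400, 1600):
    costs = [matching_cost(n, rng) for _ in range(5)]
    ratio = np.mean(costs) / np.sqrt(n * np.log(n))
    print(f"n={n:5d}  D_n*/sqrt(n log n) ~ {ratio:.3f}")   # roughly constant across n
```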

Bounding Distance Cost at Higher Hierarchy

For a cell $q_i$,

$$P(x_j \in q_i) = P(y_j \in q_i) = p_i,$$

$$P(x_j \in q_i,\ y_j \notin q_i) = P(x_j \notin q_i,\ y_j \in q_i) = p_i(1 - p_i).$$

Define

$$Z_j = \begin{cases} +1, & x_j \in q_i,\ y_j \notin q_i \\ -1, & x_j \notin q_i,\ y_j \in q_i \\ 0, & \text{otherwise} \end{cases} \qquad S_i = Z_1 + \cdots + Z_n,$$

so $|S_i|$ is the surplus (or deficit) of robots over targets in $q_i$ that must be re-assigned across cells at the higher level. Since $E[Z_j] = 0$ and the $Z_j$ are independent,

$$E[S_i^2] = n\, E[Z_j^2] = 2 n p_i (1 - p_i),$$

$$E[|S_i|] = E\left[\sqrt{S_i^2}\right] \le \sqrt{E[S_i^2]} \;\Rightarrow\; E[|S_i|] \le \sqrt{2 n p_i (1 - p_i)}.$$

Summing over the $m$ cells and again using concavity,

$$\sum_{i=1}^{m} E[|S_i|] = \sum_{i=1}^{m}\sqrt{2 n p_i(1-p_i)} = m\sqrt{2n}\sum_{i=1}^{m}\frac{1}{m}\sqrt{p_i(1-p_i)} \le m\sqrt{2n}\,\sqrt{\frac{\sum_i p_i}{m}\left(1-\frac{\sum_i p_i}{m}\right)} = \sqrt{2n(m-1)}.$$
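The surplus bound can also be checked numerically (my addition): for the uniform distribution with $m$ equal cells, $S_i$ is simply the robot count minus the target count in cell $q_i$, and the expected total surplus should stay below $\sqrt{2n(m-1)}$.

```python
import numpy as np

def surplus_check(n=1000, m_side=4, trials=2000, seed=0):
    """Monte Carlo check of E[sum_i |S_i|] <= sqrt(2 n (m - 1)) for uniform
    robots/targets in [0,1]^2 split into m = m_side^2 equal cells."""
    rng = np.random.default_rng(seed)
    m = m_side * m_side
    cell = lambda P: np.minimum((P * m_side).astype(int), m_side - 1) @ [1, m_side]
    total = 0.0
    for _ in range(trials):
        rc = np.bincount(cell(rng.random((n, 2))), minlength=m)   # robots per cell
        tc = np.bincount(cell(rng.random((n, 2))), minlength=m)   # targets per cell
        total += np.abs(rc - tc).sum()
    return total / trials, np.sqrt(2 * n * (m - 1))

emp, bound = surplus_check()
print(f"E[sum |S_i|] ~ {emp:.1f}  vs  bound sqrt(2n(m-1)) = {bound:.1f}")
```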

Bounds on Distance Optimality

Theorem (Performance Upper Bound of Ideal Hierarchical Strategies). Let $D_n$ be the total distance of an ideal hierarchical strategy with $h$ hierarchies and $m_i$ regions at hierarchy $i$. Then, for an arbitrary distribution on $[0,1]^2$,

$$E[D_n] \le C\sqrt{n \log n} + \sqrt{2n} \sum_{i=1}^{h-1} \sqrt{\frac{m_{i+1}}{m_i}}.$$

Theorem [Ajtai et al. '84]. Under the uniform distribution, with high probability,

$$C_1 \sqrt{n \log n} \le D_n^* \le C_2 \sqrt{n \log n}.$$

Corollary. With the uniform distribution, fixing $h$ and $\{m_i\}$, as $n \to \infty$,

$$\frac{E[D_n]}{E[D_n^*]} \to O(1).$$

[Figure: a two-level ideal hierarchical strategy; $n$ - number of robots]

Incorporating Arbitrary $r_{comm}$ and $r_{sense}$

[Figures: a two-level decentralized hierarchical strategy compared with a two-level ideal hierarchical strategy; $n$ - number of robots]

Arbitrary $r_{sense}$ can also be handled similarly.

Summary of Contribution

Guarantee on the distance optimality of the stochastic target assignment problem:
- Necessary and sufficient condition for optimality
- Non-asymptotic probabilistic bounds
- Asymptotically tight bounds for high-probability guarantee

Performance of decentralized hierarchical strategies:
- General upper bounds for arbitrary distributions
- $O(1)$ approximation algorithm for the uniform distribution

Important takeaway: locally optimal behavior leads to near globally optimal behavior.
