

Comparing High Scale Multi-source Algorithms

Kathleen McGill Stephen Taylor

Thayer School of Engineering Dartmouth College Hanover, NH 03755

[email protected] Ph: (402) 290-9349 Fax: (603) 646 -9795

I. OBJECTIVE

The development of robotic systems to replace humans in sensory applications is often motivated by applications that involve significant risk to humans. The Department of Defense recognizes applications in reconnaissance and surveillance, target identification, counter-mine warfare, and the location of chemical, biological, radiological, nuclear, and explosive materials [1]. A common theme in many of these applications is the need to sample the gradient of some physical or chemical property in order to locate its sources. The use of robotic systems to locate multiple gradient sources simultaneously has received little attention in the literature. In this problem, a large number of robots, equipped with sensors and inter-robot communication, collectively search for all sources in minimal time. Some partial solutions to the general problem have evolved, but they have not been integrated in a cohesive manner.

This paper proposes a common set of high scale validation benchmarks and a reference algorithm that provide ground-truth for comparative analysis of multi-source robot localization algorithms. The benchmarks capture the primary first-order attributes of the general problem: source characterization and distribution, initial robot distributions, and dead space (with no perceivable gradient). The Biased Random Walk (BRW) reference algorithm [2] represents a simple approach without robot communication. The Glowworm Swarm Optimization (GSO) algorithm [3] and a new GSO/BRW hybrid algorithm were evaluated in an attempt to improve upon the baseline BRW performance. This analytical framework enables direct comparisons of different algorithms to weigh the merits for robotic systems in military, medical, and service applications.

II. BENCHMARK CASES

Figure 1 shows three benchmark cases with the same configuration of 100 sources and alternative initial robot distributions. The characterization and distribution of sources in the field provide sources that are occluded by other sources of lesser, greater, or equal intensity. Extensive dead space, with no perceivable gradient, is included in half of the search space to determine its impact on search performance. The initial robot distributions are the uniform, drop, and line distributions, devised to represent robotic swarm deployment strategies. The uniform distribution provides total coverage of the search space and corresponds to deployment by some scattering process by land or air. The drop distribution represents swarm deployment by dropping robots at a single location by land or air. The line distribution corresponds to releasing robots one at a time along one side of the search space, for example from a truck driving along a road or an aircraft following a path.

Fig. 1 Three benchmark cases: (a) uniform initial distribution, (b) drop initial distribution, and (c) line initial distribution.

Each algorithm is tested by simulating 1000 searches with a swarm of 1000 robots. The performance metrics of the benchmarks are the average number of sources found in total and the time to converge on an average of 75% and 95% of sources.
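The convergence metrics above can be computed directly from per-simulation traces. The following is a minimal sketch, with a hypothetical toy trace in place of real simulation data: `found[i][t]` is assumed to hold the cumulative number of sources located by time step `t` in simulation `i`.

```python
import numpy as np

def convergence_time(avg_found, total_sources, fraction):
    """Return the first time step at which the average number of
    sources found reaches the given fraction of all sources,
    or None if the threshold is never reached."""
    threshold = fraction * total_sources
    hits = np.nonzero(avg_found >= threshold)[0]
    return int(hits[0]) if hits.size else None

# Hypothetical example: 3 toy runs over 10 time steps, 4 sources.
found = np.array([
    [0, 1, 1, 2, 3, 3, 4, 4, 4, 4],
    [0, 0, 1, 2, 2, 3, 3, 4, 4, 4],
    [0, 1, 2, 2, 3, 4, 4, 4, 4, 4],
])
avg = found.mean(axis=0)              # average sources found per step
t75 = convergence_time(avg, 4, 0.75)  # first step with >= 3.0 on average
t95 = convergence_time(avg, 4, 0.95)  # first step with >= 3.8 on average
```

In the actual benchmarks the same computation would run over 1000 simulations of 1000 robots against 100 sources.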

III. REFERENCE ALGORITHM

The BRW algorithm [2] is implemented with a step length of ten units and a 10% bias. Figure 2 shows the average number of sources found at each time step on each benchmark. On the uniform benchmark, BRW achieves 75% and 95% convergence. BRW does not achieve 75% convergence on the drop or line benchmarks.
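One plausible reading of the BRW parameters is sketched below: each move covers the fixed step length, and the 10% bias is taken as the probability of heading up the locally sensed gradient rather than in a uniformly random direction. This is an illustrative interpretation, not the exact update rule of [2].

```python
import math
import random

STEP_LENGTH = 10.0   # units per move, as in Section III
BIAS = 0.10          # 10% bias toward the gradient direction

def brw_step(x, y, gradient_direction, rng=random):
    """One biased-random-walk move: with probability BIAS head
    straight up the sensed gradient (if one is perceivable),
    otherwise pick a uniformly random heading."""
    if gradient_direction is not None and rng.random() < BIAS:
        theta = gradient_direction               # exploit: follow gradient
    else:
        theta = rng.uniform(0.0, 2 * math.pi)    # explore: random heading
    return (x + STEP_LENGTH * math.cos(theta),
            y + STEP_LENGTH * math.sin(theta))
```

Because the heading is random in 90% of steps, a robot in dead space (where `gradient_direction` is `None`) still diffuses through the field, which is why BRW locates some sources on every benchmark.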

IV. SENSITIVITY ANALYSIS

Sensitivity analysis was conducted to determine the minimum number of simulations that yield reliable performance metrics. Figure 3 shows the standard deviation and average number of sources found for increasing numbers of simulations on each benchmark. Neither the average nor the standard deviation change significantly after the first 1000 simulations, so 1000 simulations were sufficient for our performance comparisons.
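This kind of stability check can be sketched as a running mean and standard deviation over the first n runs for increasing n. The per-run totals below are synthetic stand-ins (drawn from a normal distribution), not data from the actual benchmarks.

```python
import numpy as np

def metric_stability(per_run_totals, checkpoints):
    """Running mean and sample standard deviation of the total
    sources found over the first n simulations, for each n."""
    totals = np.asarray(per_run_totals, dtype=float)
    return [(n, totals[:n].mean(), totals[:n].std(ddof=1))
            for n in checkpoints]

# Synthetic per-run totals for 5000 simulated searches (illustrative only).
rng = np.random.default_rng(0)
runs = rng.normal(loc=99.95, scale=0.45, size=5000)
for n, mean, std in metric_stability(runs, [1000, 2000, 3000, 4000, 5000]):
    print(f"n={n}: mean={mean:.3f}, std={std:.3f}")
```

When both statistics stop changing materially between checkpoints, as the paper observed after 1000 simulations, additional runs add cost without changing the metrics.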

V. COMPARATIVE ANALYSIS

The GSO algorithm was reproduced with a communication range of 125 units [3]. Figure 4 shows the average number of sources found at each time step on each benchmark. On the uniform benchmark, GSO achieves 75% convergence faster than BRW but does not achieve 95% convergence. The GSO algorithm locates zero sources on the drop and line benchmarks. One reason for this failure is that GSO agents do not search when the swarm is deployed in the dead space of the gradient field.

We explored a hybrid GSO/BRW algorithm to mitigate the effect of dead space on GSO performance. Figure 5 shows the average number of sources found at each time step for each benchmark. The hybrid algorithm achieves 75% convergence faster than BRW on the uniform benchmark but does not achieve 95% convergence. The hybrid algorithm does not achieve 75% convergence on the drop or line benchmarks.
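The hybrid's mode switch can be illustrated with a hypothetical per-agent decision rule: run GSO only when the agent has usable information (a perceivable gradient, or a luciferin-bearing neighbor within communication range), and fall back to BRW otherwise. The function name and rule below are assumptions for illustration; the paper does not specify the exact switching criterion.

```python
COMM_RANGE = 125.0   # GSO local-decision range used in Section V

def choose_mode(sensed_intensity, neighbor_luciferin, noise_floor=0.0):
    """Hypothetical hybrid GSO/BRW rule: exploit with GSO when there
    is a perceivable gradient or a signaling neighbor in range;
    otherwise explore with BRW so agents in dead space still search."""
    if sensed_intensity > noise_floor:
        return "GSO"                       # local gradient to exploit
    if any(l > 0.0 for l in neighbor_luciferin):
        return "GSO"                       # a neighbor signals a source
    return "BRW"                           # dead space: random search
```

Under a rule like this, agents dropped entirely in dead space behave like BRW until they reach the gradient field, which is consistent with the hybrid recovering some (but not all) of BRW's coverage on the drop and line benchmarks.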

Fig. 2 Average sources found v. time step for BRW on the uniform, drop, and line initial distribution benchmark cases.

Fig. 3 Average sources found v. time step and standard deviation of sources found v. time step for the (a) uniform, (b) drop, and (c) line initial distribution benchmarks after 1000, 2000, 3000, 4000, and 5000 simulations.


Table I presents the performance metrics of all benchmark experiments. The table includes the 75% and 95% convergence times and the total average number of sources found. The dashes indicate that the algorithm did not converge.

TABLE I
SUMMARY OF ALGORITHM PERFORMANCE

Benchmark  Algorithm  75% Convergence  95% Convergence  Average Sources
                      (time steps)     (time steps)     Found (total)
Uniform    BRW        930              2500             99.950 +/- 0.014
           GSO        210              -                85.922 +/- 0.110
           Hybrid     180              -                91.674 +/- 0.104
Drop       BRW        -                -                39.758 +/- 0.158
           GSO        -                -                 0.0   +/- 0.0
           Hybrid     -                -                 9.344 +/- 0.092
Line       BRW        -                -                33.750 +/- 0.132
           GSO        -                -                 0.0   +/- 0.0
           Hybrid     -                -                14.148 +/- 0.085

Fig. 4 Average sources found v. time step for BRW and GSO on the (a) uniform, (b) drop, and (c) line initial distribution benchmark cases.

Fig. 5 Average sources found v. time step for BRW and Hybrid on the (a) uniform, (b) drop, and (c) line initial distribution benchmark cases.


VI. CONCLUSIONS

All three algorithms failed on the drop and line benchmarks in which the initial robot distribution did not cover the search area. Also, the dead space in the field disabled the GSO algorithm. These results emphasize the importance of including alternative initial robot distributions and dead space in testing multi-source algorithms.

The performance of each algorithm on the high scale benchmarks was worse than published results with fewer robots and sources [2-3]. Therefore, it is necessary to test high scale performance for applications with potentially large numbers of sources.

REFERENCES

[1] DoD, "Office of the Secretary of Defense Unmanned Systems Roadmap (2007-2032)," 2007.

[2] A. Dhariwal, G. S. Sukhatme, and A. A. G. Requicha, "Bacterium-inspired robots for environmental monitoring," in Proceedings of the 2004 International Conference on Robotics & Automation, New Orleans, Louisiana, April 2004, pp. 1436-1443.

[3] K. N. Krishnanand and D. Ghose, "Theoretical foundations for multiple rendezvous of glowworm-inspired mobile agents with variable local-decision domains," in Proceedings of the 2006 American Control Conference, Minneapolis, Minnesota, June 2006, pp. 3588-3593.