
1

Generalized Data Naming and Scalable State Announcement for SRM

Suchitra Raman, Steven McCanne
Computer Science Division

University of California, Berkeley
Berkeley, CA 94720-1776

© 1997

Reliable Multicast Research Group
Cannes, France

Sep. 1997

2

The Problem

• SRM is “receiver reliable”

• What is receiver’s vocabulary for “naming data”?

• Traditionally: sequence numbers...

3

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space

[Figure: sender S multicasts packets 1, 2, 3 toward receiver R]

4

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space

[Figure: receiver R has packets 1 and 3; the gap shows that packet 2 was lost]

5

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space

[Figure: receiver R detects the gap and multicasts a NACK for packet 2]

6

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space

[Figure: NACKs for packet 2 propagate through the group and packet 2 is retransmitted]

7

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space
o Tail losses detected by session packets

[Figure: sender S multicasts packets 1, 2, 3 toward receiver R]

8

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space
o Tail losses detected by session packets

[Figure: receiver R has packets 1 and 2; the loss of packet 3 at the tail leaves no gap]

9

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space
o Tail losses detected by session packets

[Figure: session packets advertising highest sequence number 3 expose the tail loss; receivers multicast NACKs for packet 3]

10

SRM Naming

• SRM machinery implicitly assumes a simple sequenced packet stream

o Losses detected by gaps in sequence space
o Tail losses detected by session packets (loss detection sketched below)

[Figure: packet 3 is retransmitted to repair the tail loss]
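To make the two detection paths above concrete, here is a minimal Python sketch (illustrative names, not SRM's actual implementation) of a receiver that finds losses from gaps in the data stream and from the highest sequence number advertised in session packets; the real protocol schedules randomized, suppressed NACKs rather than requesting repairs immediately.

```python
class SeqSpaceReceiver:
    """Loss detection over a simple sequenced packet stream (sketch)."""

    def __init__(self):
        self.received = set()   # data sequence numbers seen so far
        self.highest_known = 0  # highest seqno learned from data or session packets

    def on_data(self, seqno):
        """Data packet arrived: any hole below the highest seqno is a loss."""
        self.received.add(seqno)
        self.highest_known = max(self.highest_known, seqno)
        return self.missing()

    def on_session(self, advertised_highest):
        """Session packet advertises the sender's highest seqno, exposing
        tail losses that never create a gap in the received data."""
        self.highest_known = max(self.highest_known, advertised_highest)
        return self.missing()

    def missing(self):
        """Seqnos that would be NACKed (SRM randomizes/suppresses requests)."""
        return [s for s in range(1, self.highest_known + 1)
                if s not in self.received]


r = SeqSpaceReceiver()
r.on_data(1); r.on_data(3)            # gap: packet 2 was lost
assert r.missing() == [2]
r.on_session(advertised_highest=4)    # tail loss: packet 4 never arrived
assert r.missing() == [2, 4]
```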

11

SRM Naming (cont’d)

• Simple seq# vocabulary works well for TCP
o protocol is entirely independent of the application name space
o consistent shared communication state is readily available

• Does not work for heterogeneous multicast
o late-joining hosts
o receiver-tailored retransmissions
o diverse requirements across the app space

12

Example: wb

• wb can grow any page, in any order, at any time
o receiver-directed retransmission/browsing requires naming individual pages and “DrawOps” within a page

Here, a 2D seq space works... (sketched below)

[Figure: 2D sequence space with axes Page# and DrawOp#]
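A hypothetical sketch of the 2D (Page#, DrawOp#) vocabulary (not wb's actual code): gap detection works within a page exactly as in the 1D case, but a page the receiver has never heard of produces no gaps at all, which is the hole that state announcement must fill.

```python
from collections import defaultdict

class TwoDReceiver:
    """Per-page gap detection in a (Page#, DrawOp#) name space (sketch)."""

    def __init__(self):
        self.ops = defaultdict(set)      # page -> DrawOp#s received
        self.highest = defaultdict(int)  # page -> highest DrawOp# known

    def on_drawop(self, page, op):
        self.ops[page].add(op)
        self.highest[page] = max(self.highest[page], op)

    def missing(self, page):
        """DrawOps to request for one page; unknown pages yield nothing."""
        return [o for o in range(1, self.highest[page] + 1)
                if o not in self.ops[page]]


r = TwoDReceiver()
r.on_drawop(page=2, op=1)
r.on_drawop(page=2, op=3)
assert r.missing(page=2) == [2]  # hole within page 2 is detected
assert r.missing(page=5) == []   # page 5 is invisible without announcements
```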

13

Session Packets

• What goes in a session packet?

• For simple seq space, it’s obvious: highest seqno sent

• For 2D seq space, it's not so obvious
o All offsets of every page? Scaling problems (see the sketch below)
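To spell out the contrast hinted at above (illustrative field names, not an actual packet format): the 1D session report is a single integer, while the naive 2D report enumerates every page and therefore grows linearly with the number of pages.

```python
def session_report_1d(highest_seqno):
    # Simple sequenced stream: one integer summarizes the sender's state.
    return {"highest": highest_seqno}

def session_report_2d_naive(page_offsets):
    # page_offsets: Page# -> highest DrawOp# sent on that page.
    # Report size is O(number of pages) -- this is the scaling problem.
    return {"offsets": dict(page_offsets)}

print(session_report_1d(42))
print(session_report_2d_naive({1: 17, 2: 5, 3: 112}))
```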

14

wb’s Solution

• Ad hoc!
o List “important” pages in session packets
o Use SRM repair/request recursively to learn about existing pages

• Can we generalize this? (without sacrificing the benefits of ALF)

15

New Solution: SNAP

“SRM Naming and Announcement Protocol”

• Flexible, generalized naming scheme
o Supports multidimensional seq spaces
o Preserves Application Level Framing (ALF)

• Scalable State Announcement
o Enables the naming scheme to scale to very large sessions

16

Generalized Naming

View data space as a tree:

o A node can be split at any time
o Each leaf is a “data container”

17

Generalized Naming

View data space as a tree:

o Application toolkit can map this structure into an arbitrary ADU naming scheme (e.g., file systems, URLs)
o A node can be split at any time
o Each leaf is a “data container”

18

Tree Representation

Represent tree with a “trie” data structure:

[Figure: trie with Root and interior nodes N1, N2, N3]

1D sequence space within each node/container (see the sketch below)
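A sketch of the trie representation under assumed names (this is not the SNAP source): interior nodes can be split at any time by adding children, childless nodes act as data containers, every node carries its own 1D sequence space, and a node's path doubles as an application-level name such as a filesystem path or URL component.

```python
class NameNode:
    """One node of the name-space trie (sketch, assumed field names)."""

    def __init__(self, label, node_id):
        self.label = label       # one component of the hierarchical name
        self.node_id = node_id   # source-specific unique ID (see slide 19)
        self.children = {}       # label -> NameNode; empty => data container
        self.highest_offset = 0  # 1D sequence space within this node

    def split(self, label, node_id):
        """Split this node at any time by hanging a new subtree off it."""
        child = NameNode(label, node_id)
        self.children[label] = child
        return child

    def is_container(self):
        return not self.children

    def name(self, prefix=""):
        """Map the trie position to an ADU name (e.g., a path or URL)."""
        return prefix + "/" + self.label if prefix else self.label


root = NameNode("wb-session", node_id=0)
page1 = root.split("page1", node_id=1)
print(page1.name(root.name()))   # -> "wb-session/page1"
```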

19

Disseminating the Tree

• Name space tree can be arbitrarily large...

• How do receivers learn this structure?
o Assign a unique ID to each node (source-specific)
o Assign a unique ID to each container (source-specific)
o Use the same ID space in both cases

20

Disseminating the Tree (cont’d)

• Uniform ID concept allows us to re-use SRM
o distribute container data via SRM
o distribute node data via SRM (see the sketch after this slide)

• But a large, rich tree can be expensive to disseminate (especially for late-joiners)

• What do we put in the session message?
o Every offset of every node and container? Doesn't scale...
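One way to picture the uniform ID idea from the first bullet (an illustrative assumption, not SNAP's wire format): container data and name-space node updates are both ordinary SRM ADUs keyed by the same (source, ID, offset) triple, so SRM's request/repair machinery handles either kind without distinguishing them.

```python
from collections import namedtuple

# A single key type names both kinds of ADU; the shared ID space means
# SRM request/repair logic never needs to know which kind it is carrying.
# (Hypothetical field names, not the actual packet layout.)
AduKey = namedtuple("AduKey", ["source", "ident", "offset"])

def key_for_container_data(source, container_id, offset):
    return AduKey(source, container_id, offset)

def key_for_node_update(source, node_id, offset):
    return AduKey(source, node_id, offset)  # same ID space, same key type

print(key_for_container_data("sender-A", 7, 3))
print(key_for_node_update("sender-A", 2, 1))
```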

21

Scalable State Announcement

• Instead, use “signatures” to summarize state
o Summarize each subtree root in the name-space tree with a signature
o List some number of signatures in each session packet (announcement sketched below)
o Upon reception of a session packet: if a signature mismatches, use SRM recursively to repair the name-space tree (if we care)
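A hedged sketch of the announcement side (names are illustrative): each session packet carries the signatures of a bounded number of subtree roots, and a receiver flags any announced root whose signature disagrees with its own; the real protocol would then schedule SRM-style suppressed repair requests rather than acting immediately.

```python
import random

def build_session_packet(root_sigs, budget=4):
    """root_sigs: list of (node_id, signature) for announced subtree roots.
    Announce at most `budget` of them per packet, so announcement cost
    stays bounded no matter how large the name-space tree grows."""
    return random.sample(root_sigs, min(budget, len(root_sigs)))

def on_session_packet(local_sigs, packet):
    """local_sigs: node_id -> locally computed signature.
    Return the announced subtrees whose state disagrees with the sender's."""
    return [nid for nid, sig in packet if local_sigs.get(nid) != sig]


pkt = build_session_packet([(0, "a1"), (3, "b2"), (7, "c3")], budget=2)
print(on_session_packet({0: "a1", 3: "XX", 7: "c3"}, pkt))
```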

22

Signatures

• Let s(N) be the signature of node N
o e.g., MD5 over the node's offset and each child's signature (sketched below)

[Figure: name-space tree with a signature s(N) at every node]
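Reusing the NameNode sketch from the trie example above, a minimal rendering of the recursion the slide describes; the exact bytes SNAP feeds into MD5 are an assumption here, only the structure (node offset plus children's signatures) follows the slide.

```python
import hashlib

def signature(node):
    """s(N): hash of N's own offset and each child's signature, so any
    change anywhere in the subtree changes the subtree root's signature."""
    h = hashlib.md5()
    h.update(str(node.highest_offset).encode())
    for label in sorted(node.children):           # deterministic child order
        h.update(signature(node.children[label]))
    return h.digest()


root = NameNode("wb-session", node_id=0)
root.split("page1", node_id=1).highest_offset = 17
print(signature(root).hex())
```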

23

Example

[Figure: the receiver's name-space tree with per-node state]

24

Example

New session packet...

[Figure: a new session packet announces the sender's root state alongside the receiver's local tree]

25

Example

[Figure: the announced state disagrees with the receiver's local state]

Which sub-tree is causing the mismatch?

26

Example

[Figure: the receiver detects the mismatch at the root of its tree]

Which sub-tree is causing the mismatch?

Mismatch triggers namespace repair

SRM-like mechanism suppresses sync'd control traffic

27

Example

[Figure: namespace repair narrows the mismatch down the tree]

Which sub-tree is causing the mismatch?

Mismatch triggers namespace repair

SRM-like mechanism suppresses sync'd control traffic

Namespace repair packet (but looks like any other session packet)

28

Example

[Figure: repair continues recursively within the mismatching sub-tree; the localization logic is sketched below]

Which sub-tree is causing the mismatch?

Mismatch triggers namespace repair

SRM-like mechanism suppresses sync'd control traffic

Namespace repair packet (but looks like any other session packet)
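Finally, a sketch of the recursive narrowing the example frames animate, reusing the NameNode and signature sketches above. The remote signatures are modeled as a lookup function (in the protocol they arrive in namespace repair packets, which look like ordinary session packets); SRM-style timers and suppression are omitted.

```python
def find_stale_subtrees(local, remote_sig_of):
    """local: a NameNode; remote_sig_of: node_id -> sender's signature.
    Return the deepest subtrees whose state differs from the sender's."""
    if signature(local) == remote_sig_of(local.node_id):
        return []                    # whole subtree already up to date
    if local.is_container():
        return [local]               # the container's own data differs
    stale = []
    for child in local.children.values():
        stale.extend(find_stale_subtrees(child, remote_sig_of))
    # If no child differs, the mismatch is in this node's own offset.
    return stale or [local]
```

Each step needs only the signatures of nodes already known to mismatch, so the repair traffic tracks where the tree actually changed rather than its total size.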


30

Status

• Implemented in ns

• Simulated evaluation of “learning time”

o time from a state change until the receiver learns of the change
o Still refining, but preliminary numbers look “reasonable”
o Scalability “as good as SRM” (so any scaling innovations transfer immediately)

31

Status (cont’d)

• Implementing SNAP in our SRM toolkit
o Leveraging a variant of RMFP

32

Summary

• Two important pieces of SNAP:
o Generalized naming scheme using a tree-structured name space
o Scalable announcement protocol

• A solution to data naming is critical to the success of a framework-based approach for reliable multicast