Flow Stats Module -- Control
John DeHart and James Moscola (Original FastPath Design), August 2008

TRANSCRIPT

Page 1: Flow Stats Module -- Control

John DeHart and James Moscola (Original FastPath Design)
August 2008

Page 2: Flow Stats Module -- Control

2 - Flow Stats Module – John DeHart and James Moscola

SPP V1 LC Egress with 1x10Gb/s Tx

[Block diagram: packet path through the LC egress pipeline -- Rx1/Rx2 (MSF/RBUF), Key Extract, Lookup (TCAM), Hdr Format, Port Splitter, and QM0-QM3 scratch rings feed FlowStats1 and FlowStats2 (linked by an NN ring, with SRAM channels 1-3, the Stats ME, the archive-records SRAM ring to the XScale, and the collision-table freelist SRAM ring), then 1x10G Tx1/Tx2 (MSF/TBUF) out to the switch. XScale paths handle the NAT miss scratch ring and NAT packet return.]

Page 3: Flow Stats Module -- Control

SPP V1 LC Egress with 10x1Gb/s Tx

[Block diagram: same LC egress pipeline as the previous slide, with the 1x10G transmit blocks replaced by 5x1G Tx1 (P0-P4) and 5x1G Tx2 (P5-P9).]

Page 4: Flow Stats Module -- Control

Overview of Flow Stats

Main functions
» Uniquely identify flows based on a 6-tuple
  Hash header values to get an index into a table of records
» Maintain packet and byte counts for each flow
  Compare the packet header with the header values in the record, and increment the counters if they match
  Otherwise, follow the hash chain until the correct record is found
» Send flow information to the XScale for archiving every five minutes

Secondary functions
» Maintain the hash table
  Identify and remove flows that are no longer active; invalid flows are removed so memory can be reused

Page 5: Flow Stats Module -- Control

Design Considerations

Efficiently maintaining a hash table with chained collisions
» Efficiently inserting and deleting records
» Efficiently reading hash table records

Synchronization issues
» Multiple threads modifying the hash table and chains

Page 6: Flow Stats Module -- Control

Flow Record (total record size = 8 32-bit words, LW0-LW7)

Fields (members of the 6-tuple marked *):
  Source Address (32b)*       Destination Address (32b)*
  SrcPort (16b)*              DestPort (16b)*
  Protocol (8b)*              Slice ID / VLAN (12b)*
  TCP Flags (6b)              V -- valid bit (1b)
  Start Timestamp (16b)       End Timestamp (16b)
  Packet Counter (32b)        Byte Counter (32b)
  Next Record Number (17b)    Reserved (6b + 14b)

» V is the valid bit: only needed at the head of a chain; '1' for a valid record, '0' for an invalid record
» Start timestamp (16 bits) is set when the record starts counting the flow; reset to zero when the record is archived
» End timestamp (16 bits) is set each time a packet is seen for the given flow
» Packet and Byte counters are incremented for each packet on the given flow; reset to zero when the record is archived
» Next Record Number is the next record in the hash chain; 0x1FFFF if the record is the tail
  Address of next record = (next_record_num * record_size) + collision_table_base_addr

Page 7: Flow Stats Module -- Control

Timestamp Details

The timestamp on the XScale is 64 bits. Storing 64-bit start and end timestamps would make each flow record too large for a single SRAM read. Instead, store only the 16 bits of each timestamp needed to represent a five-minute interval:
» Clock frequency = 1.4 GHz
» Timestamp increments every 16 clock cycles
» Use bits 41:26 for the 16-bit timestamps
  (2^26 * 16 cycles) / 1.4 GHz = 0.767 seconds
  (2^41 * 16 cycles) / 1.4 GHz = 25131.69 seconds (~418 minutes)
» Time interval that can be represented using these bits: 0.767-second resolution over a range of up to ~418 minutes
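The arithmetic above can be checked with a small sketch (values from the slide: 1.4 GHz clock, a timestamp tick every 16 cycles, bits 41:26 of the 64-bit counter kept; the function name is illustrative):

```python
CLOCK_HZ = 1.4e9
CYCLES_PER_TICK = 16

def compress_timestamp(ticks64):
    """Keep bits 41:26 of the 64-bit timestamp counter (16 bits)."""
    return (ticks64 >> 26) & 0xFFFF

# One unit of the compressed timestamp (bit 26 of the counter):
resolution_s = (2**26 * CYCLES_PER_TICK) / CLOCK_HZ    # ~0.767 s
# Value of the top kept bit (bit 41), as quoted on the slide:
slide_range_s = (2**41 * CYCLES_PER_TICK) / CLOCK_HZ   # ~25131.7 s (~418 min)
```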

Page 8: Flow Stats Module -- Control

Hash Table Memory

Allocating 4 MBytes in SRAM Channel 3 for the hash table
» Supports ~130K records
» Memory is divided 75% for the main table and 25% for the collision table
» Memory required = Main_table_size + Collision_table_size
    = 0.75 * (#records * #bytes/record) + 0.25 * (#records * #bytes/record)
    = ~98K records + ~32K records
    = ~3 MBytes + ~1 MByte

The split between the main table and the collision table can be adjusted to tune performance
» A larger main table means fewer collisions, but adequate space is still needed for the collision table

  Main Table      ~75%
  Collision Table ~25%
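The sizing above works out as follows (a sketch; note the memory-map slide later in the deck caps TOTAL_NUM_RECORDS at 130688 rather than the full 131072 that fit in 4 MB):

```python
RECORD_SIZE = 8 * 4            # 8 32-bit words = 32 bytes per record
TABLE_BYTES = 4 * 1024 * 1024  # 4 MB in SRAM channel 3

total_records = TABLE_BYTES // RECORD_SIZE        # 131072 records fit (~130K)
main_records = (total_records * 3) // 4           # ~75% -> 98304 (~98K)
collision_records = total_records - main_records  # ~25% -> 32768 (~32K)
main_bytes = main_records * RECORD_SIZE           # ~3 MB for the main table
```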

Page 9: Flow Stats Module -- Control

Inserting Into the Hash Table

The IXP has 3 different hash functions (48-bit, 64-bit, 128-bit)
» The 64-bit hash function is sufficient and takes less time than the 128-bit hash function
» Source Address and Protocol are not included in the hash input: HASH(D.Addr, S.Port, D.Port)

The result of the hash is used to address the main hash table
» Since we want ~100K records in the main table, the hash result is folded to get as close to 100K entries as possible by adding a 16-bit and a 15-bit chunk of the result:
    hash_result(15:0) + hash_result(30:16) = record_number
» Records in the main table represent the head of a chain
» If the slot at the head of the chain is empty (valid_bit = 0), store the record there
» If the slot at the head of the chain is occupied, compare the 6-tuple
  If the 6-tuple matches:
    If packet_count == 0 (an existing flow has a 0 packet_count when its previous packets have just been archived):
      - Increment the packet_counter for the record
      - Add the size of the current packet to the byte_counter
      - Set the start and end timestamps
    If packet_count > 0:
      - Increment the packet_counter for the record
      - Add the size of the current packet to the byte_counter
      - Set the end timestamp
  If the 6-tuple doesn't match, a collision has occurred and the record needs to be stored in the collision table
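The index folding and the counter-update rules can be sketched as below (dict-based stand-ins for SRAM records; the fold's maximum value, 0xFFFF + 0x7FFF = 98302, matches the ~98K main-table slots):

```python
def fold_hash(hash_result):
    """Add bits 15:0 and 30:16 of the hash result to get a record number."""
    return (hash_result & 0xFFFF) + ((hash_result >> 16) & 0x7FFF)

def update_record(record, pkt_len, now16):
    """Apply a packet to a matching flow record."""
    if record["packet_count"] == 0:
        # Flow was just archived: restart the start timestamp too.
        record["start_ts"] = now16
    record["packet_count"] += 1
    record["byte_count"] += pkt_len
    record["end_ts"] = now16
```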

Page 10: Flow Stats Module -- Control

Hash Collisions

Hash collisions are chained in a linked list
» The head of the list is in the main table
» The remainder of the list is in the collision table

An SRAM ring maintains the list of free slots in the collision table
» Slots are numbered from 0 to #_Collision_Table_Slots, the same numbering as next_record_number
» To convert a slot number to a memory address:
    (slot_num * record_size) + collision_table_base_addr
» When a collision occurs, a pointer to an open slot in the collision table is retrieved from the SRAM ring
» When a record is removed from the collision table, a pointer to the invalidated slot is returned to the SRAM ring
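The slot-to-address conversion above is trivial but worth pinning down with the constants from the memory-map slide later in the deck:

```python
RECORD_SIZE = 32                   # 8 32-bit words per record
COLLISION_TABLE_BASE = 0xC0500000  # LCE_FS_COLLISION_TABLE_BASE
HASH_CHAIN_TAIL = 0x1FFFF          # next_record_number value for a tail record

def slot_to_addr(slot_num):
    """Convert a collision-table slot number to its SRAM address."""
    return COLLISION_TABLE_BASE + slot_num * RECORD_SIZE
```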

Page 11: Flow Stats Module -- Control

Archiving Hash Table Records

Send all valid records in the hash table to the XScale for archiving every 5 minutes.
For each record in the main table (i.e. each start of chain) ...
» For each record in the hash chain ...
  If the record is valid ...
    If packet count > 0:
      - Send the record to the XScale via the SRAM ring
      - Set the packet count to 0
      - Set the byte count to 0
      - Leave the record in the table
    If packet count == 0:
      - The flow has already been archived, and no packet has arrived on it in 5 minutes
      - The record is no longer valid
      - Delete the record from the hash table to free memory

Info sent to the XScale for each flow every 5 minutes (10 32-bit words, LW0-LW9):
  Source Address (32b)        Destination Address (32b)
  SrcPort (16b)               DestPort (16b)
  Protocol (8b)               Slice ID / VLAN (12b)
  TCP Flags (6b)              Reserved (6b)
  Start Timestamp_high (32b)  Start Timestamp_low (32b)
  End Timestamp_high (32b)    End Timestamp_low (32b)
  Packet Counter (32b)        Byte Counter (32b)
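The archive pass above can be sketched over a dict-of-lists stand-in for the chained table (one list per main-table chain; records here are assumed valid, and `send_to_xscale` stands in for the SRAM ring):

```python
def archive_pass(chains, send_to_xscale):
    """Archive live flows; drop flows idle since the previous archive."""
    for head, chain in chains.items():
        keep = []
        for rec in chain:
            if rec["packet_count"] > 0:
                send_to_xscale(dict(rec))  # archive a live flow
                rec["packet_count"] = 0    # counters restart for the
                rec["byte_count"] = 0      # next 5-minute interval
                keep.append(rec)
            # packet_count == 0: archived last cycle and idle since,
            # so the record is dropped to free its memory
        chains[head] = keep
```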

Page 12: Flow Stats Module -- Control

Deleting Records from the Hash Table

While archiving records:
» If the packet count is zero, remove the record from the hash table
  The record has already been archived, and no packets have arrived in the last five minutes

To remove a record:
» If ((record == head) && (record == tail))
    Valid_bit = 0
» Else if ((record == head) && (record != tail))
    Replace the record with record.next
    Free the slot for the moved record
» Else if (record != head)
    Set the previous record's next pointer to record.next
    Free the slot for the deleted record
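The three removal cases can be sketched over a singly linked chain of record dicts. The head slot lives at a fixed main-table address, so "replace with record.next" means copying the successor's contents into the head slot and freeing the successor's collision slot; `free` stands in for returning a slot to the SRAM ring:

```python
def remove(head, record, prev, free):
    """Remove `record` from a chain rooted at the fixed main-table slot."""
    if record is head and record["next"] is None:
        record["valid"] = False      # head == tail: just clear the valid bit
        return head
    if record is head:
        successor = record["next"]   # head != tail: successor's contents
        head.update(successor)       # move into the main-table slot...
        free(successor)              # ...and its collision slot is freed
        return head
    prev["next"] = record["next"]    # interior or tail: unlink and free
    free(record)
    return head
```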

Page 13: Flow Stats Module -- Control

Memory Synchronization Issues

Multiple threads read and write the same blocks of memory.

Only one ME is allowed to modify the structure of the hash table
» i.e. inserting and deleting nodes

Global registers indicate that the structure of the hash table is being modified
» Eight global lock registers (1 per thread) indicate which chain in the hash table is being modified
» When a thread wants to insert/delete a record from the hash table:
  Store a pointer to the head of the hash chain in the thread's dedicated global lock register
  If another thread is processing a packet that hashed to the same hash chain, wait for the lock register to clear and restart processing the packet
  Otherwise, continue processing the packet normally
  Clear the global lock register when done with inserts/deletes
  A value of 0xFFFFFFFF indicates that the lock is clear
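The per-thread chain-lock convention can be sketched as follows (eight registers, one per thread; a register holds the head address of the chain its thread is modifying, or the clear value when idle; function names are illustrative):

```python
LOCK_CLEAR = 0xFFFFFFFF
lock_regs = [LOCK_CLEAR] * 8  # one global lock register per thread

def chain_is_locked(chain_head, my_thread):
    """True if any *other* thread holds a lock on this chain."""
    return any(reg == chain_head
               for i, reg in enumerate(lock_regs) if i != my_thread)

def take_lock(chain_head, my_thread):
    lock_regs[my_thread] = chain_head

def clear_lock(my_thread):
    lock_regs[my_thread] = LOCK_CLEAR
```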

Page 14: Flow Stats Module -- Control

Flow Stats Execution

ME 1
» Init: configure the hash function
» 8 threads:
  Read packet header
  Hash packet header
  Send header and hash result to ME2 for processing

ME 2 (thread numbers may need adjusting)
» Init: load the SRAM ring with addresses for each slot in the collision table; set TIMESTAMP to 0
» 7 threads (ctx 1-7): insert records into the hash table; increment counters for records
» 1 thread (ctx 0): archive and delete hash table records

Page 15: Flow Stats Module -- Control

Diagram of Flow Stats Execution (ME1)

[Flow diagram: get buffer handle from QM → read buffer descriptor (SRAM) → read packet header (DRAM) → build hash key → compute hash → send packet info to ME2 → send buffer handle to TX. Per-stage estimates on the slide range from ~50 to 300 cycles (SRAM reads 150 cycles, DRAM read 300 cycles), with ~570 cycles marked as an overall figure.]

Page 16: Flow Stats Module -- Control

Diagram of Flow Stats Execution (ME2): Incrementing Counters
» Adds records to the hash chain, but doesn't remove them

[Flow diagram: get packet info from ME1 (60 cycles) → set register to lock chain → read hash table record (SRAM, 150 cycles) → compare record to header (~10 cycles). If the record is invalid, insert a new record. If it matches, write START/END time and new counts when count == 0, or just END time and new counts otherwise (150 cycles), then clear the lock register. If it doesn't match and isn't the tail, read the next record in the chain (150 cycles) and repeat; at the tail, get a record slot from the freelist (150 cycles) and insert the new record, then clear the lock register. Best case: ~360 cycles. Worst case: ~520 + 160x cycles, where x is the number of chain records traversed.]

Page 17: Flow Stats Module -- Control

Diagram of Flow Stats Execution (ME2): Archiving Records
» Removes records from the hash chain, but doesn't add them
» Archiving runs every five minutes

[Flow diagram: wait until 5 minutes have elapsed (read current time) → read the next record from the main table → if valid, set the register to lock the chain. For each record in the chain: if count > 0, send the record to the XScale, reset counters and timestamps, and clear the lock register; if count == 0, delete the record -- set the valid bit to zero if it is both head and tail, replace the record with record.next and return record.next's slot to the freelist if it is the head of a longer chain, or write next_ptr to the previous list item and return the record's slot to the freelist otherwise. Continue with the next record in the chain, then the next main-table entry, until all records are done.]

Page 18: Flow Stats Module -- Control

Return from Swap

When returning from each CTX switch, always check the global lock registers:
» If any of the global locks contain the address of the hash chain that the current thread is trying to modify, the hash chain is locked and the current thread must restart processing the current packet
» If none of the global locks contain that address, the current thread can continue processing the packet as usual

[Flow diagram: check global lock values → if a lock matches the current chain, restart processing the packet; otherwise continue processing the packet.]

Page 19: Flow Stats Module -- Control

SPP V1 LC Egress with 1x10Gb/s Tx

[Block diagram detail: QM0-QM3 → FlowStats1 → (NN ring) → FlowStats2 → 1x10G Tx1, with SRAM3, the archive-records SRAM ring to the XScale, and the collision-table freelist SRAM ring.]

Message from FlowStats1 to FlowStats2 (fields as listed on the slide):
  Buffer Handle (24b), Rsv (3b), Port (4b), V (1b; V: valid bit)
  Source Address (32b), Destination Address (32b)
  SrcPort (16b), DestPort (16b)
  Protocol (8b), Slice ID / VLAN (12b), TCP Flags (6b)
  Hash Result (17b), Rsv (2b), Rsvd (3b)
  Packet Length (16b)

Archive record sent to the XScale (as on the archiving slide):
  Source Address (32b), Destination Address (32b)
  SrcPort (16b), DestPort (16b)
  Protocol (8b), Slice ID / VLAN (12b), TCP Flags (6b), Reserved (6b)
  Start Timestamp_high/low (32b each), End Timestamp_high/low (32b each)
  Packet Counter (32b), Byte Counter (32b)

Page 20: Flow Stats Module -- Control

Flow Statistics Module

Scratch rings
» QM_TO_FS_RING_1: 0x2400 - 0x27FF  // for receiving from QM
» QM_TO_FS_RING_2: 0x2800 - 0x2BFF  // for receiving from QM
» FS1_TO_FS2_RING: 0x2C00 - 0x2FFF  // for sending data from FS1 to FS2
» FS_TO_TX_RING_1: 0x3000 - 0x33FF  // for sending data to TX1
» FS_TO_TX_RING_2: 0x3400 - 0x37FF  // for sending data to TX2

SRAM rings
» FS2_FREELIST:  0x???? - 0x????  // stores list of open slots in collision table
» FS2_TO_XSCALE: 0x???? - 0x????  // for sending record information to the XScale for archiving

LC Egress SRAM Channel 3 info for Flow Stats
» HASH_CHAIN_TAIL              0x1FFFF  // indicates the end of a hash chain
» ARCHIVE_DELAY                0x0188   // 5 minutes
» RECORD_SIZE                  8 * 4 = 32  // 8 32-bit words/record * 4 bytes/word
» TOTAL_NUM_RECORDS            130688  // MAX with 4 MB table is ~130K records
» NUM_HASH_TABLE_RECORDS       98304   // NUM_HASH_TABLE_RECORDS <= TOTAL_NUM_RECORDS (mod 32 = 0)
» NUM_COLLISION_TABLE_RECORDS  TOTAL_NUM_RECORDS - NUM_HASH_TABLE_RECORDS = 32384
» LCE_FS_HASH_TABLE_BASE       SRAM_CHANNEL_3_BASE_ADDR + 0x200000 = 0xC0200000
» LCE_FS_HASH_TABLE_SIZE       0x400000
» LCE_FS_COLLISION_TABLE_BASE  (HASH_TABLE_BASE + (RECORD_SIZE * NUM_HASH_TABLE_RECORDS)) = 0xC0500000
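The derived constants above check out, as the sketch below shows (SRAM_CHANNEL_3_BASE_ADDR of 0xC0000000 is an assumption implied by the slide's 0xC0200000 arithmetic; ARCHIVE_DELAY is 5 minutes expressed in ~0.767 s timestamp units):

```python
RECORD_SIZE = 8 * 4
TOTAL_NUM_RECORDS = 130688
NUM_HASH_TABLE_RECORDS = 98304
NUM_COLLISION_TABLE_RECORDS = TOTAL_NUM_RECORDS - NUM_HASH_TABLE_RECORDS

SRAM_CHANNEL_3_BASE_ADDR = 0xC0000000  # assumed; implied by the slide
LCE_FS_HASH_TABLE_BASE = SRAM_CHANNEL_3_BASE_ADDR + 0x200000
LCE_FS_COLLISION_TABLE_BASE = (LCE_FS_HASH_TABLE_BASE
                               + RECORD_SIZE * NUM_HASH_TABLE_RECORDS)

TICK_SECONDS = (2**26 * 16) / 1.4e9        # ~0.767 s timestamp unit
archive_delay_units = 300 / TICK_SECONDS   # ~391; slide rounds to 0x188 (392)
```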

Page 21: Flow Stats Module -- Control

Overview of Flow Stats

2 MEs in the FastPath collect flow data for each packet
» Byte counter per flow
» Packet counter per flow
» Archive data to the XScale via an SRAM ring every 5 minutes

XScale control daemon(s) process the data
» Receive flow information from the MEs
» Reformat it into PlanetFlow format
» Maintain databases for PlanetLab archiving and for identifying internal flows (pre-NAT translation) when an external flow (post-NAT) has a complaint lodged against it

Page 22: Flow Stats Module -- Control

SPP V1 LC Egress with 10x1Gb/s Tx

[Block diagram: repeat of the 10x1Gb/s LC egress diagram from page 3.]

Page 23: Flow Stats Module -- Control

Flow Record (total record size = 8 32-bit words; same layout and fields as on page 6)

» V is the valid bit: only needed at the head of a chain; '1' for a valid record, '0' for an invalid record
» Start timestamp (16 bits) is set when the record starts counting the flow; reset to zero when the record is archived
» End timestamp (16 bits) is set each time a packet is seen for the given flow
» Packet and Byte counters are incremented for each packet on the given flow; reset to zero when the record is archived
» For TCP flows, the TCP Flags are OR'ed in from each packet
» Next Record Number is the next record in the hash chain; 0x1FFFF if the record is the tail
  Address of next record = (next_record_num * record_size) + collision_table_base_addr

Page 24: Flow Stats Module -- Control

Archiving Hash Table Records (repeat of page 11)

Send all valid records in the hash table to the XScale for archiving every 5 minutes: records with packet count > 0 are sent via the SRAM ring and have their counters reset; records with packet count == 0 have already been archived, have been idle for 5 minutes, and are deleted to free memory. The 10-word record format sent to the XScale is as shown on page 11.

Page 25: Flow Stats Module -- Control

Overview of Flow Stats Control

Main functions
» Collection of flow information for the PlanetLab node
  Used when a complaint is lodged about a misbehaving flow
  Must be able to identify the flow and the Slice that produced it
» Aggregation of flow information from multiple GPEs and multiple NPEs
» Correlation with NAT records to identify the internal flow and the external flow
  The external flow is what a complaint will be about
  The internal flow is what the involved PlanetLab researcher will know about

Page 26: Flow Stats Module -- Control

Overview of PlanetFlow

PlanetFlow
» Runs as an unprivileged slice
» Flow Collector: ulogd (fprobe-ulog)
  Netlink socket
  Uses VSys for privileged operations
  Every 5 minutes, dumps its cache to the DB
» DB:
  On the PlanetLab node
  5-minute records; flows spanning 5-minute intervals are aggregated daily

Central Archive
» At Princeton?
» Updated periodically by using rsync to retrieve new DB entries from ALL PlanetLab nodes

Page 27: Flow Stats Module -- Control

SPP PlanetFlow

[Block diagram: on the SPP, the MEs (HF, LK, FS2) feed flow records over the Flow Stats SRAM ring to SCD on the egress XScale; NAT scratch rings connect the ingress XScale. On the CP, NATd supplies NAT records and FSd collects flow records into the Ext PF DB; each GPE keeps its own PF DB, merged by a db accumulator; the Central Archive pulls the Ext PF DB via rsync.]

Central Archive Record = <time, sliceID, Proto, SrcIP, SrcPort, DstIP, DstPort, PktCnt, ByteCnt>
Ext PF DB Record = <Central Archive Record>

Page 28: Flow Stats Module -- Control

[Repeat of the SPP PlanetFlow diagram from page 27.]

Page 29: Flow Stats Module -- Control

Translations Needed

NPE Flow Records:
» VLAN to SliceID
  Comes from the SRM
» IXP timestamp to wall-clock time
  SCD records the wall-clock time at which it started the IXP
  How do we manage time slip between the clocks?

GPE Flow Records:
» NAT port translations
  Src Port from the GPE record becomes SPP Orig Src Port
  Src Port from the natd translation record becomes Src Port
  natd provides port translation updates

Page 30: Flow Stats Module -- Control

SPP PlanetFlow Databases

[Diagram: GPE PF DB, flow records, and NAT records flow through the CP into the Ext PF DB and on to the Central Archive.]

Flow record (same shape in each DB):
  <time, sliceID, proto, srcIP, srcPort, dstIP, dstPort, pktCnt, byteCnt>
NAT record:
  <time, proto, srcIP, intSrcPort, xlatedSrcPort>

Page 31: Flow Stats Module -- Control

Merging of DBs

NPE Flows
» No NAT
» Go directly into the Ext PF DB
  SPP Orig Src Port == SrcPort
» Do they need SliceID translation?
  We use the VLAN, but this probably needs to be the PlanetLab version of a Slice ID
  The SRM will provide a VLAN to SliceID translation -- where and when?

GPE Configured Flows
» No NAT
» Go directly into the Ext PF DB
  SPP Orig Src Port == SrcPort

GPE NAT Flows
» Find the corresponding NAT record and extract the translated SrcPort
  Insert the record into the Ext PF DB with the original SrcPort moved to SPP Orig Src Port
  Set Src Port to the translated SrcPort

CP Traffic?

Page 32: Flow Stats Module -- Control

[Repeat of the PlanetFlow overview from page 26, with two elements marked 'X' (crossed out).]

Page 33: Flow Stats Module -- Control

PlanetFlow Raw Data

  0005 0011 8e10638b 48a40477 00062638
  0000371d 0000 0000 80fc99cd 80fc99d3
  00000000 0000 0004 0000000b 0000062d
  8dae5570 8dae558b cc1f 01bb 001f 0600
  0000 0000 02000000 80fc99cd 80fc99d3
  00000000 0000 0004 0000001a 000008b7
  8dae54eb 8dae5533 cc1e 01bb 001e 0600
  0000 0000 02000000

[Annotated dump of a NetFlow v5 file: a NetFlow Header (at the beginning of the file, repeating every 30 flow records) with Version, Count, Uptime, Unix Secs, Unix nSecs, Flow Sequence, Engine Type (unused), Engine Id (unused), and Pad16 (unused); followed by two NetFlow Flow Records, each with SA, DA, IPv4 NextHop (unused), In SNMP and Out SNMP (if_nametoindex), Pkt Count, Byte Count, First Switched (flow creation time), Last Switched (time of last pkt), Src Port, Dst Port, Pad, Tcpflags, Proto, Src Tos, Src As (unused), Dst As (unused), and XID (SliceID). In this example both records are 128.252.153.205 → 128.252.153.211 on dst port 443 (src ports 52254 and 52255), with byte/packet counts of 1581/11 and 2231/26.]

Page 34: Flow Stats Module -- Control

SPP/PlanetFlow Raw Data

  0005 0011 8e10638b 48a40477 00062638
  0000371d xx yy 0000 80fc99cd 80fc99d3
  00000000 0000 0004 0000000b 0000062d
  8dae5570 8dae558b cc1f 01bb 001f 0600
  zzzz 0000 02000000 80fc99cd 80fc99d3
  00000000 0000 0004 0000001a 000008b7
  8dae54eb 8dae5533 cc1e 01bb 001e 0600
  zzzz 0000 02000000

[Same annotated NetFlow v5 dump as the previous slide, with the SPP-specific changes marked: the header's Engine Type and Engine Id become SPP Engine Type and SPP Engine Id (xx, yy), and each flow record's Src As field is repurposed as SPP Orig Src Port (zzzz). The header Uptime is in msecs, and First/Last Switched are in msecs.]

Page 35: Flow Stats Module -- Control

Issues and Notes

Time:
» Keeping time in sync among the various machines:
  The Flow Stats ME timestamps with IXP clock ticks; something has to convert this to a Unix time
  The GPE(s) timestamp with Unix gettimeofday()
  The CP collects flow records and aggregates based on time
  Proposal:
    The XScale, GPE(s) and CP will use ntp to keep their Unix times in sync
    At the beginning of each reporting cycle, the Flow Stats ME sends a timestamp record just to allow the XScale and CP to keep time in sync
    OR: the XScale can read the IXP clock tick and report it to the CP along with the XScale's Unix time
» What times are recorded in the Header and Flow Records?
  Header
    Uptime (msecs): msecs since a base start time
    Time since the Unix Epoch (January 1, 1970): Unix secs and Unix nSecs
    Uptime and Unix (secs, nSecs) represent the SAME time, so that the Flow times can be calculated from them
  Flow Record
    First Switched (flow creation time): msecs since the base start time
    Last Switched (time the last packet in the flow was seen): msecs since the base start time
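The proposed conversion can be sketched as a linear mapping through a reference pair (an ME tick count and the Unix time it was observed at, reported once per cycle). The function name is illustrative, not from the design:

```python
CLOCK_HZ = 1.4e9       # IXP clock frequency
CYCLES_PER_TICK = 16   # the timestamp counter ticks every 16 cycles

def ticks_to_unix(ticks, ticks_ref, unix_ref):
    """Map an IXP tick count to Unix seconds via a known reference point."""
    return unix_ref + (ticks - ticks_ref) * CYCLES_PER_TICK / CLOCK_HZ
```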

Page 36: Flow Stats Module -- Control

Issues and Notes (continued)

NetFlow Header
» Filled in AFTER 30 flow records are filled in, OR on a timeout (10 minutes)
» The COUNT field tells how many flow records are valid
  The file or data packet is ALWAYS padded out to a size that would hold 30 flow records
» Flow Sequence: running total of the number of flow records emitted

Flow Header and Flow Records
» Emitted in chunks of 30 flow records plus a Flow Header
  Emitted either by writing to a file or by sending over a socket to a mirror site
  Padded out to a size that would hold 30 flow records
» A flow is emitted when it has been inactive for at least a minute or when it has been active for at least 5 minutes

fprobe-ulog threads:
» emit_thread
» scan_thread
» cap_thread
» unpending_thread

Flow lists
» flows[]: hashed array of flows, with buckets chained off the head of the list
  These are flows that have been reported over the netlink socket
» flows_emit: linked list of flows ready to be emitted

Page 37: Flow Stats Module -- Control

Issues and Notes (continued)

VLANs and SliceIDs
» The NPE and LC use VLANs to differentiate Slices
» Flow records must record slice IDs
  The SRM will provide the VLAN to SliceID translation
» The GPE(s) do not differentiate Slices by VLAN
  All flows from a GPE will use the same VLAN
  The GPE keeps flow records locally using the Slice ID
  The Flow Stats ME could ignore GPE flow packets if it were told the default GPE VLAN; otherwise, one of the fs daemons could drop the flow records for the GPE flows that the Flow Stats ME reports

Slice ID:
» What exactly is it?
» Is the XID recorded by PlanetFlow actually the slice id, or is it the VServer id?

Page 38: Flow Stats Module -- Control

Issues and Notes (continued)

NAT Port Translations
» GPE flow records are the ones that need the NAT port translation data
» GPE flow records will come across from the GPE(s) to the CP via rsync or similar
» natd will report NAT port translations with timestamps to the fs daemon
» The fs daemon will have to maintain the NAT port translations (with their timestamps) for possible later correlation with GPE flow records

The GPE(s) will all use the same default VLAN
» The SRM will send this VLAN to scd so it can write it to SRAM for the fs ME to read
  The fs ME will then filter out GPE flow records

SRM/fsd messaging
» srm will push out VLAN-SliceID translation creation and deletion messages
  srm will wait ~10 minutes before re-using a VLAN, sending the delete-VLAN message after the wait
  fsd should not have to keep any history of VLAN/SliceID translations:
    it should get the creation before it receives any flow records for it
    it should get the last flow record before it gets the deletion
» fsd will also be able to query the SRM for the current translation
  This facilitates a restart of fsd while the SRM maintains current state

Page 39: Flow Stats Module -- Control

Issues and Notes (continued)

rsync of flow record files from the GPE(s) to the CP
» A particular run of rsync may get a file that is still being written to by fprobe-ulog on the GPE
  A subsequent rsync may get the file again with additional records in it
» Sample rsync command:
    rsync --timeout 15 -avzu -e "ssh -i /vservers/plc1/etc/planetlab/root_ssh_key.rsa " root@drn02:/vservers/pl_netflow/pf /root/pf
  This will report the files that have been copied over

Page 40: Flow Stats Module -- Control

Issues and Notes (continued)

Sample fprobe-ulog command:
» /sbin/fprobe-ulog -M -e 3600 -d 3600 -E 60 -T 168 -f pf2 -q 1000 -s 30 -D 250000
» Started from /etc/rc.d/rc[2345].d/S56fprobe-ulog
  All linked to /etc/init.d/fprobe-ulog

GPE flow record collection daemon: fprobe-ulog
» Scan thread: collects flow records into a linked list
» Emit thread: periodically writes flow records out to a file
  Every 600 seconds -- ten minutes!
» The daemon can also send flow records to a remote collector!
  So we could have the GPEs emit their flow records directly to the flow stats daemon on the CP
  Sample command:
    /sbin/fprobe-ulog -M -e 3600 -d 3600 -E 60 -T 168 -f pf2 -q 1000 -s 30 -D 250000 <remote>:<port>[/[<local][/<type]] ...
  There can be multiple remote host specifications, where:
    remote: remote host to send to
    port: destination port to send to
    local: local hostname to use
    type: m for mirror-site, r for rotate-site
    (send to all mirror-sites, rotate through rotate-sites)

Page 41: Flow Stats Module -- Control

SPP PlanetFlow

[Block diagram: updated version of the page 27 picture with daemon names -- the MEs (HF, LK, FS2) feed scd on the egress XScale via the Flow Stats SRAM ring; NAT scratch rings connect the ingress XScale; on the CP, natd and fsd feed the Ext PF DB; fprobe on each GPE and srm also report to fsd; the Central Archive pulls via rsync.]

Central Archive Record = <time, sliceID, Proto, SrcIP, SrcPort, DstIP, DstPort, PktCnt, ByteCnt>
Ext PF DB Record = <Central Archive Record>

Page 42: Flow Stats Module -- Control

Plan/Design

Flow Stats daemon, fsd, runs on the CP
» Collects flow records from the GPE(s) and NPE(s) and writes them into a series of PlanetFlow2 files named pf2.#, where # is (0-162)
  The current file is closed after N minutes, # is incremented, and a new file is opened and started
  This mimics what fprobe-ulog does now on the GPE(s)
  These files are then collected periodically by PLC for use and archiving
  I don't think there is any explicit indication that PLC has picked up the files, but the timing must be such that we know it is done before we roll over the file names and overwrite an old file
» Gets NAT data from natd
  Keep records of this with timestamps so we can correlate with flow records coming from the GPE(s)
  Check with Mart on how this will work
» Gets VLAN to sliceID data from srm
  srm will send start-translation and stop-translation msgs, with a 10-minute wait period when stopping a translation to make sure we are done with flow records for that slice
  The FS ME archives records every 5 minutes, and slices are long lived (right?), so this should not be a problem
  fsd can also request a translation from srm, in case fsd has to be restarted while srm and the other daemons continue running

Page 43: Flow Stats Module -- Control


Plan/Design (continued)

fsd gathers records from the GPE(s) and NPE(s)

»Gathers flow records from the GPE(s) via socket(s) from fprobe-ulog on the GPE(s)
  Records come across as one data packet with up to 30 flow records. The packet is padded out to a full 30 flow records, with the Count field in the header indicating how many of them are valid
  Update the NetFlow header to indicate that this is an SPP and which SPP node it is, using the Engine Type and Engine ID fields
  Update with NAT data and write immediately out to the current pf2 file, keeping its NetFlow header

»Gathers flow records from the NPE(s) via a socket from scd on the XScale
  Records come across one flow record at a time, with no NetFlow header
  Create a NetFlow header with the appropriate Uptime and UnixTime (secs, nsecs), and with the SPP Engine Type and SPP Engine ID
  Modify flow record times to be msecs correlated with Uptime
  Update the NPE flow record with the SliceID from srm
  Collect NPE records for a period of time, or until we get 30, and then write them out to the current pf2 file with a NetFlow header
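Building the header fsd creates for NPE records could look like this sketch (standard NetFlow v5 header layout; the SPP engine-type/engine-id values passed in are placeholders, since the source does not specify them):

```python
import struct

def netflow_v5_header(count, uptime_ms, unix_secs, unix_nsecs,
                      flow_seq, engine_type, engine_id):
    """Pack a 24-byte NetFlow v5 export header in network byte order.
    count: number of flow records that follow (1..30)."""
    return struct.pack("!HHIIIIBBH",
                       5,            # version
                       count,        # valid records in this packet
                       uptime_ms,    # ms since device boot
                       unix_secs,    # current time, seconds
                       unix_nsecs,   # residual nanoseconds
                       flow_seq,     # total flows seen before this packet
                       engine_type,  # SPP-specific value (assumption)
                       engine_id,    # SPP node id (assumption)
                       0)            # sampling interval (unused)
```

The per-record times would then be expressed in msecs relative to `uptime_ms`, matching the "correlated with Uptime" step above.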

Page 44: Flow Stats Module -- Control


Plan/Design (continued)

FS ME and scd

»Use a command field in records coming across from the FS ME to scd

»Use one command to set the current time
  When the FS ME starts an archive cycle, it first sends a timestamp command
  When scd gets this timestamp command, it associates it with a gettimeofday() time and sends the FS ME time and the gettimeofday() time to fsd on the CP, so fsd can associate ME times with Unix times

»Use another command to indicate flow records
  Flow records can be sent directly on to fsd on the CP
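The ME-time/Unix-time correlation this enables could be sketched as follows (hypothetical helper; it assumes FS ME timestamps are in milliseconds, consistent with the msec record times above):

```python
def me_to_unix(me_time_ms, cal_me_ms, cal_unix_secs, cal_unix_usecs):
    """Convert an FS ME millisecond timestamp to Unix time (float seconds)
    using one calibration pair captured by scd: the ME time from the
    timestamp command (cal_me_ms) and the matching gettimeofday() reading."""
    cal_unix = cal_unix_secs + cal_unix_usecs / 1e6
    return cal_unix + (me_time_ms - cal_me_ms) / 1000.0
```

One calibration pair per archive cycle is enough as long as the ME clock does not drift significantly within the cycle.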

Page 45: Flow Stats Module -- Control


End

Page 46: Flow Stats Module -- Control


OLD STUFF

Page 47: Flow Stats Module -- Control


PlanetFlow Raw Data

0500 0b00 8385 1bd2 a148 31d4 0f00 f84d
0000 8134 0000 0000 fc80 cd99 bb42 04e0
0000 0000 0000 0400 0000 0500 0000 7c01
2e85 eeb2 6d85 d636 7b00 7b00 0000 0011
0000 0000 0002 0000 fc80 cd99 fc80 d399
0000 0000 0000 0400 0000 1a00 0000 b708
3785 9d52 3785 e352 b1b2 bb01 1e00 0006
0000 0000 0002 0000

Each 16 bits has its bytes swapped.

NetFlow Header (at the beginning of the file, and repeated every 30 flow records):
Version, Count, Uptime, Unix Secs, Unix nSecs, Flow Sequence, Eng. Type (unused), Engine Id (unused), Pad16 (unused)

NetFlow Flow Record (the dump above contains a header followed by two flow records):
SA, DA, IPv4 NextHop (unused), In SNMP (if_nametoindex), Out SNMP (if_nametoindex), Pkt Count, Byte Count, First Switched (flow creation time), Last Switched (time of last pkt), Src Port, Dst Port, Pad, Tcpflags, Proto, Src Tos, Src As (unused), Dst As (unused), XID (SliceID)
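The per-16-bit byte swap noted above can be undone with a short sketch; applied to the first word of the dump, 0500 becomes 0x0005, i.e. NetFlow version 5:

```python
def swap16(data):
    """Swap the two bytes within each 16-bit word (data length must be even)."""
    out = bytearray(len(data))
    out[0::2] = data[1::2]  # odd bytes move to even positions
    out[1::2] = data[0::2]  # even bytes move to odd positions
    return bytes(out)
```

For example, `swap16(b"\x05\x00\x0b\x00")` yields `b"\x00\x05\x00\x0b"`, i.e. the big-endian words 0x0005 (version) and 0x000b (count) from the first line of the dump.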

Page 48: Flow Stats Module -- Control


SPP PlanetFlow Databases

[Diagram: NAT records and flow records flow from the PF DB on the GPE through the Int PF DB and Ext PF DB on the CP to the Central Archive.]

GPE PF DB, Ext PF DB, and Central Archive records: <time, sliceID, proto, srcIP, srcPort, dstIP, dstPort, pktCnt, byteCnt>
Int PF DB record: <time, sliceID, proto, srcIP, srcPort, dstIP, dstPort, pktCnt, byteCnt, PE ID, intSrcPort>
NAT record: <time, proto, srcIP, intSrcPort, xlatedSrcPort>
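The record layouts above can be written down as simple tuples (a sketch; field names follow the slide, with the Int PF DB record extending the base record):

```python
from collections import namedtuple

# Base record shared by the GPE PF DB, Ext PF DB, and Central Archive
ArchiveRecord = namedtuple("ArchiveRecord",
    "time sliceID proto srcIP srcPort dstIP dstPort pktCnt byteCnt")

# Int PF DB record: base record plus the PE that sourced the flow
# and the internal (pre-NAT) source port
IntRecord = namedtuple("IntRecord",
    ArchiveRecord._fields + ("peID", "intSrcPort"))

# NAT record used to correlate internal and translated source ports
NatRecord = namedtuple("NatRecord",
    "time proto srcIP intSrcPort xlatedSrcPort")
```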

Page 49: Flow Stats Module -- Control


SPP PlanetFlow

[Diagram: on the NPE, the MEs HF, LK, and FS2 feed SCD over the Flow Stats SRAM ring; NAT scratch rings connect the Ingress and Egress XScales. On the CP, NATd and FSd feed NAT records and flow records into the Int PF DB and Ext PF DB; a dbAccumulator merges the PF DB from the GPE(s); the Ext PF DB is rsync'd to the Central Archive.]

Central Archive Record = <time, sliceID, Proto, SrcIP, SrcPort, DstIP, DstPort, PktCnt, ByteCnt>
Ext PF DB Record = <Central Archive Record>
Int PF DB Record = <Central Archive Record, NPE/GPE ID, Internal Src Port>

Page 50: Flow Stats Module -- Control


Merging of DBs

NPE Flows
»No NAT
»Go directly into the Ext PF DB and into the Int PF DB
  Internal SrcPort == SrcPort
»Do they need SliceID translation?
  We use the VLAN, but this probably needs to be the PlanetLab version of a Slice ID
  SRM will provide a VLAN-to-SliceID translation
  Where and when?

GPE Configured Flows
»No NAT
»Go directly into the Ext PF DB and into the Int PF DB
  Internal SrcPort == SrcPort

GPE NAT Flows
»Find the corresponding NAT record and extract the translated SrcPort
»Insert the record with the translated SrcPort into the Ext PF DB
»Insert the record with the internal SrcPort into the Int PF DB

CP Traffic?
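The merging rules above can be sketched as one function (hypothetical record shape: plain dicts with a 'nat' flag stand in for real flow records, and the DBs are lists):

```python
def merge_flow(rec, nat_lookup, ext_db, int_db):
    """Apply the DB-merging rules to one flow record.
    rec: flow record dict with at least 'srcPort' and a 'nat' flag.
    nat_lookup: returns the translated SrcPort for a NAT flow, or None."""
    xlated = nat_lookup(rec) if rec.get("nat") else None
    if xlated is None:
        # NPE flows and GPE configured flows: no NAT,
        # so Internal SrcPort == SrcPort in both DBs
        ext_db.append(dict(rec, srcPort=rec["srcPort"]))
        int_db.append(dict(rec, intSrcPort=rec["srcPort"]))
    else:
        # GPE NAT flows: translated SrcPort goes to the Ext PF DB,
        # internal SrcPort goes to the Int PF DB
        ext_db.append(dict(rec, srcPort=xlated))
        int_db.append(dict(rec, intSrcPort=rec["srcPort"]))
```

SliceID translation (VLAN to PlanetLab Slice ID via SRM) would happen before or during this step; where exactly is the open question the slide raises.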