PortLand Presented by Muhammad Sadeeq and Ling Su

Posted on 21-Dec-2015



Present Data Center Network
• Topology: high price to achieve a non-blocking network.
• Forwarding:
  - Layer 3: administrative burden, hard-to-diagnose errors.
  - Layer 2: scalability and performance limits.
  - Middle ground: VLANs.
• End-host virtualization: migration and scalability.

Current Issues

Requirements for an ideal network:
• R1: VMs should not change their IP addresses when migrating; otherwise they would break open TCP connections.
• R2: No need to configure any switch before deployment.
• R3: End hosts communicate easily.
• R4: No forwarding loops.
• R5: Failure detection should be rapid and efficient.

R1: VM Migration
• The VM's IP address is not changed on migration.
• Evaluation

R2: Switch deployment
• A PMAC is allocated to every newly connected host.
• A PMAC consists of pod.position.port.vmid:
  - pod (16 bits): identifies the pod of the directly connected edge switch.
  - position (8 bits): the switch's position within the pod.
  - port (8 bits): the port number the host is connected to.
  - vmid (16 bits): multiplexes multiple VMs on the same physical machine.
• The switch maps the host's AMAC and IP address to its PMAC; this is done automatically via LDP.
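The pod.position.port.vmid layout above packs into exactly 48 bits (16 + 8 + 8 + 16), the size of a MAC address. A minimal sketch of that packing, assuming big-endian field order; the function names and encoding details are illustrative, not PortLand's actual implementation:

```python
def encode_pmac(pod: int, position: int, port: int, vmid: int) -> str:
    """Pack the four PMAC fields into a 48-bit MAC-style string."""
    assert pod < 2**16 and position < 2**8 and port < 2**8 and vmid < 2**16
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    raw = value.to_bytes(6, "big")          # 48 bits = 6 bytes
    return ":".join(f"{b:02x}" for b in raw)

def decode_pmac(pmac: str) -> tuple:
    """Recover (pod, position, port, vmid) from a PMAC string."""
    value = int(pmac.replace(":", ""), 16)
    return (value >> 32,
            (value >> 24) & 0xFF,
            (value >> 16) & 0xFF,
            value & 0xFFFF)
```

For example, a host on port 3 of the switch at position 2 in pod 1, running VM 4, would get the PMAC `00:01:02:03:00:04` under this layout.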

Switch deployment (cont.)
• The fabric manager is responsible for resolving ARP requests.

Switch deployment (cont.)
• In case the fabric manager does not have the IP-to-PMAC mapping, it falls back to an efficient broadcast of the ARP request to all pods and edge switches to retrieve it. Most of the time, though, a VM migrating from one physical machine to another sends a gratuitous ARP to the fabric manager with its new IP-to-PMAC mapping.
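The lookup-with-fallback behavior above can be sketched as a small cache keyed by IP. All names here (`FabricManager`, `register`, `resolve`) are hypothetical; the real fabric manager is a separate process holding soft state, not a single in-memory class:

```python
class FabricManager:
    """Toy model of the fabric manager's ARP-resolution role."""

    def __init__(self):
        self.ip_to_pmac = {}  # soft state: IP -> PMAC

    def register(self, ip: str, pmac: str) -> None:
        """Called on a gratuitous ARP, e.g. right after a VM migrates."""
        self.ip_to_pmac[ip] = pmac

    def resolve(self, ip: str, broadcast_fallback) -> str:
        """Answer an edge switch's ARP query; on a miss (rare in
        steady state), query the edge switches via broadcast and
        cache the result."""
        pmac = self.ip_to_pmac.get(ip)
        if pmac is None:
            pmac = broadcast_fallback(ip)
            self.ip_to_pmac[ip] = pmac
        return pmac
```

The point of the design is that the common case (cache hit, or a migrating VM proactively registering) avoids the layer-2 ARP broadcast storms that limit flat Ethernet scalability.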

PortLand uses LDP (the Location Discovery Protocol): switches periodically exchange Location Discovery Messages carrying switch ID, pod number, position, level, and up/down direction.
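The fields listed above can be pictured as a small record. The field names and the level encoding below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class LocationDiscoveryMessage:
    switch_id: int
    pod: int        # pod number of the sending switch
    position: int   # position within the pod
    level: int      # assumed encoding: 0 = edge, 1 = aggregation, 2 = core
    is_up: bool     # whether the message was received on an upward-facing port
```

Exchanging these messages is what lets switches learn their own pod and position without manual configuration, which is what makes requirement R2 (no pre-deployment switch configuration) possible.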

R3: End Host Communication

R4: No loops
• A core switch simply inspects the pod-number bits in the PMAC to pick the destination pod.
• For traffic to a different pod, the aggregation switch forwards on any available link to the core layer.
• PortLand maps multicast groups to a core switch using a deterministic hash function.
• The fabric manager installs forwarding state in all core and aggregation switches to ensure the best available path to the destination.
• Forwarding is provably loop-free because a packet always travels up and then down toward its ultimate destination, never turning back up.
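The deterministic multicast mapping mentioned above can be sketched in a few lines; the hash choice and function name are assumptions, the only property that matters is that every switch computes the same core for the same group:

```python
import hashlib

def core_for_group(group_addr: str, num_cores: int) -> int:
    """Deterministically map a multicast group address to one core
    switch index, so all switches agree without coordination."""
    digest = hashlib.sha256(group_addr.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_cores
```

Because the mapping is a pure function of the group address, no per-group signaling is needed: any switch can locally compute which core switch roots the group's distribution tree.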

R5: Failure Detection

Related work
• Fat tree: multi-rooted trees like this form the basis of many existing data center topologies. In fact, PortLand's implementation runs on a small-scale fat tree.
• SmartBridge: extends the single spanning tree while maintaining the loop-free property of LANs; it only propagates frames toward the desired destination, but still suffers from the network's scalability challenges.

Related work (cont.)
• DCell: a specialized topology for data center environments; its implicit hierarchy makes it compatible with PortLand's topology.
• MOOSE: suggests hierarchical Ethernet addresses and addresses some of Ethernet's limitations.
• RBridges: run at layer 2; the switches essentially broadcast topology information all the time.

Comparison…

Summary
• PortLand makes a future plug-and-play fabric feasible: efficiency, fault tolerance, flexibility, manageability.
• The paper has been cited 93 times in two years, a significant impact.