TRANSCRIPT
Virtualization and OpenFlow
Nick [email protected]
Nick [email protected]
VISA Workshop, Sigcomm 2009
Supported by NSF, Stanford Clean Slate Program, Cisco, DoCoMo, DT, Ericsson, NEC, Xilinx
In a nutshell
A revolution is just starting in networking, driven by cost and control
It started in data centers… and is spreading
Trend is towards an open-source, software-defined network
The new opportunity to innovate brings with it the need to try out new ideas
Hence virtualization (or slicing)
I’ll outline one way to do it with OpenFlow
Why the revolution
Cost
500,000 servers; fanout of 50 → 10,000 switches
$10k commercial switch → $100M
$1k custom-built switch → $10M
Savings in 10 data centers = $900M
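The arithmetic on this slide can be checked directly. A quick sketch, using only the figures stated on the slide:

```python
# Back-of-the-envelope check of the slide's data-center cost figures.
# All numbers come straight from the slide; nothing else is measured.

servers = 500_000
fanout = 50                       # servers per switch
switches = servers // fanout      # -> 10,000 switches

commercial_cost = switches * 10_000   # $10k commercial switch -> $100M
custom_cost = switches * 1_000        # $1k custom-built switch -> $10M

savings_per_dc = commercial_cost - custom_cost   # $90M per data center
savings_10_dc = 10 * savings_per_dc              # $900M across 10 data centers

print(f"switches: {switches:,}")
print(f"savings per data center: ${savings_per_dc / 1e6:.0f}M")
print(f"savings in 10 data centers: ${savings_10_dc / 1e6:.0f}M")
```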
Control
1. Optimize for features needed
2. Customize for services & apps
3. Quickly improve and innovate
Example: New data center
Software-defined Network
1. Data centers: cost and control
2. Network & cellular operators: bit-pipe avoidance, cost and control, security and mobility
3. Researchers: GENI, FIRE, …
[Diagram: an application running on an OS running on a computer]
The OS abstracts the hardware substrate → innovation in applications
[Diagram: Windows, Linux, or MacOS on x86, with applications on top]
Simple, common, stable hardware substrate below + programmability + competition → innovation in OS and applications
[Diagram: a virtualization layer on x86 hosting Windows, Linux, and MacOS guests, each running apps]
Simple, common, stable hardware substrate below + programmability + strong isolation model + competition above → innovation in infrastructure
A simple stable common substrate
1. Allows applications to flourish (Internet: stable IPv4 led to the web)
2. Allows the infrastructure on top to be defined in software (Internet: routing protocols, management, …)
3. Allows rapid innovation of the infrastructure itself (Internet: er…?)
What's missing? What is the substrate…?
(Statement of the obvious)
In networking, despite several attempts, we've never agreed upon a clean separation between:
1. A simple common hardware substrate
2. An open programming environment on top
A prediction
1. A clean separation between the substrate and an open programming environment
2. A simple low-cost hardware substrate that generalizes, subsumes and simplifies the current substrate
3. Very few preconceived ideas about how the substrate will be programmed
4. Strong isolation among features
But most of all…
Owners, operators, administrators, developers, and researchers will want to improve, update, fix, experiment with, share, build upon, and version their network.
Therefore, the software-defined network will allow simple ways to program and version.
One way to do this is virtualizing/slicing the network substrate.
[Diagram repeats: a virtualization layer on x86 hosting Windows, Linux, and MacOS guests, each running apps]
Simple, common, stable hardware substrate below + programmability + strong isolation model + competition above → faster innovation
[Diagram: the network analogue — OpenFlow as the substrate, a virtualization layer (FlowVisor) above it, and guest controllers (Controller 1, Controller 2) each running their own apps; a new function can be added by operators, users, 3rd-party developers, researchers, …]
Step 1: Separate intelligence from datapath
Step 2: Cache decisions in minimal flow-based datapath
Flow Table entries:
“If header = x, send to port 4”
“If header = y, overwrite header with z, send to ports 5, 6”
“If header = ?, send to me”
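The two steps above can be sketched as a toy match-action cache. This is only the idea, not the real OpenFlow API; the class and method names are illustrative:

```python
# Minimal sketch of the split described above: the switch only consults
# a cache of <header -> action> entries; anything it has never seen is
# punted to the controller ("If header = ?, send to me"), which decides
# once and installs a new flow-table entry.

class Controller:
    """All the intelligence lives here, off the datapath."""
    def packet_in(self, switch, header):
        # Decide once, then cache the decision in the switch's flow table.
        action = ("forward", len(header) % 8)   # toy policy: pick a port
        switch.flow_table[header] = action
        return action

class Switch:
    """Minimal flow-based datapath: just a lookup table."""
    def __init__(self, controller):
        self.flow_table = {}          # header -> action, the cached decisions
        self.controller = controller

    def receive(self, header):
        action = self.flow_table.get(header)
        if action is None:            # table miss: ask the controller
            action = self.controller.packet_in(self, header)
        return action

sw = Switch(Controller())
sw.receive("10.0.0.1->10.0.0.2")   # miss: goes to controller, entry installed
sw.receive("10.0.0.1->10.0.0.2")   # hit: answered from the flow table
```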
Packet-switching substrate
[Diagram: packet headers — Ethernet (DA, SA, etc.), IP (DA, SA, etc.), TCP (DP, SP, etc.), then payload]
The header is a collection of bits used to plumb flows (of different granularities) between end points
Properties of a flow-based substrate
We need flexible definitions of a flow: unicast, multicast, waypoints, load-balancing; different aggregations
We need direct control over flows. A flow is an entity we program: to route, to make private, to move, …
Exploit the benefits of packet switching: it works and is universally deployed, and it's efficient (when kept simple)
Substrate: “Flowspace”
[Diagram: today's Ethernet/IP/TCP headers generalized into a single user-defined flowspace header, with the payload unchanged (“OpenFlow 2.0”)]
The header is a collection of bits used to plumb flows (of different granularities) between end points
Properties of Flowspace
Backwards compatible: current layers are a special case; no end points need to change
Easily implemented in hardware, e.g. a TCAM flow-table in each switch
Strong isolation of flows: a simple geometric construction can prove which flows can/cannot communicate
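The geometric construction can be sketched as an interval-overlap test: treat each slice's flowspace as a box with one interval per header field, and note that two slices can exchange packets only if their boxes overlap in every field. The field names and ranges below are invented for illustration:

```python
# Hedged sketch of the "simple geometric construction" for flow isolation.
# A flowspace is a dict: field -> (lo, hi) inclusive; a field missing
# from the dict is a wildcard (matches everything in that dimension).

def overlaps(space_a, space_b):
    """True iff the two flowspace boxes intersect in every dimension."""
    for field in set(space_a) & set(space_b):
        lo_a, hi_a = space_a[field]
        lo_b, hi_b = space_b[field]
        if hi_a < lo_b or hi_b < lo_a:
            return False          # disjoint in one dimension -> isolated
    return True

alice = {"tcp_port": (80, 80)}         # Alice's slice owns http traffic
bob = {"tcp_port": (1024, 65535)}      # Bob's slice owns high ports
print(overlaps(alice, bob))            # False: provably isolated
```

Because the check is per-dimension interval arithmetic, isolation is something one can prove rather than merely test.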
Approach 1: Slicing using VLANs
[Diagram: a sliced OpenFlow switch with one flow table per VLAN range (A, B, and C VLANs), each owned by its own controller (Controller A, B, C), alongside normal L2/L3 processing for legacy VLANs]
Some prototype OpenFlow switches do this…
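The VLAN-based approach can be sketched as a simple demultiplexer from VLAN id to slice; the slice table below is invented for illustration:

```python
# Sketch of Approach 1: the switch keeps one flow table per VLAN range
# and demultiplexes each packet (and each controller connection) by its
# VLAN tag. VLAN ids not owned by any slice fall through to normal
# (legacy) L2/L3 processing.

SLICES = {            # controller name -> set of VLAN ids it owns
    "A": {10, 11},
    "B": {20},
    "C": {30, 31, 32},
}

def slice_for_vlan(vlan_id):
    """Pick the slice (and hence flow table / controller) for a packet."""
    for controller, vlans in SLICES.items():
        if vlan_id in vlans:
            return controller
    return "legacy"   # untagged/unknown VLANs: normal L2/L3 processing

print(slice_for_vlan(20))    # -> B
print(slice_for_vlan(99))    # -> legacy
```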
Approach 2: FlowVisor (Rob Sherwood*, [email protected])
[Diagram: FlowVisor sits between the OpenFlow switches and the guest controllers (Alice's and Bob's), speaking the OpenFlow protocol on both sides]
* Deutsche Telekom, “T-Labs”
[Diagram: FlowVisor slicing a set of OpenFlow switches among guest controllers, e.g. a broadcast/multicast controller and an http load-balancer, each speaking the OpenFlow protocol through FlowVisor]
[Diagram: the same deployment with the production network's controller running as one slice alongside the experiments]
[Diagram: recursive slicing — GENI's FlowVisor, driven by the GENI Aggregate Manager, delegates sub-slices to Alice's and Bob's FlowVisors, which host experiments such as a learning switch, mobile VMs, new BGP, WiMax-WiFi handover, tricast, and lossless handover]
FlowVisor
A proxy between switch and guest controller
Parses and rewrites OpenFlow messages as they pass
Ensures that one experiment doesn't affect another
Allows rich virtual network boundaries: by port, by IP, by flow, by time, etc.
Virtualization rules are defined in software
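The rewriting step can be sketched as clipping each rule a guest controller sends so its match falls inside that slice's flowspace. This is only the idea, not FlowVisor's actual code; real FlowVisor speaks the OpenFlow protocol, and the field names below are invented:

```python
# Toy model of FlowVisor's proxy role: rewrite each flow rule from a
# guest controller so its match is clipped to the slice's flowspace.
# Matches and flowspaces are dicts: field -> (lo, hi) inclusive; a
# field missing from a match is a wildcard within the slice.

def intersect(match, flowspace):
    """Clip a rule's match to a slice's flowspace; return None if the
    rule falls entirely outside what the slice owns (i.e. reject it)."""
    clipped = dict(match)
    for field, (lo, hi) in flowspace.items():
        m_lo, m_hi = match.get(field, (lo, hi))
        new_lo, new_hi = max(m_lo, lo), min(m_hi, hi)
        if new_lo > new_hi:
            return None          # rule touches nothing the slice owns
        clipped[field] = (new_lo, new_hi)
    return clipped

# Bob's slice only owns TCP port 80; his too-broad rule gets clipped,
# so his experiment cannot affect traffic outside his slice.
bob_space = {"tcp_port": (80, 80)}
rule = {"tcp_port": (0, 1000)}
print(intersect(rule, bob_space))    # -> {'tcp_port': (80, 80)}
```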
FlowVisor Goals
Transparency: unmodified guest controllers, unmodified switches
Strong resource isolation: link bandwidth, switch CPU, etc.; flow space (who gets this message?)
Virtualization policy module: rich network slicing