DESCRIPTION

This presentation, given at Linux Tag 2014, contains a couple of new slides compared to earlier versions; they explain different networking models such as flat, VLAN-based, and 'SDN fabric'-based.

TRANSCRIPT

Page 1: Linux Tag 2014 OpenStack Networking

OpenStack Networking: Software-Defined Networking for OpenStack using Open Source Plugins and VMware NSX

Yves Fauser, Network Virtualization Platform System Engineer @ VMware

OpenStack DACH Day 2014 @ Linux Tag Berlin, 09.05

Page 2: Linux Tag 2014 OpenStack Networking

OpenStack Networking – Flat

§ In the simple 'flat' networking model, all instances (VMs) are bridged to a physical adapter

§ L3 first-hop routing is either provided by the physical networking devices (flat model) or by the OpenStack L3 service (flat-DHCP model)

§ Sufficient in single-tenant or 'full trust' use cases where no segmentation is needed (besides iptables/ebtables between VM interfaces and the bridge)

§ Doesn't provide multi-tenancy, L2 isolation or overlapping IP address support

§ Available in Neutron and in Nova-Networking (a Neutron API sketch follows below)
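As a rough illustration of the API side, here is a minimal sketch that creates a flat network with python-neutronclient. The credentials, the endpoint URL and the 'physnet1' alias are assumptions for the example, not something from the slides:

    # Sketch: create a flat provider network via the Neutron API
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')  # assumed endpoint

    # 'flat' means no tag at all; 'physnet1' must be mapped to the
    # physical adapter the instances are bridged to
    net = neutron.create_network({'network': {
        'name': 'flat-net',
        'provider:network_type': 'flat',
        'provider:physical_network': 'physnet1'}})

    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '10.0.0.0/24'}})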


[Diagram: VMs on two hosts bridged straight onto the physical L2/L3 network; the hypervisor uplink is an access port (no VLAN tag)]

Page 3: Linux Tag 2014 OpenStack Networking

OpenStack Networking – VLAN based

§ The VLAN-based model uses VLANs per tenant network (with Neutron) to provide multi-tenancy, L2 isolation and support for overlapping IP address spaces

§ The VLANs can either be pre-configured manually on the physical switches, or a Neutron vendor plugin can communicate with the physical switches to provision the VLAN

§ Examples of vendor plugins that create VLANs on switches are the Arista and Cisco Nexus/UCS ML2 mechanism drivers

§ L3 first-hop routing can be done either:
  § on the physical switches/routers, or … (continued on the next slide)


[Diagram: VMs attached over a VLAN trunk port (VLAN tags used); a Neutron vendor plugin can create the VLANs on the physical switches through a vendor API]

Page 4: Linux Tag 2014 OpenStack Networking

OpenStack Networking – VLAN based


[Diagram: same VLAN trunk setup as before; logical routers handle the first-hop gateway function on the Neutron network node, providing L3 for the tenant networks]

§ The VLAN-based model uses VLANs per tenant network (with Neutron) to provide multi-tenancy, L2 isolation and support for overlapping IP address spaces

§ The VLANs can either be pre-configured manually on the physical switches, or a Neutron vendor plugin can communicate with the physical switches to provision the VLAN

§ Examples of vendor plugins that create VLANs on switches are the Arista and Cisco Nexus/UCS ML2 mechanism drivers

§ L3 first-hop routing can be done either:
  § on the physical switches/routers, or
  § as logical routers in Neutron (see the sketch after this list)

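As a rough sketch of the 'logical routers in Neutron' case, the following python-neutronclient snippet creates a VLAN tenant network and attaches it to a Neutron router. The VLAN ID, the 'physnet1' mapping, the credentials and the endpoint are assumptions for the example:

    # Sketch: VLAN provider network plus a Neutron logical router
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    net = neutron.create_network({'network': {
        'name': 'tenant-vlan-101',
        'provider:network_type': 'vlan',
        'provider:physical_network': 'physnet1',
        'provider:segmentation_id': 101}})   # the per-tenant VLAN tag

    subnet = neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '192.168.101.0/24'}})

    # First-hop gateway as a logical router in Neutron instead of
    # on the physical switches/routers
    router = neutron.create_router({'router': {'name': 'tenant-router'}})
    neutron.add_interface_router(router['router']['id'],
                                 {'subnet_id': subnet['subnet']['id']})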

Page 5: Linux Tag 2014 OpenStack Networking


OpenStack Networking Models – 'SDN Fabric' based

§ In this model multi-tenancy is achieved using different 'edge' and 'fabric' tags. E.g. VLANs can be used to address the tenant between the hypervisor vSwitch and the top-of-rack switch, and some other tag is used inside the vendor's fabric to isolate the tenants


[Diagram: hypervisor-to-top-of-rack links use some form of 'edge tag' (e.g. VLAN, VXLAN header, etc.), while the vendor fabric uses some form of 'fabric tag' to address the tenant; a central controller controls both the vSwitches and the physical switches]

§ Usually a single controller controls both the vSwitches and the physical switches

§ L3 first-hop routing and L2 bridging to physical are usually done in the physical switch fabric

§ Single-vendor design for physical and virtual networking

§ Examples: Big Switch, NEC, Cisco ACI, …

[Diagram label: the Neutron vendor plugin talks to the controller through a vendor API]

Page 6: Linux Tag 2014 OpenStack Networking

OpenStack Networking Models – Network Virtualization

§ With the network virtualization (aka overlay) model, multi-tenancy is achieved by overlaying MAC-in-IP 'tunnels' onto the physical switching fabric (aka transport network)

§ An ID field in the encapsulation header (e.g. VXLAN, GRE, STT) is used to address the tenant network. Full L2 isolation and overlapping IP space support is achieved

§ The controller controls only the vSwitches and the gateways

§ L3 first-hop routing and L2 bridging to physical are done either by software or hardware gateways (or both)

§ Examples: VMware NSX, Midokura, Plumgrid, Contrail, Nuage, … (see the sketch below)
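One nice property of the overlay model from the API consumer's side: tenant networks need no provider details at all. A minimal sketch, assuming an overlay type driver such as VXLAN is configured as the tenant network type (credentials and endpoint assumed as in the earlier examples):

    # Sketch: tenant network on an overlay; Neutron allocates the
    # tunnel ID (e.g. a VXLAN VNI) itself, no physical config needed
    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    net = neutron.create_network({'network': {'name': 'overlay-net'}})
    neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '172.16.0.0/24'}})
    # The physical fabric only ever carries IP traffic between hypervisors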


[Diagram: the physical network fabric uses L3 routing protocols (e.g. OSPF or BGP) to build a stable Layer 3 fabric; an SDN controller cluster controls the vSwitches in the hypervisors; MAC-in-IP 'tunnels' (e.g. VXLAN) address and isolate the tenants; an L3 gateway connects to the physical world; the Neutron plugin talks to the controller through a vendor API]

Page 7: Linux Tag 2014 OpenStack Networking

Why I think the 'Network Virtualization' (aka overlay) approach is the best model


§ It achieves multi-tenancy, L2 isolation and overlapping IP address support without the need to re-configure physical network devices

§ The logical network for instances (VMs) is location independent – it spans L2/L3 boundaries, and therefore doesn't force bad (flat) network design

§ Very big ID space for tenant addressing compared to the usual VLAN ID space (max. 4094); see the arithmetic below

§ Network virtualization runs as a software construct on top of any physical network topology, vendor, etc.

§ Physical network and logical network can evolve independently from each other; each one can be procured, exchanged, upgraded and serviced independently

§ A large number of commercial and open source implementations are available today

§ Proven in production in some of the largest OpenStack deployments out there
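For scale, the arithmetic behind the ID-space claim, taking VXLAN's 24-bit VNI field as the example encapsulation:

    \[
      \underbrace{2^{12} - 2 \;=\; 4094}_{\text{usable VLAN IDs}}
      \qquad\text{vs.}\qquad
      \underbrace{2^{24} \;=\; 16{,}777{,}216}_{\text{VXLAN VNIs}}
    \]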

Page 8: Linux Tag 2014 OpenStack Networking

OpenStack Neutron – Plugin Concept

Neutron Core API

Neutron Service (Server):
• L2 network abstraction definition and management, IP address management
• Device and service attachment framework
• Does NOT do any actual implementation of the abstraction

Plugin API

Vendor/User Plugin:
• Maps the abstraction to an implementation on the network (overlay, e.g. NSX, or physical network)
• Makes all decisions about *how* a network is to be implemented
• Can provide additional features through API extensions
• Extensions can either be generic (e.g. L3 router / NAT) or vendor specific

Neutron API Extension (extension API implementation is optional)
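To make the division of labor concrete, here is a hypothetical core-plugin skeleton. The class name and the backend call are invented for illustration; NeutronDbPluginV2 is the real base class that provides the DB/IPAM housekeeping:

    # Hypothetical plugin sketch: the Neutron server owns the API and the
    # DB model, the plugin decides how the abstraction is implemented
    from neutron.db import db_base_plugin_v2

    class ExamplePlugin(db_base_plugin_v2.NeutronDbPluginV2):

        supported_extension_aliases = []  # e.g. ["router"] if L3 is offered

        def create_network(self, context, network):
            # the DB mixin persists the logical network (incl. IPAM data)
            net = super(ExamplePlugin, self).create_network(context, network)
            # ...here the plugin would program its backend (controller,
            # switches, overlay) to actually realize the network...
            return net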


Page 9: Linux Tag 2014 OpenStack Networking

Core and service plugins

§ The core plugin implements the 'core' Neutron API functions (L2 networking, IPAM, …)

§ Service plugins implement additional network services (L3 routing, load balancing, firewall, VPN); a sketch follows after the diagram

§ Implementations might choose to implement the relevant extensions in the core plugin itself

[Diagram: the same Neutron core API functions (Core, L3, FW) packaged three ways: everything in one core plugin; core + L3 in the core plugin with a separate FW plugin; or a core plugin with separate L3 and FW plugins]

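As a rough sketch of the packaging difference, an L3 service plugin can live entirely outside the core plugin. The class below is illustrative only (L3_NAT_db_mixin is the real Neutron DB mixin the in-tree L3 plugin builds on; the class name is invented):

    # Hypothetical standalone L3 service plugin
    from neutron.db import l3_db
    from neutron.plugins.common import constants

    class ExampleL3ServicePlugin(l3_db.L3_NAT_db_mixin):

        supported_extension_aliases = ["router"]

        def get_plugin_type(self):
            # registers this plugin for the L3 routing service
            return constants.L3_ROUTER_NAT

        def get_plugin_description(self):
            return "L3 routing/NAT implemented outside the core plugin"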

Page 10: Linux Tag 2014 OpenStack Networking

OpenStack Neutron – Modular Plugins

§ Before the modular plugin (ML2), every team or vendor had to implement the complete plugin 'housekeeping' (IPAM, DB access, etc.)

§ The ML2 plugin separates core functions like IPAM, virtual network ID management, etc. from vendor/implementation-specific functions, and therefore makes it easier for vendors not to reinvent the wheel with regards to ID management, DB access, …

§ Existing and future non-modular plugins are called 'standalone' plugins

§ ML2 calls the management of network types 'type drivers', and the implementation-specific part 'mechanism drivers' (a driver skeleton is sketched below)

[Diagram: ML2 Plugin & API Extensions on top; a Type Manager with type drivers (GRE, VLAN, VXLAN, etc.) and a Mechanism Manager with mechanism drivers (Arista, Cisco, Linux Bridge, OVS, etc.)]

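A minimal, hypothetical mechanism-driver skeleton (the driver and its backend are invented; the MechanismDriver base class and the postcommit hook names are the real ML2 driver API):

    # Hypothetical ML2 mechanism driver: only the vendor-specific part
    # remains, ML2 has already done IPAM, segment and DB housekeeping
    from neutron.plugins.ml2 import driver_api as api

    class ExampleMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # connect to the vendor backend / controller here
            pass

        def create_network_postcommit(self, context):
            # context.current is the network dict already persisted by ML2
            net = context.current
            # ...push 'net' to the switches or to a controller...

        def delete_network_postcommit(self, context):
            # ...remove the backend state for context.current...
            pass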

Page 11: Linux Tag 2014 OpenStack Networking

Some of the Plugins available in the market (1/2)

§ ML2 modular plugin
  § With support for the type drivers: local, flat, VLAN, GRE, VXLAN
  § And the following mechanism drivers: Arista, Cisco Nexus, Hyper-V Agent, L2 Population, Linux Bridge, Open vSwitch Agent, Tail-f NCS

§ Open vSwitch plugin – the most used (open source) plugin today
  § Supports GRE-based overlays, NAT/security groups, etc.
  § Deprecation planned for the Icehouse release in favor of ML2

§ Linux Bridge plugin
  § Limited to L2 functionality, L3, floating IPs and provider networks. No support for overlays
  § Deprecation planned for the Icehouse release in favor of ML2


Page 12: Linux Tag 2014 OpenStack Networking

Some of the Plugins available in the market (2/2)

§ VMware NSX (aka Nicira NVP) plugin
  § Network virtualization solution with a centralized controller + Open vSwitch

§ Cisco UCS / Nexus 5000 plugin
  § Provisions VLANs on Nexus 5000 switches and on the UCS Fabric Interconnect, as well as on the UCS B-Series servers' network card (Palo adapter)

§ NEC and Ryu plugins
  § 'SDN fabric/OpenFlow' based implementations with the NEC or Ryu controller

§ Other plugins include Midokura, Juniper (OpenContrail), Big Switch, Brocade, Plumgrid, Embrane, Mellanox

§ LBaaS service plugins from A10 and Citrix

§ This list can only be incomplete; please check the latest information on:
  § https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
  § http://www.sdncentral.com/openstack-neutron-quantum-plug-ins-comprehensive-list/


Page 13: Linux Tag 2014 OpenStack Networking

New Plugins / ML2 Drivers in the Icehouse Release

§ New ML2 mechanism drivers:
  § Mechanism driver for the OpenDaylight controller
  § Brocade ML2 mechanism driver for VDX switch clusters

§ New Neutron plugins:
  § IBM SDN-VE controller plugin
  § Nuage Networks controller plugin

§ Service plugins:
  § Embrane and Radware LBaaS drivers
  § Cisco VPNaaS driver

§ Various:
  § VMware NSX – DHCP and metadata service

§ This list is incomplete; please see here for more details: https://blueprints.launchpad.net/neutron/icehouse


Page 14: Linux Tag 2014 OpenStack Networking

Neutron – OVS Agent Architecture

§ The following components play a role in the OVS agent architecture:
  § Neutron-OVS-Agent: receives tunnel & flow setup information from the OVS plugin and programs OVS to build tunnels and to steer traffic into those tunnels
  § Neutron-DHCP-Agent: sets up dnsmasq in a namespace per configured network/subnet, and enters the MAC/IP combination in the dnsmasq DHCP lease file
  § Neutron-L3-Agent: sets up iptables/routing/NAT tables (routers) as directed by the OVS plugin or the ML2 OVS mechanism driver

§ In most cases GRE or VXLAN overlay tunnels are used, but flat and VLAN modes are also possible (the agents can be listed via the API, as sketched below)
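Each of these agents reports into the Neutron server, so their placement and liveness can be checked through the API. A small sketch (credentials and endpoint assumed as in the earlier examples):

    # Sketch: list the Neutron agents and where they run
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    for agent in neutron.list_agents()['agents']:
        # e.g. 'Open vSwitch agent' on every node, 'DHCP agent' and
        # 'L3 agent' typically on the network node
        print("%s on %s (alive: %s)" % (agent['agent_type'],
                                        agent['host'], agent['alive']))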

[Diagram: a Neutron network node running N.-L3-Agent, N.-DHCP-Agent and N.-OVS-Agent (dnsmasq, iptables/routing, NAT & floating IPs, bridges br-int/br-tun/br-ex, ovsdb/ovs-vswitchd) next to compute nodes running nova-compute and N.-OVS-Agent (br-int/br-tun, ovsdb/ovs-vswitchd); the Neutron server runs the OVS plugin; all nodes are connected by L2-in-L3 (GRE) tunnels over a Layer 3 transport network, and the network node also attaches to the external network (or VLAN) toward WAN/Internet]


Page 15: Linux Tag 2014 OpenStack Networking

Using 'SDN controllers' – VMware NSX Plugin example

§ A centralized scale-out controller cluster controls all Open vSwitches in all compute and network nodes. It configures the tunnel interfaces and programs the flow tables of OVS

§ The NSX L3 gateway service (scale-out) takes over the L3 routing and NAT functions

§ The NSX service node relieves the compute nodes from the task of replicating broadcast, unknown unicast and multicast traffic sourced by VMs

§ Security groups are implemented natively in OVS, instead of iptables/ebtables (see the sketch below)
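From the tenant's point of view nothing changes: the same Neutron API calls work unchanged against the NSX plugin. A sketch of a security group that the backend then realizes as OVS flows rather than iptables rules (names and values are illustrative):

    # Sketch: a security group is plain Neutron API; the enforcement
    # mechanism (OVS flows vs. iptables) is the plugin's business
    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    sg = neutron.create_security_group({'security_group': {
        'name': 'web', 'description': 'allow http'}})

    neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': sg['security_group']['id'],
        'direction': 'ingress',
        'protocol': 'tcp',
        'port_range_min': 80,
        'port_range_max': 80}})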

[Diagram: same node layout as on the previous slide, but the NSX controller cluster is reached by the Neutron server + NVP plugin over the management network and programs the vSwitches directly; compute and network nodes (bridges br-int/br-0, ovsdb/ovs-vswitchd) are connected by L2-in-L3 (STT) tunnels over the Layer 3 transport network; NSX L3 gateways (+ NAT) and NSX service nodes sit between the transport network and WAN/Internet; the network node runs the N.-DHCP-Agent with dnsmasq]


Page 16: Linux Tag 2014 OpenStack Networking

VMware NSX – Management & Operations

§ Tunnel status

§ Port-to-port troubleshooting tool

§ Traceflow packet injection

Page 17: Linux Tag 2014 OpenStack Networking

VMware NSX – Management & Operations – Software Upgrades

§ Automated deployment of the new version

§ Built-in compatibility verification

§ Rollback

§ Online upgrade (i.e. dataplane & control-plane services stay up)

Page 18: Linux Tag 2014 OpenStack Networking

Thank you! Any questions?

OpenStack DACH Day 2014 @ Linux Tag Berlin, 09.05