TRANSCRIPT
Software Defined Networks …
An Enterprise View
Part II
Matthew Liste, Managing Director / Technology Fellow
Goldman Sachs
Our Network Challenge
• A large number of diverse, specialized networks
• Global coverage with differing requirements
• Many bespoke deployments, with everything from trading environments to walled-off networks for investment bankers
Enterprise Software-Defined Past
Enterprises have been leveraging “software-defined” for the last decade, using ‘expect’, SNMP, custom APIs, etc.
But, all done through complex ‘Rube Goldberg’ machines
Which are highly crystalline and fragile
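The ‘expect’-era automation the slide describes can be sketched in a few lines: screen-scrape the device CLI, wait for the prompt to reappear, send the next command. This is an illustrative stand-alone sketch, not any real tool; the prompt pattern and the fake `switch1#` device are assumptions for the example.

```python
import re

def drive_session(commands, read_output, send_line, prompt=r"[\w\-]+[>#]\s*$"):
    """Send each command only after the device prompt reappears.

    read_output() returns the next chunk of device output;
    send_line(cmd) transmits a command line. Both are supplied by the caller.
    """
    sent = []
    buf = ""
    for cmd in commands:
        # Screen-scrape: accumulate output until the prompt pattern matches.
        while not re.search(prompt, buf):
            buf += read_output()
        buf = ""
        send_line(cmd)
        sent.append(cmd)
    return sent

# Demo against a fake device that always shows its prompt:
_outputs = iter(["switch1# "] * 4)
sent = drive_session(["configure terminal", "vlan 100"],
                     read_output=lambda: next(_outputs),
                     send_line=lambda cmd: None)
```

The fragility the slide calls “Rube Goldberg” lives in that prompt regex: any firmware change to banner or prompt text silently breaks the automation.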
The promise of open…
For all infrastructure, we desire:
• Commodity scale out architectures
• Software-defined everything
• Application-centric data centers
• Open standards / open architectures
So why can’t we manage network complexity through software?
With programmatic and comprehensive control planes?
And an ability to improve data planes and control planes independent of each other?
Just like…
• Linux for compute
• REST for APIs
• Hadoop for map/reduce
• Etc.
Open Control Planes
It has been done before (and will be done again)
[Timeline slide, 1984–2014, grouping prior art into three recurring themes:]
• Separation of software from appliance (routing & switching, L4-L7 service insertion): Gated, Quagga
• Separation of control and data planes: COPS, OpenFlow
• Overlay networks and tunneling: Frame Relay, ATM, MPLS, L2TP tunnels, OS-based tunnels, IPSec
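The overlay-and-tunneling theme on this timeline is, at bottom, encapsulation: wrap one frame inside another header so traffic can ride a network that knows nothing about it. As one concrete modern instance (VXLAN is our example here, not named on the slide), the encapsulation header per RFC 7348 is just eight bytes:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348).

    Layout: 1 flags byte (0x08 = VNI-valid), 3 reserved bytes,
    24-bit VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag set: the VNI field is valid
    # Shift the VNI into the top 3 bytes of the final 32-bit word.
    return struct.pack("!B3xI", flags, vni << 8)
```

The 24-bit VNI gives ~16 million virtual segments, versus 4094 VLANs, which is much of why overlays recur whenever segment counts outgrow the underlying fabric.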
A recap from our presentation last year…
We were enthusiastic and eager that these new approaches would change our ability to deploy, manage, and operate our network, and that this would be something we could deploy in the immediate term…
What have we tried?
• Vendor systems for overlay
• Open Source systems for overlay
• Hardware-only OpenFlow topologies
– Merchant silicon table sizes didn’t scale for our use
– Centralized controllers didn’t handle our policy scale
• Hypervisor and hardware OpenFlow topologies
• Hardware traffic engineering systems using OpenFlow
• TAP/matrix switching using OpenFlow (in production)
• Various whitebox solutions with open OS implementations (traditional and OpenFlow forwarding)
• OpenStack
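The table-size finding above can be made concrete with a toy model (this is our illustration, not any vendor’s API): hardware match-action tables have a fixed capacity, while expanding an access policy into exact-match entries grows multiplicatively with the policy’s dimensions.

```python
class FlowTable:
    """Toy fixed-capacity match-action table, standing in for a TCAM."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # match tuple -> action

    def install(self, match, action):
        if match not in self.entries and len(self.entries) >= self.capacity:
            raise OverflowError("flow table full")
        self.entries[match] = action

def expand_policy(sources, dests, action="allow"):
    """Expand a source x destination policy into exact-match rules.

    S sources and D destinations need S*D table slots: multiplicative
    growth that exhausts small hardware tables.
    """
    return [((s, d), action) for s in sources for d in dests]

# Demo: a 2x2 policy exactly fills a 4-entry table.
table = FlowTable(capacity=4)
rules = expand_policy(["10.0.0.1", "10.0.0.2"], ["10.1.0.1", "10.1.0.2"])
for match, action in rules:
    table.install(match, action)
```

At enterprise scale the same arithmetic, with thousands of endpoints per side, overruns merchant-silicon tables that hold on the order of thousands of entries.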
What will we also try?
• Further testing and use case examination of overlays
• Broad adoption of merchant silicon and open source routing and switching
• L4-L7 services in an NFV model
– Load balancers, firewalls
• Centralized controllers for policy and traffic engineering
• Continued use and adoption of open source management and provisioning tools
What would we like?
• Common control planes where vendors can plug in their “drivers”
• Bare-metal switches with common hardware abstraction layers
• “Linux-like” OSes for switches with common methods
– Add VLAN
– Add port to VLAN
– Etc.
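On an actual Linux-based switch OS, the common methods above map fairly directly onto standard iproute2 `bridge` commands for a VLAN-aware bridge. The sketch below is hypothetical glue (the function names and the dry-run design are ours): it returns the command vectors rather than executing them, so the mapping is visible.

```python
# Hypothetical wrappers mapping the slide's "common methods" onto
# Linux iproute2 bridge(8) commands for a VLAN-aware bridge.
# Returned as argument vectors (dry run) instead of being executed.

def add_vlan(bridge, vid):
    """Make a VLAN ID valid on the bridge device itself."""
    return ["bridge", "vlan", "add", "vid", str(vid), "dev", bridge, "self"]

def add_port_to_vlan(port, vid, untagged=False):
    """Allow a VLAN on a bridge member port; untagged makes it an access port."""
    cmd = ["bridge", "vlan", "add", "vid", str(vid), "dev", port]
    if untagged:
        cmd += ["pvid", "untagged"]
    return cmd
```

With a common abstraction like this, the same two calls would work across any switch running such an OS, which is exactly the interchangeability the slide is asking for.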
What are the appropriate vehicles?
• Commercial vendors
• Community Efforts
– OpenStack
– Open Compute Project (OCP)
– Open Networking Foundation (ONF)
• Start-ups
We want it all!