TRANSCRIPT
Data Center Design for the Midsize Enterprise
BRKDCT-2218
Jerry Hency
Technical Marketing Engineer, Data Center Group
Data Center Design for the Midsize Enterprise
Terminology and Goals for this Session
“Midsize” Enterprise/Organization Designs:
• Minimum requirement: a dedicated pair of Data Center switches.
• The transition point upwards from true “SMB” collapsed-core designs.
• Distinct Layer 2/3 boundary, with Data Center oriented feature set.
Support current and future needs:
• Right-sizing the Data Center, not just large scale.
• Build using components that will also transition easily into larger designs.
Make Data Center design choices:
• Topology options: from single layer designs to spine/leaf data center fabrics.
• Tradeoffs of components to fill topology roles.
[Diagram: enterprise network with WAN/Internet Edge and Client Access/Enterprise blocks above the Data Center, which sits below the Layer 2/3 boundary.]
Agenda
• Introduction
• Data Center Requirements for Midsize Organizations
• Building an Access Pod
• Single-Layer Design Examples
• Moving to a Spine/Leaf Fabric
• Conclusion
Midsize Data Center Design Challenges
This session provides example designs that are:
– Flexible: Entry-level design models for smaller organizations to take advantage of Nexus Unified Fabric features and easily scale up when they are ready.
– Practical: Balancing cost with port count, software features, and hardware capabilities.
– Easy to use: Providing deployment and operational simplicity for organizations managing a growing network with a small staff.
Choose which platform requirements to prioritize:
Port types, speeds, cable plant: 1/10GigE, FEX, vPC, Fibre Channel, FCoE
High Availability features: Dual Supervisors, ISSU, Spine/Leaf Design
Data Center Interconnect (DCI): Overlay Transport Virtualization (OTV), MPLS, VPLS
Fabric Integration and Orchestration: Programmability, APIs, Controller-based options
Server and Storage Needs Drive Design Choices
Virtualization Requirements
– vSwitch/DVS/OVS
– Nexus 1000v
– APIs/Programmability/Orchestration
Connectivity Model
– 10 or 1-GigE Server ports
– NIC/HBA Interfaces per-server
– NIC Teaming models
Form Factor
– Unified Computing Fabric
– 3rd Party Blade Servers
– Rack Servers (Non-UCS Managed)
Storage Protocols
– Fibre Channel
– FCoE
– IP (iSCSI, NAS)
Data Center Fabric Requirements
• Varied “North-South” communication needs with end-users and external entities.
• Increasing “East-West” communication: clustered applications and workload mobility.
• High throughput and low latency requirements.
• Increasing high availability requirements.
• Automated provisioning and control with orchestration, monitoring, and management tools.
[Diagram: the Data Center Fabric carrying East-West traffic between server/compute and storage (FC, FCoE, iSCSI/NAS), and North-South traffic toward the enterprise network, Internet, public cloud, offsite DC/Site B, and mobile services, with orchestration/monitoring attached through APIs.]
Cisco Intercloud Fabric
Workload Portability for the Hybrid Cloud
[Diagram: Intercloud Fabric Director and Secure Extender connecting the private cloud to a Cisco Powered provider platform in the public cloud.]
Capabilities:
• Secure network extension
• Workload mobility
• Administration portal
• Workload management and cloud APIs
• Dev/Test
• Control of “Shadow IT”
• Capacity Augmentation
• Disaster Recovery
Agenda
• Introduction
• Data Center Requirements for Midsize Organizations
• Building an Access Pod
• Single-Layer Design Examples
• Moving to a Spine/Leaf Fabric
• Conclusion
Access Pod basics: Compute, Storage, and Network
[Diagram: an access pod built from an access/leaf switch pair, a UCS Fabric Interconnect system, and a storage array, uplinked to the Data Center aggregation or network core. “Different drawing, same components.”]
Access Pod Features: Virtual Port Channel (vPC)
[Diagram: physical vs. logical topology, comparing a non-vPC design (STP blocks redundant uplinks) with a vPC design where both uplinks actively forward.]
• vPC provides port-channel link aggregation across a pair of separate physical switches.
• Allows the creation of resilient Layer-2 topologies based on Link Aggregation.
• Spanning Tree Protocol (STP) is no longer the primary means of loop prevention.
• Provides more efficient bandwidth utilization since all links are actively forwarding.
• vPC maintains independent control and management planes.
• Two peer vPC switches are joined together to form a vPC domain.
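As an illustrative sketch only (the domain ID, keepalive addresses, and port-channel numbers are placeholders, not from the slide), a minimal vPC configuration on each peer switch looks roughly like this:
! Enable the required features on both peers
feature lacp
feature vpc
! vPC domain; peer-keepalive runs over a separate path (often mgmt0)
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
! Port-channel between the two peers carrying all vPC VLANs
interface port-channel1
  switchport mode trunk
  vpc peer-link
! vPC member port-channel toward a dual-attached device
interface port-channel20
  switchport mode trunk
  vpc 20
interface ethernet1/20
  channel-group 20 mode active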
Access Pod Features: Nexus 2000 Fabric Extension
• Using FEX provides Top-of-Rack presence in more racks with fewer points of management, less cabling, and lower cost.
• In a “straight-through” or single-homed FEX configuration, each Nexus 2000 FEX is only connected to one parent switch.
• FEX parent switch may be Nexus 5000, 6000, 7000, or 9000 Series.
• Nexus 2000 includes 1/10GigE TOR models with 10 or 40GigE uplinks, plus the B22 models for use in blade server chassis from HP, Dell, Fujitsu, and IBM.
Design note: Verify platform-specific FEX compatibility and scale numbers on cisco.com.
[Diagram: Nexus parent switch with single-homed Nexus 2000 FEX providing end/middle-of-row switching; servers attach with dual-NIC 802.3ad or dual-NIC active/standby teaming.]
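For illustration (the FEX number and uplink ports are placeholders), a single-homed FEX association on one parent switch is configured roughly as follows:
! Enable fabric extension and define the FEX (example ID 101)
feature fex
fex 101
  description Rack-1-TOR
! Fabric uplinks from the parent switch to the FEX
interface port-channel101
  switchport mode fex-fabric
  fex associate 101
interface ethernet1/1-4
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
! FEX host ports then appear on the parent as ethernet101/1/x
interface ethernet101/1/1
  switchport access vlan 10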
Nexus Fabric Features: Enhanced vPC (EvPC)
Dual-homed FEX with addition of dual-homed servers
• In an Enhanced vPC configuration, server NIC teaming configurations or single-homed server connections are supported on any port.
• No vPC ‘orphan ports’ on FEX in the design.
• All components in the network path are fully redundant.
• Supported FEX parent switches are Nexus 6000, 5600 and 5500.
• Provides flexibility to mix all three server NIC configurations (single NIC, Active/Standby and NIC Port Channel).
Design Notes:
Port Channel to active/active server is not configured as a “vPC”.
Nexus 7000 support for dual-homed FEX (without dual-homed servers) is targeted for NX-OS 7.1.
[Diagram: dual-homed FEX pair below a Nexus 6000/5600/5500 vPC pair, with single-NIC, active/standby, and 802.3ad dual-NIC servers attached.]
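A rough sketch of the dual-homed (Enhanced vPC) variant, assuming a vPC domain like the earlier example is already in place; the FEX ID and interfaces are placeholders, and the same configuration is applied on both parent switches:
! On BOTH Nexus 6000/5600/5500 parents
feature fex
fex 110
interface port-channel110
  switchport mode fex-fabric
  fex associate 110
  vpc 110
interface ethernet1/9-10
  switchport mode fex-fabric
  fex associate 110
  channel-group 110
! Per the design note, a port-channel to an 802.3ad server behind
! dual-homed FEX is a plain port-channel, not configured as a vPC
interface ethernet110/1/1
  channel-group 200 mode active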
Nexus Fabric Features: Unified Ports and FCoE
Seamless transport of both storage and data traffic at the server edge
Unified Ports:
• May be configured to support either native Fibre Channel or Ethernet
• Available on Nexus 5500/5600UP switches, or as an expansion module on Nexus 6004.
Fibre Channel over Ethernet (FCoE):
• FCoE allows encapsulation and transport of Fibre Channel traffic over a shared Ethernet network
• Traffic may be extended over Multi-Hop FCoE, or directed to an FC SAN
• SAN “A” / “B” isolation is maintained across the network
[Diagram: servers with CNAs carrying FCoE over Ethernet links to Nexus Ethernet/FC switches; any unified port can be configured for Ethernet or Fibre Channel traffic, with native Fibre Channel links toward the SAN-A/SAN-B fabrics and disk array.]
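To make this concrete, a hedged sketch of unified-port and FCoE provisioning on a Nexus 5500/5600; slot/port numbers, VLAN/VSAN IDs, and the server-facing interface are placeholders, and changing a port type requires a reload:
! Carve the last unified ports as native Fibre Channel
slot 1
  port 47-48 type fc
! FCoE toward a server CNA: map a dedicated FCoE VLAN to a VSAN
feature fcoe
vsan database
  vsan 10
vlan 100
  fcoe vsan 10
! Virtual Fibre Channel interface bound to the server-facing port
interface vfc110
  bind interface ethernet1/10
  no shutdown
vsan database
  vsan 10 interface vfc110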
Planning Physical Data Center Pod Requirements
• Map physical Data Center needs to a flexible fabric topology.
• Plan for growth in a modular, pod-based repeatable fashion.
• Your own “pod” definition may be based on compute, network, or storage requirements.
• Access Pod TOR switching becomes the leaf switches of a spine/leaf topology.
• How many current servers/racks, and what is the expected growth?
[Diagram: example pod racks. Compute rack: (32) 1RU rack servers with (2) N2232 FEX. Network/storage rack: (2) N5672UP, storage arrays, terminal server/management switch, and patching. Today's server racks grow into tomorrow's Data Center floor.]
Data Center Service Integration Approaches
[Diagram: service integration options below the network core: physical DC service appliances (firewall, ADC/SLB, etc.) at the Layer 2/3 boundary, and virtual DC services in software on virtualized servers with Nexus 1000v and vPath.]
Data Center Service Insertion Needs
• Firewall, Intrusion Prevention
• Application Delivery, Server Load Balancing
• Network Analysis, WAN Optimization
Physical Appliances/Switch Modules
• Typically introduced at the Layer 2/3 boundary: spine/aggregation or “services leaf” switches.
• Traffic direction with VLAN provisioning, Policy-Based Routing, or WCCP.
Virtualized Services
• Deployed in a distributed manner along with virtual machines.
• Traffic direction with vPath and Nexus 1000v.
• Application Centric Infrastructure (ACI) provides an automated framework for service insertion.
Agenda
• Introduction
• Data Center Requirements for Midsize Organizations
• Building an Access Pod
• Single-Layer Design Examples
• Moving to a Spine/Leaf Fabric
• Conclusion
Single Layer DC, Fixed/Semi-Modular Switching
[Diagram: Nexus 5600 pair at the Layer 2/3 boundary with Nexus 2000 FEX, 10-GigE and 1-Gig/100M attached UCS C-Series servers, FC/FCoE/iSCSI/NAS storage, and uplinks to Client Access and WAN/DCI.]
Nexus 5600 Data Center Switches:
• 5672UP: 1RU, 48 1/10GE + 6 QSFP (16 Unified Ports)
• 56128P: 2RU, 48 1/10GE + 4 QSFP, 2 expansion slots (24 1/10GE-Unified Port + 2 QSFP module available)
Non-blocking, line-rate Layer-2/3 switching with low latency ~1 µs.
FCoE plus 2/4/8G Fibre Channel options.
Hardware-based Layer-2/3 VXLAN, NVGRE.
Dynamic Fabric Automation (DFA) capable.
Design Notes:
DCI (OTV, MPLS, VPLS, LISP) may be provisioned through separate Nexus 7000 or ASR WAN Routers
ISSU not supported with Layer-3 on Nexus 5000/6000
Single Layer switching plus FEX design
Nexus 5672UP: 48 x 10GE ports plus 6 40GE QSFP.
Using 2232PP FEX: 32 1/10GE host, 8 x 10 GE uplinks.
Example connects all 8 uplinks per FEX (4 ports to each parent switch) to maintain 4:1 oversubscription.
N5672UP: 48 x 10 GE / 4 uplinks = 12 FEX
Using 4 x 10GigE breakout on each QSFP provides 24 additional 10GigE ports to account for:
• Uplinks to Client Access, WAN/DCI
• vPC Peer link
• Storage arrays, Service Devices
• Direct-attached 10GigE Servers
Result: 12 x 2232 FEX = 384 1/10 GE host ports (platform limit: 24 FEX).
[Diagram: Nexus 5600 pair at the Layer 2/3 boundary with 12 FEX, FC/FCoE/iSCSI/NAS storage, and uplinks to Client Access and WAN/DCI.]
Single Layer Data Center plus UCS Fabric
Alternate Server Edge 1: UCS Fabric Interconnects
Typically 4 – 8 UCS Chassis per Fabric Interconnect (FI) pair (maximum is 20).
UCS Manager can also manage UCS C-Series rack servers on the same system.
Add UCS Director to provide management and orchestration of the unified infrastructure.
Example DC Switching Components:
• 2 x Nexus 5672UP
• Layer 3 and Storage Licensing
• 2 x Nexus 2232PP/TM-E
Optional design: direct-attach storage to UCS Fabric Interconnect
[Diagram: Nexus 5672UP pair at the Layer 2/3 boundary with UCS Fabric Interconnects, UCS blade chassis, UCSM-managed C-Series rack servers, FC/FCoE/iSCSI/NAS storage, and uplinks to Client Access and WAN/DCI.]
Single Layer Data Center with Nexus B22 FEX
Alternate Server Edge 2: Third-Party Blade Server
B22 FEX allows Fabric Extension directly into compatible 3rd-party blade chassis.
Provides consistent network topology for multiple 3rd-party blade systems.
FCoE on FEX uplinks, or MDS/Nexus SAN connected to server HBAs.
Example Components:
• 2 x Nexus 5672UP
• L3 and Storage Licensing
• 4 x Nexus B22
HP, Dell, Fujitsu, and IBM blade chassis supported, see data sheet for model specifics.
[Diagram: Nexus 5672UP pair at the Layer 2/3 boundary with Cisco B22 FEX for blade chassis access, UCS C-Series servers, FC/FCoE/iSCSI/NAS storage, and uplinks to Client Access and WAN/DCI.]
Single Layer Data Center, Modular Chassis
• Nexus 7700 example topology; common ASICs and software shared with the Nexus 7000 platform.
• Concurrent support for: DCI capability with OTV, LISP, MPLS, VPLS; FabricPath, FCoE, and FEX.
• Dual-Supervisor High Availability.
• Layer-2/3 In Service Software Upgrade (ISSU)
• Virtual Device Contexts (VDC)
• Layer-2/3 VXLAN in hardware on F3 card.
• Dynamic Fabric Automation support; NX-OS 7.1
Design Notes:
For native Fibre Channel add Nexus/MDS SAN.
Nexus 7000 FCoE directly to FEX: support planned for NX-OS 7.1.
[Diagram: Nexus 7706 pair (high-availability, modular 1/10/40/100 GigE) at the Layer 2/3 boundary with spine+leaf and OTV VDCs, FEX and 10/1-Gig attached UCS C-Series servers, iSCSI/NAS storage, and uplinks to Client Access and the WAN.]
Single Layer Data Center, ACI-Ready Platform
[Diagram: Nexus 9396PX pair at the Layer 2/3 boundary with FEX, 10-GigE and 1-Gig/100M attached UCS C-Series servers, iSCSI/NAS storage, and uplinks to Client Access and WAN/DCI.]
Nexus 9000 switching platforms enable migration to Application Centric Infrastructure (ACI).
May also be deployed in standalone NX-OS mode (without APIC controller).
• 9396PX: 48 1/10GigE SFP+ ports, 12 QSFP
• 9504: Small-footprint HA modular platform
• Basic vPC and straight-through FEX supported as of NX-OS 6.1(2)I3(1)
• VXLAN Layer-2/3 in hardware
• IP-based storage support
• Low latency, non-blocking Layer-2/3 switching ~1µs
Design Notes:
OTV, LISP DCI may be provisioned through separate Nexus 7000 or ASR 1000 WAN Routers.
Fibre Channel or FCoE support requires separate MDS or Nexus 5500/5600 SAN switching. (Future FCoE capable)
ISSU support on 9300 targeted for 2HCY14.
Working with 40 Gigabit Ethernet
[Photos: QSFP-40G-SR4 with direct MPO and 4x10 MPO-to-LC duplex splitter fiber cables; QSFP-40G-CR4 direct-attach cables; QSFP+ to 4-SFP+ direct-attach (splitter) cables.]
Nexus family switches support QSFP-based 40 Gigabit Ethernet interfaces.*
On most platforms, splitter cables can be used to provision 4x10GigE ports out of 1 QSFP.*
40 Gigabit Ethernet cable types:
• Direct-attach copper [QSFP <-> QSFP] and [QSFP <-> 4 x SFP+]. Passive cables at 1/3/5m, active cables at 7 and 10m.
• SR4 transmits over 4 parallel fiber pairs within a 12-fiber MPO/MTP connector to reach up to 100/150m on multimode OM3/OM4.
• CSR4 is a higher-powered SR4 optic with reach up to 300/400m on multimode OM3/OM4.
• LR4 uses CWDM to reach up to 10km on a single-mode fiber pair.
* Verify platform-specific support of capabilities and roadmap
QSFP-BIDI vs. QSFP-40G-SR4
Run 40 GigE over existing duplex multimode cable plant
• QSFP-BIDI: 2 x 20G over duplex (two-strand) multimode fiber with duplex LC connectors at both ends. Using duplex multimode fiber lowers the cost of upgrading from 10G to 40G by leveraging the existing 10G multimode infrastructure.
• QSFP-40G-SR4: 4 x 10G over 12-fiber ribbon cable with MPO connectors at both ends. Higher cost to upgrade from 10G to 40G due to the 12-fiber infrastructure.
Configuration Best Practices: vPC with Layer-2, Layer-3
vPC Options: Auto-Recovery
• By default, both parents must be present for a newly connected vPC to be brought active.
• Auto-recovery allows vPCs to be established with only a single parent present.
• Addresses multiple scenarios:
– A power failure with a partial restore where only one parent switch is present.
– New vPC-attached devices being configured or powered on during a hardware issue with one of the parent switches.
– Ongoing operations based on either the configured vPC primary or secondary parent when one is down for any reason.
N6004-a(config)# vpc domain 10
N6004-a(config-vpc-domain)# auto-recovery
vPC Options: Orphan Ports Suspend
• An orphan port is a device attached to only one member of a vPC pair.
• Intended for devices that do not support port-channel. Other devices should be dually connected by vPCs.
• If the vPC peer-link were to go down, the vPC secondary peer device shuts all its vPC member ports as well as designated orphan ports.
• Configure switch ports for single-attached devices (like a firewall or load balancer) as orphan ports.
• Configuration allows consistent behavior of orphan ports with vPC member ports.
• Active/Standby Server NIC teaming also uses Orphan Ports.
[Diagram: vPC pair (S1 primary, S2 secondary) with peer-link and keepalive; a server with active/standby NIC teaming connects via orphan ports.]
S1(config)# int eth 1/1
S1(config-if)# vpc orphan-ports suspend
S2(config)# int eth 1/1
S2(config-if)# vpc orphan-ports suspend
vPC Options: vPC Peer Switch
Unifies Spanning Tree processing across vPC peers
• For use on vPC pairs acting as root bridge of an STP domain (Not needed on FabricPath edge)
• Allows ongoing STP processing without a root bridge transition in the event of a switch failure.
• STP configuration and priority settings must be identical on both peer switches
• vPC Peer-link operates in forwarding state for all vPC VLANs
[Diagram: physical vs. logical representation of a vPC pair (S1 primary, S2 secondary) with peer-switch enabled; the pair presents a single STP root to vPC-attached switches.]
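A minimal sketch of enabling peer-switch, with identical STP priority on both peers (domain ID and priority values are placeholders):
! Identical on both vPC peers acting as the STP root
spanning-tree vlan 1-100 priority 4096
vpc domain 10
  peer-switch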
vPC Options: vPC Peer Gateway
Non-RFC-compliant end hosts:
• The vPC peer-gateway functionality allows a vPC switch to act as the active gateway for packets that are addressed to the router physical MAC address of its vPC peer.
• Some non-compliant devices reply to the MAC address of the sending device (the switch physical MAC instead of the virtual MAC).
• Certain NAS devices (e.g. NetApp Fast-Path or EMC IP-Reflect) have been found to do this.
vPC Peer Gateway feature:
• Allows a vPC peer to respond to both the HSRP virtual MAC and the real MAC address of both itself and its peer.
[Diagram: Switch A and Switch B at the Layer 2/3 boundary above Layer-2 access (VLANs 100 and 200); each owns its physical IP/MAC, shares the virtual IP/MAC, and knows the peer's physical MAC.]
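The feature itself is a single knob under the vPC domain; a sketch assuming the example domain ID used earlier:
vpc domain 10
  ! Forward traffic addressed to the peer's physical router MAC locally
  peer-gateway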
vPC Options: ip arp synchronize
• When the peer-link adjacency is first established, an ARP bulk sync is performed to the peer switch using CFS over Ethernet.
• Improves convergence times for Layer-3 flows after recovery of a peer relationship.
[Diagram: primary and secondary vPC peers with identical ARP tables (IP1/MAC1 in VLAN 100, IP2/MAC2 in VLAN 200) synchronized for their SVIs.]
S1(config-vpc-domain)#
ip arp synchronize
S2(config-vpc-domain)#
ip arp synchronize
Routing Protocol Peering between vPC Peers
Deployment specifics for Nexus Switches
• Nexus 5000/6000 series only support using a VLAN over the vPC Peer-Link
• Do not provision a separate physical link for router peering on Nexus 5000/6000
[Diagrams: a Nexus 5000/6000 vPC pair peering over Layer-3 SVIs on a VLAN carried across the shared peer-link, vs. a Nexus 7000 vPC pair peering over a separate Layer-3 physical port-channel; both connect to a Layer-3 core with a Layer-2 Nexus access layer below.]
• Nexus 7000 series allow use of a separate physical port channel for Layer-3 Peering
• This is optional and can provide greater control of behavior for service integrations
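On Nexus 5000/6000, that peering rides an SVI on a VLAN allowed across the peer-link; a sketch with placeholder VLAN, addressing, and OSPF values (the second peer would use the other address in the /30):
feature interface-vlan
feature ospf
router ospf 1
! Dedicated Layer-3 peering VLAN, carried on the vPC peer-link
vlan 3900
  name L3-peering
interface vlan 3900
  no shutdown
  ip address 10.255.255.1/30
  ip router ospf 1 area 0.0.0.0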
Agenda
• Introduction
• Data Center Requirements for Midsize Organizations
• Building an Access Pod
• Single-Layer Design Examples
• Moving to a Spine/Leaf Fabric
• Conclusion
Designing Switching with Oversubscription: Balancing Cost and Performance
Oversubscription:
• Most servers will not consistently fill a 10 GigE interface.
• A switch may be a line-rate, non-blocking device, but still introduce oversubscription into an overall topology by design.
• Consider Ethernet-based storage traffic when planning ratios; keep plans on the conservative side.
Example device numbers, assuming all ports connected:
• Nexus 5672UP: 48x10Gig + 6x40Gig uplink = 48:24, or 2:1 oversubscription
• Nexus 2232PP FEX: 32x10Gig + 8x10Gig uplink = 32:8, or 4:1 oversubscription
Actual oversubscription can be controlled by how many ports and uplinks are physically connected.
[Diagram: spine/leaf/FEX/server tiers with example oversubscription of 3:1 at the leaf-to-spine layer and 4:1 at the FEX layer.]
Value of FabricPath/vPC+ in Spine/Leaf Designs
Adding FabricPath to a traditional physical DC topology
[Diagram: FabricPath between the spine and leaf tiers, with vPC+ at the leaf edge toward FEX and UCS rack servers.]
vPC becomes vPC+ when used at the edge of a FabricPath network; the peer-link also runs FabricPath.
FabricPath Benefits:
• Topology flexibility beyond the vPC limitation of using switches in pairs.
• Ease of configuration.
• Completely eliminates STP from running between Leaf and Spine.
• No Orphan Port isolation on Leaf switch vPC Peer-link loss.
• Improved Multicast and routing support with vPC+.
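As an illustrative sketch (switch IDs, VLANs, and interfaces are placeholders), enabling FabricPath on a leaf and converting an existing vPC domain to vPC+ looks roughly like this:
! Enable FabricPath on each switch in the fabric
install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 101
! VLANs that will be carried over FabricPath
vlan 10-20
  mode fabricpath
! Core links toward the spine (and the vPC+ peer-link) run FabricPath
interface ethernet1/49-50
  switchport mode fabricpath
! vPC becomes vPC+ by assigning an emulated switch-id to the domain
vpc domain 10
  fabricpath switch-id 1001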
VXLAN Overlay Encapsulation
Dynamic network segmentation across traditional Layer-3 boundaries
[Diagram: VXLAN encapsulation carried between VTEPs: Outer MAC Header | Outer IP Header | Outer UDP Header | VXLAN Header | Original Ethernet Frame | FCS.]
• Overlay encapsulations allow fabric segmentation beyond VLAN limits for greater flexibility and scale.
• Software-only VXLAN implementations can provide Layer-2 workload mobility, but with limited visibility into the physical network.
• Nexus 9000, 7000-F3, 6000X, and 5600 platforms support VXLAN in hardware.
• An optimal control plane will utilize the benefits of VXLAN encapsulation, while integrating directly with the underlying physical network.
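For reference, a hedged sketch of a hardware VTEP in standalone NX-OS using multicast-based flood-and-learn; VLAN, VNI, loopback, and multicast group values are placeholders, and platform and underlay requirements (e.g. PIM in the underlay) should be verified:
feature nv overlay
feature vn-segment-vlan-based
! Map a VLAN to a VXLAN Network Identifier (VNI)
vlan 100
  vn-segment 10100
! Loopback used as the VTEP source address
interface loopback0
  ip address 10.1.1.1/32
! Network Virtualization Edge (VTEP) interface
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100 mcast-group 239.1.1.100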
Dynamic Fabric Automation (DFA)
Modular building blocks for migration to an automated fabric
[Diagram: DFA fabric with spine (Nexus 7k/6k), leaf (Nexus 7k/6k/5k), and border-leaf (Nexus 7k/6k) roles, DCNM as the DFA central point of management, and connections to WAN/DCI and Client Access.]
• Integrates with cloud orchestration platforms and supports dynamic workload mobility.
• Provides a distributed default gateway in the leaf layer to handle traffic to and from any subnet or VLAN.
• Implements segment-id in frame header to eliminate hard VLAN scale limits and support multi-tenancy.
• Provides central point of fabric management (CPOM) for network, virtual-fabric and host visibility.
• Auto-configuration of new switches to expand the fabric using POAP, also provides cable plan consistency checking.
Application Centric Infrastructure (ACI)
APIC controller-managed fabric based on Nexus 9000 hardware innovations
• Centralized provisioning and abstraction layer for control of the switching fabric.
• Simplified automation with an application-driven policy model.
• Controller provides policy to switches in the fabric but is not in the forwarding path.
• Normalizes traffic to a VXLAN encapsulation with Layer-3 Gateway and optimized forwarding.
• Decouples endpoint identity, location, and policy from the underlying topology.
• Provides for service insertion and redirection.
[Diagram: ACI fabric of Nexus 9000 spine and leaf switches (including border-leaf) managed by an Application Policy Infrastructure Controller (APIC) cluster, with connections to WAN/DCI and Client Access.]
Spine/Leaf Data Center and Dual-tier Switching Design Examples
Migration from Single-Layer to Spine/Leaf Fabric
• Larger switches are more likely to become the spine (or aggregation) layer.
• Smaller switches are more likely to become the leaf/access layer.
• Layer-3 gateway can migrate to spine switches or to “border-leaf” switch pair.
• Spine switches can support leaf switch connections, plus some FEX and direct-attached servers during migration.
[Diagram: single-layer designs (a Nexus 7000/6004/9500 pair or a Nexus 5000/9300 pair) growing into a spine/leaf Data Center fabric.]
Expanded Spine/Leaf Nexus Data Center Fabric
Introduction of a Spine layer and FabricPath forwarding
Data Center switching control plane distributed over Dual Layers.
• Spine: FabricPath switch-id based forwarding, but also providing Layer-3 and service integration.
• Leaf: Physical TOR switching or FEX aggregation for multiple racks.
Multi-hop FCoE with dedicated links.
Example Components:
• 2 x Nexus 6004, 2 x Nexus 5672UP
• Layer-3 and Storage Licensing
• 12 x Nexus 2232PP/TM-E
FabricPath enabled between tiers for configuration simplicity and future expansion.
[Diagram: Nexus 6004 spine pair at the Layer 2/3 boundary with FabricPath forwarding to Nexus 5600 leaf switches, 10/1-Gig attached UCS C-Series servers, FC/FCoE/iSCSI/NAS storage, and WAN/DCI uplinks.]
Adding Access Pods to Grow the Fabric
Modular expansion with added leaf-switch access pods
[Diagram: Nexus 6004 spine pair at the Layer 2/3 boundary with additional Nexus 5600 leaf pairs providing rack-server access with FEX, FCoE/iSCSI/NAS storage, and WAN/DCI uplinks.]
Data Center switching control plane distributed over Dual Layers.
• Spine: FabricPath switch-id based forwarding, but also providing Layer-3 and service integration.
• Leaf: Physical TOR switching or FEX aggregation for multiple racks.
Multi-hop FCoE with dedicated links.
Example Components:
• 2 x Nexus 6004, 4 x Nexus 5672UP
• Layer-3 and Storage Licensing
• 24 x Nexus 2232PP/TM-E
FabricPath enabled between tiers for configuration simplicity and future expansion.
Modular, High Availability Data Center Fabric
Virtual Device Contexts partitioning the physical switch
[Diagram: Nexus 7700 spine pair partitioned into Core, Spine, Storage, and OTV VDCs at the Layer 2/3 boundary, with leaf switches providing rack-server access with FEX, FCoE/iSCSI/NAS storage, and WAN connectivity.]
Nexus 7700 FabricPath Spine, 5672UP Leaf
• High Availability spine-switching design with dual-supervisor.
• VDCs allow OTV and Storage functions to be partitioned on common hardware.
• Add leaf pairs for greater end node connectivity.
• Add spine nodes for greater fabric scale and HA.
• FCoE support over dedicated links and VDC.
Specific Nexus features utilized:
• Integrated DCI support with OTV, LISP, MPLS, and VPLS.
• Feature-rich switching fabric with FEX, vPC, FabricPath, FCoE.
• Investment protection of a chassis-based switch.
FabricPath with vPC+ Best Practices Summary
• Manually assign FabricPath physical switch IDs to easily identify switches for operational support.
• Configure all leaf switches with STP root priority, or use pseudo-priority to control STP.
• Ensure all access VLANs are “mode fabricpath” to allow forwarding over the vPC+ peer-link which is a FabricPath link.
• Use vPC+ at the Layer-3 gateway pair to provide active/active HSRP, or use Anycast HSRP.
• Set FabricPath root-priority on the Spine switches for multi-destination trees.
• Enable overload-bit under FabricPath domain to delay switch forwarding state on insertion into fabric “set-overload-bit on-startup <seconds>”
[Diagram: vPC+ domains 10 and 100 with FabricPath switch IDs 101 and 102 assigned to the member switches.]
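Pulling these recommendations together, a hedged configuration sketch (IDs, priorities, and timers are placeholders):
! Leaf/edge switch: explicit switch-id, vPC+ emulated switch-id, STP priority
fabricpath switch-id 101
vpc domain 100
  fabricpath switch-id 1001
spanning-tree vlan 10-20 priority 4096
fabricpath domain default
  ! Delay forwarding when the switch is inserted into the fabric
  set-overload-bit on-startup 180
! Spine switch: prefer it as root for multi-destination trees
fabricpath domain default
  root-priority 255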
Dual-Layer, ACI-Ready Data Center
NX-OS Standalone mode with vPC
• NX-OS standalone mode allows deployment of a vPC multi-tier switching architecture without using the APIC controller.
• Allows expansion beyond a single pair of leaf switches with a 40GigE fabric using hardware that is ACI-ready.
• vPC is used for connectivity between switching layers in a traditional Aggregation/Access topology.
Design Notes:
Layer-3 connectivity and services would move to designated leaf switches in an ACI fabric.
IP-based storage now, future support of FCoE.
[Diagram: Nexus 9500 pair at the Layer 2/3 boundary with Nexus 9396PX leaf switches connecting UCS blade systems and rack servers/FEX, iSCSI/NAS storage, and uplinks to Client Access and WAN/DCI.]
Data Center Fabric Provisioning and Operations
Evolving toolsets and platforms to utilize based on YOUR requirements.
[Diagram: provisioning options for the Nexus switching fabric: open programmability (onePK with Python/Java/C, NX-API/JSON), automated and optimized networking (DCNM, OpenDaylight controller), and Application Centric Infrastructure deployment (APIC with the OpFlex policy protocol).]
Summary: Scalable Midsize Data Center Designs
• Midsize Data Centers can benefit from the same technology advances as larger ones.
• Smaller Nexus platforms allow building feature-rich fabrics at an entry-level scale.
• Example designs allow future migration to controller-based fabric solutions such as DFA and ACI.
• Both direct programmability and controller-based options are available for all Nexus switching platforms.
• Plan ahead for re-use of components in new roles as needs change.
Related Sessions
Session-ID Session Name
BRKDCT-3445 Building Scalable Networks with NX-OS and Nexus 7000
BRKDCT-2081 Cisco FabricPath Technology and Design
BRKDCT-2121 Virtual Device Context (VDC) Design and Implementation
BRKDCT-2334 Real World Data Center Deployments and Best Practices
BRKDCT-2385 Cisco Dynamic Fabric Automation Architecture
BRKDCT-2404 VXLAN Deployment Models
BRKDCT-2000 Introduction to Application Centric Infrastructure
BRKDCT-2006 Integration of Hypervisors and L4-7 Services into an ACI Fabric
Complete Your Online Session Evaluation
• Give us your feedback and you could win fabulous prizes. Winners announced daily.
• Complete your session evaluation through the Cisco Live mobile app or visit one of the interactive kiosks located throughout the convention center.
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
Continue Your Education
• Demos in the Cisco Campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings