TRANSCRIPT
Next Generation Data Center Networking
Intelligent Information Network
Ami Ben-Amram, Systems Engineering Consultant
[email protected]
Cisco Israel
© Copyright 2008 EMC Corporation. All rights reserved.
Transparency in the Eye of the Beholder
With virtualization, VMs have a transparent view of their resources…
…but it's difficult to monitor and apply network and storage policy back to virtual machines.
Scaling globally depends on maintaining transparency while also providing operational consistency.
Why the Network is Changing
1. Desire for VM-level access-layer policy and monitoring
2. Virtualization is driving higher link utilization
3. More demanding role of the network (i.e., DRS)
4. Current approaches lead to inconsistent network policies
VN-Link Brings VM-Level Granularity
Problems:
• VMotion may move VMs across physical ports; policy must follow
• Impossible to view or apply policy to locally switched traffic
• Cannot correlate traffic on physical links coming from multiple VMs
VN-Link:
• Extends the network to the VM
• Consistent services
• Coordinated, coherent management
• Continuum of deployment options
[Diagram: VMotion moving a VM between servers on VLAN 101]
What is VN-Link?
VN-Link, or Virtual Network Link, is a term that describes a new set of features and capabilities that enable VM interfaces to be individually identified, configured, monitored, migrated, and diagnosed.
The term literally refers to a VM-specific link that is created between the VM and the Cisco switch. It is the logical equivalent and combination of a NIC, a Cisco switch interface, and the RJ-45 patch cable that hooks them together.
VN-Link requires platform support for Port Profiles, Virtual Ethernet Interfaces, Virtual Center Integration, and Virtual Ethernet mobility.
[Diagram: hypervisor VNICs mapped to virtual Ethernet (VETH) interfaces]
VN-Link With the Cisco Nexus 1000V
Cisco Nexus 1000V: Software Based
• Industry's first third-party ESX switch, built on Cisco NX-OS
• Compatible with switching platforms
• Maintains the VirtualCenter provisioning model unmodified for server administration, but also allows network administration of the Nexus 1000V via the familiar Cisco NX-OS CLI
• Announced at VMworld 2008; shipping 2Q09
Policy-Based VM Connectivity · Non-Disruptive Operational Model · Mobility of Network and Security Properties
[Diagram: VMs #1 through #4 on a VMware ESX server, NICs uplinked through the Nexus 1000V to the LAN]
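As a sketch of how VM-level policy is expressed through the NX-OS CLI mentioned above: on the Nexus 1000V, a port profile bundles network and security settings that VirtualCenter then exposes as a port group. The profile name is hypothetical; VLAN 101 echoes the earlier diagram, and exact syntax varies by release:

```
! Illustrative Nexus 1000V port profile (name and VLAN are examples)
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 101
  no shutdown
  state enabled
```

Because the policy lives in the profile rather than on a physical port, it stays attached to the VM's virtual Ethernet interface when VMotion moves the VM to another host.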
VN-Link with the Nexus 5000
Nexus Switch with VN-Link: Hardware Based
• Allows scalable hardware-based implementations through hardware switches
• Standards-based initiative: Cisco and VMware proposal in IEEE 802 to specify "Network Interface Virtualization"
• Combines VM and physical network operations into one managed node
• Future availability on the Nexus 5000
Policy-Based VM Connectivity · Non-Disruptive Operational Model · Mobility of Network and Security Properties
[Diagram: VMs #1 through #4 on a VMware ESX server connected via VN-Link to the Nexus switch]
Future Cisco VN-Link Architecture
[Diagram: two servers, each hosting VMs #1 through #8 on hypervisors, attached in software via the Nexus 1000V VEM or in hardware via a FEX (blade or TOR), all managed through the Nexus 5000 and VSM]
Flexible VM Connectivity
• SW- or HW-based options
• Blade- or rack-optimized servers; 1 or 10G connectivity
Unified Management (N5K)
• Manage the entire access-layer infrastructure through the N5K (virtual, blades, FEX)
• Configuration, policy, and statistics managed centrally, exposed to the server admin for VM application
Scalable Services
• Distributed SW-based: feature velocity
• Centralized HW-based: high performance
• Services offload: CPU-intensive features handled in the N5K
Centralized and Local Switching
• Supports both centralized (VNTag) and local (N1K) switching
Cisco Virtualization-Centric Networking
1. Virtualization-aware access layer
2. Policy-based network management
3. Large-scale virtual machine mobility
Network Scale Virtualization
Virtualization progresses from cluster scale, to data center scale, to network scale, with VMotion within and across data centers and out to the service provider.
[Diagram: VMotion between data centers and a service provider]
Towards Cloud Computing
• Standards-based virtualization at network scale
• Transparent interoperability between on-premise and off-premise computing (e.g., VDI and DR)
• Enterprise and service provider use cases
[Diagram: virtualize at network scale, from the on-premise data center to the service provider]
VM and Blade Servers Optimized SAN
Virtual Machines (VM) and Storage Networking
Virtual machines pose new requirements for SANs.
Switching Performance
• Support complex, unpredictable, dynamically changing traffic patterns
• Provide fabric scalability for higher workloads
• Differentiate Quality of Service on a per-VM basis
Deployment, Management, Security
• Create flexible and isolated SAN sections; support management access control
• Support performance monitoring, trending, and capacity planning up to each VM
• Allow VM mobility without compromising its security
[Diagram: virtualized servers and virtual machines connected through the fabric to storage arrays, Tiers 1 through 3]
Virtual-Machine-Transparent MDS 9000 SAN
Switching infrastructure to support growing VM bandwidth
• Flexibility, performance, density, and security
• 8 Gbps Fibre Channel
• Investment protection
VN-Link storage services for a VM-optimized SAN
• Per-VM unique HBA association (NPIV)
• Per-VM Quality of Service
• Per-VM security, performance monitoring, and management
• Each VM can belong to a different VSAN (F-port trunking)
Blade server optimized SAN
• N-Port Virtualizer (NPV)
• FlexAttach
• F-port PortChannel, F-port trunking
High-Performance MDS 9000 Family Switching Architecture
• Crossbar and arbiter architecture designed to provide the best performance in the most difficult traffic conditions
• Virtual Output Queues (VOQs) eliminate head-of-line blocking
• Even and predictable throughput and latency for many-to-one and many-to-few traffic conditions
• 100% wirespeed for both large and small frames
• Fair load balancing for both large and small frames
[Diagram: centralized crossbar switch fabric with VOQs on the external interfaces]
QoS for Individual Virtual Machines
Zone-based QoS: VM-1 has priority; VM-2 and any additional traffic have lower priority. VM-1 reports better performance than VM-2.
[Diagram: two VMs on a hypervisor (pWWN-V1 high priority, pWWN-V2 low priority) behind a Cisco MDS 9124 Multilayer Fabric Switch, crossing a congested link to a Cisco MDS 9222i Multilayer Fabric Switch and the storage array (pWWN-T), with QoS and IVR applied along the path]
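The zone-based QoS above can be sketched in MDS CLI terms roughly as follows. The zone name, VSAN number, and pWWN values are placeholders standing in for pWWN-V1 and pWWN-T from the diagram, and exact syntax varies by MDS software release:

```
! Illustrative MDS zone-based QoS (placeholder names, VSAN, and pWWNs)
qos enable
zone name VM1_HIGH vsan 10
  attribute qos priority high
  ! virtual HBA of the high-priority VM (stands in for pWWN-V1)
  member pwwn 21:00:00:e0:8b:00:00:01
  ! storage target port (stands in for pWWN-T)
  member pwwn 50:06:01:60:00:00:00:01
```

Traffic between members of this zone is then scheduled at high priority across the congested link, while VM-2's zone (and unzoned traffic) falls into a lower class.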
MDS 9000 Family Virtual SANs (VSANs)
• Hardware-based isolation of tagged traffic belonging to different VSANs
– Traffic is tagged at Fx_Port ingress, carried across links between switches, and the VSAN header is removed at the egress point
– Any switch interface in the fabric can be placed in any VSAN
• Independent instance of Fibre Channel services for each newly created VSAN
– Zone server, name server, management server, principal switch election, etc.
– Each service runs independently and is managed and configured independently
• An Enhanced ISL (EISL) trunk carries tagged traffic from multiple VSANs
[Diagram: Fibre Channel services for the Blue and Red VSANs on each switch; the VSAN header is added at the ingress point based on port membership and removed at the egress point]
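A minimal sketch of VSAN provisioning on an MDS switch; the VSAN numbers, names, and interfaces are illustrative:

```
! Illustrative VSAN setup (numbers, names, and interfaces are examples)
vsan database
  vsan 10 name BLUE_VSAN
  vsan 20 name RED_VSAN
  ! any switch interface can be placed in any VSAN
  vsan 10 interface fc1/1
  vsan 20 interface fc1/2
```

Each VSAN created this way automatically gets its own independent instance of the fabric services (zone server, name server, and so on) described above.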
Fully Extending Fabric Virtualization to Virtual Machines
• NPIV allows each virtual machine (VM) to be associated with a unique virtual HBA
– VMs register independently via a unique pWWN and obtain a unique FCID
– Standards-based (ANSI T11)
• Separate fabric login by each VM enables VM-level:
– Zoning
– Security
– Traffic management
• A single physical FC link carries multiple virtual N-Ports
• Combined with F-port trunking, each VM can now belong to a different VSAN
[Diagram: a physical server hosting three virtual machines (ERP, E-Mail, Web), each with its own virtual N-Port over one physical FC link]
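On the switch side, NPIV is a feature that must be enabled before the hypervisor can perform the per-VM fabric logins described above; a minimal sketch, with syntax that varies by MDS/NX-OS release:

```
! Illustrative: allow multiple N-Port logins (one per VM) on a single F-port
feature npiv
```

With this enabled, each virtual HBA logs in with its own pWWN and receives its own FCID, so ordinary pWWN-based zoning applies per VM.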
F-Port Trunking
Extend VSAN tagging to the N_Port-to-F_Port connection at the end node.
• Hardware-based isolation of tagged traffic belonging to different VSANs, all the way to servers or storage devices
• VSAN-trunking support is required by end nodes: VSAN-trunking-enabled drivers (for example, on hosts)
• Traffic is tagged in the host depending on the VM; the VSAN header is added by the HBA driver, indicating virtual machine membership
• The VSAN header is removed at the egress point
[Diagram: a trunking F-Port carries Blue and Red VSAN traffic from the host; trunking E_Ports carry tagged traffic from multiple VSANs across Enhanced ISL (EISL) trunks between switches; each switch runs Fibre Channel services for both VSANs]
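The switch side of the trunking F-port above might be configured roughly as follows; interface and VSAN numbers are illustrative, and depending on the release an F-port trunking feature may first need to be enabled:

```
! Illustrative trunking F-port facing a VSAN-trunking-capable host HBA
interface fc1/1
  switchport mode F
  switchport trunk mode on
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20
```

The host-side HBA driver then tags each VM's frames with its VSAN, and the switch isolates the two VSANs in hardware on the shared link.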
N-Port Virtualizer (NPV)
Enabling Large-Scale Blade Server Deployments
• NPV simplifies deployment and management of large-scale blade server environments
– Reduces the number of Domain IDs
– Minimizes interoperability issues with core SAN switches
– Minimizes coordination between server and SAN administrators
• NPV converts a blade switch operating as an "FC switch" into an "FC HBA"; the blade switch is configured in NPV mode (i.e., HBA mode)
• NPV is available on the following platforms
– IBM and HP blade switches
– MDS 9124 and 9134 fabric switches
[Diagram: blade systems with blades 1 through N uplinked through NPV-mode blade switches to the SAN and storage]
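Switching an edge switch into the HBA-like mode described above is, in sketch form, a single feature toggle; note the caveat in the comment, and the core-facing switch must have NPIV enabled to accept the aggregated logins:

```
! Illustrative: put the blade/edge switch into NPV (HBA) mode
! Caution: enabling NPV mode typically erases the running configuration
! and reboots the switch
feature npv
```

After the reboot, the edge switch no longer consumes a Domain ID; it simply proxies its servers' fabric logins up to the NPIV-enabled core switch.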
Using Virtual Machines in Blade Servers with NPIV and the Cisco MDS Blade Switch Series
The individual blade server can use NPIV to provide the virtual servers with virtual HBAs.
[Diagram: server blades hosting VM01 through VM12, three virtual N-Ports per blade, connected through the Cisco MDS Blade Switch Series to a Cisco MDS 9000 Family core switch via nested NPIV (12 virtual N-Ports), reaching a disk array with 12 LUNs that may be mapped individually]
FlexAttach
Flexibility for Adds, Moves, and Changes
• FlexAttach (based on WWN NAT)
– Each blade switch F-port is assigned a virtual WWN
– The blade switch performs NAT operations on the real WWN of the attached server
• Benefits
– No SAN reconfiguration required when a new blade server attaches to a blade switch port: no blade switch config change, no switch zoning change, no array configuration change
– Provides flexibility for the server administrator by eliminating the need to coordinate change management with the networking team
– Reduces downtime when replacing failed blade servers
[Diagram: a new blade replacing blade N in a blade system, attached through an NPV blade switch to the SAN and storage]
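A rough sketch of how the WWN NAT above might be turned on; NPV mode is a prerequisite, and the exact FlexAttach command form depends on the software release:

```
! Illustrative FlexAttach setup on an NPV-mode blade switch
feature npv
! assign automatically generated virtual pWWNs to the server-facing ports
flex-attach virtual-pwwn auto
```

Since zoning and array LUN masking reference the port's stable virtual pWWN, a replacement blade inherits the old blade's SAN identity with no fabric changes.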
Enhanced Blade Switch Resiliency
F-Port PortChannels
• Bundle multiple ports into one logical link; any port, any module
• High availability (HA): blade servers are unaffected if a cable, port, or line card fails
• Traffic management: higher aggregate bandwidth, hardware-based load balancing
[Diagram: a blade system with an F-port PortChannel between the blade switch N-Ports and the core director F-Ports, out to the SAN and storage]
F-Port Trunking for the Blade Switch
• Partition an F-port to carry traffic for multiple VSANs
• Extend VSAN benefits to blade servers
– Separate management domains
– Separate fault isolation domains
– Differentiated services: QoS, security
[Diagram: a blade system with F-port trunking carrying VSANs 1 through 3 between the blade switch N-Port and the core director F-Port]
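The F-port PortChannel above might be sketched as follows on the core-director side; the feature name, channel number, and interface range are illustrative and vary by release (the NPV blade switch needs a matching bundle on its uplinks):

```
! Illustrative F-port PortChannel on the core director
feature fport-channel-trunk
interface port-channel 1
  switchport mode F
! bundle two member links into the one logical link
interface fc1/1 - 2
  channel-group 1 force
  no shutdown
```

If one member cable, port, or line card fails, traffic rehashes onto the surviving members and the blade servers' fabric logins stay up.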