Post on 19-Aug-2015
David Pasek, Virtualization Datacenter Infrastructure Architect
dpasek@cisco.com | Twitter: @david_pasek
Cisco Services, 2010
VMware Networking: Cisco VN-Link, Nexus 1000V, VMware PTS, Cisco VM-FEX
Transparency in the Eye of the Beholder
With virtualization, VMs have a transparent view of their resources…
…but it is difficult to correlate that view from a network point of view.
(Diagram: VMs above a hypervisor; the network sees only the hypervisor's physical ports.)
Server Virtualization Issues
1. VMotion moves VMs across physical ports, so the network policy must follow
2. It is impossible to view or apply network policy to locally switched traffic
3. Network and server admins need a shared nomenclature for security policies
(Diagram: vCenter managing two hypervisors; VM port groups on the server side map to physical switch interfaces on the network side.)
VMware Virtual Networking
Virtual Switch (vSwitchN)
• Connects to physical adapters (vmnicN)
  • 0, 1, 2, or more (up to 32) 1 Gb uplinks
  • Up to four 10 Gb uplinks
• Port connection types
  • Virtual machine network
  • VMkernel network: VMotion, iSCSI, NFS, host management (ESXi 4 only)
  • Service console: host management network (ESX 4 only, not ESXi 4)
• Port groups
  • Aggregate/segment virtual switch ports
  • Identified by network labels
  • Support VLAN tagging
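On a classic ESX 4 host, the constructs above map to the service-console `esxcfg-vswitch` command; a minimal sketch (the switch name, uplink, port-group label, and VLAN ID are examples, not values from this deck):

```
esxcfg-vswitch -a vSwitch1                      # create a new virtual switch
esxcfg-vswitch -L vmnic2 vSwitch1               # attach physical adapter vmnic2 as an uplink
esxcfg-vswitch -A "Production" vSwitch1         # add a port group identified by a network label
esxcfg-vswitch -v 110 -p "Production" vSwitch1  # tag the "Production" port group with VLAN 110
```

`esxcfg-vswitch -l` then lists the switch, its uplinks, and its port groups for verification.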
(Diagram: VMkernel, service console, and virtual machine networks attached to a vSwitch with port groups, uplinked through physical adapters (vmnicN) to the physical network; the service console applies to ESX Server 4 only, not ESXi.)
Virtual Networking for an ESX Server Host – Example
(Diagram: an ESX Server 4 host running virtual machines, a service console (vswif), and VMkernel TCP/IP services; vSwitches with VLAN port groups (101, 102) uplink through physical adapters at 100 and 1000 Mbps to physical switches. The networks shown are the production network, the host management network, the VMkernel network for VMotion, iSCSI, and NFS, and a test network on VLANs 101 and 102 carried over a trunk port. Service console and VMkernel ports, and the service console itself, are available with ESX Server 4 only, not ESXi.)
VN-Link: Virtual Network Link
• Extends the network to the virtualization layer
• Requires innovation within networking equipment:
  – Virtual Ethernet interfaces
  – Port profiles
  – Virtual interface mobility
• Solution integrated with the hypervisor management solution
(Diagram: VM vNICs mapped to vEth interfaces across the hypervisor boundary.)
VN-Link View of the Access Layer
• VN-Link provides visibility into the individual VMs
• Policy can be configured per VM
• Policy is mobile within the ESX cluster
• VN-Link refers to a virtual link between a VM vNIC and a Cisco VN-Link switch
(Diagram: VEMs on each host push the boundary of network visibility down to the individual VMs.)
Cisco VN-Link: Virtual Network Link – Faster VM Deployment
Policy-Based VM Connectivity | Mobility of Network & Security Properties | Non-Disruptive Operational Model
VM connection policy:
• Defined in the network
• Applied in Virtual Center
• Linked to the VM UUID
Defined policies: WEB, Apps, HR, DB, DMZ
(Diagram: the Nexus 1000V VSM connected to vCenter; two vSphere hosts, each running a Nexus 1000V VEM, hosting the VMs.)
Cisco VN-Link: Virtual Network Link – Richer Network Services
Policy-Based VM Connectivity | Mobility of Network & Security Properties | Non-Disruptive Operational Model
VN-Link property mobility:
• VMotion for the network
• Ensures VM security
• Maintains connection state
VMs need to move because of:
• VMotion
• DRS
• Software upgrades/patches
• Hardware failures
(Diagram: the Nexus 1000V VSM connected to vCenter; a VM's network and security properties follow it as it moves between two vSphere hosts running Nexus 1000V VEMs.)
Cisco VN-Link: Virtual Network Link – Increased Operational Efficiency
Policy-Based VM Connectivity | Mobility of Network & Security Properties | Non-Disruptive Operational Model
Network admin benefits:
• Unifies network management and operations
• Improves operational security
• Enhances VM network features
• Ensures policy persistence
• Enables VM-level visibility
VI admin benefits:
• Maintains existing VM management
• Reduces deployment time
• Improves scalability
• Reduces operational workload
• Enables VM-level visibility
(Diagram: the Nexus 1000V VSM connected to vCenter; two vSphere hosts running Nexus 1000V VEMs, hosting the VMs.)
Cisco Nexus 1000V Components
Virtual Ethernet Module (VEM)
• Replaces VMware's virtual switch
• Enables advanced switching capability on the hypervisor
• Provides each VM with dedicated "switch ports"
Virtual Supervisor Module (VSM)
• CLI interface into the Nexus 1000V
• Leverages NX-OS
• Controls multiple VEMs as a single network device
(Diagram: vCenter Server alongside a VSM controlling VEMs on multiple hosts.)
Virtual Supervisor Module Options
VSM – Virtual Appliance
• ESX virtual appliance
• Supports 64 VEMs
• Installable via GUI, OVA, or ISO file
Nexus 1010 – Physical Appliance
• Cisco-branded physical server
• Hosts 4 VSM virtual appliances
• Deployed in pairs for redundancy
Flexible deployment options:
• Any type of physical switch (Cisco and other vendors)
• 1G and 10G NICs
• All types of servers supporting vSphere 4 / ESXi 4
Cisco Nexus 1000V Component Communication
• Communication uses the VMware VIM API over SSL
• The connection is set up on the VSM
• Requires installation of a vCenter plug-in, done automatically by the installer app
• Once established, the Nexus 1000V is created in vCenter

Pod1-VSM# show svs connections
connection VC:
    hostname: phx2-dc-pod5-vc
    ip address: 10.95.5.158
    protocol: vmware-vim https
    certificate: default
    datacenter name: Phx2-Pod5
    DVS uuid: df 11 38 50 0a 95 83 4e-95 69 d6 a7 f4 76 4a 7f
    config status: Enabled
    operational status: Connected

(Diagram: vCenter Server communicating with redundant Cisco VSMs.)
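The `show svs connections` output above is a flat list of `key: value` pairs, which makes it easy to consume from a script; a minimal parsing sketch (the function name and parsing logic are illustrative, not a Cisco tool):

```python
def parse_svs_connection(output: str) -> dict:
    """Parse 'key: value' lines of 'show svs connections' output into a dict."""
    fields = {}
    for raw in output.splitlines():
        line = raw.strip()
        if ":" not in line:
            continue  # skip blank lines and lines without a key/value pair
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields


sample = """connection VC:
    hostname: phx2-dc-pod5-vc
    ip address: 10.95.5.158
    protocol: vmware-vim https
    config status: Enabled
    operational status: Connected"""

conn = parse_svs_connection(sample)
print(conn["operational status"])  # Connected
```

A monitoring script could alert whenever `config status` is not `Enabled` or `operational status` is not `Connected`.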
Port Profile: Network Admin View

Pod1-VSM# show port-profile name WebProfile
port-profile WebProfile
  description:
  status: enabled
  capability uplink: no
  system vlans:
  port-group: WebProfile
  config attributes:
    switchport mode access
    switchport access vlan 110
    no shutdown
  evaluated config attributes:
    switchport mode access
    switchport access vlan 110
    no shutdown
  assigned interfaces:
    Veth10

Supported commands include: port management, VLAN, PVLAN, port-channel, ACL, NetFlow, port security, QoS
(Example: a port profile for the web servers, assigned to interface Veth10.)
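A profile like the WebProfile shown above would be created on the VSM with configuration along these lines (a sketch consistent with the evaluated attributes above; exact syntax varies slightly across Nexus 1000V releases, and `vmware port-group` is what publishes the profile to vCenter as a port group):

```
Pod1-VSM(config)# port-profile WebProfile
Pod1-VSM(config-port-prof)# vmware port-group
Pod1-VSM(config-port-prof)# switchport mode access
Pod1-VSM(config-port-prof)# switchport access vlan 110
Pod1-VSM(config-port-prof)# no shutdown
Pod1-VSM(config-port-prof)# state enabled
```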
Visibility of the VM

Pod1-VSM# sh int virt
-------------------------------------------------------------------------------
Port    Adapter        Owner                 Mod  Host
-------------------------------------------------------------------------------
Veth1   vmk1           VMware VMkernel       3    esx1.pod1.nexus1000v.la
Veth2   vmk1           VMware VMkernel       4    esx2.pod1.nexus1000v.la
Veth3   Net Adapter 1  Nexus1000V-VSM-Pod1   3    esx1.pod1.nexus1000v.la
Veth4   Net Adapter 1  Nexus1000v-Beta       4    esx2.pod1.nexus1000v.la
Veth5   Net Adapter 1  vShield-esx1          3    esx1.pod1.nexus1000v.la
Veth6   Net Adapter 1  vShield Manager       3    esx1.pod1.nexus1000v.la
Veth7   Net Adapter 1  vShield-esx2          4    esx2.pod1.nexus1000v.la
Veth8   Net Adapter 1  WinXP-01              3    esx1.pod1.nexus1000v.la
Veth9   Net Adapter 1  WinXP-02              4    esx2.pod1.nexus1000v.la
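Because every row of the table above starts with a `Veth` port name, it is straightforward to scrape for inventory purposes; a minimal sketch (function name and tuple layout are illustrative, not a Cisco tool):

```python
def parse_virtual_interfaces(output: str):
    """Parse 'show interface virtual' output into (veth, module, host) tuples."""
    rows = []
    for line in output.splitlines():
        if not line.startswith("Veth"):
            continue  # skip the prompt, header, and separator lines
        parts = line.split()
        # Owner names may contain spaces, so anchor on the fixed columns:
        # first field is the Veth port, last two are module number and host.
        rows.append((parts[0], int(parts[-2]), parts[-1]))
    return rows


sample = """Pod1-VSM# sh int virt
-------------------------------------------------------------------------------
Port    Adapter        Owner                 Mod  Host
-------------------------------------------------------------------------------
Veth1   vmk1           VMware VMkernel       3    esx1.pod1.nexus1000v.la
Veth8   Net Adapter 1  WinXP-01              3    esx1.pod1.nexus1000v.la
Veth9   Net Adapter 1  WinXP-02              4    esx2.pod1.nexus1000v.la"""

rows = parse_virtual_interfaces(sample)
on_esx1 = [veth for veth, mod, host in rows if host.startswith("esx1")]
print(on_esx1)  # ['Veth1', 'Veth8']
```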
Visibility of the VM Traffic

Pod1-VSM# sh int veth8
Vethernet8 is up
  < ---- SNIP ---- >
  Port mode is trunk
  5 minute input rate 0 bits/second, 0 packets/second
  5 minute output rate 40 bits/second, 0 packets/second
  Rx
    426 Input Packets  125 Unicast Packets
    15 Multicast Packets  286 Broadcast Packets
    50941 Bytes
  Tx
    81182 Output Packets  136 Unicast Packets
    18 Multicast Packets  81028 Broadcast Packets
    81046 Flood Packets
    8387936 Bytes
  1 Input Packet Drops  0 Output Packet Drops
Cisco Nexus 1000V Communication
The Nexus 1000V is a distributed switch, so the VSM needs to program the VEMs over the network.
The Nexus 1000V uses the same backplane messaging as the Nexus 7000 or MDS, called AIPC.
There are two ways to extend that connection:
• Over Layer 2, using a control VLAN and a packet VLAN
• Over Layer 3, using the Layer 3 control capability
(Diagram: the Nexus 1000V VSM reaching its VEMs across a network cloud.)
Layer 2 Connectivity of the VSM and VEM
Two virtual interfaces are used to communicate between the VSM and the VEM:
Control interface
• Extends the usual backplane of the switch over the network
• Carries low-level messages to ensure proper configuration of the VEM
• Maintains a 2-second heartbeat between the VSM and the VEM (6-second timeout)
• Maintains synchronization between primary and secondary VSMs
• Maximum of 7 MB of traffic
Packet interface
• Carries control-plane traffic such as CDP and IGMP snooping, and statistics collection such as SNMP and NetFlow
• Maximum of 1 MB of traffic
(Diagram: the VSM connected to VEM-hosted VMs across an L2 cloud via the control and packet interfaces.)
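The heartbeat behavior described above, a beat every 2 seconds with the module declared gone after 6 seconds of silence, can be sketched as a toy state check (the function name, return values, and numbers-as-seconds model are illustrative, not Cisco code):

```python
HEARTBEAT_INTERVAL = 2.0  # a heartbeat is exchanged every 2 seconds
HEARTBEAT_TIMEOUT = 6.0   # the VEM is considered gone after 6 seconds of silence


def vem_status(last_heartbeat: float, now: float) -> str:
    """Return the VEM state as the VSM would see it at time 'now' (seconds)."""
    silence = now - last_heartbeat
    if silence <= HEARTBEAT_TIMEOUT:
        return "connected"  # module still appears in the VSM's module table
    return "removed"        # VSM drops the module after the timeout


print(vem_status(last_heartbeat=10.0, now=12.0))  # connected (2 s of silence)
print(vem_status(last_heartbeat=10.0, now=17.0))  # removed (7 s of silence)
```

The practical consequence of this timeout is the best practice on the next slide: if the control VLAN is not carried end to end, heartbeats are lost and the VEM disappears from the VSM.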
Layer 2 Connectivity of the VSM and VEM – Best Practices
• The management, packet, and control interfaces can use the same VLAN
• The control and packet VLANs need to be configured end to end to allow communication between the VSM and the VEM
• The control VLAN and packet VLAN need to be configured as system VLANs on the uplink port-profile
(Diagram: the VSM and VEM connected across an L2 cloud carrying the control and packet VLANs.)
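An uplink port-profile following the system-VLAN best practice above looks roughly like this (a sketch; the profile name and VLAN IDs 260-261 for control and packet are placeholders, not values from this deck):

```
Pod1-VSM(config)# port-profile system-uplink
Pod1-VSM(config-port-prof)# capability uplink
Pod1-VSM(config-port-prof)# switchport mode trunk
Pod1-VSM(config-port-prof)# switchport trunk allowed vlan 260-261
Pod1-VSM(config-port-prof)# system vlan 260-261
Pod1-VSM(config-port-prof)# vmware port-group
Pod1-VSM(config-port-prof)# no shutdown
Pod1-VSM(config-port-prof)# state enabled
```

Marking the VLANs as system VLANs ensures they are forwarded even before the VEM has been programmed by the VSM, which avoids a chicken-and-egg problem at host boot.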
Layer 3 Connectivity of the VSM and VEM
• If there is no L2 adjacency between the VSM and the VEM, the VSM uses a new SVS mode, called Layer 3, over either the control interface or the management interface
• The user can specify an IP address for control0 to use a separate network for VEM-VSM communication

  svs-domain
    svs mode L3 interface (control0 | mgmt0)

(Diagram: the VSM and VEM communicating across an L3 cloud.)
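Filled in as a concrete configuration, the L3 mode above would be applied like this (a sketch; the domain ID 100 is a placeholder, and mgmt0 is chosen here as the example transport interface):

```
Pod1-VSM(config)# svs-domain
Pod1-VSM(config-svs-domain)# domain id 100
Pod1-VSM(config-svs-domain)# svs mode L3 interface mgmt0
```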
Connectivity of the VSM and VEM – Best Practices
• The VSM can use its own VEM as long as the customer is running release 4.0(4)SV1(2) or above; before that release, the vSwitch should be used to connect the VSM to the network
• There should always be two VSMs deployed, and those two VSMs should not be on the same host
Cisco Inc., Company Confidential - NDA Required
UCS Virtual Interface Card Overview
The UCS M81KR VIC (Palo) is a converged network adapter designed for both single-OS and VM-based deployments:
• Virtualizes in hardware
• PCIe compliant
High performance:
• 2x 10 Gb
• 500K IOPS
The OS/hypervisor sees up to 128 distinct PCIe devices:
• Ethernet vNICs and FC vHBAs
VN-Link in hardware, ideal for virtualization environments:
• Bypasses the vSwitch to deliver VN-Link in hardware
• Tight integration with VMware vCenter
(Diagram: a PCIe x16 card with 10GbE/FCoE uplinks exposing user-definable vNICs such as Eth0, FC1, FC2, FC3, … up to Eth128.)
Cisco UCS VIC Overview: Multiple Separate Interfaces – Ideal for Certain Workloads
Traditional CNA: a server with 2x 10G ports sees 2 NICs and 2 HBAs.
Cisco VIC: a server with 2x 10G ports sees n NICs and m HBAs, where n + m ≈ 128.
• Ideal for workloads/applications that recommend multiple separate interfaces
• Applicable to both single-OS (e.g. Windows/RHEL) and virtualized (ESX/Hyper-V) environments
• Virtualization achieved using classical PCIe devices (no special OS support necessary)
Cisco VIC Offers VN-Link in Hardware: Innovation for Virtual Server Networking
• VN-Link refers to a virtual link between a VM vNIC and a virtual interface on the Fabric Interconnect
• Virtual Network Link (VN-Link) benefits:
  – VM-level network granularity
  – Policy-based configuration of VM interfaces (port profiles)
  – Mobility of network and security properties (they follow the VM during VMotion)
  – Non-disruptive operational model
  – Allows virtual host interfaces to be remotely managed/configured
• VN-Link in hardware offers the best performance
Deployment variants: VN-Link in software, VN-Link in hardware (VM-FEX), VN-Link in hardware (VMDirectPath)
Deployment Options for Virtualized Environments: Multiple Options Available, Invisible to the VM
• VN-Link in software: Nexus 1000V hypervisor switch uplinks connect to Cisco virtual interfaces (VIFs)
• VN-Link in hardware (VM-FEX): each VM connects to a Cisco virtual interface (VIF) and passes through the hypervisor switch
• VN-Link in hardware (VMDirectPath): each VM bypasses the hypervisor completely and connects to a Cisco virtual interface (VIF)
Optimize IO for Virtualized Environments – Scenario 1: VN-Link in Software
VN-Link in software = Nexus 1000V
• Each VM vNIC connects to the Nexus 1000V hypervisor switch
• Nexus 1000V switch uplinks connect to multiple distinct Cisco virtual interfaces (VIFs)
Likely use case:
• The customer has already standardized on the Nexus 1000V for advanced network features like ERSPAN and NetFlow
• The customer deployment needs higher scalability with respect to the number of VMs
(Diagram: VMs attached to the Cisco Nexus 1000V VEM in the hypervisor, with the Cisco VIC providing the uplinks. Pillars: Policy-Based VM Connectivity, Mobility of Network & Security Properties, Non-Disruptive Operational Model.)
Optimize IO for Virtualized Environments – Scenario 2: VN-Link in Hardware
• Each VM vNIC maps to a different virtual interface (VIF) on the Fabric Interconnect
Likely use case:
• The customer benefits from centralized management through UCSM
• The customer needs the higher performance provided by VN-Link in hardware
(Diagram: VMs connected through the hypervisor to per-VM virtual interfaces on the Cisco VIC. Pillars: Policy-Based VM Connectivity, Mobility of Network & Security Properties, Non-Disruptive Operational Model.)
Simplify Management and Facilitate Collaboration
1. Set up the connection between UCS Manager and vCenter Server
2. Create the vDS (switch)
3. Define VM port profiles
4. The vDS and VM port profiles become available in vCenter
5. A VM is created and connected to the vDS; the VM port profile is available as a port group
6. The VM port profile is applied to dynamic vNICs; CoS membership (MTU), VLAN membership, pinning group, and rate limiting are applied here
Cisco virtualized adapter benefits:
• Tight integration with the hypervisor management tool (e.g. vCenter)
• The network admin sets up network policies and the server/virtualization admin applies them, facilitating collaboration between groups
Optimize IO for Virtualized Environments – Scenario 3: VN-Link in HW with VMDirectPath
• Bypasses the hypervisor completely: the VM talks directly to the Cisco virtualized adapter
• Much higher performance (native hardware performance)
Likely use case:
• High-performance workloads (e.g. appliances)
• VMotion doesn't work today, so this suits workloads that don't need it
(Diagram: VMs bypassing the hypervisor and attaching directly to the Cisco VIC. Pillars: Policy-Based VM Connectivity, Mobility of Network & Security Properties, Non-Disruptive Operational Model.)
Cisco UCS VIC: Deployment Guidelines for VMware vSphere

Deployment Option                                                   | Min vSphere Package                                  | VMotion Allowed? | Where Is the Port Policy (aka Port Group) Created?
--------------------------------------------------------------------|------------------------------------------------------|------------------|---------------------------------------------------
VMware vSwitch with Cisco VIC for uplinks                           | Any vSphere package                                  | Yes              | vCenter
VMware vDS (vNetwork Distributed Switch) with Cisco VIC for uplinks | Enterprise Plus                                      | Yes              | vCenter
VN-Link in software (Nexus 1000V) with Cisco VIC for uplinks        | Enterprise Plus (also need to buy Nexus 1000V)       | Yes              | VSM in Nexus 1000V
VN-Link in hardware (VM-FEX)                                        | Enterprise Plus (no need to buy any other software)  | Yes              | UCS Manager
VN-Link in hardware (VMDirectPath)                                  | Any vSphere package                                  | No (in future)   | UCS Manager