
H3C Overlay Technologies White Paper

Copyright © 2018 New H3C Technologies Co., Ltd. All rights reserved.

No part of this manual may be reproduced or transmitted in any form or by any

means without prior written consent of New H3C Technologies Co., Ltd.

The information in this document is subject to change without notice.


Contents

Overview
    Challenges of modern data centers
    Benefits of network overlays
Network overlay technologies
    Basic overlay network model
    Overlay deployment models
    Control plane implementations
    Network overlay technologies for modern data centers
VXLAN
    VXLAN benefits and restrictions
    VXLAN network architecture
    VXLAN packet format
    VXLAN gateways and VXLAN routers
    Intra-VXLAN and inter-VXLAN forwarding by deploying a distributed OVS gateway
        Benefits of an OVS distributed gateway
        Intra-VXLAN and inter-VXLAN forwarding
    Flow table-based routing between overlay and physical networks
        Overlay-to-physical unicast flow
        Physical-to-overlay unicast flow
    Forwarding on the VXLAN network
        Assignment of traffic to VXLANs
        MAC learning in the data plane
        Unicast forwarding
    VM migration on the overlay network
    High availability of overlay gateways
    Scale-out of overlay gateways
    Overlay security deployment
Overlay solutions
    Comparison of overlay deployment models
    Network-based overlay solution
    Host-based overlay solution
    Hybrid overlay application scenario


Overview

Network overlay technologies build virtual networks on top of an underlying physical network to convey services that require different network topologies without the need to modify the physical network. They enable virtual machine workload mobility, data mobility, disaster recovery, and business continuity by extending Layer 2 connectivity across data center sites at minimal cost.

Challenges of modern data centers

Modern data centers are increasingly virtualized and migrating to a cloud-first IT strategy to increase resource use efficiency, gain business agility, and offer quick, responsive deployments of diversified applications. These trends have brought challenges that traditional networking technologies can hardly address simply by adding enhancements.

Mobility of virtual machines

Virtualized modern data centers require virtual machine workload mobility, data mobility, disaster recovery, and business continuity. It is essential that virtual machines can move between data center sites without changing their IP addresses, so their movements are transparent to users and do not disrupt traffic.

These requirements have compelled the extension of Layer 2 connectivity across geographically dispersed data center sites with link and node redundancy. This extension dramatically expands the Layer 2 network to a size beyond the limit within which traditional spanning tree protocols or vendor proprietary node virtualization technologies can operate.

STP and its derivatives (for example, MSTP) are complex and prone to misconfiguration. They require network-wide recalculation each time a topology change occurs. As the network expands, their deployment complexities increase and protocol performance decreases.

Vendor proprietary node virtualization technologies help streamline network topology, simplify management, and increase high availability by virtualizing multiple nodes into one virtual switching system. H3C Intelligent Resilient Framework (IRF) and Cisco Virtual Port Channel (vPC) are typical examples of node virtualization technologies. Despite their benefits, these protocols are not suitable for data center interconnect (DCI) because of their limitations on network topology.

Scaling of network segments

802.1Q VLAN has been widely used to isolate services and users. This technology identifies each VLAN segment with a 12-bit VLAN ID and supports a maximum of 4094 configurable VLANs. In a dynamic server virtualization or cloud computing environment, 4094 VLANs are far from enough.

In addition, for two hosts in a VLAN to communicate, that VLAN must be configured on every node and connection along the path between the hosts. As the VLANs extended between data center sites grow in number, the VLAN provisioning complexity and the risk of data breach increase. Floods of packets to unknown destinations will also increase and can eventually overwhelm the network.

Depletion of forwarding table entries

Layer 2 network devices typically forward traffic based on the MAC address table, which maintains MAC-based reachability information for each end-user device. On an extended Layer 2 network, virtual machines can easily outnumber the MAC address entries that a network access device can maintain. As business grows, the MAC address and ARP tables on the gateway and core devices will also get depleted eventually.

To accommodate the growing business, one solution is to replace or upgrade access network devices to offer a large MAC address table and deploy distributed gateways to share virtual machine traffic. Inevitably, this solution incurs enormous IT costs.


Benefits of network overlays

Network overlays offer a number of benefits that address these modern data center challenges.

Transport independence

Overlay technologies typically encapsulate Layer 2 frames in IP packets. They have no special requirements for the transport network other than the ability to forward IP packets. In addition, IP routed networks typically have good scalability, self-recovery, and load balancing capabilities. An organization can deploy extended Layer 2 overlay networks over an IP routed network to provide new services without modifying the existing physical network structure.

VMs can migrate between data centers without changing their IP addresses.

Site independence

Overlay technologies provide a site-independent Layer 2 abstraction for VMs. The underlay physical network is transparent to the VMs.

Extension of Layer 2 connectivity enables the VMs to migrate across geographically distributed sites over conventional IP networks. The placement of VMs and hosts is no longer constrained to a single site. This mobility is important for network resource pooling, business continuity, and disaster recovery across data center sites.

Scalability of network segments

Developed for virtualized environments, overlay technologies provide a much larger network segment identifier space than 802.1Q VLAN. For example, Virtual eXtensible LAN (VXLAN) uses a 24-bit identifier to identify each network segment. The total number of VXLANs can reach 16,777,216 (2^24). This specification makes VXLAN a better choice than 802.1Q VLAN to isolate traffic for user endpoints.

Forwarding table scalability

Encapsulation of Layer 2 frames in IP packets reduces the number of MAC entries to be learned on the transport network, especially at its edge devices. The devices on the transport network are not aware of the reachability information about VMs attached to the overlay networks. They only need to learn the addresses of overlay tunnel endpoints.

Nevertheless, the core devices and gateways still need large MAC and ARP tables. A typical solution is to deploy distributed core devices and gateways.

Network overlay technologies

Basic overlay network model

As shown in Figure 1, overlay technologies build a virtual network of interconnected nodes on top of the physical network (also called the underlay network).

The overlay network has an independent control plane and data plane. The physical network is transparent to the sites attached to the overlay edge devices.


Figure 1 Basic overlay network model

Overlay deployment models

As shown in Figure 2, network overlays fall into three deployment models: network-based overlay, host-based overlay, and hybrid overlay.

Network-based overlay—Establishes tunnels between edge physical switches of data center sites. This deployment model offers high hardware-based forwarding performance and supports connectivity of physical servers to the overlay network. This deployment requires that the edge devices of data center sites support VXLAN.

Host-based overlay—Establishes tunnels between virtualized servers. This deployment does not require replacing or upgrading edge physical switches to support VXLANs. However, it performs tunneling encapsulation in the hypervisors of virtualized servers and can easily create software-based forwarding performance bottlenecks.

Hybrid overlay—Integrates network-based and host-based overlays to offer high-performance forwarding with minimal changes to the physical network.

Figure 2 Overlay deployment models


Control plane implementations

To negotiate tunnel establishment or exchange reachability information between sites, an overlay solution can run dynamic protocols in the control plane or deploy an SDN controller. Using an SDN controller to provide control plane functionality has gained increasing popularity. This is because an SDN controller centralizes the management of network and computing resources and can automate dynamic deployment of overlay services in response to business growth.

H3C overlay solutions typically use Virtual Converged Framework (VCF) controllers.

Network overlay technologies for modern data centers

The major overlay standards that IETF proposed for modern data centers include Virtual eXtensible LAN (VXLAN), Network Virtualization Using Generic Routing Encapsulation (NVGRE), and Stateless Transport Tunneling (STT).

These overlay technologies all tunnel Ethernet frames over a Layer 3 IP network to connect data center sites, but they differ in tunneling encapsulation and tunnel establishment methods. These differences have enabled VXLAN to gain wider vendor adoption than NVGRE and STT.

VXLAN has an advantage over NVGRE in multipathing deployments. With VXLANs, most network devices can base their traffic distribution decision on fields from Layer 2 to Layer 3 headers. With NVGRE, network devices must be upgraded to handle the GRE header and perform hash-based traffic distribution based on the flow ID in the header.

VXLAN has an advantage over STT in that VXLAN uses a UDP/IP encapsulation without modifying UDP. In contrast, STT achieves a stateless tunneling mode by significantly modifying TCP semantics, which introduces technical complexity.

The remainder of this document focuses on VXLAN.

VXLAN

VXLAN is a MAC-in-UDP technology that provides Layer 2 connectivity between distant network sites across a Layer 3 IP network. VXLAN is typically used in data centers and the access layer of campus networks for multitenant services.

VXLAN benefits and restrictions

VXLAN provides the following benefits:

Support for more virtual switched domains than VLANs—Each VXLAN is uniquely identified by a 24-bit VXLAN ID. The total number of VXLANs can reach 16,777,216 (2^24). This specification makes VXLAN a better choice than 802.1Q VLAN to isolate traffic for user terminals.

Easy deployment and maintenance—VXLAN requires deployment only on the edge devices of the transport network. Devices in the transport network perform typical Layer 3 forwarding.

Easy multipathing deployment—Load sharing can be performed based on the existing IP network technologies.

VXLAN has the following infrastructure restrictions:

The end-to-end path MTU over the IP network must be increased to accommodate the 50 bytes added by VXLAN encapsulation (14-byte outer Ethernet header, 20-byte outer IP header, 8-byte outer UDP header, and 8-byte VXLAN header).


VXLAN decapsulation is CPU intensive. To process VXLAN packets on a vSwitch, you must assign sufficient CPU resources to that vSwitch.

VXLAN network architecture

As shown in Figure 3, VXLAN tunnel endpoints (VTEPs) establish tunnels across a Layer 3 IP network to provide Layer 2 connectivity for VMs in the same VXLANs across data center sites. All VXLAN assignment, encapsulation, and decapsulation are performed on VTEPs.

VTEPs can be deployed on virtualized servers or physical switches. In this figure, VTEPs are deployed on physical switches.

VXLAN-aware H3C switches use ACs, VSIs, and VXLAN tunnels to provide VXLAN services.

VSI—A virtual switch instance is a virtual Layer 2 switched domain. Each VSI provides switching services only for one VXLAN. VSIs learn MAC addresses and forward frames independently of one another. VMs in different sites have Layer 2 connectivity if they are in the same VXLAN.

Attachment circuit (AC)—An AC is a physical or virtual link that connects a VTEP to a local site. Typically, ACs are site-facing Layer 3 interfaces or Ethernet service instances that are associated with the VSI of a VXLAN. Traffic received from an AC is assigned to the VSI associated with the AC. Ethernet service instances are created on site-facing Layer 2 interfaces. An Ethernet service instance matches a list of custom VLANs by using a frame match criterion.

VXLAN tunnel—A logical point-to-point tunnel between two VTEPs over the transport network. Each VXLAN tunnel can trunk multiple VXLANs.

Figure 3 Basic VXLAN network architecture


VXLAN packet format

As shown in Figure 4, a VTEP encapsulates a frame in the following headers:

8-byte VXLAN header—VXLAN information for the frame.

Flags—If the I bit is 1, the VXLAN ID is valid. If the I bit is 0, the VXLAN ID is invalid. All other bits are reserved and set to 0.

24-bit virtual network identifier (VNI)—Identifies the VXLAN of the frame. It is also called a VXLAN ID in H3C documentation.

8-byte outer UDP header for VXLAN—The default VXLAN destination UDP port number is 4789. The source port number is assigned on a per-flow basis, and is typically the hash value of the MAC, IP, and transport port number information in the original Ethernet frame header.

20-byte outer IP header—Valid addresses of VTEPs or VXLAN multicast groups on the transport network. Devices in the transport network forward VXLAN packets based on the outer IP header.

Figure 4 VXLAN packet format
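As an illustration of this layering, the following Python sketch builds the 8-byte VXLAN header (I bit set, 24-bit VNI) and derives a per-flow outer UDP source port by hashing the leading bytes of the inner frame. It is a minimal sketch of the encapsulation format only; the function names and hashing details are illustrative and not part of any H3C implementation.

```python
import hashlib
import struct

VXLAN_DST_PORT = 4789  # default VXLAN destination UDP port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set), 24-bit VNI, reserved bits zero."""
    flags = 0x08  # I bit = 1: the VNI field is valid; all other flag bits are reserved (0)
    return struct.pack("!B3xI", flags, (vni & 0xFFFFFF) << 8)

def source_udp_port(inner_frame: bytes) -> int:
    """Derive a per-flow outer UDP source port by hashing the leading headers of the inner frame."""
    digest = hashlib.sha256(inner_frame[:42]).digest()
    return 49152 + int.from_bytes(digest[:2], "big") % 16384  # stay in the dynamic port range

# Example: header for a frame assigned to VXLAN 10010
hdr = vxlan_header(10010)
assert len(hdr) == 8 and hdr[0] == 0x08
```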

VXLAN gateways and VXLAN routers

As shown in Figure 5, a VXLAN gateway forwards traffic between VXLANs and VLANs, and a VXLAN IP gateway forwards traffic between VXLANs as does a conventional Layer 3 switch.

VXLAN gateways perform one-to-one mapping to connect one VXLAN to a VLAN. Some VXLAN gateways can also perform multiple-to-one mapping to connect multiple VXLANs to one VLAN. VXLAN gateways can be vSwitches or physical ToR switches.

VXLAN IP gateways maintain VXLAN mappings to provide connectivity between VXLANs. VXLAN IP gateways can be vRouters, physical ToR switches, or physical routers.

Figure 5 VXLAN gateway and VXLAN router


Intra-VXLAN and inter-VXLAN forwarding by deploying a distributed OVS gateway

The VCF controller can create a logical Open vSwitch (OVS) distributed gateway that contains OVSs on multiple servers.

Benefits of an OVS distributed gateway

An OVS distributed gateway offers the following benefits:

Manage OVSs from the controller in a centralized manner.

Forward traffic based on flow tables for intra-VXLAN and inter-VXLAN traffic within a host or between hosts.

Move VMs between hosts without affecting the status of their network ports, which ensures continuous statistics collection and security monitoring.

Eliminate the need to reconfigure gateway parameters after a VM moves between hosts.

Intra-VXLAN and inter-VXLAN forwarding

The intra-VXLAN and inter-VXLAN traffic handled by an OVS is classified into VXLAN uplink flows and VXLAN downlink flows by traffic direction.

VXLAN uplink flows are from VMs to VXLAN tunnel interfaces.

VXLAN downlink flows are from VXLAN tunnel interfaces to VMs.

Generic forwarding process

1. When the OVS receives the first packet of a flow that does not have a matching flow entry, the OVS sends the packet in a packet-in message to the controller.

2. Upon receipt of the packet-in message, the controller examines the source IP-MAC binding of the incoming packet to verify the authenticity of its origin.

If the packet is authentic, the controller continues to process the packet.

If the packet is not authentic, the controller drops the packet.

3. The controller determines the forwarding flow based on packet information, including the ingress port (In port) and the vNetwork type of the port.

If the ingress port is a vPort and its vNetwork type is VXLAN, the controller performs VXLAN uplink traffic forwarding.

If the ingress port is a VXLAN tunnel interface, the controller performs VXLAN downlink traffic forwarding.

4. The controller creates a flow entry based on the destination IP address of the packet. For more information, see "VXLAN uplink flow entry creation" and "VXLAN downlink flow entry creation."

5. The controller sends the flow entry to the OVS and forwards the packet to the destination VM.

6. The OVS forwards subsequent packets of the flow based on the flow entry.
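The dispatch logic in steps 2 and 3 can be summarized in a short sketch. The following Python code is illustrative only; the controller, OVS, packet, and port objects and their methods (verify_source_binding, build_uplink_entry, build_downlink_entry, install_flow, packet_out) are hypothetical, not a real VCF or OpenFlow API.

```python
def handle_packet_in(controller, ovs, pkt, in_port):
    """Illustrative sketch of first-packet handling on the controller (hypothetical objects)."""
    # Step 2: verify the source IP-MAC binding to confirm the packet's origin
    if not controller.verify_source_binding(pkt.src_ip, pkt.src_mac):
        return None  # inauthentic packet: drop

    # Step 3: choose the forwarding direction from the ingress port type
    if in_port.kind == "vport" and in_port.vnetwork_type == "VXLAN":
        entry = controller.build_uplink_entry(pkt)    # VM -> VXLAN tunnel (uplink)
    elif in_port.kind == "vxlan_tunnel":
        entry = controller.build_downlink_entry(pkt)  # VXLAN tunnel -> VM (downlink)
    else:
        return None

    # Steps 5 and 6: install the entry on the OVS and forward the first packet
    ovs.install_flow(entry)
    ovs.packet_out(pkt, entry.actions)
    return entry
```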

VXLAN uplink flow entry creation

The controller identifies the vPort of a VXLAN uplink packet based on its destination IP address and generates a flow entry for the OVS.

The flow entry contains match fields and an action list.

The match fields are the source IP and MAC addresses of the VM, destination IP and MAC addresses, and ingress port.

The action list varies depending on the location of the source and destination VMs.


If the vPort of the destination VM is found on the overlay network, the actions in the action list depend on the location of the source and destination VMs:

Same VXLAN and same host (VXLAN Layer 2 intra-host flow):
Forward the flow to the vPort of the destination VM.

Different VXLANs, but same host (VXLAN Layer 3 intra-host flow):
a. Set the virtual MAC address of the VSI as the source MAC address.
b. Set the MAC address of the destination physical NIC as the destination MAC address.

Same VXLAN, but different hosts (VXLAN Layer 2 inter-host flow):
Set the remote VTEP IP address as the destination IP address.

Different VXLANs and different hosts (VXLAN Layer 3 inter-host flow):
a. Set the virtual MAC address of the VSI as the source MAC address.
b. Set the MAC address of the destination NIC as the destination MAC address.
c. Set the remote VTEP IP address as the destination IP address in the outer IP header.

If no destination vPort is found, but the MAC and VTEP IP address of the physical gateway are available, the controller includes the following actions in the flow entry:

1. Set the MAC address of the physical gateway as the destination MAC address.

2. Set the virtual MAC address of the VSI as the source MAC address.
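The action-list selection described above can be condensed into a small sketch. The following Python function is illustrative; all object and attribute names are hypothetical.

```python
def uplink_actions(src_vm, dst_vport, vsi, physical_gw=None):
    """Illustrative sketch of action-list selection for a VXLAN uplink flow."""
    if dst_vport is None:
        # No destination vPort on the overlay: steer the flow toward the physical gateway
        return [("set_dst_mac", physical_gw.mac),
                ("set_src_mac", vsi.virtual_mac),
                ("set_tunnel_dst_ip", physical_gw.vtep_ip)]

    same_host = dst_vport.host is src_vm.host
    same_vxlan = dst_vport.vni == src_vm.vni
    actions = []
    if not same_vxlan:
        # Layer 3 flow: rewrite the MAC addresses as described above
        actions += [("set_src_mac", vsi.virtual_mac),
                    ("set_dst_mac", dst_vport.nic_mac)]
    if same_host:
        actions.append(("output", dst_vport.port_id))                   # intra-host: local delivery
    else:
        actions.append(("set_tunnel_dst_ip", dst_vport.host.vtep_ip))   # inter-host: tunnel out
    return actions
```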

VXLAN downlink flow entry creation

The controller searches for the destination port based on the destination IP address.

If a match is found, the controller replaces the destination MAC address with the MAC address of the destination port and generates a flow entry based on the destination MAC address, the VNI, and the output port.

If no match is found, the controller drops the packet.
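A corresponding sketch for downlink flow entry creation is shown below. The helper names and entry layout are hypothetical, intended only to mirror the lookup and rewrite described above.

```python
def build_downlink_entry(controller, pkt, vni):
    """Illustrative sketch of VXLAN downlink flow entry creation (hypothetical helpers)."""
    dst_port = controller.find_vport_by_ip(pkt.dst_ip)   # look up the destination vPort
    if dst_port is None:
        return None                                      # no match: the packet is dropped
    return {
        "match":   {"vni": vni, "dst_mac": dst_port.mac},
        "actions": [("set_dst_mac", dst_port.mac),       # rewrite to the destination port's MAC
                    ("output", dst_port.port_id)],       # deliver to the destination vPort
    }
```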

Flow table-based routing between overlay and physical networks

Figure 6 shows a network scenario that requires communication between overlay and physical networks. In this scenario, the gateways of the overlay and physical networks are not collocated.

The number of ARP request broadcasts tends to be huge in a virtualized data center. To suppress ARP floods, this network scenario uses the VCF controller as an ARP proxy to reply to ARP requests on behalf of VMs.

To forward traffic from the overlay network to the physical network, you must do the following:

Configure static routes to the overlay network subnets on the controller and issue these static routes to the gateway of the physical network, for example, through NETCONF.

Configure the gateway of the physical network to redistribute the static overlay subnet routes to the physical network.

To forward traffic from the physical network to the overlay network, you must configure a routing protocol to advertise the subnets that contain the physical servers to the VXLAN gateways. The routes will not be advertised to the virtualized network.


Figure 6 Flow table-based routing between overlay and physical networks

Overlay-to-physical unicast flow

Figure 7 shows the generic process to forward a packet from an overlay network to a physical network.

1. A VM sends an ARP request to the OVS before it sends an IP packet to a destination whose MAC address is unknown.

2. The OVS delivers the ARP request to the controller (the ARP proxy).

3. The controller verifies the authenticity of the packet and uses the target IP address in the ARP request to search for the matching MAC address. If a match is found, the controller returns the ARP reply in a packet-out message.

4. When the VM receives the ARP reply, the VM sends the first packet, with the received target MAC address encapsulated in the Ethernet header.

5. When the OVS receives the packet, the OVS sends the packet in a packet-in message to the controller if it does not have a matching flow entry.

6. The controller verifies the authenticity of the packet and searches for the vPort of the destination VM based on the destination IP address.

If a match is found, the controller sends the packet to the VM and issues the flow entry to the OVS.

If no match is found, the controller searches for the MAC address and VTEP IP of the physical gateway. Then, it issues a flow entry to the OVS and at the same time sends the packet back to the OVS in a packet-out message. The flow entry instructs the OVS to encapsulate the MAC addresses of the source VM and gateway as the source and destination MAC addresses, respectively, and to send the packet to the VTEP IP of the physical gateway.

7. The OVS encapsulates and sends the packet as instructed by the flow entry. The OVS will forward subsequent packets of the flow based on the flow entry.


Figure 7 Creation of flow entries and route advertisement for overlay-to-physical flows
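The ARP proxy behavior in steps 2 and 3 can be sketched as follows. The code is illustrative only; the controller and OVS objects and helpers (verify_source_binding, lookup_mac, build_arp_reply, packet_out) are hypothetical.

```python
def handle_arp_request(controller, ovs, arp_req):
    """Illustrative sketch of the controller's ARP proxy behavior."""
    # Verify the authenticity of the request before answering
    if not controller.verify_source_binding(arp_req.sender_ip, arp_req.sender_mac):
        return
    target_mac = controller.lookup_mac(arp_req.target_ip)
    if target_mac is not None:
        # Reply on behalf of the target so the request is never flooded on the overlay
        reply = controller.build_arp_reply(
            target_ip=arp_req.target_ip, target_mac=target_mac,
            requester_ip=arp_req.sender_ip, requester_mac=arp_req.sender_mac)
        ovs.packet_out(reply, out_port=arp_req.in_port)
```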

Physical-to-overlay unicast flow

Figure 8 shows the generic process to forward a packet from a physical network to an overlay network.

1. When a VM comes online, the OVS sends the MAC reachability information of the VM in a Port Status message to the controller.

2. The controller performs the following operations:

a. Searches for IP reachability information of the VM based on its MAC address.

b. Notifies the gateway of the IP reachability information, which includes the IP address of the VM, the VNI of its VXLAN, and the IP address and datapath ID of the OVS.

3. The gateway searches for a VXLAN tunnel to the OVS based on the IP address of the OVS.

If a tunnel is found, the gateway uses the tunnel to communicate with the OVS.

If no tunnel is found, the gateway selects one preconfigured tunnel and sets the IP address of the OVS as the destination IP address of the tunnel.

4. The controller creates and issues a downlink flow entry to the gateway group for the VXLAN.

Match fields contained in the flow entry:

Ingress port—Layer 3 interface bound to the VPN associated with the VXLAN.

Destination IP—IP address of the VM.

Action list contained in the flow entry:

Remark DMAC—Replaces the destination MAC address with the MAC address of the OVS.

Remark SMAC—Replaces the source MAC address with the MAC address of the gateway.


TunnelLogicPort—Specifies the tunnel interface between the gateway and the OVS as the output interface.

Figure 8 Creation of flow entries and route advertisement for physical-to-overlay flows
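The downlink flow entry issued to the gateway group can be represented roughly as the following data structure. The field names and the example IP address are illustrative only, not an actual VCF or OpenFlow schema.

```python
# Illustrative representation of the downlink flow entry issued to the gateway group.
gateway_downlink_entry = {
    "match": {
        "in_port": "L3 interface bound to the VPN of the VXLAN",
        "dst_ip":  "192.168.10.10",        # example VM IP address (hypothetical)
    },
    "actions": [
        ("remark_dmac", "mac_of_ovs"),     # rewrite the destination MAC to the OVS
        ("remark_smac", "mac_of_gateway"), # rewrite the source MAC to the gateway
        ("output", "tunnel_to_ovs"),       # send out the VXLAN tunnel toward the OVS
    ],
}
```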

Forwarding on the VXLAN network

The following information describes how VTEPs assign traffic to VXLANs, learn MAC addresses in the data plane, and perform unicast forwarding. It assumes that a VCF controller acts as the control plane in the network, has established VXLAN tunnels between VTEPs, and has issued a flow table to each VTEP for traffic forwarding.

Assignment of traffic to VXLANs

A VTEP assigns traffic to VXLANs as follows:

For traffic received from the VXLAN tunnel connected to a remote site, the VTEP uses the VNI in the packet to identify its VXLAN.

For traffic received from the local site, the VTEP uses Ethernet service instance-to-VSI mappings to identify the VXLAN for a packet. When a packet arrives from the local site, the VTEP identifies its Ethernet service instance and then searches the Ethernet service instance-to-VSI mappings. If a matching VSI is found, the VTEP assigns the packet to the VXLAN created on the VSI.
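The assignment logic can be summarized in a short sketch. The following Python function is illustrative; the VTEP, packet, and interface objects are hypothetical.

```python
def assign_vxlan(vtep, pkt, in_if):
    """Illustrative sketch of VXLAN assignment on a VTEP."""
    if in_if.is_vxlan_tunnel:
        return pkt.vni                                  # tunnel side: use the VNI in the VXLAN header
    # Local site: match an Ethernet service instance, then map it to a VSI
    service_instance = in_if.match_service_instance(pkt)
    vsi = vtep.service_instance_to_vsi.get(service_instance)
    return vsi.vxlan_id if vsi is not None else None    # no mapping: the packet is not assigned
```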

MAC learning in the data plane

MAC learning can be performed in the control plane or in the data plane. The following information describes how the MAC addresses are learned in the data plane.

A VTEP performs source MAC learning on each VSI as a Layer 2 switch.

For a frame from the local site to the remote site, the VTEP learns the source MAC address before VXLAN encapsulation.


For a frame from the remote site to the local site, the VTEP learns the source MAC address after removing the VXLAN encapsulation.

A VSI's MAC address table includes the following types of MAC address entries:

Local MAC—MAC entries learned from the local site are local MAC entries. The outgoing interfaces for the MAC address entries are site-facing interfaces.

Remote MAC—MAC entries learned from a remote site are remote MAC entries. The outgoing interfaces for remote MAC addresses are VXLAN tunnel interfaces.
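The learning behavior can be sketched as follows, with hypothetical VSI, frame, and interface objects.

```python
def learn_source_mac(vsi, frame, in_if):
    """Illustrative sketch of data-plane MAC learning on a VSI."""
    if in_if.is_vxlan_tunnel:
        # Frame from a remote site: learned after decapsulation, outgoing interface is the tunnel
        vsi.mac_table[frame.src_mac] = ("remote", in_if)
    else:
        # Frame from the local site: learned before encapsulation, outgoing interface faces the site
        vsi.mac_table[frame.src_mac] = ("local", in_if)
```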

Unicast forwarding

Overlay-to-physical unicast flow

Figure 9 shows the process to forward a packet from a VM to a physical server.

1. When the packet from the VM arrives, the OVS (source VTEP) looks up the destination IP (server's IP) address in the flow table.

2. If a match is found, the OVS performs the actions in the entry:

a. Replaces the source and destination MAC addresses with the MAC addresses of itself and the VX-GW (destination VTEP), respectively.

b. Encapsulates the packet with the VXLAN/UDP/IP header based on the matching flow entry.

c. Forwards the encapsulated packet out of the VXLAN tunnel interface in the matching entry.

3. When the VX-GW receives the packet from the VXLAN tunnel, the VX-GW performs the following operations:

a. Removes the VXLAN header of the packet.

b. Searches the VXLAN to VPN mappings to identify the VPN that matches the VNI in the packet.

c. Forwards the packet by searching the VPN's FIB based on the destination IP to the physical server.

The packet forwarded to the physical server is a regular IP packet, in which the source and destination MAC addresses have been replaced with the MAC addresses of the VX-GW and physical server, respectively.


Figure 9 Overlay-to-physical flow

Physical-to-overlay unicast flow

Figure 10 shows the process to forward a packet from a physical server to a VM.

1. When the packet from the physical server arrives, the VX-GW searches the flow table based on the VPN, destination IP address, and destination MAC address of the packet.

2. If a matching entry is found, the VX-GW performs the actions in the entry on the packet, as follows:

a. Replaces the destination MAC address with the global MAC address (00163FAAAAA) of the OVS, and replaces the source MAC address with the MAC address of the VX-GW.

b. Encapsulates the packet with the VXLAN/UDP/IP header and forwards the packet out of the specified VXLAN tunnel interface.

3. When the OVS receives the packet from the VXLAN tunnel, the OVS searches the flow table based on the VNI and destination IP address.

4. If a matching entry is found, the OVS performs the actions in the entry on the packet, as follows:

a. Removes the VXLAN header of the packet.

b. Replaces the destination and source MAC addresses with the MAC addresses of the VM and the OVS, respectively.

c. Forwards the packet to the VM.


Figure 10 Physical-to-overlay flow

VM migration on the overlay network

VM migration might occur because of a VM failure, dynamic resource scheduling, host failure, or scheduled maintenance. It is important to ensure service continuity when a VM migrates from one host to another. To ensure service consistency, it is desirable that a VM migrates with its port profile.

Figure 11 shows a typical migration process.

1. The VM manager instructs the master VCF controller to perform a VM pre-migration.

2. The VCF controller performs the following operations:

a. Marks the vPorts of the VMs to be moved.

b. Notifies the controllers of the source and destination hosts to mark the vPorts.

c. After vPort marking is finished on the hosts, the controller notifies the VM manager that the hosts are ready for VM migration.

3. The VM manager creates the VMs, assigns resources including IP addresses, and starts the VMs on the destination host.

4. The destination host reports the vPort creation event to the controller.

5. The controller performs the following operations:

a. Stores the vPort information before and after migration based on the migration marks.

b. Transfers the port profile for the vPorts from the source host to the destination host. This policy persistence ensures that the tenants obtain the same services before and after a VM migration.

6. The destination VM copies data from the memory of the source VM and then comes online. At the same time, the source VM shuts down.

7. The source host reports the vPort deletion event to the controller.

8. The master controller performs the following operations:


a. Deletes information about the vPorts before migration, and the associated flow table information.

b. Notifies the member controllers in the controller cluster to delete the vPort information and the associated flow table information.

9. The controller of the source host notifies the other controllers in the cluster of the updated vPort reachability information.

10. The other member controllers in the cluster update their vPort reachability information.

NOTE:

The controller cluster will generate new flow entries for the migrated VMs when it receives packet-in messages.

Figure 11 VM migration with port profile
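The controller-side handling of the vPort creation event (steps 4 and 5) can be sketched as follows. The helper names are hypothetical, not the VCF controller API.

```python
def handle_vport_created(controller, event):
    """Illustrative sketch of controller handling for a vPort created on the destination host."""
    old_vport = controller.find_marked_vport(event.vm_id)
    if old_vport is None:
        return                                   # not a migration: ordinary vPort creation
    # Step 5a: remember the vPort information before and after migration
    controller.store_migration_pair(old_vport, event.vport)
    # Step 5b: transfer the port profile so the tenant keeps the same policies after migration
    controller.apply_port_profile(event.host, event.vport, old_vport.port_profile)
```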

High availability of overlay gateways

To provide reliable load-balanced gateway services, you can deploy a gateway group.

A gateway group is a virtual gateway identified by a VTEP IP address and a virtual MAC address. All gateways in a gateway group are identified by that virtual MAC address and provide services at that VTEP IP address. The virtual gateway advertises its reachability information to the internal network through a routing protocol.

The virtual gateway distributes traffic between the gateways dynamically. When a member gateway fails, the virtual gateway selects an active member gateway to take over its gateway service.

A gateway group ensures node availability. To ensure link availability, connect each gateway with the downstream and upstream devices through link aggregation or ECMP for link backup and load balancing. On each gateway, you can also use two main processing units (MPUs) in active/standby mode to ensure control plane availability. If the gateways use a controller as the control plane, MPU redundancy reduces the workload that a single point of failure might incur on the controller.


Figure 12 High availability of gateways

Scale-out of overlay gateways

As shown in Figure 13, the VCF controller can dynamically distribute the VXLAN tunnels across multiple overlay gateways depending on the number of tenants connected to them. With the controller, you can also add new overlay gateways before the existing gateways are overloaded.

NOTE:

At the time of this writing, the VCF controller can support a maximum of 64,000 tenants.


Figure 13 Scale-out of overlay gateways
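The distribution and scale-out decisions can be illustrated with a simple least-loaded heuristic. This sketch is not the VCF controller's actual algorithm; the gateway and tenant objects are hypothetical.

```python
def place_tenant(gateways, tenant):
    """Illustrative least-loaded placement of a tenant's VXLAN tunnels across overlay gateways."""
    target = min(gateways, key=lambda gw: len(gw.tenants))
    target.tenants.add(tenant)
    return target

def needs_scale_out(gateways, max_tenants_per_gw):
    """Flag when new gateways should be added before the existing ones become overloaded."""
    return all(len(gw.tenants) >= 0.8 * max_tenants_per_gw for gw in gateways)
```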

Overlay security deployment

VXLAN security deployment includes the following aspects:

Security between VXLAN and VLAN—To control traffic between a VXLAN and a VLAN, deploy VXLAN firewalls at their boundaries. A VXLAN firewall can be collocated with a VXLAN gateway and VXLAN IP gateway.

Security between VXLANs—Inter-VXLAN traffic control must consider the traffic between virtual machines running on the same server. This type of traffic is switched inside the server, bypassing any external security mechanisms. To control this type of traffic, redirect the traffic to the ToR switch for traffic forwarding or perform VM-based protection.

The deployed security appliances can be hardware-based, software-based, or virtualized.

Table 1 describes the security deployment methods available to protect overlay networks (see Figure 14). You can use these methods separately or in combination.

Table 1 Overlay security deployment methods

One-arm deployment:
Attach a hardware security appliance to the core device to control access between VXLANs and VLANs. A security appliance in one-arm mode can also act as the Layer 3 gateway.

Host-side deployment:
Deploy a virtual machine on a host to act as the security gateway for other virtual machines running on the same host. This security gateway can control access between VXLANs and VLANs if it supports VXLANs.

Security zone deployment:
Attach all hardware-based or software-based security appliances to a ToR switch to control access between VXLANs.
If the security appliances support VXLANs, configure VXLAN security policies on them.
If the security appliances do not support VXLANs, map VXLANs to VLANs on the ToR switch and configure security policies on the security appliances to control access between VLANs.


Figure 14 Overlay security deployment

(1) One-arm deployment (2) Server-side deployment

(3) Security zone deployment

Overlay solutions

You can use an overlay solution based on the network-based, host-based, or hybrid deployment model, depending on your business goals and budget. If the network is large-scale or complex, deploy VCF controllers for ease of deployment and management.

Comparison of overlay deployment models

Network-based overlay

Application scenarios:
High-performance forwarding.
Connectivity to both physical and virtualized servers.
No overlay capability requirements for servers.
Independent of hypervisors used in virtualized servers.

Deployed products:
Overlay gateways in different forms.
(Optional) H3C overlay controllers.

Host-based overlay

Application scenarios:
No need to connect physical servers to the overlay network.
Used in conjunction with hypervisors in servers, for example, hypervisor offerings of VMware or Microsoft.

Deployed products:
Overlay gateways in different forms.
(Optional) H3C overlay controllers.
H3C vSwitch.

Hybrid overlay

Application scenarios:
Combined benefits of network-based and host-based overlays.
Networking flexibility.
Deployment of H3C vSwitches.

Deployed products:
Overlay gateways in different forms.
H3C overlay controllers.


Network-based overlay solution

A network-based overlay solution uses physical switches (or routers) as VTEPs or gateways of the overlay network. The solution can also include a VCF controller if centralized management and control is desired.

As shown in Figure 15, you can deploy a network-based overlay network as follows:

Use ToR switches as VTEPs.

Install SR-IOV adapters on hosts and bind the VMs to the virtual functions (VFs) virtualized from the physical network adapters to direct traffic to the ToR switches, bypassing the vSwitches.

Deploy VXLAN gateways and VXLAN IP gateways on ToR switches and spine nodes.

Optionally, deploy a VCF controller to perform centralized management of VXLAN VTEPs and gateways.

The network-based overlay solution offers the following benefits:

Hardware-based high forwarding performance.

Higher policy processing performance (for example, QoS and ACL performance) offered by ToR switches than vSwitches.

Networking flexibility without dependence on virtualization hypervisors.

Controller deployment by choice.

Use of an SDN controller cluster as the control plane to simplify deployment and improve reliability and scalability.

Use of a gateway group to have gateways share traffic load and back up each other to offer highly reliable gateway services.

Use of distributed gateways so that VM movement between sites does not require reconfiguration of gateways or other network settings.

Figure 15 Network-based overlay network solution

Host-based overlay solution

A host-based overlay solution uses virtual devices on hosts as VTEPs of the overlay network. In this solution, overlay functionality is provided on servers in conjunction with their hypervisors.


As shown in Figure 16, you can deploy a host-based overlay network as follows:

Use vSwitches on hosts as VXLAN VTEPs.

Deploy VXLAN gateways and VXLAN IP gateways on ToR switches and spine nodes.

Optionally, deploy a VCF controller to perform centralized management of VXLAN gateways.

This solution offers the following benefits:

Applicable to the communication within a fully virtualized environment.

Low costs.

Controller deployment by choice.

Use of an SDN controller cluster as the control plane to simplify deployment and improve reliability and scalability.

Use of a gateway group to have gateways share traffic load and back up each other to offer highly reliable gateway services.

Use of distributed gateways so that VM movement between sites does not require reconfiguration of gateways or other network settings.

Figure 16 Host-based overlay network solution

Hybrid overlay application scenario

A hybrid overlay solution uses virtual devices as VTEPs and uses physical devices as overlay network gateways.

As shown in Figure 17, you can deploy a hybrid overlay solution as follows:

In a virtualized site, deploy H3C vSwitches on hosts as VTEPs to connect VMs to the VXLAN network and to forward traffic between VXLANs.

H3C vSwitches can be deployed on hypervisors, such as H3C CAS, Linux KVM, VMware vSphere, and Microsoft Hyper-V.

Deploy a VCF controller cluster to manage VXLAN VTEPs and gateways.

Use core nodes as VXLAN gateways and VXLAN IP gateways to provide Layer 3 forwarding between VXLANs and Layer 3 forwarding between VXLANs and VLANs.


In a non-virtualized data center, use the physical edge devices as VTEPs. This deployment allows physical servers to connect to the overlay network. The VTEPs perform typical Layer 2 forwarding based on the destination MAC address.

This solution uses both software-based and hardware-based platforms as VTEPs. It offers the following benefits:

Attachment of virtualized sites even when their physical access devices lack VXLAN capability.

Layer 2 connectivity between VMs and physical servers.

Hardware-based high performance in forwarding and policy processing.

Use of an SDN controller cluster as the control plane to simplify deployment and improve reliability and scalability.

Use of distributed gateways so that VM movement between sites does not require reconfiguration of gateways or other network settings.

Figure 17 Hybrid overlay network solution

New H3C Technologies Co., Limited

Beijing base

8 GuangShun South Street, Chaoyang District, Beijing

Zip: 100102

Hangzhou base

466 Changhe Road, Binjiang District, Hangzhou,

Zhejiang Province 310052 P.R.China

Zip: 310052

Tel: +86-571-86760000

Fax: +86-571-86760001

Copyright © 2018 New H3C Technologies Co., Limited. All rights reserved.

Disclaimer: Though H3C strives to provide accurate information in this document, we cannot guarantee that it is free of technical or printing errors. Therefore, H3C cannot accept responsibility for any inaccuracy in this document. H3C reserves the right to modify the contents herein without prior notification.

http://www.h3c.com