
VMware HCX on IBM Cloud Deployment and Operations

Date: 2018-01-31
Version: 1.1
Copyright IBM Corporation 2018


Table of Contents

1 Introduction
2 HCX Components and terminology
   2.1 Cloud side, Client side
   2.2 HCX Manager
   2.3 Fleet components
   2.4 HCX UIs
3 Pre-Deployment Planning
   3.1 Avoiding Analysis Paralysis
       Stretched networks - What we know so far (an observation)
   3.2 Migration Life-cycle
       vSphere inventory
       Baseline Network Configuration
       Network Extension
       Pre-flight Tests
       Migration of non-production apps
       Cloud network design / implementation begins
       Additional Network Connectivity Considerations
       Physical servers
       Migrate Production / Complex Applications
       Network Swing
       HCX removal or continued use
   3.3 Supported Prerequisites
       HCX Source Platforms Supported
       HCX Cloud Platforms Supported
   3.4 Connectivity Options
       Standard HCX connectivity
       Optional Connectivity
4 Licensing
5 Cloud and Client Deployment
   5.1 Requirements – HCX Cloud and Source
   5.2 Cloud
   5.3 Client
6 Network Stretching and VM migration
   6.1 Network Stretching
       Concepts and best practice
       Process
       Proximity Routing Option
   6.2 vMotion
       Concepts and best practice
       Operation
   6.3 Bulk Migration
       Concepts and Best practice
   6.4 Migration type best practice
       Shared disk clusters
       General VMs
       VMs utilizing direct attached NAS
   6.5 Network Swing
7 Monitoring
   7.1 WAN Opt
       Migration Bandwidth Throttling
   7.2 HCX Components
   7.3 Bandwidth Utilization
       VM Migration vMotion Traffic
       Stretched Layer 2 Traffic
8 Troubleshooting
   8.1 HCX Client UI
   8.2 Migration
   8.3 Stretched L2
9 Upgrading
10 Removal
Appendix – Links to HCX documentation

Summary of Changes

This section records the history of significant changes to this document.

Version  Date        Author                                                   Description of Change
1.0      2018-01-29  Frank Chodacki, Simon J Kofkin-Hansen, Daniel De Araujo, Initial release
                     Bob Kellenberger, Jack C Benney, Joseph M Brown
1.1      2018-01-31  Same                                                     Removed WAN Opt UI discussion


1 Introduction

VMware HCX on IBM Cloud allows disparate instances of vSphere software-defined data centers (SDDCs) to interoperate across a variety of network types, including LANs and WANs, whether or not they are secured over the public internet. HCX is designed to address the security, compatibility, complexity, and performance concerns encountered in trying to achieve a multi-instance, multi-site deployment of vSphere that extends across on-premises and cloud-provider boundaries. As such, HCX has become the preferred method for interoperation between the fully automated vSphere offerings within IBM Cloud and any other deployed instance of vSphere. HCX is now a fully integrated offering within the VMware on IBM Cloud solution set.

This document is intended as a guide to the deployment and operation of HCX, including best practices and troubleshooting as currently accepted and understood. Because HCX is developed using an Agile methodology, this document should not be relied upon as the de facto source of supported versions or supported platforms. See the appendix for links to updated documentation.

Figure 1


2 HCX Components and terminology

HCX consists of a cloud-side and a client-side install at a minimum. An instance of HCX must be deployed per vCenter, regardless of whether the vCenters where HCX is to be deployed are linked in the same SSO domain on the client or cloud side. Site configurations supported by HCX client to cloud are: one-to-one, one-to-many, many-to-one, and many-to-many.

2.1 Cloud side, Client side

HCX has the concept of a cloud-side and a client-side install.

Cloud side = destination (VMware Cloud Foundation or vCenter as a Service on IBM Cloud). The cloud-side install of HCX is the “slave” instance of an HCX client-to-cloud relationship. It is controlled by the client-side install.

Client side = source (any vSphere instance meeting the prerequisites for install and operation). The client side of the HCX install is the “master”, which controls the cloud-side slave instance via its vCenter web client UI snap-in.

2.2 HCX Manager

The cloud-side HCX Manager is the first part of an HCX install to be deployed on the cloud side by the IBM VMware Solutions automation. Initially it is a single deployed OVA image file specific to the cloud side, deployed in conjunction with an NSX Edge load balancer/firewall configured specifically for the HCX role. The HCX Manager is configured to “listen” for incoming client-side registration, management, and control traffic via the configured NSX Edge load balancer/firewall.

The client-side HCX Manager is a specific OVA image file that provides the UI functionality for managing and operating HCX. The client-side HCX Manager is responsible for registering with the cloud-side HCX Manager and creating a management plane between the client and cloud sides. It is also responsible for deploying fleet components on the client side and instructing the cloud side to do the same.

2.3 Fleet components

HCX fleet components are responsible for creating the data and control planes between the client and cloud sides. Deployed as VMs in mirrored pairs, the fleet consists of the following:

Cloud Gateway (CGW): The Cloud Gateway is an optional component responsible for creating encrypted tunnels that carry vMotion and replication (bulk migration) traffic. Only one pair exists per linked HCX site.

Layer 2 Concentrator (L2C): The Layer 2 Concentrator is an optional component responsible for creating encrypted tunnels for the data and control planes corresponding to stretched Layer 2 traffic. Each L2C pair can handle up to 4096 stretched networks, and additional L2C pairs can be deployed as needed. Note: 4096 stretched networks have not been tested. The bandwidth capability is limited to ~4 Gbps; where the external bandwidth capacity is greater than 4 Gbps, additional L2C pairs allow for greater utilization of the underlying network.

WAN Opt: HCX includes an optionally deployed Silver Peak™ WAN optimization appliance, deployed as a VM appliance. When deployed, CGW tunnel traffic is redirected to traverse the WAN Opt pair. The WAN Opt significantly decreases traffic across the WAN (ratios of 3:1 to 6:1 are typically observed) while increasing connection reliability, so it is recommended to always deploy the WAN Opt with the CGW. An added benefit of deploying the WAN Opt is limiting the WAN bandwidth consumed by VM migration traffic. The WAN Opt interface is not configured by default; see WAN Opt in the Cloud and Client Deployment section.

Proxy ESXi host: Whenever the CGW is configured to connect to the cloud-side HCX site, a proxy ESXi host appears in vCenter outside of any cluster. This ESXi host has the same management and vMotion IP address as the corresponding CGW appliance. This allows the vSphere environments on both the client and cloud sides to function as if they were vMotioning a VM to a local ESXi host. Additional benefits of this method: a) the management IP ranges on either side can overlap with no loss in functionality; b) the cloud side requires no vSphere visibility into the client side (security).

2.4 HCX UIs

Client Web UI – The HCX Client Web UI is the main end-user interface for HCX. Once the client-side HCX Manager is installed, it shows up as a snap-in to the vCenter Web UI. The HCX Client controls remote cloud HCX registration, fleet component deployment, network stretching, and VM migration into and out of the cloud.

Figure 2

Cloud Side UI – The cloud-side HCX UI is accessible by opening the public registration URL given for HCX client registration directly in a browser. By default, it uses the cloud-side vSphere SSO login (i.e., the vCenter administrator account). It is typically used for upgrading the installation and modifying some network configuration. It can also be used to build virtual networks within HCX.

Client / Cloud HCX Manager appliance management UI – Access the appliance management UI for either the cloud or the client side via the VM’s private IP address as viewed in vCenter: https://<hcxmanager_IP>:9443. The ID (typically “admin”) and password are provided via the IBM VMware Solutions portal. The management UI is used to start and stop HCX Manager services, configure log monitoring, perform basic networking configuration and manual upgrades, gather support logs (if the web UI is not functioning), register with vSphere components (vCenter, PSC, NSX Manager), and manage certificates.
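Since the appliance management UI listens on port 9443 and typically presents a self-signed certificate, a quick reachability check can save time before deeper troubleshooting. The sketch below is a minimal example, assuming a placeholder IP address and deliberately skipping certificate validation; it is not part of the HCX tooling.

```python
import requests
import urllib3

# The appliance management UI usually presents a self-signed certificate, so
# certificate verification is disabled for this reachability check only.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

HCX_MANAGER_IP = "192.0.2.10"  # placeholder; use the VM's private IP from vCenter

def appliance_ui_reachable(ip: str, port: int = 9443, timeout: int = 5) -> bool:
    """Return True if the HCX Manager appliance management UI answers on HTTPS."""
    try:
        resp = requests.get(f"https://{ip}:{port}", verify=False, timeout=timeout)
        # Any HTTP response (including a login redirect) proves the service is up.
        return resp.status_code < 500
    except requests.RequestException:
        return False

if __name__ == "__main__":
    state = "reachable" if appliance_ui_reachable(HCX_MANAGER_IP) else "NOT reachable"
    print(f"HCX Manager appliance UI at https://{HCX_MANAGER_IP}:9443 is {state}")
```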


3 Pre-Deployment Planning

Much of the time spent in deploying HCX is in the pre-deployment stage. While it is typical for information systems migration projects to take many months to years, HCX allows migrations and/or network connectivity to the cloud to begin immediately after deployment. Since the deployment of HCX for an enterprise-level customer will typically involve Security, Network, Storage, and vSphere infrastructure teams, it makes sense to involve these teams in the pre-deployment planning phase if possible. Effective project management and early inclusion of stakeholders are critical to ensure that the speed of deployment and operation of HCX is taken advantage of during the migration.

3.1 Avoiding Analysis Paralysis

Many of the hurdles, and much of the time taken, in migrating a VM or group of VMs come from the need to modify parts of the application environment, the design of those changes, and the scheduling of the downtime needed to make those changes. Once these changes are made, the migration becomes difficult to back out of, further adding to analysis paralysis. Trying to capture all aspects of the migration and coordinate them across teams and key stakeholders, who may change in the time it takes to finalize the plan, usually means the project never gets off the ground or, worse, is forced forward.

HCX allows cross-vSphere-instance migration of a VM or group of VMs representing part of, or a complete, composite application without any modifications to the application. Backing out of a migration is as simple as moving the VM(s) back or re-stretching the networks. HCX negates the need for a large part of migration planning and allows for some “parallelism” in the planning process.

After the applications to be moved are selected and a high-level network design is created, the applications can be migrated with minimal configuration on the cloud instance while the final network connectivity and design are worked out. The following sections outline the process.

Stretched networks - What we know so far (an observation)

The network stretch components of the HCX fleet are extremely stable. At one particular customer with more than 20 VLANs stretched into the IBM Cloud across a 1 Gbps WAN shared with other traffic and the HCX migration tunnels, there have been no application issues attributed to the network. The network links have been up for more than six months in this configuration, and additional stretched networks have been added and removed without issue. Picking an IBM data center in close proximity (< 6 ms latency for this particular customer) also improves the stability of stretched networking. Leaving the stretched networks up long term should not be a negative factor in your design, provided you have enough bandwidth and low enough latency for your applications.

3.2 Migration Life-cycle

The following sections describe the phases within a typical HCX migration life cycle, noting where workstreams can be done in parallel.

vSphere inventory

Use an application such as RVTools (https://www.robware.net/rvtools/) to dump the inventory of the source vCenter(s) into spreadsheet form.

Perform a coarse-grained assessment of the VMs within each application to be migrated. “Coarse grained” implies understanding the VMs that participate in an application without delving into the details.

If many VMs are to be migrated and/or network bandwidth is limited between the source and cloud sites, further group VMs by VLAN (or VXLAN if NSX is employed at the source). Grouping allows for a “cascading” HCX migration plan where groups of VMs are migrated by VLAN and the L2 networks they reside on are stretched only until the VLANs are evacuated. The initial group of related stretched L2 networks can only be “un-stretched” once the cloud-side network design is finalized and deployed. Un-stretching implies “swinging” the resulting VXLAN traffic to route through the cloud instance NSX infrastructure.
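As a minimal sketch of the grouping step, the following reads a CSV exported from an RVTools inventory dump and buckets VMs by the network they are attached to. The column names (VM, Network) and the file name are assumptions; adjust them to match the actual export from your RVTools version.

```python
import csv
from collections import defaultdict

# Assumed export: an RVTools tab saved as CSV with one row per VM network adapter,
# containing at least a "VM" column and a "Network" (port group / VLAN) column.
INVENTORY_CSV = "rvtools_vnetwork.csv"  # hypothetical file name

def group_vms_by_network(path: str) -> dict[str, set[str]]:
    """Return a mapping of port group / VLAN name -> set of VM names."""
    groups: dict[str, set[str]] = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            network = (row.get("Network") or "UNKNOWN").strip()
            groups[network].add(row["VM"].strip())
    return groups

if __name__ == "__main__":
    for network, vms in sorted(group_vms_by_network(INVENTORY_CSV).items()):
        # Each group is a candidate migration "wave": stretch the network,
        # migrate its VMs, then un-stretch once the VLAN is evacuated.
        print(f"{network}: {len(vms)} VMs -> {sorted(vms)}")
```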


Baseline Network Configuration

Create a hardened “perimeter” network within the cloud-side vSphere instance. This usually consists of an NSX DLR or Edge appliance. There is no need to create any firewall rules or uplink topology at this time, as this can be completed later or concurrently without affecting the stretched L2 traffic (assuming HCX Proximity Routing is not used). See 6.1.3 Proximity Routing.

Important note: Early versions of HCX force the selection of a routing device on the cloud side when stretching a network. If design changes are later made to the cloud-side overlay and the routing devices will change, you can NOT change the initial routing device chosen when stretching the network without un-stretching and re-stretching that network. This poses a problem if the stretched network is “live”. The solution is to wait until it is time for the network to be un-stretched (see the Network Swing section) and then migrate the cloud-side network connection to a different NSX routing device if desired. As of this writing, the latest versions of HCX should allow selection of a cloud routing device at any time, including after the network is stretched.

Network Extension

Extending a network means taking an existing VLAN or VXLAN from the source vSphere environment, represented by a vSphere Distributed Switch (vDS) port group, and extending it to an NSX VXLAN on the cloud side of HCX (see the note above).

Pre-flight Tests

Pre-flight tests involve performing an HCX migration with both the vMotion and bulk migration functions in order to establish a baseline transfer rate.
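Once a baseline transfer rate has been measured, a rough estimate of how long a migration wave will take helps with scheduling. The sketch below is simple arithmetic under stated assumptions (example sizes, rates, and an overhead factor for retries and the final sync); it is not an HCX feature.

```python
# Rough wave-duration estimate from a pre-flight baseline. All numbers below
# are illustrative assumptions; replace them with your own measurements.

WAVE_DATA_GB = 2_000          # total used-disk data in the wave, in GB
BASELINE_RATE_MBPS = 400      # observed WAN transfer rate from pre-flight tests
OVERHEAD_FACTOR = 1.3         # padding for retries, protocol overhead, final sync

def estimate_hours(data_gb: float, rate_mbps: float, overhead: float = 1.3) -> float:
    """Convert data volume and transfer rate into an estimated duration in hours."""
    data_megabits = data_gb * 8 * 1000        # GB -> megabits (decimal units)
    seconds = data_megabits / rate_mbps * overhead
    return seconds / 3600

if __name__ == "__main__":
    hours = estimate_hours(WAVE_DATA_GB, BASELINE_RATE_MBPS, OVERHEAD_FACTOR)
    print(f"Estimated wave transfer time: {hours:.1f} hours")
```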

Migration of non-production apps

At this point, migration of VMs begins with the planned “waves” of less critical VMs (for example, dev/test), utilizing internet connectivity for migration and stretched L2 traffic.

Cloud network design / implementation begins

While migrations continue, cloud-side network designs are finalized and implemented within the cloud-side vSphere instance.

Additional Network Connectivity Considerations

While migrations continue, private WAN network connectivity can be ordered; it typically takes a few weeks to months to be established with the cloud provider.

Once private network connectivity is completed, HCX can be configured to utilize both the dedicated private network link and the internet for migration and stretched L2 traffic.

Physical servers

When the goal is data center migration into the cloud, any physical servers that interact with the VMs being migrated can be assessed for migration into the IBM Cloud as a VM (P2V) or bare metal, or they can remain at the source. If a physical server is to remain at the source, and HCX will only be used during the migration until a dedicated network is established, it is important to understand whether the physical server resides on any network that is stretched into the cloud with HCX. In this scenario, HCX is allowing not only the VMs but the entire subnet to be migrated into the cloud. To remove HCX at the end of the migration, the subnet cannot exist in both the source and the destination if connectivity between the physical devices and the migrated VMs is to be maintained. This implies that any physical devices left behind at the source site that exist on stretched L2 networks must be migrated to another network subnet that is capable of routing to the cloud side. The exception is if some other stretched L2 technology, such as NSX L2 VPN, is set up to replace the HCX stretched L2 endpoints on an ongoing basis.


Migrate Production / Complex Applications

VMs with shared multi-writer VMDKs, such as Oracle RAC or MS Exchange / SQL clusters, or VMs with raw device mappings (RDMs), are examples of VMs that need extra consideration prior to migration.

Network Swing

Network swing occurs once the evacuation of the VMs on the source-side network(s) is complete and the network design / implementation is completed on the cloud side. Configuring HCX to un-stretch the networks related to the completed migration waves allows the migrated VMs to route network traffic via the cloud-side NSX infrastructure.

HCX removal or continued use

Depending on the use case, HCX can be left in place indefinitely. Examples are:

Capacity expansion – burst into cloud. End-of-month processing requires extra resources to complete within SLAs. Adding extra application VMs for this need and then removing them or shutting them down (scaling the cluster back as well) allows for paying for just what is needed.

BYOIP – Allowing the customer to bring their own IP subnet ranges into IBM Cloud can be difficult, especially when attempting to route into the IBM Cloud environment from the underlay network. HCX makes this simple by connecting the source vSphere environments directly into the cloud vSphere NSX overlay networking.

Security – Utilizing the internet for connectivity is attractive because of its ready availability, variety of connectivity options, and redundancy in paths; however, security becomes a concern. HCX adds a high level of security (Suite B encryption), which allows the use of the internet for site-to-cloud connectivity without sacrificing performance.

3.3 Supported Prerequisites

Again, this document describes current HCX capability at the time of its writing. See https://www.ibm.com/cloud/garage/files/HCX_Architecture_Design.pdf for currently supported HCX platforms and limitations.

HCX Source Platforms Supported

For network extension, only port groups on a vSphere Distributed Switch (vDS) are supported. This also implies that standalone ESXi hosts are not supported, as a vDS is only available when ESXi hosts are managed by vCenter.

- vSphere 5.1 (command line only for vCenter 5.1, via API)
- vSphere 5.5 (web client UI supported on vCenter 5.5u3 and above)
- vSphere 6.0
- vSphere 6.5 (vDS must be at the 6.0 level)

HCX Cloud Platforms Supported

The HCX cloud side is provisioned by IBM Cloud automation. All current versions of the automated install of vSphere on IBM Cloud (VCF and VCS) are supported.

3.4 Connectivity Options

Standard HCX connectivity

As deployed by the IBM Cloud VMware Solutions automation, the HCX cloud-side install is configured to connect across the public internet.

Optional Connectivity

Private network. Once the cloud side is deployed, HCX can be manually reconfigured to allow connection across the IBM Cloud private network via a dedicated link.

Hybrid. HCX will only deploy its fleet components to a single configured external endpoint network. If it is desired to have some HCX components traverse the public internet while others traverse a dedicated link over the IBM Cloud private network, then the endpoint pairs must be reconfigured post-deployment. The downside is that, should it be necessary to redeploy a fleet pair, the configuration will revert to the standard configuration. This should be considered only if required temporarily. See Cloud and Client Deployment for specifics.


4 Licensing

HCX is a service. As such, HCX is licensed per site and per managed VM via licensing servers maintained by VMware, based on what is ordered via the IBM VMware Solutions portal. The HCX cloud-side and client-side instances must be able to communicate with a VMware registration site throughout their lifecycle.

- Traffic on ports 80 and 443 must be allowed to https://connect.hcx.vmware.com (a connectivity check sketch follows this list).
- A one-time-use registration key will be provided for the client-side install via the IBM Cloud VMware Solutions portal. A key is required for each client-side HCX installation.
- The cloud-side HCX registration is completed automatically by the IBM Cloud HCX deployment automation.
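A quick way to confirm that the registration endpoint is reachable before an install is a simple TCP connectivity test from the network where the HCX Manager will live. The sketch below only checks that outbound TCP sessions to ports 80 and 443 can be opened; it is an illustration, not part of the HCX installer.

```python
import socket

REGISTRATION_HOST = "connect.hcx.vmware.com"
PORTS = (80, 443)

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in PORTS:
        state = "open" if port_open(REGISTRATION_HOST, port) else "blocked"
        print(f"{REGISTRATION_HOST}:{port} -> {state}")
```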


5 Cloud and Client Deployment

As described at the beginning of this document, a minimal HCX install consists of a single cloud-side and client-side deployment. The HCX client-side install can be performed on any version of vSphere supported by HCX, assuming there is network connectivity between the client and cloud sides. (This section is subject to change and is here as reference – please refer to the IBM HCX architecture document for up-to-date information. See the Appendix for links to documentation.)

5.1 Requirements – HCX Cloud and Source

- HCX Manager – CPU count 4, memory 12 GB, disk 60 GB
- CGW – CPU count 8, memory 3 GB, disk 2 GB
- L2C – CPU count 8, memory 3 GB, disk 2 GB
- WAN Opt – CPU count 8, memory 14 GB, disk 100 GB
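When sizing the cluster that will host these appliances, it can help to total the footprint of the components planned for each side. The sketch below simply sums the figures listed above for an assumed deployment of one manager, one CGW, one L2C, and one WAN Opt appliance per side; adjust the selection to your design.

```python
# Per-appliance requirements as listed above: (vCPU, memory GB, disk GB).
REQUIREMENTS = {
    "HCX Manager": (4, 12, 60),
    "CGW": (8, 3, 2),
    "L2C": (8, 3, 2),
    "WAN Opt": (8, 14, 100),
}

def total_footprint(components: list[str]) -> tuple[int, int, int]:
    """Sum vCPU, memory, and disk for the selected components on one side."""
    cpu = sum(REQUIREMENTS[c][0] for c in components)
    mem = sum(REQUIREMENTS[c][1] for c in components)
    disk = sum(REQUIREMENTS[c][2] for c in components)
    return cpu, mem, disk

if __name__ == "__main__":
    # Assumed single-site deployment: one of each appliance per side.
    cpu, mem, disk = total_footprint(["HCX Manager", "CGW", "L2C", "WAN Opt"])
    print(f"Per-side footprint: {cpu} vCPU, {mem} GB memory, {disk} GB disk")
```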

5.2 Cloud

The cloud-side HCX deployment is handled by the VMware Solutions for IBM Cloud automation on either a VMware Cloud Foundation (VCF) or vCenter as a Service (VCS) deployed instance. The default configuration uses the VCF or VCS public port group for configuring the endpoint connectivity of any fleet components. The fleet components on the cloud side are deployed with their endpoint interfaces configured with public IPs, as they are security-hardened appliances. It is possible to deploy them behind a firewall such as an IBM Cloud deployed Vyatta or FortiGate. It is not recommended nor supported to have the client and cloud sides connect to each other across an existing VPN tunnel.

If it is desired to utilize the private network for HCX endpoint connectivity, IBM support must be contacted to reconfigure the HCX cloud side for private network fleet deployment for use across a dedicated link (MPLS, etc.).

Requirements:

- A new private portable IP range allocated to the default private VLAN.
- The HCX Edge configured to use a single-legged configuration and moved to the default private VLAN.
- The HCX Manager reconfigured to utilize the private port group and the new portable IP range for the “external”-facing fleet interfaces.

AS OF THIS WRITING, THIS MUST BE DONE VIA API CALLS OR MONGO DB EDITS WITHIN THE HCX MANAGER; HOWEVER, A FUTURE RELEASE OF HCX WILL ALLOW THE RECONFIGURATION OF THE FLEET COMPONENTS VIA THE HCX CLIENT-SIDE UI.

Once this configuration is complete, the cloud-side HCX Manager will automatically deploy the cloud-side fleet components with the correct network settings.

For hybrid network connectivity, the standard HCX configuration is used to deploy the fleet components.

5.3 Client

The client-side HCX install is end-user deployed and requires Administrator-level permissions to the source vCenter. As of this writing, the HCX client-side Manager OVA is approximately 1.7 GB in size. Upon ordering HCX via the VMware Solutions for IBM Cloud portal, a link is provided in the cloud-side HCX Manager UI to download the version of HCX for the client side that matches the version of HCX deployed on the cloud side. A one-time-use registration key will also be provided.

Configuration steps:

1. Download the HCX Client (Enterprise) OVA from the link provided in the cloud-side HCX UI.

1.1. Log in to the cloud-side HCX UI by using the HCX registration URL provided by IBM.

Figure 3

1.2. Use the cloud vCenter ID and password to log in to the UI.

1.3. On the Administration tab, select “request download link” to download the client-side OVA. (It is recommended that you do this from a “jump box” that is local to the source vCenter where the OVA will be deployed.)

Figure 4

2. Log in to the vSphere client (the legacy desktop client, or the web client with a functioning client integration plug-in) and import the OVA using the vCenter import wizard. Ensure the network on which the HCX Manager is configured has access to both the source vCenter and the internet. Enter the registration key when prompted.


3. Log out of and back into the vCenter web client. You should now see an “HCX” menu selection / icon under the home screen / menu.

4. If using self-signed certificates, select the HCX menu item to enter the HCX snap-in UI and select the Administration tab.

5. Select to import certificates from URL and key in the HCX cloud-side registration URL provided by the VMware Solutions on IBM Cloud portal for HCX.

6. Next, select “New Site Pairing” in the Site Pairings pane within the Dashboard tab. Follow the prompts and key in the HCX cloud-side registration URL and the cloud-side VCF/VCS vCenter administrator ID and password.

7. Continue to follow the prompts in the site registration wizard to configure the fleet components, including the Cloud Gateway, Layer 2 Concentrator, and WAN Optimizer.

Monitoring of the client-side fleet components is possible by using the “Tasks” menu item. Cloud-side deployment monitoring is available under the Tasks menu item within the vCenter web UI for the particular VCF/VCS instance.

A deployment failure on either side causes the fleet component deployment to be backed out and deleted. Following failure remediation, select the “Interconnect” tab in the HCX vCenter web UI on the client side, then select “Install HCX Components” at the top of the screen.

Several minutes after successful deployment of the fleet components, the tunnel status for the CGW and L2C components should be “Up” as viewed from the Interconnect tab.

Figure 5


6 Network Stretching and VM migration

6.1 Network Stretching

Concepts and best practice

The “glue” that bridges the client-side network to the cloud-side VXLAN is a sophisticated multi-tunnel VPN built from proprietary HCX technology. It is not based on NSX, but it works with NSX and extends its capabilities. The process is controlled by the client-side vCenter web UI and automates the deployment and bring-up of both endpoints on the client and cloud sides when the networks to be stretched are selected, individually or in batch. Additionally, as part of the network stretching workflow, NSX on the cloud side is commanded to build a VXLAN and connect it to an interface created on the specified cloud-side L3 device (a DLR or ESG left in an unconnected state) and to the cloud-side L2C appliance.

Given a set of VMs that are intended to be migrated in a particular “wave”, all of the client-side vSphere vDS port groups that these VMs are connected to will typically need to be stretched into the cloud side with HCX.

Why “typically” and not “always”? Because it may be advantageous to disconnect certain traffic from the client side once the VM is migrated. An example case is VMs running in-guest backup clients, which could cause high bandwidth utilization when moved to the cloud. The in-guest backup client is not required once the VM is migrated, since the relocated VM is automatically picked up by a more modern block-level backup on the cloud side. Rather than accessing each VM to “shut off” the in-guest backup schedule, disconnecting the VM’s backup network adapter (if a backup network was in use) causes the network client backup to fail. Post-migration, the VMs are accessible and the in-guest backup client can then be disabled.

The bandwidth of a single L2C is theoretically 4 Gbps; however, it has been observed that this is the limit for all stretched networks within a single L2C pair and is not achievable by a single stretched network. A single stretched network can achieve ~1 Gbps, given that enough underlay bandwidth is allotted and latency is low (< ~10 ms).
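A back-of-the-envelope check of how many L2C pairs a design might need follows from the figures above (~4 Gbps aggregate per pair, ~1 Gbps per individual stretched network, a theoretical 4096 networks per pair). The sketch below is simple arithmetic on assumed inputs, not a sizing tool from VMware or IBM.

```python
import math

# Figures quoted above; treat them as planning assumptions, not guarantees.
L2C_PAIR_AGGREGATE_GBPS = 4.0      # observed aggregate limit per L2C pair
PER_NETWORK_GBPS = 1.0             # observed limit for a single stretched network
NETWORKS_PER_PAIR = 4096           # theoretical maximum (untested per the text above)

def l2c_pairs_needed(stretched_networks: int, busy_networks: int) -> int:
    """Estimate L2C pairs from network count and how many are busy at once."""
    by_capacity = math.ceil(stretched_networks / NETWORKS_PER_PAIR)
    by_bandwidth = math.ceil(busy_networks * PER_NETWORK_GBPS / L2C_PAIR_AGGREGATE_GBPS)
    return max(1, by_capacity, by_bandwidth)

if __name__ == "__main__":
    # Assumed scenario: 20 stretched VLANs, of which ~8 carry heavy traffic concurrently.
    print("Suggested L2C pairs:", l2c_pairs_needed(stretched_networks=20, busy_networks=8))
```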

Process

To stretch a network (VLAN or VXLAN) with HCX, perform the following from the client-side vCenter Web UI:

1. For individual selection of port groups, navigate to the Networks tab within the vCenter web UI, right-click the network to be stretched, and select the “Hybridity Actions” -> “Extend Networks to the Cloud” submenu item.

Figure 6

2. On the following screen, select the cloud-side L3 device to connect to and the L2C appliance that will be used (if there is more than one provisioned). Key in the current default gateway and subnet mask (CIDR format).

3. Click the “Stretch” button at the bottom of the screen to begin the network stretch workflow.

Figure 7

4. Network stretch progress is monitored in the vCenter client “Tasks”.

Proximity Routing Option

Without any type of route optimization, extended networks must be routed back to the client side for any L3 access. This tromboning introduces a very inefficient traffic pattern, as packets need to travel back and forth between the client source and the cloud even in cases where both source and destination are within the cloud. The Proximity Routing feature of HCX addresses this traffic “hair-pinning” and enables local egress of traffic where desired.

PROXIMITY ROUTING SHOULD ONLY BE ENABLED after updating HCX to the latest available version. As of this writing, proximity routing is only available when stretching L2 networks into an NSX ESG. Future releases of HCX will support the use of a destination DLR for proximity routing.

6.2 vMotion

The vMotion capability within HCX extends vSphere vMotion to work across differing versions of vSphere, separate SSO domains, and various types of network connectivity, including across the internet. HCX assumes that the network used to connect across is insecure and always moves traffic via encrypted tunnels, regardless of the type of connectivity.

Concepts and best practice

HCX is essentially a two-way vMotion proxy. HCX emulates a single ESXi host within the vSphere data center, outside any cluster, that is itself a “front” for the Cloud Gateway fleet component (CGW). A proxy host will appear for each HCX site that is linked to the currently viewed site.

When a vMotion is initiated to a remote host, the local ESXi host vMotions the VM to the local proxy ESXi host fronting the CGW, which maintains an encrypted tunnel with the CGW on the remote side. At the same time, a vMotion is initiated from the remote proxy ESXi host to the destination physical ESXi host as it receives data from the source CGW across the tunnel.

When vMotion is employed, unlike the bulk migration option, only a single VM migration operation runs at a time. Because of this, when large numbers of VMs are to be migrated, it is recommended that vMotion be used only where downtime is not an option or where there is risk in rebooting the VM.

However, like standard vMotion, the VM can be “live” during the process. It has been observed that a single vMotion tops out at around ~1.7 Gbps on the LAN and 300 to 400 Mbps on the WAN via the WAN Optimizer. This does not mean 1.7 Gbps on the LAN equals 400 Mbps on the WAN; these are simply the maximums observed in one particular environment, which consisted of a 1 Gbps LAN vMotion network and a 1 Gbps internet uplink shared with production web traffic. See the WAN Opt section under Monitoring for more bandwidth-related observations.

vMotion is recommended where:

- The VM is troublesome to shut down / start up, or uptime has been very long and shutting the VM down would introduce risk.
- The application is a cluster type that requires disk UUIDs, such as Oracle RAC clusters. vMotion will not change the disk UUIDs on the destination.
- You wish to move a single VM as quickly as possible.
- Scheduled migration is not required.

Operation

To initiate a cross-cloud vMotion, either the HCX web UI snap-in portal or the vSphere web client contextual extension menus can be used. In either case, the same migration wizard comes up. From the contextual menus, only a single VM can be selected for migration operations; via the portal, multiple VMs can be selected.

Figure 8

Figure 9

Reverse migration of VMs can only be done from the web UI portal, using the “reverse migration” check box in the HCX migration wizard.

6.3 Bulk Migration

Concepts and Best practice

The bulk migration capability of HCX utilizes vSphere replication to migrate disk data while re-creating the VM on the destination vSphere HCX instance. A migration of a VM triggers the following workflow:

- Creation of a new VM on the destination side and its corresponding virtual disks.
- Replication of VM data to the new VM. (Replication starts as soon as the wizard is completed, regardless of “switch over” scheduling.)
- Power-down of the original VM.
- Final replication of any data changed during the power-off period.
- Power-up of the new VM on the destination side.
- Rename and movement of the original VM to the “moved to cloud” folder.

Advantages of bulk migration over vMotion are as follows:


- Migration of many VMs concurrently.

Figure 10

- More consistent bandwidth utilization. vMotion can generate fluctuations in bandwidth utilization, which are visible as peaks and valleys within network monitoring tools or the WAN Opt UI.

Figure 11

- Bulk migration can achieve higher overall utilization of a given network bandwidth than a single vMotion is capable of.
- Scheduling. Bulk migration can be scheduled to “flip” over to the newly migrated VMs during a scheduled outage window within the HCX migration UI wizard.
- It may allow VMs to migrate that are currently using virtual CPU features that differ from the cloud side, where vMotion fails.

Disadvantages:

- Individual VMs migrate much more slowly than with vMotion.
- The VM will incur brief downtime as the new “clone” VM is brought up on the destination side.
- VMs that depend on disk ordering and disk UUIDs (e.g., Oracle RAC) may have issues and/or have disks that show up differently, as the UUIDs will be changed, which might change the OS paths to the virtual disk devices.

6.4 Migration type best practice


Shared disk clusters

Oracle RAC, MS Exchange clusters, and MS SQL clusters are examples of applications where two or more VMs participate in a cluster that requires shared disk across all cluster nodes. The VMware multi-writer flag must be enabled on all VM nodes for disks that are part of the application cluster (non-OS virtual disks). VMs with the multi-writer flag enabled for any virtual disk are not supported for migration.

Migrating a cluster with multi-writer virtual disks enabled:

- Must use vMotion to maintain the original VM disk / UUID mappings.
- The cluster can remain up in a degraded state (single node) while being migrated.
- The cluster will incur downtime prior to the start of the migration and after the migration is complete, to “re-assemble” the multi-writer configuration across the cluster VM nodes.

1) Bring down the cluster and all nodes per the application best practice.
2) Capture / note the disk order (if the application requires it) in each node VM for the multi-writer configured virtual disks.
   a) For Oracle and any other application that utilizes the virtual disk UUID feature, log in to a particular ESXi host and run the “vmkfstools -J getuuid /vmfs/volumes/datastore/VM/vm.vmdk” command to get the UUID of each virtual disk file that requires the multi-writer flag for the cluster. This is necessary if, as a best practice, you need to align the disk device names in the VM definition with how they show up naming/path-wise in the OS. vMotion can reorder the disks (disk1, disk2, disk3, etc.), but the UUIDs will remain the same. Use the noted UUID-to-disk mapping to recreate the disk naming order / SCSI IDs when the migration completes, if necessary. The application should function either way; this is useful where an Oracle instance has many virtual disks mapped, for troubleshooting of the application. (A parsing sketch follows this list.)
3) Remove the virtual disks from all cluster VM nodes except the one deemed primary.
4) Remove the multi-writer flag from the primary VM cluster node, which should be the only one owning the cluster disks at this time.
5) Bring the primary cluster node back up if required for minimal downtime.
6) Migrate all cluster nodes with vMotion, primary first. All other nodes will migrate “cold” (powered off).
7) When the primary node that owns the disk(s) completes migration, power it down gracefully.
8) Remap the disk order with the proper disk UUIDs / SCSI IDs if required, as noted above. (Not required for the application to function.)
9) Re-enable the multi-writer flag on the primary node.
10) Start the primary node and verify operation.
11) Map the disks / enable the multi-writer flag on all other cluster node VMs and power them on.
12) Verify operation of the other cluster nodes.
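As referenced in step 2a, keeping a record of which virtual disk carries which UUID makes it easier to restore the disk order after migration. The sketch below parses saved output of the vmkfstools getuuid command into a simple disk-to-UUID map; the exact output format can vary by ESXi version, so the parsing shown here is an assumption to adjust against real output.

```python
import re

# Example captured output, one "vmkfstools -J getuuid <disk>.vmdk" run per disk.
# The format shown here is an assumption; compare with real output before relying on it.
CAPTURED_OUTPUT = {
    "oracle-node1_1.vmdk": "UUID is 60 00 C2 93 b3 48 99 fa-5c 0e 96 b6 ba 16 b0 e3",
    "oracle-node1_2.vmdk": "UUID is 60 00 C2 93 11 22 33 44-55 66 77 88 99 aa bb cc",
}

def parse_uuid(raw: str) -> str:
    """Extract the hex UUID portion from a vmkfstools getuuid output line."""
    match = re.search(r"UUID is\s+(.*)", raw)
    if not match:
        raise ValueError(f"Unrecognized output: {raw!r}")
    # Normalize by stripping spaces so UUIDs compare cleanly before and after migration.
    return match.group(1).replace(" ", "").lower()

if __name__ == "__main__":
    disk_map = {disk: parse_uuid(raw) for disk, raw in CAPTURED_OUTPUT.items()}
    for disk, uuid in disk_map.items():
        print(f"{disk} -> {uuid}")
```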

General VMs

Demonstration of proper HCX function is achieved with the successful vMotion of a single general VM. Bulk migration of general VMs is recommended where redundant applications are concerned (web servers, etc.) or where many VMs (hundreds to thousands) are targeted for migration.

VMs utilizing direct attached NAS

NFS is typically employed to share data across many servers; an example is a web server content share. iSCSI can be employed across VM nodes comprising an application cluster such as email or an RDBMS and is typically more latency sensitive than NFS. In either case, if latency to the IBM data center can be kept low (< ~7 ms for iSCSI, or whatever the application will tolerate for NFS), and the application can operate with a bandwidth of ~1 Gbps or less, then the NAS network can be stretched with HCX into an IBM Cloud location. Once the NAS network is stretched, the VMs can be migrated / vMotioned with HCX as normal. Post-migration, iSCSI volumes can be mirrored with the OS to another local cloud storage solution, and NFS data can be replicated over to any in-cloud solution. The considerations are:

- Latency (iSCSI, or application tolerance for NFS)
- Bandwidth (~1 Gbps per stretched network)
- Underlay link bandwidth

Again, following the migration life cycle, test with dev/QA or staging applications prior to attempting this with production. QoS can be employed for the underlay tunnel traffic (UDP 500 / 4500) between the L2C HCX appliances carrying latency-sensitive stretched L2 networks.
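A simple pre-check for the latency considerations above is to measure round-trip time from the NAS client network to the target IBM Cloud data center and compare it against the thresholds mentioned. The sketch below shells out to the system ping command (Linux/macOS style output) against a placeholder address; the address and threshold are assumptions to adapt to your environment.

```python
import re
import subprocess

TARGET = "203.0.113.10"     # placeholder: an address in the target IBM Cloud data center
ISCSI_THRESHOLD_MS = 7.0    # rough iSCSI tolerance noted above

def average_rtt_ms(host: str, count: int = 10) -> float:
    """Return the average round-trip time in ms using the system ping."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 5.1/6.2/8.9/0.7 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError("Could not parse ping output")
    return float(match.group(1))

if __name__ == "__main__":
    avg = average_rtt_ms(TARGET)
    verdict = "within" if avg <= ISCSI_THRESHOLD_MS else "above"
    print(f"Average RTT {avg:.1f} ms is {verdict} the ~{ISCSI_THRESHOLD_MS} ms iSCSI guideline")
```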

6.5 Network Swing

When the goal is data center evacuation into the IBM Cloud, the next-to-last step prior to HCX removal is the network swing. Network swing migrates the network subnet that supports the migrated VMs from the source data center onto an NSX overlay network within the IBM Cloud.

Swinging the network involves the following:

- Verify the network is evacuated of all workloads, and that any non-VM networked devices are either moved to another network, functionally migrated to the cloud, or deprecated.
- Verify the NSX topology and/or the supporting IBM Cloud network topology is ready to support the network swing (dynamic routing protocols, firewalls, etc.).
- Execute an HCX un-stretch network workflow in the UI and select the appropriate NSX router to take over the un-stretched network’s default gateway.
- Execute any external route changes, which may include: insertion of changed routes for networks that were migrated, removal of routes to the source site from the migrated network, and validation of routes from the migrated subnet across the WAN link when connectivity to applications not yet migrated is necessary.
- Application owner testing of migrated applications from all possible access points: internet, intranet, VPN.

Observation: A customer wishes to network swing a particular application that has all VMs completely migrated to the cloud. The customer is utilizing a Vyatta on the private network side to 1) insert routes into their MPLS cloud, and 2) tunnel to the edge routing devices on the MPLS to avoid the IBM Cloud IP space. The customer has their account set up with an IBM Cloud VRF. Some of the applications are behind a network load-balanced virtual IP (VIP). Those VIPs are on a customer-owned subnet residing on a virtual F5 behind the Vyatta. While advertising more specific routing into the MPLS for networks that are swung over to IBM Cloud via HCX works fine for other networks, it does not work for the individual VIPs, as a /32 route is being inserted. Solution: It is common for WAN providers to filter out advertised /32 routes. Work with the WAN vendor to allow them.

The following are considerations and implications:

- Applications that share the same subnet / VLAN / VXLAN should migrate together.
- Applications behind a load balancer using an internal routable IP may require route changes if they cannot migrate together. (Too many applications migrating in one swing may create a perception of excessive risk.)
- Involvement of VMware admins, network admins (including the customer’s WAN vendors), and application owners is required, even if the planned changes do not impact a particular system or piece of network equipment.


7 Monitoring

7.1 WAN Opt

The Silver Peak™ WAN optimization appliances that are deployed as part of HCX do not get their management UIs configured. The WAN Opt web UI is a valuable tool when baselining traffic throughput and throttling migration network bandwidth utilization.

Note: Only the HCX CGW WAN tunnel traffic flows through the WAN Opt appliance; it cannot monitor stretched L2 traffic. Please contact support if access to the WAN Opt web UI / console is required. At this time the UI is not a supported function and is disabled due to security advisories concerning the web UI.

Migration Bandwidth Throttling

Prior to VM migration, an assessment of the current network link should be undertaken. Talking to the networking engineers responsible for the network containing the source instance of vSphere and reviewing weekly and monthly traffic utilization are basic approaches to understanding the network design and performance. It is advisable to restrict bandwidth for migrations if that traffic will traverse a link critical to the customer’s business; this is especially important where the link capacity is less than 1 Gbps. Traffic throttling should be performed where the bandwidth is most constrained, which is typically the client side.

Bandwidth throttling is done when deploying the fleet components in the HCX Client UI. Post-deployment changes are performed using the WAN Opt UI.
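A simple way to pick a starting throttle value is to subtract the link's observed business-traffic peak from its capacity and keep some headroom. The sketch below is arithmetic on assumed figures, not an HCX setting; the resulting number is what you would enter when deploying the fleet components.

```python
# All inputs are illustrative assumptions; use your own link measurements.
LINK_CAPACITY_MBPS = 1000      # WAN link capacity
BUSINESS_PEAK_MBPS = 450       # observed peak of existing business traffic
HEADROOM_FRACTION = 0.2        # safety margin kept free on the link

def migration_cap_mbps(capacity: float, business_peak: float, headroom: float = 0.2) -> float:
    """Suggest a migration bandwidth cap that leaves business traffic and headroom untouched."""
    available = capacity - business_peak - capacity * headroom
    return max(available, 0.0)

if __name__ == "__main__":
    cap = migration_cap_mbps(LINK_CAPACITY_MBPS, BUSINESS_PEAK_MBPS, HEADROOM_FRACTION)
    print(f"Suggested migration throttle: {cap:.0f} Mbps")
```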

7.2 HCX Components

Operation of the HCX components (HCX Manager, Cloud Gateway, WAN Opt, and the Layer 2 Concentrator) is monitored in the following ways:

- Configure HCX Manager to send logs to a syslog server. This is done in the HCX Manager appliance management utility: https://<hcxhostname or IP>:9443

Figure 12

- Set up a ping to a VM that has been migrated (prior to network swing) for each stretched L2 network (a minimal monitoring sketch follows this list).
- Monitor HCX component VM health with VMware vRealize Operations Manager or other VMware VM monitoring tools.
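As a minimal sketch of the ping check mentioned above, the loop below reports which stretched networks stop responding, assuming one representative migrated VM per stretched network and Linux-style ping flags. The network names and addresses are placeholders.

```python
import subprocess
import time

# Placeholder map: one migrated VM per stretched L2 network to act as a reachability canary.
CANARY_VMS = {
    "VLAN_110_app": "192.0.2.21",
    "VLAN_120_db": "192.0.2.35",
}

def is_reachable(ip: str) -> bool:
    """Single ping probe (Linux-style flags); returns True when the host answers."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

if __name__ == "__main__":
    while True:
        for network, ip in CANARY_VMS.items():
            if not is_reachable(ip):
                # In practice this would feed syslog or an alerting tool rather than stdout.
                print(f"WARNING: canary {ip} on stretched network {network} is unreachable")
        time.sleep(60)
```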

7.3 Bandwidth Utilization

Various methods are available to monitor bandwidth utilization and latency.

VM Migration vMotion Traffic

The WAN Opt web UI is the preferred method of monitoring migration traffic. The WAN Opt dramatically reduces the traffic going over the WAN and reduces packet loss by sending redundant packets. It has been observed that the typical LAN-to-WAN bandwidth utilization ratio is ~3:1 (350 Mbps LAN = 90-120 Mbps WAN).

A replication-based (bulk) migration of VMs within HCX results in the VMs being moved thick-provisioned. While this may not be desirable, the WAN Opt UI will reveal an extremely high LAN-to-WAN ratio when moving “empty” disk data. Conversely, when non-compressible data is migrated (DB data, digital media, etc.), the WAN utilization will be at its highest, as it comes closer to the LAN input utilization.

Some observations:

- vMotion of a VM within HCX will result in no more than the throughput of the vMotion network to a single ESXi host.
- As bulk migration can have multiple migrations in flight simultaneously, it can achieve higher bandwidth utilization than a vMotion migration. The ratio observed at a customer site with 1 Gbps vMotion links to the ESXi hosts was: 8 replications = bandwidth utilization of 1 vMotion.

Figure 13

- Moving empty space on disk shows up as high LAN utilization with a high ratio and subsequently low WAN utilization. Note in the following real-time chart that 1 Gbps appears to be the limit. Indeed, in this particular case the vMotion network is only capable of 1 Gbps, which is the bottleneck.

Figure 14

- vMotion migration of a multi-TB Oracle DB: with a WAN link of 1 Gbps, the limitation is the 1 Gbps vMotion network.

Figure 15

Stretched Layer 2 Traffic

The HCX Layer 2 Concentrator (L2C) has a bandwidth limitation of approximately 4 Gbps aggregate for all L2 network traffic traversing it. Individual "flows" within stretched networks have a bandwidth limit of roughly 1 Gbps or less depending on the traffic type, while an individual stretched network can use up to the maximum bandwidth of the L2C. It is possible to have many stretched L2 networks across a single L2C pair (a theoretical maximum of 4096 networks per L2C pair). While the L2C is engineered to detect and protect small traffic flows from large flows within the same L2C pair, it can be advantageous to identify when this situation is occurring and bring up additional L2Cs to increase overall bandwidth capability. Deploying multiple L2Cs may also be advantageous where multiple paths exist between the customer site and IBM Cloud (for example, Direct Link and the internet) so that all available paths are used to spread the migration network traffic load. A single network cannot be made redundant, or be given increased bandwidth, across multiple L2C pairs.


Figure 16

Prior versions of the HCX L2C allowed you to identify which virtual network an L2C vNIC was stretching by checking which port group it was connected to. As of 3.5.1 this is no longer the case: the L2C appliance now trunks the networks it will connect to, including the WAN connection, to a distributed switch port group.

Monitor the traffic across all interfaces using the "Monitoring" tab of the L2C VM. If the total data rate is approaching 4 Gbps, consider adding another L2C pair and redistributing stretched networks to it to rebalance (a sketch of this kind of check follows Figure 17).

Figure 17
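A minimal sketch of that aggregate check, assuming you have exported per-interface data rates (in Mbps) from the L2C VM's Monitoring tab or another monitoring tool; the sample rates are hypothetical:

    #!/bin/bash
    # Sum per-interface data rates for an L2C and warn when the aggregate
    # approaches the ~4 Gbps limit. The rates (Mbps) are placeholders that
    # would come from your monitoring tooling.
    RATES_MBPS="900 1200 650 400"
    LIMIT_MBPS=4000
    WARN_MBPS=3200   # warn at ~80% of the aggregate limit

    total=0
    for r in $RATES_MBPS; do
      total=$(( total + r ))
    done

    echo "Aggregate L2C throughput: ${total} Mbps (limit ~${LIMIT_MBPS} Mbps)"
    if [ "$total" -ge "$WARN_MBPS" ]; then
      echo "Consider adding another L2C pair and redistributing stretched networks."
    fi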


8 Troubleshooting

Here are some common HCX issues and fixes.

8.1 HCX Client UI

HCX UI token time-out: when the vCenter UI has been left open for some time, you may encounter the following in the HCX UI.

Figure 18

This occurs when the login token for the HCX Manager server has timed out. Simply log out of the vSphere web UI and back in to refresh the token.

HCX Client UI displaying "NaN" for all metrics on the dashboard screen: this is a permissions issue for the currently logged-in vCenter account. Verify that the "Enterprise Administrator" group is set in the HCX cloud-side appliance manager UI.

Figure 19


8.2 Migration

Migration issues in current versions of HCX usually fall into three categories: licensing, Cloud Gateway (WAN) network connectivity, and destination hardware compatibility.

Licensing: if a migration fails because of a licensing issue, current versions of HCX display this error message in the client web UI within the vCenter UI.

Figure 20

Network (WAN) connectivity: if there is an issue with WAN connectivity, always check the "Interconnect -> HCX Components" screen within the HCX UI for tunnel status.

Figure 21

The fleet components typically do not need to be reset or rebooted; when WAN connectivity is restored, they reconnect automatically. However, if fixes or updates applied to the HCX Managers (client and cloud) also patch issues with the fleet components, you must "redeploy" the Cloud Gateway and any L2Cs deployed.

It is possible to do further tunnel status debugging by connecting to the HCX Manager with an SSH client such as PuTTY and running the ccli utility (a condensed example session follows the steps below):

1. SSH to the HCX Manager using the ID "admin" and the supplied password.

2. Execute "su -" and enter the root password (the same as the admin password) to change to root.

3. Change directory to "/opt/vmware/bin" and run "./ccli" (if this fails because the environment is not set up for root, run "./ccliSetup.pl" first).

4. Execute the "list" command within the ccli shell to list the fleet components registered with the HCX Manager.

5. Select the focus of the ccli by typing the "id" listed for the fleet component, for example "go 8".

6. Run "debug remoteaccess enable" to enable SSH access to the desired fleet component.

7. Exit the ccli.

8. SSH to the IP of the SSH-enabled fleet component.

9. Continue to troubleshoot.

10. Return to the ccli and disable the SSH service for the component.


Figure 22


11. You can also use the ccli to run a health check on the components using the "hc" ccli command.

Figure 23
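A condensed sketch of such a ccli session, using only the commands described in the steps above; the component id "8" and the manager address are illustrative placeholders, and command output is omitted:

    # On the HCX Manager, after "ssh admin@<hcxhostname or IP>" and "su -":
    cd /opt/vmware/bin
    ./ccli            # run ./ccliSetup.pl first if the root environment is not set up

    # Inside the interactive ccli shell:
    #   list                        - list fleet components and their ids
    #   go 8                        - focus on the component with id 8 (example)
    #   hc                          - run a health check on the focused component
    #   debug remoteaccess enable   - temporarily enable SSH on that component
    #   (exit the ccli)

    # Then SSH to the fleet component's IP, troubleshoot, and finally return to
    # the ccli to disable the SSH service for the component, as in step 10.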

Destination hardware compatibility issues: vMotion migration can be an issue when the client (source) side runs newer hardware and/or vSphere than the cloud destination. Because replication-based migration copies data to a newly built VM on the destination side, changing the migration type to "Bulk Migration" should allow the migration to succeed in most cases.

8.3 Stretched L2

As of this writing, few if any issues have been experienced with the operation of the L2 Concentrator. Similar to the CGW, if the L2C loses connectivity it will reconnect automatically once network connectivity is restored. The ccli shell is useful for further checks on health and operation. Once SSH has been enabled and a session to the L2C has been established, "ip tunnel" and "ip link | grep t_" are used to view the status of the tunnels (see the sketch after Figure 24).


Figure 24
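A brief sketch of that check from a shell on the L2C, using the two commands named above; the comment about interface state describes general Linux "ip link" behaviour rather than HCX-specific output:

    # On the L2C appliance (after enabling SSH via the ccli and connecting):
    ip tunnel            # list the configured tunnels
    ip link | grep t_    # show the t_* tunnel interfaces; each stretched-network
                         # tunnel should report state UP when healthy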


9 Upgrading

Upgrading HCX is a simple process executed through both the Client web UI (client-side HCX Manager update) and the Cloud web UI (cloud-side HCX Manager update). It is important to upgrade both, because any subsequent fleet deployment redeploys the fleet components on both sides at the code level of the manager. As of this writing, updates are made available upon request from VMware. VMware support staff may ask for the "system ID" shown below for both the client and cloud side. It can also be retrieved by SSHing into the HCX Manager (client or cloud) and executing "cat /common/location" (see the sketch after the figures).

Figure 25 - Client Side

Figure 26 - Cloud Side
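A minimal sketch of retrieving the system ID over SSH; the hostname is a placeholder for your own HCX Manager:

    # Client-side HCX Manager (hostname is a placeholder)
    ssh admin@hcx-client-manager.example.com "cat /common/location"

    # Repeat against the cloud-side HCX Manager; VMware support may ask for both IDs.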


10 Removal

Steps to remove HCX (this assumes the stretched networks are no longer in use):

- Un-stretch all stretched networks.

- In the client UI, delete any L2C appliances. Wait a few minutes for each to disappear from the web UI.

- Delete the CGW. This also removes the WAN Opt; allow a few minutes for the CGW and WAN Opt appliances to be removed.

- Shut down and delete the HCX Manager appliance VMs on the client and cloud side.

- Remove the HCX ESG from NSX (cloud side).

- Use the vCenter MOB (Managed Object Browser) to remove the HCX snap-in.


Appendix: Links to HCX documentation

General information: https://cloud.vmware.com/vmware-hcx

Hands-on lab for HCX (pre-HCX): http://labs.hol.vmware.com/HOL/catalogs/lab/3762

IBM Marketing: https://www.youtube.com/watch?v=qW4nzXXbTZ8

IBM Architecture Center: https://www.ibm.com/cloud/garage/files/HCX_Architecture_Design.pdf