VMware vSphere 6.5 Migration Guide
for HPE Hyper Converged 380 1.1 Update 2
Abstract
This guide outlines the necessary procedures required to migrate an HPE Hyper Converged 380 1.1 Update 2 system to VMware vSphere 6.5.
Part Number: P04016-001
Published: December 2017
Edition: 1
Page | 2
© Copyright 2017 Hewlett Packard Enterprise Development LP
Notices
© Copyright 2017 Hewlett Packard Enterprise Development LP

The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
VMware®, vCenter™, and vSphere™ are registered trademarks or trademarks of VMware, Inc. in the
United States and/or other jurisdictions.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
NVIDIA, the NVIDIA logo, and NVIDIA Tesla are trademarks and/or registered trademarks of NVIDIA
Corporation in the U.S. and other countries.
Revision History
Version  Date           Changes
1        December 2017  Initial release
Table of Contents
Hyper Converged 380 migration overview (vCenter 6.0 to 6.5) .............................................. 8
Checklist for Migration ............................................................................................................... 9
Migration Flow Diagram ........................................................................................................... 12
Brief of Flow Diagram: ......................................................................................................... 13
Expansion ................................................................................................................................ 14
Brief of Flow Chart: .............................................................................................................. 15
NVIDIA Grid Manager ............................................................................................................. 16
Brief of Flow Chart: .............................................................................................................. 16
Prerequisite for migrating HC 380 ........................................................................................... 17
HC 380 Management UI prerequisites .................................................................................... 17
Ensure that the HC 380 is running under normal operation ................................................ 17
Verify node management ..................................................................................................... 18
VMware vSphere Web Client prerequisites ............................................................................. 18
Ensure that the VMs reside in the StoreVirtual VSA management datastore ...................... 18
Checking the CD/DVD settings for all VMs .......................................................................... 19
Disabling Admission control in vSphere HA settings ........................................................... 20
HPE StoreVirtual CMC prerequisites ...................................................................................... 21
Verification of no critical alerts on vCenter and iLO ................................................................ 23
HPE OneView Global Dashboard Configuration ..................................................................... 24
HC 380 system migration - 1.1 Update 2 to 1.1 Update 2 vSphere 6.5 ................................. 26
Migration checklist ................................................................................................................... 26
Download and stage the required files .................................................................................... 26
Download the HPE upgrade component files ...................................................................... 26
Transfer the upgrade files to HC 380 Management VM ...................................................... 27
Update HPE OneView InstantOn ............................................................................................ 27
Update HPE StoreVirtual CMC and VSA software .................................................................. 28
Update the HPE StoreVirtual CMC and VSA software Online ............................................. 28
Update the HPE StoreVirtual CMC and VSA software Offline ............................................. 29
Configure HPE OneView for VMware vCenter ........................................................................ 30
Deployment of 8.3.xx version of HPE OneView for VMware vCenter.................................. 30
Configuration of HPE OneView for VMware vCenter with vCenter 6.5................................ 37
Configure StoreVirtual VSA with HPE OneView for VMware vCenter .................................. 47
Migrate ESXi hosts to remote vCenter 6.5 .............................................................................. 49
Migrate HC 380 Nodes to remote vCenter 6.5 .................................................................... 50
Update Cluster Settings in Remote vCenter ........................................................................ 50
Manual DRS Configuration at Remote vCenter ................................................................... 53
Upgrade Mozilla Firefox and verify proxy setting on HC 380 Management VM ...................... 58
Upgrade Mozilla Firefox ....................................................................................................... 58
Verify and reset the proxy settings on the HC 380 Management VM .................................. 59
Upgrade VMware vSphere using the HPE Customized ESXi Image on the cluster nodes ..... 60
Migrate the HC380 VMs to a Different Host ........................................................................ 60
Migrate User VMs to a Different Host .................................................................................. 61
Power off the StoreVirtual VSA VM using HPE StoreVirtual CMC ...................................... 61
Put the ESXi host in maintenance mode ............................................................................. 63
Apply Service Pack for ProLiant .......................................................................................... 65
Update HPE Intelligent Provisioning .................................................................................... 69
Upgrade VMware vSphere ESXi ......................................................................................... 70
Take the VMware vSphere ESXi server out of maintenance mode ..................................... 74
Power ON the StoreVirtual VSA VM .................................................................................... 74
Redeploy Management UI & HPE OneView VM ..................................................................... 74
System in TLS 1.1 ............................................................................................................... 75
System in TLS 1.2 ............................................................................................................... 75
Removal of HPE OneView Global Dashboard entries ......................................................... 75
Removal of Management UI and HPE OneView VM from the vCenter. .............................. 76
Configure the iLO Cert (TLS 1.2) ......................................................................................... 78
Configure custom certificates of the nodes (TLS 1.2) .......................................................... 78
Modify Registry settings in Management VM (TLS 1.2) ...................................................... 79
Enable TLS 1.2 in Management VM Browser settings ........................................................ 85
Rename of Host with FQDN (TLS 1.2) ................................................................................ 86
Renaming and registering the Management VM to DNS (TLS 1.2) ..................................... 95
Copy of the OVA files to the tmp directory (TLS 1.2) ........................................................... 97
Replacement of Post deployment scripts (TLS 1.2) ............................................................ 97
Update DirectoryMap.xml, Current.xml, and Deployment1.xml file on Management VM. ... 97
Lock iLO to TLS 1.2 (Enable FIPS) (TLS 1.2) ................................................................... 101
Redeploy Management UI & HPE OneView VM ............................................................... 102
Run Post Deployment scripts for expanded nodes (if any) ................................................ 106
Post migration procedures .................................................................................................... 108
Disable SSH and ESXi Shell access ................................................................................. 108
Reconfigure with HPE OneView Global Dashboard .......................................................... 109
Upgrade VMware vSphere PowerCLI ................................................................................ 110
Expansion ................................................................................................................................ 112
Network Configuration of ESXi Host and VSA ...................................................................... 113
Find new Node in CMC ......................................................................................................... 131
Upgrade the VSA .................................................................................................................. 137
Place the new ESXi node into maintenance mode ............................................................... 137
Apply Service Pack for ProLiant ............................................................................................ 137
Update HPE Intelligent Provisioning ..................................................................................... 137
Upgrade VMware vSphere ESXi .......................................................................... 137
Install NVIDIA GRID Manager ............................................................................................... 137
Add new ESXi host to the vCenter ........................................................................................ 137
Take the VMware vSphere ESXi server out of maintenance mode ...................................... 144
Add the VSA in Existing Management Group and Cluster .................................................... 144
Create New Server in Management Group ........................................................................... 146
Configure the assigned volume(s) on vCenter ...................................................................... 148
Add expanded node to Management UI ................................................................................ 153
Update DRS settings in Remote vCenter .......................................................................... 153
Create Deploymentxx.xml file on Management VM ........................................................... 155
Execute PostDeployment PowerShell script on Management VM .................................... 160
Migrate NVIDIA Grid Manager Post ESXi Upgrade .............................................................. 163
System Recovery .................................................................................................................... 165
Troubleshooting ...................................................................................................................... 166
Fetching of support bundle fails from HPE OneView InstantOn ............................................ 166
Manual Log Collection: ...................................................................................................... 167
Accessing VM Console from Management UI fails ............................................................... 176
Management UI unable to configure HPE OneView IP after migration ................................. 177
VM templates are not visible in vCenter, after migrating Hosts from vCenter 6.0 to 6.5 ....... 178
VMs are not visible in Management UI VM, after Redeployment of Management UI and HPE
OneView VM ......................................................................................................................... 181
After ESXi 6.5 upgrade Host will be red flagged and vCenter will not communicate with host
.............................................................................................................................................. 182
Error message on the ESXi Host after ESXi 6.5u1 upgrade ................................................. 183
Do not use the HPE OneView InstantOn summary page; it is not functional. ............... 184
Datastore expansion operation will take a while. .................................................................. 184
ADU log collection command is no longer functional ............................................................ 185
VM vending operations from Management UI may fail in case of vCenter failover under
vCenter High Availability environment. ................................................................. 185
Issue in downloading upgrade manifest or all upgrade files while Updating HPE StoreVirtual
CMC and VSA to 12.7 ........................................................................................................... 186
Workaround 1 .................................................................................................................... 186
Workaround 2 .................................................................................................................... 188
HPE OneView InstantOn takes longer time to complete installation on Management VM .... 188
Appendix .................................................................................................................................. 191
Sample Deployment XML Automation Scripts ...................................................................... 191
esx-cmd.sh ........................................................................................................................ 191
Create-VM.ps1 ................................................................................................................... 192
Support and other resources ................................................................................................. 199
Accessing Hewlett Packard Enterprise Support .................................................................... 199
Information to collect .......................................................................................................... 199
Accessing updates ................................................................................................................ 199
Customer self-repair .............................................................................................................. 200
Remote support ..................................................................................................................... 200
Warranty information ............................................................................................................. 201
Regulatory information .......................................................................................................... 201
Documentation feedback ....................................................................................................... 202
Hyper Converged 380 migration overview (vCenter 6.0 to 6.5)
NOTE: Before performing migration, refer to the HPE Hyper Converged 380 Firmware and Software Compatibility Matrix for the supported firmware and software versions.
NOTE: For information and downloads required for migrating to vSphere 6.5, visit the HPE Support Center (https://support.hpe.com/hpesc/public/home/). Search for “HPE Hyper Converged 380 Cluster Appliance” and select “Drivers and Software”.
The HPE Hyper Converged 380 Migration Guide provides a complete set of instructions to migrate a deployed HPE Hyper Converged 380 system. This version of the guide outlines migrating an HC 380 1.1 U2 system to 1.1 U2 with vSphere 6.5. The migration path outlined in this guide:
1. Complete the Migration prerequisites.
2. Complete the Migration procedure.
3. Complete the Post migration procedures.
This guide also includes:
1. Expansion of new HC 380 node to existing migrated environment.
Note: HPE OneView InstantOn cannot be used to expand an HPE HC 380 on version 1.1 U2 with VMware vSphere 6.5. Instructions for expanding an HC 380 that has been migrated to VMware vSphere 6.5 are provided in this document.
Checklist for Migration
NOTE: Steps marked with ** apply only to Transport Layer Security (TLS) 1.2 systems.
Step  Migration checklist item for TLS 1.1 and TLS 1.2  (** = TLS 1.2 only)
Prerequisite for migrating HC 380
HC 380 Management UI Prerequisites
1 Ensure that the HC 380 is running under normal operation
2 Verify node management
VMware vSphere Web Client prerequisites
3 Ensure that the VMs reside in the StoreVirtual VSA management datastore
4 Checking the CD/DVD settings for all VMs
5 Disabling Admission control in vSphere HA settings
6 HPE StoreVirtual CMC prerequisites
7 Verification of no critical alerts on vCenter and iLO
8 HPE OneView Global Dashboard Configuration
HC 380 system migration - 1.1 Update 2 to 1.1 Update 2 vSphere 6.5
Download and stage the required files
9 Downloading the HPE upgrade component files
10 Transfer the upgrade files to HC 380 Management VM
11 Download the SPP & Intelligent Provisioning files
12 Update HPE OneView InstantOn
Update HPE StoreVirtual CMC and VSA software
13 Update the HPE StoreVirtual CMC and VSA software Online/Offline
Configure HPE OneView for VMware vCenter
14 Deployment of 8.3.xx version of HPE OneView for VMware vCenter
15 Configuration of HPE OneView for VMware vCenter with vCenter 6.5
16 Configure StoreVirtual VSA with HPE OneView for VMware vCenter
Migrate hosts to remote vCenter 6.5
17 Migrate HC 380 Nodes to remote vCenter 6.5
18 Update Cluster Settings in Remote vCenter
19 Manual DRS Configuration at Remote vCenter
Upgrade Mozilla Firefox and verify proxy setting on HC 380 Management VM
20 Upgrade Mozilla Firefox
21 Verify and reset the proxy settings on the HC 380 Management VM
Upgrade VMware vSphere using the HPE Customized ESXi Image on the cluster nodes
22 Power off the StoreVirtual VSA VM using StoreVirtual CMC
23 Put the ESXi server in maintenance mode
24 Apply Service Pack for ProLiant
25 Update HPE Intelligent Provisioning
26 Upgrade VMware vSphere ESXi
27 Take the VMware vSphere ESXi server out of maintenance mode
28 Power ON the StoreVirtual VSA VM
Redeploy Management UI & HPE OneView VM
System In TLS 1.1
System in TLS 1.2
29 Removal of HPE OneView Global Dashboard entries
30 Removal of Management UI and HPE OneView VM from the vCenter
31 ** Configure the iLO Cert (TLS 1.2)
32 ** Configure custom certificates of the nodes (TLS 1.2)
33 ** Modify Registry settings in Management VM (TLS 1.2)
34 ** Rename of Host with FQDN (TLS 1.2)
35 ** Renaming and registering the Management VM to DNS (TLS 1.2)
36 ** Copy of the OVA files to the temp directory (TLS 1.2)
37 ** Replacement of Post deployment scripts (TLS 1.2)
38 Update DirectoryMap.xml, Current.xml, and Deployment1.xml file on Management VM
39 ** Lock iLO to TLS 1.2 (Enable FIPS) (TLS 1.2)
40 Redeploy Management UI & HPE OneView VM
41 Run Post Deployment scripts for expanded nodes (if any)
Post migration procedures
42 Disabling SSH and ESXi Shell access
43 Reconfigure with HPE OneView Global Dashboard
44 Upgrade VMware vSphere PowerCLI
45 Modify PowerShell Script
Expansion
46 Network Configuration of ESXi Host and VSA
47 Find new Node in CMC
48 Upgrade the VSA
49 Apply Service Pack for ProLiant
50 Update HPE Intelligent Provisioning
51 Upgrade VMware vSphere ESXi
52 Add the VSA in Existing Management Group and Cluster
53 Create New Server in Management Group
54 Add new ESXi host to the vCenter
55 Configure the assigned volumes on vCenter
Add expanded node to Management UI
56 System in TLS 1.1
57 ** System in TLS 1.2
58 Update DRS settings in Remote vCenter
Create DeploymentXX.xml file on Management VM
59 Create deploymentxx.xml file manually
60 Create deploymentxx.xml file with an automated script
61 Execute PostDeployment PowerShell script on Management VM
62 Migrate NVIDIA Grid Manager Post ESXi Upgrade (applicable to VDI setups with NVIDIA GRID GPUs)
Migration Flow Diagram
Brief of Flow Diagram:
1. To start the migration, first fulfill the prerequisites mentioned in the flow chart, then follow the process: check the licenses for all components, download the required files, upgrade HPE OneView InstantOn, update CMC and VSA, configure HPE OneView for VMware vCenter, migrate ESXi hosts to remote vCenter 6.5, and upgrade Mozilla Firefox.
2. Upgrade ESXi host to 6.5? (Optional if hosts are staying at 6.0)
a. Yes, then follow the process: power off the VSA, put the node in maintenance mode, apply Service Pack for ProLiant, update iLO, update Intelligent Provisioning, upgrade VMware vSphere ESXi 6.5, and redeploy the Management UI and HPE OneView VM.
b. No, then follow the process: redeploy the Management UI and HPE OneView VM.
3. For TLS 1.2 (Redeploy Management UI and HPE OneView VM), follow the process: remove entries from HPE OneView Global Dashboard, remove the existing Management UI and HPE OneView VM, configure the iLO and node certificates, modify registry settings in the Management VM, rename the host with its FQDN, update cluster settings, rename and register the Management VM in DNS, copy the OVA files, replace the post-deployment scripts, update the XML files, lock iLO to TLS 1.2, redeploy the Management UI and HPE OneView VM, and run the post-deployment scripts for expanded nodes (if any).
4. For TLS 1.1 (Redeploy Management UI and HPE OneView VM), follow the process: remove entries from HPE OneView Global Dashboard, remove the existing Management UI and HPE OneView VM, update the XML files, redeploy the Management UI and HPE OneView VM, and run the post-deployment scripts for expanded nodes (if any).
5. Post migration, follow the process: disable SSH, reconfigure HPE OneView Global Dashboard, and upgrade VMware vSphere PowerCLI. Migration is complete.
Expansion
Brief of Flow Chart:
1. To start the expansion, first fulfill the prerequisites mentioned in the flow chart, then follow the process: configure the network of the ESXi host and VSA, find the new node in CMC with its VSA IP, and upgrade the VSA.
2. Migrate VMware vSphere ESXi?
a. Yes, then follow the process: apply Service Pack for ProLiant, update iLO, update Intelligent Provisioning, then add the VSA to the existing Management Group and cluster.
b. No, then follow the process: add the VSA to the existing Management Group and cluster.
3. Create a new server in Management Group, add new ESXi Host to vCenter, configure the assigned volumes on the vCenter.
4. For TLS 1.2 (Add expanded node to Management UI VM), follow the process: configure the iLO certificates, configure the custom certificates of the nodes, rename the ESXi host with its FQDN, update DRS settings in the remote vCenter, create the Deploymentxx.xml file, and lock iLO to TLS 1.2 (Enable FIPS).
5. For TLS 1.1 (Add expanded node to Management UI VM), follow the process: update DRS settings in the remote vCenter and create the Deploymentxx.xml file.
6. Execute PostDeployment PowerShell script on Management VM.
7. Expansion Completed.
NVIDIA Grid Manager
Brief of Flow Chart:
1. To start migration of NVIDIA Grid Manager, put the host in maintenance mode.
2. Is the NVIDIA Grid Manager 6.0 VIB installed?
a. Yes: uninstall the 6.0 NVIDIA Grid Manager VIB. Then check whether NVIDIA GPUModeSwitch is installed; if yes, uninstall NVIDIA GPUModeSwitch and reboot the system.
b. No: check whether NVIDIA GPUModeSwitch is installed; if yes, uninstall NVIDIA GPUModeSwitch and reboot the system.
3. Copy the 6.5 VIB of the NVIDIA Grid Manager host driver, install the new VIB, reboot the host, and check that the correct VIB is installed. Refer to the NVIDIA user guide for vGPU and VM configuration. Migration is complete.
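The flow above boils down to a short esxcli sequence. The following dry-run sketch prints the ESXi Shell commands instead of executing them, so the sequence can be reviewed first. The VIB names and the .vib path are illustrative placeholders, not the exact names on your host — confirm them with `esxcli software vib list` before removing anything.

```shell
#!/bin/sh
# Dry-run sketch of the NVIDIA VIB swap from the flow chart above.
# Prints each ESXi Shell command instead of running it; on the host,
# run the printed commands directly (with the host in maintenance mode).
OLD_DRIVER="NVIDIA-VMware_ESXi_6.0_Host_Driver"        # assumed 6.0 GRID Manager VIB name
GPU_MODESWITCH="NVIDIA-GpuModeSwitch"                  # assumed GPUModeSwitch VIB name
NEW_VIB="/tmp/NVIDIA-VMware_ESXi_6.5_Host_Driver.vib"  # 6.5 vib copied to the host

plan=$(cat <<EOF
esxcli software vib list
esxcli software vib remove -n $OLD_DRIVER
esxcli software vib remove -n $GPU_MODESWITCH
reboot
esxcli software vib install -v $NEW_VIB
reboot
esxcli software vib list
EOF
)
echo "$plan"
```

Step 2 of the flow chart decides which of the `remove` lines actually apply on your host; a reboot is needed after the removals and again after the install.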
Prerequisite for migrating HC 380
Before migrating the system, complete the following prerequisites.
1. HC 380 Management UI prerequisites
a. Ensure that the HC 380 is running under normal operation
b. Verify node management
2. VMware vSphere Web Client prerequisites
a. Ensure that Management UI, HPE OneView, and Management VMs reside in the StoreVirtual VSA Management datastore.
b. Check the CD/DVD settings for all VMs
c. Disable Admission Control in vSphere HA settings
3. HPE StoreVirtual CMC prerequisites
a. Check for alerts in CMC
b. Check for VSA health status
4. Verification of no critical alerts on vCenter and iLO
a. Ensure that there are no critical alarms on each host in the vCenter
b. Ensure that there are no critical alarms on the iLO of each node and that the system health shows OK in green
5. HPE OneView Global Dashboard Configuration
NOTE: The customer needs to provide VMware vCenter Server 6.5. Do not use the underscore character (“_”) in any of the vCenter names (datacenter and cluster). HPE OneView for VMware vCenter should be configured with vCenter; refer to this link.
HC 380 Management UI prerequisites
1. Ensure that the HC 380 is running under normal operation
2. Verify node management
Ensure that the HC 380 is running under normal operation
Procedure
1. Go to the HC 380 Management UI dashboard.
2. Check for any critical alerts or warning messages. If any critical alerts exist, they must be corrected before performing the migration.
Verify node management
Ensure all nodes managed by VMware vCenter are also managed by the HC 380
Management UI.
Procedure
1. Go to Management UI Settings.
2. Verify that the Nodes tab has the Manage check box enabled.
3. Verify that the correct iLO IP address is assigned for each node in the cluster.
4. Verify that the HPE OneView tab is visible in iLO for each node in the cluster.
VMware vSphere Web Client prerequisites
1. Ensure that Management UI, HPE OneView & Management VM reside in the VSA Management datastore.
2. Check the CD/DVD settings for all VMs
3. Disable Admission control in VMware vSphere HA settings
Ensure that the VMs reside in the StoreVirtual VSA management datastore
Ensure the following VMs reside in the StoreVirtual VSA management datastore and are
powered on.
1. HC 380 Management UI VM
2. HC 380 HPE OneView VM
3. HC 380 Management VM
Important: The HC 380 HPE OneView and Management UI VMs must be deleted and redeployed after the VMware vCenter 6.5 migration.
All user-defined VMs should reside in the user-defined shared datastores.
Procedure
1. Log in to the vCenter using web client with credentials having administrative privileges.
2. Go to Home > Storage.
3. Click the StoreVirtual VSA Management datastore then select Related Objects > Virtual Machines.
4. Check that the status for the existing virtual machines report as "Normal".
5. Check the VMs in the HC 380 Cluster to ensure that they are in a shared datastore.
6. If any of these management VMs exist on a local datastore, migrate the VMs to the StoreVirtual VSA management datastore.
Important: Do not attempt to migrate the StoreVirtual VSA VMs from the local datastore.
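As a cross-check from the command line, `vim-cmd vmsvc/getallvms` in the ESXi Shell lists each registered VM with its datastore in square brackets. The sketch below parses sample output with awk; the VM and datastore names are assumptions — substitute the names used in your deployment, and on a real host replace the here-document with the actual `vim-cmd vmsvc/getallvms` call.

```shell
#!/bin/sh
# Flags management VMs that are not on the VSA management datastore.
# Sample data stands in for `vim-cmd vmsvc/getallvms` output; VM and
# datastore names are illustrative placeholders.
vim_out=$(cat <<'EOF'
Vmid  Name           File                                  Guest OS      Version
1     HC380-MgmtUI   [VSA-Mgmt-DS] HC380-MgmtUI/ui.vmx     otherGuest    vmx-11
2     HC380-OneView  [VSA-Mgmt-DS] HC380-OneView/ov.vmx    otherGuest    vmx-11
3     HC380-MgmtVM   [Local-DS] HC380-MgmtVM/mgmt.vmx      windows8Srv   vmx-11
EOF
)
report=$(echo "$vim_out" | awk 'NR > 1 {
    ds = $3; gsub(/\[|\]/, "", ds)           # datastore name without brackets
    printf "%-14s on datastore %s\n", $2, ds
    if (ds !~ /VSA/)
        printf "  WARNING: %s is not on the VSA management datastore\n", $2
}')
echo "$report"
```

Any VM flagged with a WARNING would need to be migrated to the StoreVirtual VSA management datastore per step 6 above (never the StoreVirtual VSA VMs themselves).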
Checking the CD/DVD settings for all VMs
If a VM's CD/DVD drive is connected to an image on a local datastore, the virtual machine cannot be migrated.
Procedure
To unmount the CD/DVD:
1. Go to Home > Hosts and Clusters.
2. Expand the datacenter then the cluster.
3. Right-click VM > Edit Settings.
4. If CD/DVD drive 1 reports that it is connected to an ISO file, clear the Connected check box and select Client Device in the drop-down menu.
5. Click OK.
Disabling Admission control in vSphere HA settings
NOTE: Re-enable Admission Control after Service Pack for ProLiant and VMware vSphere ESXi migration is complete.
Procedure
1. Log in to the vCenter using web client and credentials with administrative privileges.
2. Go to Home > Hosts and Clusters.
3. Select the cluster, then select the Manage tab > Settings.
4. Under Services > vSphere HA click Edit…
5. Expand Admission Control.
6. Select Do not reserve failover capacity.
7. Click OK.
HPE StoreVirtual CMC prerequisites
Procedure
1. Log in to the StoreVirtual CMC on the HC 380 Management VM.
2. Check the StoreVirtual VSA health for any critical alerts and ensure that the storage status is "Normal". Resolve any alerts before continuing.
3. For a cluster configuration with more than two nodes, verify that an odd number of managers is running in the StoreVirtual CMC. The maximum number of managers is five. For the required number of managers, refer to "Appendix G: Management group quorum consideration" in the HPE Hyper Converged 380 Installation Guide; the number of managers depends on the number of nodes in the cluster.
4. For a two node cluster configuration, verify that the Quorum Witness status reports as "Normal".
NOTE: Virtual Manager and Failover Manager are not supported for migration in a two node cluster configuration.
5. Ensure that the Data protection level of all volumes does not report as "Network RAID-0 (None)".
6. Ensure that the status of each volume reports as "Normal".
Verification of no critical alerts on vCenter and iLO
1. Log in to the HC 380 vCenter and remote vCenter 6.5 targeted for migration through the web client using credentials with administrative privileges.
2. Ensure that there are no critical alarms on each host in the vCenter. Resolve critical alarms before proceeding with the rest of procedures.
3. Log out of the vCenter server.
4. Log in to iLO using a web browser.
5. Ensure that there are no critical alarms on the iLO of each node and the system health shows OK in green.
6. Resolve critical alarms before proceeding.
HPE OneView Global Dashboard Configuration
HPE OneView Global Dashboard (OVGD) is an optional component. If it is being used in
the setup, ensure it is configured correctly and running without any issues.
1. Point the browser to the HPE OneView Global Dashboard page, https://<OVGD-IPaddress>
2. Log in as an OVGD administrator user.
3. Select Converged Systems and verify that the HC380 system is connected and configured in OVGD.
HC 380 system migration - 1.1 Update 2 to 1.1 Update 2 vSphere 6.5
The procedures outlined in this section are required to migrate the system from version 1.1 Update 2 to 1.1 Update 2 vSphere 6.5.
Migration checklist
Use this checklist to ensure that all components have been updated.
The update should be completed in the listed order.
Procedure
1. Perform all steps outlined in "Prerequisite for migrating the HC 380".
2. Download and stage the required files:
a. Download the HPE upgrade component files
b. Transfer the upgrade files to HC 380 Management VM
c. Download the SPP & Intelligent Provisioning files
3. Update HPE OneView InstantOn
4. Update HPE StoreVirtual CMC and VSA software
5. Configure HPE OneView for VMware vCenter
6. Migrate ESXi hosts to remote vCenter 6.5
7. Update Mozilla Firefox and verify proxy setting on HC 380 Management VM
8. Upgrade VMware vSphere using the HPE Customized ESXi Image file on the cluster nodes
9. Post migration procedures
Download and stage the required files
Download the HPE upgrade component files
Prerequisites
1. An active HPE Passport Account. For more information about creating and configuring your HPE Passport Account, see the Hewlett Packard Enterprise Support Center website (http://www.hpe.com/support/hpesc).
2. Upgrade files: Files downloaded from the Upgrade page on the Software Depot have names based on their description, with a 10-digit part number appended. If an updated version of a file has been published, the appended part number changes. Always check for and download the latest file.
3. HPE Support site files: Files downloaded from the HPE Support site have only a 10-digit part number for their name. If an updated version of a file has been published, the part number changes. Always check for and download the latest file.
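Because both sources name files with a 10-digit part number that changes with each republication, a quick pattern check can confirm that a staged file follows the expected convention. This is a minimal sketch; the sample file names and extensions are hypothetical illustrations, not actual release names:

```shell
#!/bin/sh
# Sketch: sanity-check staged file names for the 10-digit HPE part number
# described above. The sample file names are hypothetical illustrations.
has_part_number() {
    # Matches names ending in a 10-digit part number, e.g. "Name_1234567890.zip"
    printf '%s\n' "$1" | grep -Eq '(^|[_-])[0-9]{10}\.(zip|iso|exe)$'
}

for f in "Software_Upgrade_1234567890.zip" "Software_Upgrade.zip"; do
    if has_part_number "$f"; then
        echo "$f: carries a part number"
    else
        echo "$f: no part number - re-check the download page for the latest file"
    fi
done
```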
Procedure
1. Go to the Hewlett Packard Enterprise Support Center website.
2. In the search box, enter "Hyper Converged 380".
3. Select Get drivers, software, and firmware.
4. Locate and select the current version as specified in the Hewlett Packard Enterprise Support Center website.
5. When prompted, enter your HPE Passport credentials.
6. Download all files required for the software upgrade process from the HPE Software Depot.
7. Download the HPE Service Pack for ProLiant file from the Hewlett Packard Enterprise Support Center website.
8. Download the HPE Intelligent Provisioning file from the Hewlett Packard Enterprise Support Center website.
9. Move all downloaded files to a laptop or workstation that will connect to the cluster nodes to perform the upgrade. Some files will be transferred to the HC 380 Management VM. Ensure that there is sufficient space on the HC 380 Management VM.
10. Unzip the downloaded "Software_Upgrade.zip" file to a temporary location on the laptop/workstation.
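The staging in steps 9 and 10 can be sketched as a small script that checks free space before unpacking. The paths and the size threshold below are illustrative assumptions; substitute the actual download location and payload size:

```shell
#!/bin/sh
# Sketch: create a staging directory for the unpacked upgrade files and warn
# if free space looks low. STAGE_DIR and REQUIRED_KB are assumed values.
STAGE_DIR="${STAGE_DIR:-/tmp/hc380-upgrade}"
REQUIRED_KB=$((20 * 1024 * 1024))   # ~20 GB in KB; adjust to the real payload

free_kb=$(df -Pk "$(dirname "$STAGE_DIR")" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt "$REQUIRED_KB" ]; then
    echo "WARNING: only ${free_kb} KB free - clear space before unzipping" >&2
fi

mkdir -p "$STAGE_DIR"
# unzip Software_Upgrade.zip -d "$STAGE_DIR"   # run where the zip was saved
echo "staging directory ready: $STAGE_DIR"
```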
Transfer the upgrade files to HC 380 Management VM
NOTE: Do not upgrade PowerShell at this point. It can be upgraded later in the post-migration procedures.
Transfer the following downloads to a temporary location on the HC 380 Management VM.
Upgrade files
HPE OneView InstantOn.
HPE HC 380 1.2U1 VMware ESXi 6.5U1 (ISO file) – required only if you will upgrade ESXi 6.0 to 6.5 U1.
HPE HC 380 Management User Interface (TLS 1.2 only)
HPE OneView (TLS 1.2 only)
Mozilla Firefox
Update HPE OneView InstantOn
The following are the steps to upgrade HPE OneView InstantOn on the Management VM.
HPE OneView InstantOn Upgrade
1. Double-click the HPE OneView InstantOn installer, the HPE_InstantOn_Installer_1.3.5.35.exe file uploaded earlier to the Windows desktop.
2. The Welcome screen confirms that you are upgrading from the old version 1.3.5.28 to the new one. Click Next to proceed with the upgrade. The upgrade can take up to 10 minutes.
3. Click Finish once the update completes.
4. Launch HPE OneView InstantOn and verify that the version shown in the top right corner is 1.3.5.35. Close the application after verifying.
5. Delete the HPE OneView InstantOn installer file.
Update HPE StoreVirtual CMC and VSA software
Prerequisites:
1. Ensure that all volumes are in a Normal state and that no restriping of volumes is in progress.
2. Ensure that the StoreVirtual VSA and volume health is Normal.
3. Ensure that the StoreVirtual VSA has a proper license before performing the update to 12.7.
This update can be done in either of the following two ways:
1. Update the HPE StoreVirtual CMC and VSA software Online
2. Update the HPE StoreVirtual CMC and VSA software Offline
Update the HPE StoreVirtual CMC and VSA software Online
NOTE: Updating the StoreVirtual CMC and StoreVirtual VSA requires Internet access from the HC 380 Management VM. If Internet access is unavailable, follow the offline upgrade steps under "Update the HPE StoreVirtual CMC and VSA software Offline."
Upgrade the Centralized Management Console to version 12.7
Procedure
1. Go to the StoreVirtual downloads page.
2. Click Select.
3. Enter your customer information and agree to the license terms to continue.
4. Click Next.
5. Enter your HPE Passport credentials. For information on creating an HPE Passport account, see the HPE Support Center
website.
Click How to get started near the top of the page, or Register for HPE Passport on the
right of the page.
6. Along the top of the software download list, select whether to use the HPE Download Manager or a Standard Download.
7. In the list of software downloads, find "HPE StoreVirtual Centralized Management Console for Windows or Linux", depending on the system hosting the StoreVirtual CMC.
8. Click Download and save the files.
9. Upload the downloaded CMC installer into the management VM.
10. Upgrade the CMC as described in the "Updating the StoreVirtual CMC" section of the HPE StoreVirtual Storage Upgrade Guide. The upgrade guide can be found in the Storage Information Library.
11. Upgrade the StoreVirtual VSA as described in the HPE StoreVirtual Storage Upgrade Guide.
Update the HPE StoreVirtual CMC and VSA software Offline
The StoreVirtual CMC requires an update to accommodate a change in the location from which
StoreVirtual software and patches are downloaded.
The offline upgrade package for the VSA is approximately 16GB.
Use the offline procedure if you are unable to connect to the Internet.
Procedure
1. Install the StoreVirtual CMC on a laptop or workstation with Internet access
2. Open StoreVirtual CMC, and close the Find Systems dialog box.
3. Go to Help > Preferences > Upgrades.
4. The Download Directory is the folder in which all downloaded upgrade files are stored. If necessary, set up the proper HTTP or SOCKS proxy.
5. Click OK to close the Preferences dialog.
6. The HPE StoreVirtual Notifications dialog box may pop up. Review it and close it.
7. Go to Tasks > Download All Upgrade Files.
8. Click Start Download.
The file download will start. The file is approximately 16GB.
9. After all the files have downloaded, close the dialog box.
10. Copy the downloaded upgrade files from the Download Directory to the system running the StoreVirtual CMC for the StoreVirtual VSA cluster, under: C:\Program Files (x86)\HPE\StoreVirtual\UI\downloads
11. Start the StoreVirtual CMC and log on to the StoreVirtual VSA cluster.
12. Ensure that the StoreVirtual VSA has a proper license applied. A license reminder popup appears if the license needs to be upgraded. Do not proceed with the update unless the license is current.
13. Go to Configuration Summary > Upgrades.
14. The Use Local Media option is enabled only when the FTP site is not reachable.
15. Click Use Local Media and browse to the directory in step 10: C:\Program Files (x86)\HPE\StoreVirtual\UI\downloads
16. Follow the on-screen instructions to upgrade the necessary components. The following StoreVirtual components are updated:
o StoreVirtual CMC
o StoreVirtual CLI
o StoreVirtual VSA
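Before pointing the CMC at Local Media (steps 14 and 15), it can help to confirm that the copied download directory is actually populated. A minimal sketch, assuming a POSIX shell and a stand-in path for the Windows directory in step 10:

```shell
#!/bin/sh
# Sketch: count staged upgrade files before using Local Media. DL_DIR stands
# in for C:\Program Files (x86)\HPE\StoreVirtual\UI\downloads from step 10.
DL_DIR="${DL_DIR:-./StoreVirtual-downloads}"
mkdir -p "$DL_DIR"   # ensure the path exists for the check

count=$(find "$DL_DIR" -type f | wc -l)
if [ "$count" -eq 0 ]; then
    echo "No upgrade files found in $DL_DIR - copy them as described in step 10" >&2
else
    echo "$count file(s) staged in $DL_DIR"
fi
```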
Configure HPE OneView for VMware vCenter
NOTE: HPE OneView for VMware vCenter 8.x onward runs as a separate appliance on a separate VM.
Prerequisites
1. Verify that there are no errors reported by VMware vCenter web client or the cluster.
2. The HPE OneView for vCenter VM must have access to the HC 380 management and storage networks.
Procedures
1. Deployment of 8.3.x version of HPE OneView for VMware vCenter
2. Configuration of HPE OneView for VMware vCenter with vCenter 6.5
3. Configure StoreVirtual VSA with HPE OneView for VMware vCenter
Deployment of 8.3.x version of HPE OneView for VMware vCenter
NOTE: HPE OneView for VMware vCenter (OV4VC) should be outside the HC 380 and should have access to the VM network.
Procedure
1. Log out of the VMware vSphere Web Client.
2. Copy the zip file HPE HC 380 HPE OneView for VMware vCenter 8.3.x to the installation computer.
3. Unzip the file and extract the OVA file.
4. Log in to the vSphere web client.
5. Connect to the vCenter where the HPE OneView for VMware vCenter will be deployed.
6. Select the ESXi cluster, right-click, and select Deploy OVF Template....
7. On the Deploy OVF Template page, provide the source location by either selecting the URL or the Local file. For local file, click Browse and provide the OVF location. Click Next.
8. Select Name and location and click Next.
9. On the Select resource page, select the host and click Next.
10. Review the details and click Next.
11. Accept the License Agreement.
12. Accept the second page of the License Agreement and click Next.
13. Select the datastore for the template and select Thin Provision as the disk format.
14. Click Next.
15. Select the desired network for your virtual machine and click Next.
16. Assign Network1 to "Mgmt Network" and Network2 to "Storage"; Network3 is optional.
NOTE: OV4VC must have access to the storage network and the management network. Refer to the HPE OneView for VMware vCenter installation guide for detailed network configuration.
17. On the Customize template page, provide valid deployment properties: IP address, subnet mask, gateway, FQDN, and so on.
NOTE: The hostname must be a fully qualified domain name (FQDN) registered in the DNS server.
18. Verify the settings on the Ready to complete page, then click Finish to start the deployment.
19. After successful deployment, power on the appliance.
20. Once the HPE OneView for VMware vCenter appliance is deployed and running, configure the appliance to start using it.
For configuration instructions, see the "Configuring HPE OneView for VMware vCenter appliance" section in the HPE OneView for VMware vCenter User Guide.
Configuration of HPE OneView for VMware vCenter with vCenter 6.5
1. Power on the appliance from vCenter. When OV4VC is ready, the VM console shows that the appliance is up.
2. Close the console and log out of vCenter.
NOTE: There should not be any open vSphere Web Client sessions during the following operations.
3. Open the link https://<OV4VC IP>/ui/index.html and click the Setup button.
4. Enter the new password for HPE OneView for VMware vCenter.
5. Log in to OV4VC as “Admin” user with the new password.
6. After successful login, configure the Storage network VSAeth on vCenter and OV4VC.
7. Configure Storage network on OV4VC.
a. Navigate to Settings > Management VM > Networks > Add Network.
b. Select MAC address of the storage NIC.
c. Provide a Network Label name, for example, VSAeth.
d. Select Data as the Network type.
e. Click OK.
f. Select the newly added network and click Edit.
g. The Edit IP address popup appears.
h. Select Static as IP Type.
i. Provide a free IP address within the storage range.
j. Provide Prefix length, Gateway address, and VLAN tag (if applicable).
k. Click OK.
l. Verify that the newly added network status displays green (Normal).
8. On the main menu of OV4VC, select vCenters under the Managers group.
9. In the vCenters page, click Add vCenter.
10. Provide the IP/hostname in the name text box, and the administrator username and password of the vCenter.
11. Click Add to complete the addition of the vCenter.
12. Click Accept on the Accept Certificate page.
13. Verify that vCenter is added.
14. Open the vCenter Web Client using the following link:
https://<vCenterIP>/vsphere-client
This launches an Adobe Flash session, which is required for OV4VC.
15. Configure the browser with Adobe Flash and log in to the Web Client.
16. Verify that the HPE Management Administration tab is visible in vCenter.
17. If the HPE infrastructure tab is not visible, restart vCenter and check again.
18. With the HPE infrastructure tab available in the Web Client, the All HPE Management Actions menu is displayed.
Configure StoreVirtual VSA with HPE OneView for VMware vCenter
1. Once HPE OneView for VMware vCenter (OV4VC) is linked with the Web Client, the next step is to add the storage system (HPE StoreVirtual VSA) in OV4VC, so that OV4VC can be used to create datastores and perform other storage operations.
2. Log in to the vCenter using credentials with administrative privileges.
3. Navigate to the HPE Infrastructure tab.
4. Click the Launch the Administrator Console link.
5. Log in to HPE OneView for VMware vCenter and navigate to the Storage System tab in the dropdown menu.
6. Click Add Storage System.
7. On the pop-up, select type as HPE StoreVirtual. Provide the storage system virtual IP, username, and password of the Management Group.
NOTE: The storage system virtual IP can be obtained from CMC. Expand the storage management group and select the storage cluster. Go to the iSCSI tab to get the virtual IP.
8. Click Connect and accept the certificate popup.
9. Change the storage pool permission to Allow Provisioning allowing full access to the storage system through OV4VC.
10. Click Add.
11. Click Yes, refresh data on the popup.
12. Ensure that the storage system is added to the HPE OneView for VMware vCenter. When the refresh data task successfully completes, the storage system information will be populated.
Migrate ESXi hosts to remote vCenter 6.5
NOTE:
These steps are applicable only when the HC 380 hosts are not already managed by the remote vCenter 6.5.
Procedure
1. Migrate HC 380 to remote vCenter 6.5
2. Update Cluster Settings in Remote vCenter
3. Manual DRS Configuration at Remote vCenter
Migrate HC 380 Nodes to remote vCenter 6.5
The following are the steps to migrate HC 380 hosts from existing vCenter 6.0 to remote
vCenter 6.5.
1. Launch the vCenter 6.0 Web Client.
2. Log in to vCenter with appropriate user name and password.
3. Go to the Host and Clusters view. Right-click each node and disconnect from vCenter.
4. Remove the host from the cluster. Right-click each (disconnected) host and select Remove from Inventory.
5. Repeat steps 3 and 4 for all hosts managed by the vCenter.
6. Log out of the vCenter 6.0 Web Client.
The following steps will add the HC 380 nodes to their respective cluster on remote
vCenter 6.5.
1. Launch the vCenter 6.5 Web Client.
2. Log in to the remote vCenter.
3. From Home, select the Host and Clusters view.
4. Select the cluster where the host will be added.
5. Right-click the cluster name on the remote vCenter and select Add Host…
6. Provide the host FQDN or IP address on the Name and location page, then click Next.
7. Provide the root login on the Connection settings page, then click Next.
8. Click Yes to confirm the security alert.
9. Click Next on the Host Summary.
10. Assign license and click Next.
11. Choose whether to enable Lockdown Mode or leave it disabled, and click Next.
12. Select your Resource pool policy and click Next.
13. Review the Ready to Complete page. Click Back to make any changes or click Finish to add the host.
14. Wait for the task to finish before adding the next host.
15. Repeat steps 5 to 14 until all hosts are added to the cluster.
Update Cluster Settings in Remote vCenter
Update HA Cluster Settings
1. On the Web Client’s Host and Clusters view, right-click the cluster.
2. Select Settings.
3. Under Services, select vSphere Availability.
a. Click Edit…
b. Verify the following settings
i. The check box for "Turn on vSphere HA" is Enabled.
ii. The check box for "Host Monitoring" is Enabled under Failures and Responses.
iii. "VM Monitoring" is Disabled, also under Failures and Responses.
4. Expand the Host Failure Response dropdown.
5. Set Default VM Restart Priority to High.
Leave all other values at their defaults.
6. Expand Admission Control and set the following:
a. Set Define failover capacity by to "Slot Policy (powered-on VMs)" with the "Host failures cluster tolerates" value set to 1.
b. For Slot size policy, enable Cover all powered-on virtual machines.
7. Click OK to save the changes and close the dialog.
Manual DRS Configuration at Remote vCenter
The following are the steps to manually configure DRS for the HC 380 at the remote vCenter
after migrating the nodes.
1. Right-click the cluster and select Settings.
2. Select vSphere DRS.
a. Verify vSphere DRS is turned ON.
b. Verify that DRS Automation level is Fully Automated.
c. Verify that Power Management is set to Off.
d. Create a Virtual Machine DRS group called HPE-HC Management VM Group.
e. Go to Configuration > VM/Host Groups.
f. Click Add… on the VM/DRS Groups.
g. Provide the DRS group name HPE-HC Management VM Group.
h. Set Type to VM Group.
i. Assign the management VM to the DRS group.
j. Click Add, select the management VM check box, and click OK.
k. Click OK on the Create VM/Host Group dialog to close it.
l. Create a Host DRS Group called HPE-HC Management VM Hosts
m. Go to Configuration > VM/Host Groups.
n. Click Add… on the VM/DRS Groups.
o. Provide the DRS group name HPE-HC Management VM Hosts.
p. Set Type to Host Group.
q. Assign all the ESXi hosts to the DRS group. Click Add, select the checkboxes of all the ESXi hosts, and click OK.
r. Click OK on the Create VM/Host Group dialog to close it.
3. Add the HC 380 DRS Rules.
a. Go to Configuration > VM/Hosts Rules.
b. Click Add…
c. Create a rule called “HPE-HC Management VM Affinity Rule”.
d. Check Enable rule.
e. Set Type as Virtual Machines to Hosts.
The rest of the DRS Groups values should be automatically populated.
f. Click OK to close the Rule dialog.
4. Verify that the Host Options settings show Disabled for the ESXi nodes.
a. Go to Configuration > Host Options.
b. Verify that the Host Options setting is Disabled.
Upgrade Mozilla Firefox and verify proxy setting on
HC 380 Management VM
This section describes the upgrade and verification of the following components:
1. Upgrade Mozilla Firefox
2. Verify and reset the proxy settings on the HC 380 Management VM
Upgrade Mozilla Firefox
Upgrade the Mozilla Firefox browser to the current version listed in the HPE Hyper Converged 380 Firmware and Software Compatibility Matrix. The Firefox installer is located in the Firefox sub-folder in the unpacked upgrade folder.
Procedure
1. Double-click the Firefox installer.
2. Select Standard Setup type.
3. Click Next.
4. Click Upgrade.
5. Uncheck Launch Firefox now.
6. Click Finish to complete the upgrade.
7. After the upgrade is complete, copy the supplied source code files to the "C:\Program Files (x86)\Mozilla Firefox" folder.
Verify and reset the proxy settings on the HC 380 Management VM
Procedure
1. Run the following command:
c:\> netsh winhttp show proxy
2. Check the output for proxy entries, for example:
web-proxy.XXX.yy.com
3. If entries exist, open a command prompt and run the following commands:
c:\> netsh winhttp reset proxy
c:\> RunDll32.exe InetCpl.cpl, ClearMyTracksByProcess 8
c:\> ipconfig /flushdns
4. Open regedit and delete all entries under:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\iphlpsvc\Parameters\ProxyMgr\
5. Open Microsoft Internet Explorer settings page.
Go to Internet Options > Connections > LAN Settings.
6. If the Proxy Server Address field and Advanced table are not blank:
a. Check the Use a proxy Server for your LAN check box.
b. Clear the Address field.
c. Click the Advanced button.
d. Clear all the Advanced fields.
e. Click OK.
f. Uncheck the Use a proxy Server for your LAN check box.
g. Check the Automatically detect Settings check box.
h. In the Local Area Network (LAN) Settings, click OK.
i. In Internet Options, click OK.
7. Clear the Firefox proxy settings:
a. Open the Firefox Connection Settings page:
b. Select Menu > Options > Advanced > Network > Settings.
c. Select the Manual proxy configuration radio button.
d. Clear all the fields.
e. Select the Use system proxy settings radio button.
f. Click OK.
g. Close the Options page.
h. Close Firefox.
Upgrade VMware vSphere using the HPE Customized
ESXi Image on the cluster nodes
Procedure:
NOTE: The following steps 1 to 9 should be performed on one node at a time. Do not power off or upgrade the next node until the current node completes successfully. These steps can be skipped if you continue using ESXi 6.0 on the cluster nodes.
1. Migrate the HC380 VMs to a Different Host.
2. Migrate User VMs to a Different Host.
3. Power off the StoreVirtual VSA VM using HPE StoreVirtual CMC.
Important: Hewlett Packard Enterprise strongly recommends that you cleanly shut down the StoreVirtual VSA on the VMware vSphere ESXi host prior to node migration.
4. Put the ESXi host in maintenance mode.
5. Apply Service Pack for ProLiant.
6. Upgrade iLO Firmware
7. Update HPE Intelligent Provisioning.
8. Upgrade VMware vSphere ESXi
9. Take the VMware vSphere ESXi server out of Maintenance mode.
Migrate the HC380 VMs to a Different Host
Perform this procedure if the ESXi node to be upgraded is hosting any of the HC 380 VMs: the management, HPE OneView, and management UI VMs.
Procedure
1. On the vCenter Web Client, go to the Host and Clusters view.
2. Expand datacenter and cluster, then right-click one of the HC 380 VMs and select Migrate…
3. On the Select the migration type page, select "Change compute resource only", then click Next.
4. On the Select a compute resource page, select any host that will not be targeted for maintenance mode.
5. Click Next.
6. On the Select networks page, click Next.
7. Verify that high priority is selected, then click Next on the Select vMotion priority page.
8. Review the Ready to complete summary, then click Finish to proceed with the VM migration.
9. Repeat steps 2 through 8 until all three HC 380 VMs are migrated to a different host.
Migrate User VMs to a Different Host
Perform this procedure if the ESXi node to be upgraded is hosting the user VMs.
Procedure
1. On the vCenter Web Client, go to the Host and Clusters view.
2. Expand datacenter and cluster, select the host that will be upgraded, select the VMs tab.
3. Right-click a user VM and select Migrate…
4. On the Select the migration type page, select "Change compute resource only", then click Next.
5. On the Select a compute resource page, select any host that will not be targeted for maintenance mode, and click Next.
6. On the Select networks page, click Next.
7. Verify that high priority is selected, then click Next on the Select vMotion priority page.
8. Review the Ready to complete summary, then click Finish to proceed with the VM migration.
9. Repeat steps 2 through 8 until all user VMs are migrated to a different host.
Power off the StoreVirtual VSA VM using HPE StoreVirtual CMC
Prerequisites
1. If necessary, the management VM and user VMs have been migrated to a different host that is not targeted for maintenance mode.
2. Using the StoreVirtual CMC, ensure that all StoreVirtual VSA nodes are participating in the cluster and that all volumes are in a "Normal" state.
3. Ensure that the required number of managers remain available after the StoreVirtual VSA is powered off so that the volumes do not go offline.
4. For a 2-node configuration with a Quorum Witness, ensure that the Quorum Witness remains connected and always up.
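Prerequisite 3 is a quorum rule: the management group stays online only while a majority of managers (including any Quorum Witness) is running. The arithmetic can be illustrated as follows; the manager counts are example values, not read from a live system:

```shell
#!/bin/sh
# Sketch: check whether powering off one VSA manager preserves quorum.
# managers_total is an example (e.g. 2 VSA managers + 1 Quorum Witness).
managers_total=3
quorum=$(( managers_total / 2 + 1 ))      # majority needed to stay online
after_poweroff=$(( managers_total - 1 ))  # managers left after one power-off

if [ "$after_poweroff" -ge "$quorum" ]; then
    echo "safe: $after_poweroff of $managers_total managers remain (quorum=$quorum)"
else
    echo "NOT safe: quorum ($quorum) would be lost" >&2
fi
```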
Procedure
1. Click the management group in the left pane.
2. Click one of the Login to view links in the right panel.
3. Log in to the Management group through CMC using StoreVirtual credentials.
4. In the left panel, select Management Group > Cluster > Storage Systems > <VSA name>.
5. Verify that all members of the cluster are powered on and participating in the cluster.
6. Locate the appropriate VSA name hosted by the ESXi node to be updated.
7. Right-click the VSA name, and then select Power off or Reboot.
8. In the dialog box, select the Power off radio button, enter "0" for the delay minutes, and then click Power Off.
9. Click Power Off System….
10. Click Power off on the confirmation popup.
11. When the message “Do you want to power off?” appears, click Yes.
12. When the message "Are you sure you want to wait 0 (zero) minutes?” appears, click OK.
13. The StoreVirtual VSA VM shuts down. It may take a minute or so for it to appear as shut down in the VMware vSphere Web Client.
Put the ESXi host in maintenance mode
Prerequisites
1. The VSA VM is not running.
2. There are no critical errors or warnings on the host.
3. The Management VM is not hosted by the node.
4. No user VMs are hosted by the node.
Procedure
1. On the vCenter Web Client, go to the Host and Clusters view.
2. Expand datacenter and cluster then select the host targeted for maintenance mode.
3. Select the VMs tab.
4. Verify that the only VM present is the powered off VSA VM.
5. Right-click the ESXi node. Select Maintenance Mode > Enter Maintenance Mode.
6. When the Confirm Maintenance Mode dialog box appears, ensure that the check box is not selected.
7. Click OK. The host enters Maintenance Mode.
Apply Service Pack for ProLiant
Procedure
1. Open an Internet Explorer web browser session and log in to the iLO web page.
2. Select Remote Console in the left navigation pane and then click Launch.
3. Click Run on the Security Warning popup. The Remote Console window appears.
NOTE: The Service Pack for ProLiant update should be performed on one node at a time. Do not attempt to update the next node until the current node completes successfully.
4. In the Remote Console window:
a. Click Virtual Drives on the menu item.
b. Select Image File CD-ROM/DVD to mount the Service Pack for ProLiant ISO image file.
5. Restart the server so that it boots from the ISO.
a. Press F12 on the iLO session and log in with the root user.
b. Clear the Forcefully terminate running VMs check box and press F11 to restart.
6. Once the boot screen appears on the iLO remote console, press F11.
7. Press Enter twice to access the boot menu.
8. Select 1) One Time Boot to CD-ROM.
9. Select Automatic Firmware Update.
10. The Firmware update process starts.
11. During the SPP upgrade, the iLO remote console will reset.
12. Close the iLO session and wait at least 5 minutes to reconnect to the iLO session.
13. Relaunch the iLO Remote Console.
14. Power on the server.
a. Select Power Switch on the menu
b. Select Momentary Press.
15. Once the server has fully booted, verify that there are no critical alerts on the iLO session.
a. From the browser iLO session, select Information > System Information.
b. Verify that there are no alerts on the Health Summary status.
Update HPE Intelligent Provisioning
NOTE: This procedure should be performed on one node at a time. Do not attempt to update the next node until the current node completes successfully.
Procedure
1. Open an Internet Explorer web browser session and log in to the iLO web page.
2. Select Remote Console in the left navigation pane and then click Launch.
3. Click Run on the Security Warning popup.
4. The Remote Console window appears.
5. In the Remote Console window:
a. Click Virtual Drives on the menu item.
b. Select Image File CD-ROM/DVD to mount the custom HPE Intelligent Provisioning ISO image file.
6. Restart the server so that it boots from the ISO.
7. Press F12 on the iLO session and log in with the root user.
8. Clear the “Forcefully terminate running VMs” check box and press F11 to restart.
9. The server restarts and boots from the ISO.
10. Press F11 during POST boot up.
11. Press Enter twice to access the boot menu.
12. Select 1) One Time Boot to CD-ROM.
13. The HPE Intelligent Provisioning Update ISO will automatically load and run the update.
14. The node will reboot after the upgrade completes.
15. Once the server is fully booted, verify that there are no critical alerts on the iLO session.
16. From the browser iLO session, select Information > System Information.
17. Verify that there are no alerts on the Health Summary status.
18. Verify the new HPE Intelligent Provisioning version on the Firmware tab of the System Information page.
Upgrade VMware vSphere ESXi
This step requires the HPE Customized ESXi Image file for upgrading the VMware vSphere ESXi software.
NOTE: Complete this procedure on each node individually.
Procedure
1. Open an Internet Explorer web browser session and log in to the iLO web page.
2. Select Remote Console in the left navigation pane and then click Launch.
3. Click Run on the Security Warning popup. The Remote Console window appears.
4. In the Remote Console window:
a. Go to Virtual Drives on the menu item.
b. Select Image File CD-ROM/DVD to mount the HPE Customized ESXi Image file.
5. Restart the server so that it boots from the ISO.
a. Press F12 on the iLO session and log in with the root user.
b. Clear the Forcefully terminate running VMs check box and press F11 to restart.
The node boots and the boot menu appears.
6. Select HPE-ESXi-6.5.0-Update1-iso-650.U1.10.1.5.26 Installer and press Enter.
7. Perform the ESXi upgrade:
a. On the Welcome screen, press Enter to continue.
b. Review the EULA and press F11 to accept it and continue.
c. Select the disk to be upgraded and press Enter. The 200 GB disk is the default boot disk from the factory.
d. Select the Upgrade ESXi, preserve VMFS datastore option, and press Enter.
f. Press F11 to begin the upgrade.
g. A progress window appears.
h. When the upgrade is finished, the Upgrade Complete window appears.
i. Press Enter to reboot the node.
8. After ESXi is upgraded on the node, verify that the correct version and build number are installed.
9. Transfer and install additional vib files on the host using a laptop/workstation or the Management VM.
a. The necessary files are located in the unpacked Software Upgrade directory.
b. If transferring from the Management VM, it may be necessary to install utilities to perform the transfer and to access the hosts (for example, PuTTY, WinSCP). These also need to be installed on the laptop/workstation if not already present.
c. Enable SSH on the host:
i. Browse to the iLO, log in, and launch the Remote Console.
ii. Press F2 and log in to ESXi.
iii. Select Troubleshooting Options from the menu.
iv. Select Enable SSH and press Enter.
d. Transfer the vib files from the unpacked Software Upgrade directory to the ESXi server /tmp directory.
i. Use the chosen file transfer software (for example, WinSCP or PuTTY’s psftp) to copy the files to the ESXi server /tmp directory. This example uses psftp.
ii. Open a Command Prompt window and change the working directory to the location of the unpacked files.
iii. Launch the file transfer client, connect to the host, and transfer the files:
C:\Users\Administrator>cd C:\HC 380upg
C:\HC 380upg>psftp
psftp> open <host-IPaddress>
… …
psftp> cd /tmp
Remote directory is now /tmp
psftp> mput ESX65host/*.vib
local:ESX65host/hpe-iscsi-rescan-latest.vib => remote:/tmp/hpe-iscsi-rescan-latest.vib
local:ESX65host/hpe-iscsi-rescan-6.5.0-12.7.0.12.vib => remote:/tmp/hpe-iscsi-rescan-6.5.0-12.7.0.12.vib
psftp>
10. Replace the existing iSCSI Rescan vib by removing the old version and installing the new one using the esxcli software command.
a. Log in to the host using an SSH client (for example, PuTTY) and check the existing software versions using the following command:
[root@HPE-HC-MXQ623037K:/tmp] esxcli software vib list | grep -i -E 'hpe[-d][i]'
hpe-iscsi-rescan    6.0.0-12.0.0.1   Hewlett-Packard-Enterprise   PartnerSupported   2017-11-08
hpediscoveryagent   6.0.0-1.3.5.2    Hewlett-Packard-Enterprise   PartnerSupported   2017-11-08
11. Uninstall the existing vib files.
[root@HPE-HC-MXQ623037K:/tmp] esxcli software vib remove -f -n hpe-iscsi-rescan
Removal Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed:
   VIBs Removed: Hewlett-Packard-Enterprise_bootbank_hpe-iscsi-rescan_6.0.0-12.0.0.1
   VIBs Skipped:
[root@HPE-HC-MXQ623037K:/tmp] esxcli software vib remove -f -n hpediscoveryagent
Removal Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed:
   VIBs Removed: Hewlett-Packard-Enterprise_bootbank_hpediscoveryagent_6.0.0-1.3.5.2
   VIBs Skipped:
12. As a sanity check, perform a dry run of the new vib file installation.
[root@HPE-HC-<sernum>:~] esxcli software vib install --dry-run -v /tmp/hpe-iscsi-rescan-6.5.0-12.7.0.12.vib
Installation Result
   Message: Dryrun only, host not changed. The following installers will be applied: [LiveImageInstaller, BootBankInstaller]
   Reboot Required: false
   VIBs Installed: Hewlett-Packard-Enterprise_bootbank_hpe-iscsi-rescan_6.5.0-12.7.0.12
   VIBs Removed:
   VIBs Skipped:
13. Install the new vib file.
[root@HPE-HC-MXQ623037K:/tmp] esxcli software vib install -v /tmp/hpe-iscsi-rescan-6.5.0-12.7.0.12.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Hewlett-Packard-Enterprise_bootbank_hpe-iscsi-rescan_6.5.0-12.7.0.12
   VIBs Removed:
   VIBs Skipped:
14. After the vib file installation, restart the node.
Take the VMware vSphere ESXi server out of maintenance mode
Procedure
1. Once the node comes up and ESXi has loaded completely, log in to the vCenter web client using credentials with administrative rights.
2. Go to the Host and Clusters view and select the upgraded node.
NOTE: If the state of the node is "not responding", reconnect the host: right-click the host, select Connection > Connect, and click Yes on the Reconnect host pop-up.
3. Right-click the ESXi node and select Maintenance Mode > Exit Maintenance Mode.
4. Wait for the VMware vSphere ESXi node to exit Maintenance Mode.
Power ON the StoreVirtual VSA VM
Procedure
1. To power on the StoreVirtual VSA VM, right-click the VM and select Power > Power on.
2. Wait for the StoreVirtual VSA VM to complete the resync process before proceeding to the next node.
a. Check the volume health in the StoreVirtual CMC console to ensure that all volumes are in a "Normal" state and no restriping is in progress.
b. Check the VMware vSphere Web Client to ensure that all nodes are in a "Normal" state with no errors.
3. Repeat the procedure "Upgrade VMware vSphere using the HPE Customized ESXi Image on the cluster nodes" for the remaining nodes in the cluster.
Redeploy Management UI & HPE OneView VM
There are two different procedures to redeploy the Management UI and HPE OneView VM, depending on whether the system is set up for TLS 1.1 or TLS 1.2:
System in TLS 1.1
Procedure
1. Remove the HPE OneView Global Dashboard entries.
2. Remove the Management UI and HPE OneView VM from the vCenter.
3. Update the cluster settings in the remote vCenter.
4. Update the DirectoryMap.xml, Current.xml, and Deployment1.xml files on the Management VM.
5. Redeploy the HPE OneView VM and Management UI.
System in TLS 1.2
Procedure
1. Remove the HPE OneView Global Dashboard entries.
2. Remove the existing HPE OneView and Management UI VMs.
3. Configure the iLO certificate.
4. Configure custom certificates on the nodes.
5. Modify the registry settings in the Management VM (TLS 1.2).
6. Rename the hosts with FQDNs.
7. Update the cluster settings in the remote vCenter.
8. Rename and register the Management VM to DNS.
9. Copy the OVA files to the temp directory of the Management VM.
10. Replace the post deployment scripts.
11. Update the DirectoryMap.xml, Current.xml, and Deployment1.xml files on the Management VM.
12. Lock iLO to TLS 1.2 (enable FIPS).
13. Redeploy the HPE OneView VM and Management UI.
14. Run the post deployment scripts for expanded nodes (if any).
Removal of HPE OneView Global Dashboard entries
1. Log in to the HPE OneView global dashboard.
2. Navigate to Settings > Appliances.
3. Select the appliance that needs to be removed and click the Remove button.
4. Click Remove again to confirm the removal of the appliance.
5. Verify that the appliance is removed from the list.
Removal of Management UI and HPE OneView VM from the vCenter.
1. Log in to the vCenter.
2. Power off both HPE OneView and the Management UI VMs.
3. Select the Management UI VM.
4. Right-click and select Delete from Disk.
5. On the Confirm Delete dialog box, select Yes.
6. Repeat steps 3 through 5 to delete the HPE OneView VM.
7. Ensure that both the VMs are deleted.
Configure the iLO Cert (TLS 1.2)
The following are the steps for configuring the iLO certs of each HC 380 node.
1. From the installation computer, verify the forward and reverse lookups of the ESXi nodes' iLOs by running nslookup.
2. Configure the iLO 4 network settings of the nodes from the BIOS.
a. During POST, press F9.
b. Go to System Configuration > iLO 4 Configuration Utility > Network Options.
c. Provide the DNS name, IP address, Subnet mask, and Gateway.
d. Press ESC to go up one level to Advanced Network Options. Provide the DNS Server 1 and Domain name.
e. Press F10 to save, then Y to confirm the save.
f. Press Enter to continue.
iLO will reset.
g. Reconnect after at least 30 seconds.
h. Log back in to iLO and reboot the server for the settings to take effect.
3. Generate CSR from iLO.
a. Once logged in to iLO, go to Administration > Security > SSL Certificate.
b. Click Customize Certificate on the SSL Certificate Information page.
c. On the Certificate Signing Request Information page, click Generate CSR.
d. Wait up to 10 minutes and click Generate CSR again.
e. When the CSR is ready, a popup appears with the information needed to send to the Certificate Authority.
f. Copy the CSR to a Notepad or text editor.
g. Save the file with the .csr extension.
4. Send CSR to a CA and receive a certificate.
The certificate must be in Base64-encoded X.509 format.
5. Import the certificate into iLO.
The iLO will automatically reset if the certificate is valid.
6. Wait at least 30 seconds before logging back in to the iLO.
7. Verify the new certificate after logging back in to the iLO.
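Before importing, the CA's reply can be sanity-checked with OpenSSL. A sketch, with file names as placeholders; a throwaway self-signed certificate stands in for the CA-issued file here so the check is demonstrable:

```shell
# Generate a stand-in certificate (in practice, check the file returned by the CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=ilo-example" -keyout /tmp/ilo.key -out /tmp/ilo-cert.cer 2>/dev/null
# A Base64-encoded X.509 (PEM) certificate starts with the PEM header...
head -n 1 /tmp/ilo-cert.cer
# ...and parses cleanly:
openssl x509 -in /tmp/ilo-cert.cer -noout -subject
```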
Configure custom certificates of the nodes (TLS 1.2)
There are three steps involved:
1. Generate CSR for the ESXi Node.
2. Request certificate with the Certificate Authority.
3. Replace certificate on ESXi host.
Prerequisites
The following are the prerequisites to configure a custom certificate on the ESXi nodes:
1. Certificate Authority accessible to generate certificates.
2. Template for VMware is configured in Certificate Authority.
a. For details, follow the VMware Knowledge Base article Creating a Microsoft Certificate Authority Template for SSL certificate creation in vSphere 6.0 (2112009).
3. OpenSSL 0.9.8 installed on the vSphere environment.
a. For details, follow the VMware Knowledge Base article Configuring OpenSSL for installation and configuration of CA signed certificates in the vSphere environment (2015387).
b. XCA can optionally be used in place of OpenSSL.
c. There are different ways to configure custom certificates on ESXi nodes; the sample shown in this document is just one of them. To generate an SSL certificate for your ESXi host, you need OpenSSL 0.9.8 installed on your local system, or a tool called XCA.
4. An SSH client installed on the installation computer for access to the ESXi hosts.
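As one illustration of the OpenSSL route, a CSR for an ESXi node can be generated from a small request config. The CN/SAN values below are placeholders, and your CA template may require additional extensions:

```shell
# Sketch: key + CSR for an ESXi node. Replace esxi01.example.local with the
# host's real FQDN as registered in DNS.
cat > /tmp/esxi-csr.cfg <<'EOF'
[ req ]
distinguished_name = dn
req_extensions = ext
prompt = no
[ dn ]
CN = esxi01.example.local
[ ext ]
subjectAltName = DNS:esxi01.example.local
EOF
openssl req -new -newkey rsa:2048 -nodes -config /tmp/esxi-csr.cfg \
  -keyout /tmp/rui.key -out /tmp/rui.csr 2>/dev/null
openssl req -in /tmp/rui.csr -noout -subject
```

The rui.key/rui.csr names mirror ESXi's certificate file naming, but any names work; the CSR is then submitted to the CA as in the steps above.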
Modify Registry settings in Management VM (TLS 1.2)
NOTE: If any SSL or TLS protocol key is missing, create those protocol keys and modify them with their respective DWORD entries as described below.
Procedure to create a key:
1. Open the Registry Editor and navigate to HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > SecurityProviders > SCHANNEL > Protocols.
2. Right-click Protocols > New > Key.
3. Name the key as required. In the Registry, create the following keys under Protocols:
TLS 1.0
TLS 1.1
TLS 1.2
SSL 3.0
4. Under each key, create a subkey named "Client" as shown.
Enable Protocol TLS 1.2
Refer to the following steps to enable TLS 1.2:
1. Go to HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > SecurityProviders > SCHANNEL > Protocols > TLS 1.2.
2. Create a DWORD "Enabled" with value "ffffffff" in the "Client" subkey of the TLS 1.2 protocol.
a. Go to the key TLS 1.2 > Client.
b. Right-click Client > New > DWORD Value.
c. Name the DWORD "Enabled".
d. Right-click the Enabled DWORD and modify the value to "ffffffff".
e. Click OK.
f. The value of the DWORD is now set to "ffffffff".
3. Create another DWORD "DisabledByDefault" in the "Client" subkey of the TLS 1.2 protocol.
4. Name the DWORD "DisabledByDefault" and modify its value to "0".
Disable the TLS 1.0, TLS 1.1, and SSL 3.0 protocols:
1. Create a DWORD "Enabled" with value "0" in the "Client" subkeys of the TLS 1.0, TLS 1.1, and SSL 3.0 protocols.
2. Create a DWORD "DisabledByDefault" with value "1" in the "Client" subkeys of the TLS 1.0, TLS 1.1, and SSL 3.0 protocols.
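The registry changes above can also be captured in a single .reg file and imported in one pass. This fragment is a sketch, not taken from the original document; key names follow Microsoft's Schannel convention (SecurityProviders is one word in the actual registry path), so verify the paths on your Management VM before importing:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:ffffffff
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client]
"Enabled"=dword:00000000
"DisabledByDefault"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"Enabled"=dword:00000000
"DisabledByDefault"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"Enabled"=dword:00000000
"DisabledByDefault"=dword:00000001
```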
Set strong cryptography on the 64-bit .NET Framework
1. Go to the registry path:
HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319
2. Create a DWORD entry named "SchUseStrongCrypto" with value "1".
Enable TLS 1.2 in Management VM Browser settings
1. Open Internet Explorer > Tools > Internet Options > Advanced and, under the Security section, enable only TLS 1.2.
Rename of Host with FQDN (TLS 1.2)
NOTE: The following steps are applicable only when the FQDN is not yet configured.
1. Log in to vCenter and select the ESXi hosts to be renamed, as shown in the screenshot below.
2. Log in to the ESXi host using its IP address in a separate window, as shown in the figure below.
3. Select Networking > TCP/IP Stacks, as shown in the figure below.
4. Go to Default TCP/IP stack and click Edit settings.
5. On the pop-up display select Manually configure the settings for this TCP/IP stack.
6. Modify Host name and Domain name.
7. Provide the Primary DNS server address.
8. Provide the Search domains.
9. Click Save.
10. Go to vCenter, select the ESXi host, and migrate all VMs on this ESXi host, apart from the VSA VM, to other hosts.
a. Select the ESXi host > VMs > Virtual Machines.
b. Select the VM (for example, the Management VM).
c. Right-click and select Migrate.
d. On the Migrate dialog, select Change compute resource only.
e. Click Next.
f. Select the appropriate ESXi host on compute resource page and click Next.
g. On the Select networks page, click Next.
h. On the Select vMotion priority page, click Next.
i. On the Ready to complete page, click Finish.
11. Repeat step 10 to migrate all the VMs to other ESXi hosts in the cluster.
12. Power off the VSA VM.
Refer to the section Power off the StoreVirtual VSA VM using HPE StoreVirtual CMC for details.
13. Put the ESXi host into maintenance mode using the following procedure.
a. Right-click the host.
b. Select Maintenance Mode > Enter Maintenance Mode.
A warning pop-up will be displayed.
c. Click OK.
The host is placed in Maintenance Mode.
14. Disconnect the host from vCenter, as shown in the figure below.
a. Right-click the host.
b. Select Connection > Disconnect.
15. Remove the ESXi host from the inventory, as shown in the figure below.
a. Right-click the host.
b. Select Remove from Inventory.
16. Connect to the ESXi host over SSH and verify the hostname entry with its IP in /etc/hosts, as shown in the screenshot below.
17. Edit the /etc/rc.local.d/local.sh file if you want the hostname to be persistent after a reboot, as shown below (the changes are shown in red).
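One common form of this edit (an assumption rather than the exact lines from the original figure; the FQDN is a placeholder) appends a hostname command before local.sh's final exit 0. The sketch below writes the fragment to a scratch file so it can be inspected:

```shell
# Sketch: fragment one might add to /etc/rc.local.d/local.sh so the FQDN is
# re-applied at boot. A scratch file stands in for the real local.sh here.
cat >> /tmp/local.sh <<'EOF'
# Re-apply the persistent FQDN at boot (placeholder name)
esxcli system hostname set --fqdn=esxi01.example.local
EOF
cat /tmp/local.sh
```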
18. Reboot the host.
19. Follow the steps below to add the ESXi host back to the vCenter cluster using its FQDN.
a. Log in to the vCenter using administrative credentials.
b. Right-click the cluster name on the remote vCenter and select Add Host…
c. Provide the host FQDN.
d. Provide the root login on the Connection settings page, then click Next.
e. Click Yes to confirm the security alert.
f. Click Next on the Host Summary.
g. Assign a license and click Next.
h. Apply your policy on whether to enable Lockdown Mode or leave it disabled.
i. Click Next.
j. Click Next on the Resource pool page and proceed to the Ready to complete page.
k. Review the Ready to complete page.
l. Click Back to make any changes, or click Finish to add the host.
m. Wait for the add-host task to finish before proceeding.
20. Exit host from Maintenance Mode.
a. Right-click the host.
b. Select Maintenance Mode > Exit Maintenance Mode.
21. Power on the VSA virtual machines.
22. Check the StoreVirtual VSA health for any critical alerts and ensure that the volume status is "Normal" in CMC.
23. Repeat steps 1 through 22 for all the nodes in the cluster.
Renaming and registering the Management VM to DNS (TLS 1.2)
The following are the steps to rename and register the Management VM to the DNS.
1. Launch Server Manager and go to the Local Server.
If HPE OneView InstantOn is running, close it.
2. Select the hyperlink for the mgmtVMNetwork network.
3. Right-click the network and click Properties.
4. Add back the DNS entry to the mgmtVMNetwork network.
a. From the Local server, click the mgmtVMNetwork network hyperlink.
b. Right-click the mgmtVMNetwork and select Properties.
c. Open/edit the IPv4 properties.
d. Add the Preferred (and if available Alternate) DNS server IP.
e. Ignore the multiple gateway warning after adding the DNS IP.
5. Register the VM to the DNS.
a. Click the computer name hpe-hc-mgmt hyperlink from the Server Manager.
b. Click Change… on the Computer Name tab of the System Properties.
c. Enter a new hostname as registered on the DNS (for example, host63), provide the Domain value (for example, LHN67.LOCAL), then click OK.
d. Provide the administrator user login name and password then click OK.
e. If successful, a welcome-to-the-domain message appears; click OK.
f. Click OK again on the restart message.
g. Close the System Properties and click Restart Later to complete the DNS registration.
6. Edit the Windows hosts file.
a. Open Windows Explorer and navigate to the C:\Windows\System32\Drivers\etc directory.
b. Use Notepad to edit the hosts file.
c. Comment out the 192.* entry and add the new IP address and FQDN entry, host63.lhn67.local, like the sample shown in the screenshot below.
There might be two 192.* IPs to comment out.
d. Save the changes and close Notepad.
7. Restart the Management VM.
8. Wait for the Management VM to power on.
9. Open Server Manager and verify the new computer name and domain on the Local Server, and the DNS entries on the mgmtVMNetwork network.
10. Open a command prompt and run nslookup for both forward and reverse lookups to test the configuration.
Do not proceed if any of the commands fail.
Copy of the OVA files to the tmp directory (TLS 1.2)
NOTE: This is applicable only for TLS 1.2
To support the TLS 1.2 protocol, a specific version of the Management UI and HPE OneView VM needs to be deployed.
Verify the TLS 1.2-compliant OVA files under the c:\tmp\HPE-HC380-pkgs directory:
HPE-HC 380-mgmtui-NoSSH_1.10.05_300065.ova
HPE-HC 380-HPE OneView-2.00.08.ova
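A quick scripted presence check of the two OVAs (file names copied from the list above; a temp directory is created here as a stand-in for c:\tmp\HPE-HC380-pkgs):

```shell
# Sketch: verify the TLS 1.2-compliant OVA files are staged before redeploying.
PKGDIR=/tmp/HPE-HC380-pkgs
mkdir -p "$PKGDIR"
# Stand-in files for the demonstration; in practice these are copied in place.
touch "$PKGDIR/HPE-HC 380-mgmtui-NoSSH_1.10.05_300065.ova" \
      "$PKGDIR/HPE-HC 380-HPE OneView-2.00.08.ova"
missing=0
for f in "HPE-HC 380-mgmtui-NoSSH_1.10.05_300065.ova" \
         "HPE-HC 380-HPE OneView-2.00.08.ova"; do
  [ -f "$PKGDIR/$f" ] || { echo "MISSING: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "All OVA files present"
```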
Replacement of Post deployment scripts (TLS 1.2)
Verify, by their timestamps, the four key scripts containing critical fixes specific to this deployment.
They can be found in the C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment directory:
Create-MgmtVM.ps1
DRS.ps1
HPE_HC 380_FTS.ps1
PostDeploymentMgmtUI.ps1
Update DirectoryMap.xml, Current.xml, and Deployment1.xml on the Management VM
For the HC 380 post deployment to work on the remote vCenter, the HPE OneView InstantOn configuration on the Management VM needs to be updated.
1. Log on to the Management VM.
2. Launch the Windows File explorer and unhide the ProgramData folder.
Select View and enable the check box for Hidden Items.
3. Navigate to the HPE OneView InstantOn folder and make backup copies of the three HPE OneView InstantOn deployment XML files.
C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\DirectoryMap.xml
C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\Current.xml
C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION>\Deployment1.xml
Where: <DESTINATION> is the cluster-datacenter name, e.g. hpe-hc-clus_hpe-hc-dc
If there is more than one Deployment<X>.xml deployment file present in the directory, back those up as well.
NOTE: Do not use the underscore character ("_") in any of the vCenter names (datacenter, cluster, and hostname).
4. Use Notepad to edit DirectoryMap.xml file.
a. Open the file located under:
C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn directory.
b. Update the tag information for the cluster and datacenter.
Sample screenshot below for the Before and After changes.
5. Change the folder name under the HPE OneView InstantOn config folder C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config to match what is on the DirectoryMap.xml file.
For example, change the original folder or DESTINATION name of hpe-hc-clus_hpe-hc-dc to CLTC1_DCTC1.
6. Create a new folder with the new DESTINATION value under the C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\log folder. Keep the original local deployment log folder; it contains information from the previous deployment.
7. Use Notepad to edit the Current.xml file.
a. Open the file located under the C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config directory.
b. Change the following values:
<VCENTERIP> and <VCENTERUsername> with the remote vCenter values.
<VCenterLocation> from Local to Remote.
c. If you migrated the cluster to a new datacenter and a new cluster on the remote vCenter, change the values for <VCENTERClusterName> and <VCENTERDatacenter>.
8. Use Notepad to edit the Deployment1.xml file.
a. Open the file located under the C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION> directory.
b. Change the following values:
<VCENTERIP> and <VCENTERUsername> with the remote vCenter values.
<VCenterLocation> from Local to Remote.
If you migrated the cluster to a new datacenter and cluster on the remote vCenter, change the values for <VCENTERClusterName> and <VCENTERDatacenter>.
c. Since the original deployment was local, you will need to add a new entry for <VCenterRemoteIP>.
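Taken together, the edited elements in Deployment1.xml end up looking roughly like the fragment below. The element names are those named in the steps above; every value is an illustrative placeholder:

```xml
<VCENTERIP>10.10.10.50</VCENTERIP>
<VCENTERUsername>svc-vcenter@example.local</VCENTERUsername>
<VCENTERClusterName>CLTC1</VCENTERClusterName>
<VCENTERDatacenter>DCTC1</VCENTERDatacenter>
<VCenterLocation>Remote</VCenterLocation>
<VCenterRemoteIP>10.10.10.50</VCenterRemoteIP>
```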
Lock iLO to TLS 1.2 (Enable FIPS) (TLS 1.2)
A way to lock down iLO to TLS 1.2 is to enable FIPS mode. The procedure described here can be found in "Enabling FIPS mode" in the HPE iLO 4 User Guide (http://www.hpe.com/info/docs).
1. Point the browser to the iLO page of the ESXi node, https://<iLO-IP-address>.
2. Navigate to Administration > Security > Encryption tab.
3. Under Encryption Enforcement Settings, enable both FIPS Mode and Enforce AES/3DES Encryption options.
4. Click Apply.
5. Review the pop-up message and click OK to confirm.
6. The iLO session will reset.
7. At the login prompt, use the administrator user name and password specified on the pull-out label on the front of the server.
8. Once you have successfully logged back in, re-apply the Advanced iLO license.
a. Navigate to Administration > Licensing.
b. Enter the license key on the Activation Key field entry.
9. Configure the common iLO admin user if there is one.
Redeploy Management UI & HPE OneView VM
Run the HC 380 postdeployment script
1. Optional: Delete or rename the previous PostDeployment log file located at C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\log before starting the script.
2. Open a PowerShell session.
3. Change the directory to
cd C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment
4. Run the Postdeployment script:
.\PostDeployment.ps1 -manifestFilename C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION-NAME>\Deployment1.xml
5. The HC 380 postdeployment UI pop-up launches. Provide the required details as in a usual HC 380 post deployment:
Administrator Password: type the password used for the MGMTUI OVA.
IP address: the IP assigned to the MGMTUI VM. For the mission build, use the x.x.x.150 IP address.
Perform a ping test of the Management UI VM IP address to make sure it is not in use.
Subnet Mask: the subnet mask assigned to the subnet.
Default Gateway: the default gateway assigned to the subnet.
Username: a service account with access to the remote vCenter.
Password: the password of the service account with access to the remote vCenter.
6. It takes a while to deploy the Management UI VM and HPE OneView VM.
Once the post deployment is complete, a dialog appears with a hyperlink shortcut to the HC 380 UI, as shown in the screenshot below.
After successful deployment, click the Management UI link.
NOTE: Ensure that the Management UI is opened in Mozilla Firefox.
7. Click Set-up button to continue.
8. Enter the username and password provided during deployment.
9. Navigate to Edit Identity.
10. Enter an available IP address into the Embedded HPE OneView IPv4 field and click OK.
11. Add the HC 380 nodes to the Management UI.
12. Click the Submit button to complete the configuration.
13. Click the Dashboard button to log in to the Management UI.
14. Click the Submit button to complete the configuration.
15. Log in to iLO and verify that the HPE OneView tab is visible in iLO.
NOTE: (Optional) Refer to the sections on creating datastores and VM vending in the HPE Hyper Converged 380 User Guide (http://www.hpe.com/support/hc380ugen).
Run Post Deployment scripts for expanded nodes (if any)
Procedure:
1. Open a PowerShell session.
2. Change directory to:
3. cd C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment
4. Run the Postdeployment script:
.\PostDeployment.ps1 -manifestFilename C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION-NAME>\Deployment<n>.xml
5. Close the PowerShell session before proceeding with next expanded node.
NOTE: If any nodes were expanded before migration, back up the Deployment<n>.xml files and execute the PostDeployment script for each Deployment<n>.xml file, one after another. Do not attempt to use the same PowerShell session for multiple expansion XML files; this may result in an incorrect configuration, and the expanded node will not be reflected in the Management UI.
6. The HC 380 UI pop-up launches.
Provide the Management UI VM-related information: IP address, user name, and password.
7. Once the node is added successfully, the following pop-up screen appears.
8. Click OK.
9. Check the expanded node in the Management UI.
10. Manage the expanded HC 380 nodes via the Management UI.
11. Click the Submit button to complete the configuration.
12. Click the Dashboard button to log in to the Management UI.
13. Log in to iLO and verify that the HPE OneView tab is visible in iLO.
Post migration procedures
The following are the post-migration procedures:
1. Disable SSH and ESXi Shell access
2. Reconfigure with HPE OneView Global dashboard
3. Upgrade VMware vSphere PowerCLI
Disable SSH and ESXi Shell access
After the cluster upgrade is complete, perform the following clean-up tasks.
Procedure
1. Delete the temporary folders containing upgrade files from the Management VM and the laptop or workstation.
2. The downloaded files can either be preserved (some of them are used for USB Recovery) or deleted. They can be obtained again if needed.
3. Disable SSH and ESXi Shell access on each host.
a. Log in to iLO and launch the Remote Console.
b. Press F2 and login to ESXi.
c. Select Troubleshooting Options from the menu.
d. Select Disable SSH and press Enter.
e. Select ESXi Shell and press Enter.
f. Press ESC to save the settings.
Reconfigure with HPE OneView Global Dashboard
1. Log in to the HPE OneView Global Dashboard.
2. Navigate to Settings > Appliances.
3. Click the (+) button to add the appliance.
4. Add all the information and click Add.
5. Verify that the appliance is added.
Upgrade VMware vSphere PowerCLI
NOTE: The Management VM must be able to access the Internet to download the PowerCLI components.
Procedure
1. Uninstall PowerCLI 6.3.
2. Update PowerShell to 5.1 on the Management VM.
3. Refer to the link to install PowerCLI for vSphere 6.5.
4. Refer to the link to verify the PowerCLI version.
(Refer to the sample screenshot below.)
5. Modify PowerShell Script.
Modify PowerShell Script
After upgrading PowerCLI to 6.5.3, modify the PowerShell scripts with the following procedure.
Procedure:
1. Close any open PowerShell sessions before making modifications to the PowerShell scripts.
2. Go to the directory:
cd C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment
3. Modify the following four files:
Create-MgmtVM.ps1
DRS.ps1
HPE_HC 380_FTS.ps1
PostDeploymentMgmtUI.ps1
4. To edit PowerShell files, change the attributes of the file.
a. Uncheck the Read-only box.
b. Now right-click the file and click Edit.
c. Search for "Add-PSSnapin VMware.VimAutomation.Core" and comment out this line.
d. Add the line “Import-Module VMware.VimAutomation.Core”.
e. Save and close the file.
f. Repeat steps 4a through 4e for the remaining three PowerShell scripts listed in step 3.
g. Restart a PowerShell session.
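Steps 4c and 4d amount to a one-line substitution per script; a sketch of the same edit expressed with sed, demonstrated on a scratch copy rather than the real files:

```shell
# Sketch: comment out the Add-PSSnapin line and add the Import-Module
# replacement, as steps 4c-4d describe. A scratch file stands in here.
f=/tmp/Create-MgmtVM.ps1
printf '%s\n' 'Add-PSSnapin VMware.VimAutomation.Core' > "$f"
sed -i 's/^Add-PSSnapin VMware.VimAutomation.Core$/# Add-PSSnapin VMware.VimAutomation.Core\nImport-Module VMware.VimAutomation.Core/' "$f"
cat "$f"
```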
Expansion
Expansion for HC 380 1.1 U2 vSphere 6.5 can only be performed one system at a time. It is not supported with the HPE OneView InstantOn utility. The following procedure explains the expansion.
Prerequisites
The new node IPs and VLANs for ESXmgmt, vMotion, the VSA VM IP address, HostStorage2, and HostStorage3 should be in the same range and use the same values as the existing nodes.
o For the used range, and to avoid duplicate IPs, refer to Deployment*.xml.
The three storage IPs should be contiguous, starting with the VSA VM IP, then HostStorage2, and then HostStorage3.
Port 1 of the embedded 1GbE NIC of the expansion node (vmnic0) should be connected to either the installation computer or the Top of Rack switch that the installation computer is connected to.
A VSA license key is required for the new node.
Use the table below to list the networking values for the expansion node.
Network IP Address Subnet Mask Gateway VLAN
ESXmgmt
vMotion
VSA VM *
HostStorage2 *
HostStorage3 *
* The three storage IPs must be contiguous.
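Since the three storage IPs must be contiguous, HostStorage2 and HostStorage3 follow directly from the VSA VM IP. A small sketch (the base address is a placeholder, and it assumes the addresses do not cross an octet boundary):

```shell
# Sketch: derive the contiguous storage IPs from the VSA VM address.
vsa_ip="10.20.30.41"              # placeholder VSA VM storage IP
base="${vsa_ip%.*}"               # network portion: 10.20.30
last="${vsa_ip##*.}"              # last octet: 41
hoststorage2="$base.$((last + 1))"
hoststorage3="$base.$((last + 2))"
echo "VSA VM:       $vsa_ip"
echo "HostStorage2: $hoststorage2"
echo "HostStorage3: $hoststorage3"
```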
Procedure
These are the steps to add a node to an existing HC 380 cluster. The overall flow is outlined below:
1. Network Configuration of ESXi host and VSA
2. Find the new node in CMC
3. Upgrade the VSA
4. Upgrade ESXi host to 6.5
NOTE: Follow the points below only if the ESXi upgrade is performed.
a. Place the new ESXi node into maintenance mode
b. Apply Service Pack for ProLiant
c. Update HPE Intelligent Provisioning
d. Upgrade VMware vSphere ESXi
e. Install NVIDIA GRID Manager (Optional)
5. Add new ESXi host to the vCenter
a. If step 4 is performed, take the ESXi host out of maintenance mode.
6. Add the VSA to Existing Management Group and Cluster
7. Create New Server in Management Group
8. Configure the assigned volume(s) on vCenter
Network Configuration of ESXi Host and VSA
1. Rack and connect the new node to the Top of Rack (ToR) as specified on the HC 380 Installation Guide.
2. Open an iLO session to the node and power it on.
3. Configure a free IP address reachable from the installation computer on vmnic0.
a. Press F2 to access the System Customization menu.
b. Default login is root | HyperConv!234.
c. Select Configure Management Network.
d. Verify that vmnic0 is active on Network Adapters.
e. Select IPv4 Configuration.
i. Select Set static IPv4 address and network configuration.
ii. Provide the IPv4 Address, Subnet Mask, and Default Gateway values reachable by the installation computer.
f. Press Esc to exit the Configure Management Network menu.
g. Press Y for Yes to confirm the changes.
4. Connect to the node in a web browser using the IP assigned in step 3e above.
a. Click the Open the VMware Host Client link on the VMware ESXi welcome screen.
b. Provide the default credentials root | HyperConv!234 at the login screen, as shown in the screenshot below.
c. Deselect the Join the VMware Customer Experience Improvement Program check box on the popup, then click OK.
5. Power off the HPE-HC-management VM.
a. Go to the Virtual Machines tab, select the HPE-HC-management VM, then right-click and select Power > Power off.
6. Delete the HPE-HC-management VM from disk.
Right-click the VM and select Delete.
a. Click the Delete button on the confirmation window.
b. Click Refresh on the Virtual Machines view.
At this point there should be only the SVVSA VM left; keep it powered off.
7. Remove the dash "-" at the end of the SVVSA VM name.
a. Go to the Virtual Machines tab, select the SVVSA VM, then right-click and select Rename.
b. On the Rename virtual machine pop-up window, remove the trailing dash "-" as shown in the screenshot below.
c. Click Rename to confirm the VM name change.
8. Edit the networks of the node using the vSphere web client.
a. Go to the Networking tab > Port groups.
NOTE: Make sure that the IPs being used are free addresses.
NOTE: For the network configuration, use any of the existing fully deployed HC 380 nodes as a reference.
9. Configure ESXmgmt.
a. Go to Networking > Port groups > ESXmgmt.
b. Under vSwitch topology, click vmk3, as shown in the screenshot below.
c. The vmk3 page appears, as shown in the screenshot below.
d. Assign a static IP to the ESXmgmt VMkernel port.
i. Click Edit Settings.
ii. Expand the IPv4 settings dropdown.
iii. Assign a static IP and subnet mask to ESXmgmt.
iv. Click Save.
e. Assign a gateway IP.
i. Click defaultTcpipStack under the TCP/IP configuration of the vmk3 adapter.
ii. Click Edit Settings.
iii. Set the IPv4 gateway under the Routing section.
iv. Click Save.
f. Verify that the ESXmgmt IP is reachable using the ping command.
10. Configure vMotion.
a. Go to Networking > Port groups > vMotion.
b. Click vmk2 under vSwitch topology, as shown in the screenshot below.
c. The vmk2 page appears, as shown in the screenshot below.
d. Assign a static IP to the vMotion VMkernel port.
i. Click Edit Settings.
ii. Expand the IPv4 settings dropdown.
iii. Assign a static IP and subnet mask to the vMotion VMkernel port.
iv. Click Save.
e. If applicable, set the vMotion VLAN ID.
i. Go to Networking > Port groups > vMotion.
ii. Click Edit settings.
iii. Set the VLAN ID, then click Save, as shown in the screenshot below.
Page | 124
11. Configure VSAeth0.
a. Go to Networking > Port groups > VSAeth0.
b. Click Edit Settings.
c. If applicable, provide the storage VLAN ID, then click Save.
12. Configure HostStorage2.
a. Go to Networking > Port groups > HostStorage2.
b. Click vmk1 under vSwitch topology, as shown in the screenshot below.
c. The vmk1 page appears, as shown in the screenshot below.
d. Assign a static IP to the HostStorage2 VMkernel port.
i. Click Edit Settings.
ii. Expand the IPv4 settings dropdown.
iii. Assign a static IP and subnet mask to HostStorage2.
iv. Click Save.
e. If applicable, set the HostStorage2 VLAN ID.
i. Go to Networking > Port groups > HostStorage2.
ii. Click Edit settings.
iii. Set the VLAN ID, then click Save, as shown in the screenshot below.
13. Configure HostStorage3.
a. Go to Networking > Port groups > HostStorage3.
b. Click vmk4 under vSwitch topology, as shown in the screenshot below.
c. The vmk4 page appears, as shown in the screenshot below.
d. Assign a static IP to the HostStorage3 VMkernel port.
i. Click Edit Settings.
ii. Expand the IPv4 settings dropdown.
iii. Assign a static IP and subnet mask to HostStorage3.
iv. Click Save.
e. If applicable, set the HostStorage3 VLAN ID.
i. Go to Networking > Port groups > HostStorage3.
ii. Click Edit settings.
iii. Set the VLAN ID, then click Save, as shown in the screenshot below.
14. Configure the VSA VM using the vSphere Web client.
Page | 129
a. Select Virtual Machines on the Navigator menu.
b. Click the VSA VM named SVVSA-<SerialNum>.
c. Click Power on button.
d. Launch the console of the VSA VM.
NOTE: If VSA VM console is launched using Microsoft Internet Explorer, you may get a user login prompt. Please provide ESXi host default credentials root | HyperConv!234
e. Type start at the login prompt as shown in below screenshot.
Page | 130
f. Press Enter to select Log in.
g. Go to Network TCP/IP Settings > eth0 and provide the following (use the Tab key to navigate):
i. Hostname, in the format SVVSA-<node_serial_number>.
ii. IP address, subnet mask, and gateway on the storage network.
h. Confirm the Network settings.
i. Log out of the session and close the VM console.
j. Close the vSphere Web client.
15. Disconnect vmnic0 once the network configuration on the ESXi host and VSA is done.
Find new Node in CMC
Procedure:
1. Configure new VSA in CMC.
a. Launch CMC and login as the administrator user.
b. On the Toolbar, select Find > Find Systems… > Add.
c. Enter the VSA IP address configured in step 14g of Network Configuration of ESXi Host and VSA.
d. Close the Find Systems dialog once the status shows that the new node has been found.
e. The new system will show up on the Available Systems list.
f. Click Log in to view.
2. Apply the VSA license of the new node.
a. On the CMC, select Available Systems > <the new system> > Feature Registration.
b. Expand the Feature Registration Tasks dropdown and select either Edit or Import License Key.
i. To import, locate the license key file, then click Apply License Keys on the dialog.
ii. Alternatively, edit the existing (temporary) license key: simply paste the new license key into the dialog.
Upgrade the VSA
Refer to the Update HPE StoreVirtual CMC and VSA software section for the detailed procedure.
Place the new ESXi node into maintenance mode
Refer to the Put the ESXi host in maintenance mode section for the detailed procedure.
Apply Service Pack for ProLiant
Refer to the Apply Service Pack for ProLiant section for the detailed procedure.
Update HPE Intelligent Provisioning
Refer to the Update HPE Intelligent Provisioning section for the detailed procedure.
Upgrade VMware vSphere ESXi
Refer to the Upgrade VMware vSphere ESXi section for the detailed procedure.
Install NVIDIA GRID Manager
NOTE: Applicable only to VDI setups with NVIDIA GRID GPUs.
Refer to steps 5 through 16 of the Migrate NVIDIA Grid Manager Post ESXi Upgrade
section.
Add new ESXi host to the vCenter
Procedure:
1. Add the new ESXi host to the existing HC 380 cluster.
a. Log in to the vCenter server web client.
b. Right-click the existing cluster and select Add Host.
c. On the Add Host page, enter the host IP address or FQDN.
NOTE: For TLS 1.2, an FQDN is required.
d. Enter the IP address of the new host and click Next.
e. Enter the ESXi host credentials.
f. Select Yes on the Security Alert dialog.
g. Select Next on Host Summary page.
h. Select the default license and click Next.
i. Leave Lockdown mode as Disabled and click Next.
j. Leave Resource pool as the default and click Next.
k. Click Finish to complete the addition of the host.
2. Verify, and if necessary edit, the NIC teaming settings of the port groups.
a. Select the newly added expansion node from the cluster.
b. Select the Configure tab, select Networking > Virtual switches.
c. On the list of Virtual switches, select vSwitch1.
d. On the Standard switch list, select ESXmgmt. Verify that both Physical Adapters are highlighted as shown below.
This means that both adapters are set to Active.
If one of the adapters is not Active, click Edit Settings (the pencil icon).
i. Select the Teaming and Failover tab.
ii. In the Failover order section, select the adapter that is not Active and move it up to Active.
For ESXmgmt, both vmnic4 and vmnic5 should be Active.
e. Repeat step 2d on HostStorage2.
Verify that vmnic4 is Active (or highlighted as shown on the sample screenshot below).
If not, edit the NIC Teaming settings so that vmnic4 will be Active and vmnic5 will be Unused.
f. Repeat step 2d on HostStorage3.
g. Verify that vmnic5 is Active (or highlighted as shown on the sample screenshot below).
If not, edit the NIC Teaming settings so that vmnic5 will be Active and vmnic4 will be Unused.
h. Repeat step 2d on VSAeth0.
Verify that both vmnic4 and vmnic5 are Active (or highlighted) like ESXmgmt.
If not, edit the NIC Teaming settings so that both adapters are Active.
i. Repeat step 2d on mgmtVMNetwork.
Verify that both vmnic4 and vmnic5 are Active (or highlighted) like ESXmgmt.
If not, edit the NIC Teaming settings so that both adapters are Active.
j. Repeat step 2d on vMotion.
Verify that both vmnic4 and vmnic5 are Active (or highlighted) like ESXmgmt.
If not, edit the NIC Teaming settings so that both adapters are Active.
Take the VMware vSphere ESXi server out of maintenance mode
Refer to Take the VMware vSphere ESXi server out of maintenance mode.
Add the VSA to the Existing Management Group and Cluster
1. Power on the VSA VM from vCenter.
2. Right-click the new system and select Add to Existing Management Group…
3. Click Continue…
4. Select the Group Name (confirm if there is only one management group in the CMC) and click Add.
5. The new node will show up on the management group. Right-click and select Add to Existing Cluster…
6. Confirm the Cluster name and click OK.
7. Click OK again on the WARNING pop-up message.
8. The new node shows up on the Storage Systems group.
Create New Server in Management Group
1. Add new server (ESXi host) to Management Group.
a. Right-click Servers and select New Server…
b. On the New Server dialog:
i. Use the ESXmgmt IP of the new node for the Name and CHAP Name.
ii. Enter the iSCSI name from the ESXi host to the Initiator Node Name.
iii. To get the initiator Node Name go to vCenter.
iv. Click Host and Clusters and select the newly added host and select the Configure tab > Storage Adapters under Storage.
v. Select the iSCSI Software Adapter, e.g. vmhba32.
vi. Copy the initiator name, i.e. the "iqn…" string.
vii. Paste it into the Initiator Node Name field on the New Server dialog.
viii. Keep the two iSCSI security checkboxes enabled (default).
ix. Enter the CHAP passwords. The Target and Initiator secrets must each be at least 12 characters long and must differ from each other.
For example, use "Password!234" for the Target and "HyperConv!234" for the Initiator.
x. Click OK.
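The password rules in step ix can be sanity-checked before the secrets are typed into the dialog. This is an illustrative sketch, not an HPE tool: the function name is hypothetical, and the rule it encodes reads the text's 12-character requirement as a minimum, with the two secrets required to differ.

```shell
# Hypothetical helper: check a Target/Initiator CHAP secret pair against the
# rules stated in step ix (each at least 12 characters, and not identical).
check_chap_secrets() {
  target="$1"; initiator="$2"
  if [ "${#target}" -lt 12 ] || [ "${#initiator}" -lt 12 ]; then
    echo "FAIL: each secret must be at least 12 characters"; return 1
  fi
  if [ "$target" = "$initiator" ]; then
    echo "FAIL: Target and Initiator secrets must differ"; return 1
  fi
  echo "OK"
}

check_chap_secrets 'Password!234' 'HyperConv!234'   # prints "OK"
```

The same pair is entered again later in vCenter (Outgoing/Incoming Secret), so checking it once up front avoids a mismatch between the CMC and the host.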
2. Present the volume(s) to the newly added server.
a. Right-click the newly added server and select Assign and Unassign Volumes and Snapshots…
b. Check the Assigned check box for the volumes to assign, then click OK.
c. Click Continue on the popup to confirm.
Configure the assigned volume(s) on vCenter
Procedure
1. Select the Host and navigate to the Configure tab.
2. Select Storage Adapters under Storage.
3. Select the adapter under iSCSI Software Adapter, e.g. vmhbaxx.
4. Click the Edit button for the Authentication under Properties of the Adapter Details.
5. Select Use bidirectional CHAP.
6. Provide the values for the initiator names, Outgoing Secret, and Incoming Secret. The passwords must be the same as those entered during new server creation in the HPE Centralized Management Console, i.e. "Password!234" for the Outgoing Secret and "HyperConv!234" for the Incoming Secret (refer to the Create New Server in Management Group section).
7. Configure the Dynamic Discovery tab.
a. Select the Targets tab.
b. Select Dynamic Discovery.
c. Select Add…
d. Provide the VSA virtual IP value.
e. The virtual IP can be obtained as follows:
From the CMC, navigate to the VSA cluster then go to the iSCSI tab.
The iSCSI Server IP is the value of the Virtual IP.
f. Verify that Inherit from Parent check box is selected in Authentication Settings.
g. Click OK.
The Static Discovery tab should be automatically updated.
8. If the Static Discovery tab is not automatically updated, perform these steps:
a. Under the Targets tab, select Static Discovery.
b. Select Add…
c. Provide the VSA virtual IP similar to the Dynamic Discovery.
d. On the iSCSI Target Name enter the IQN you obtained in earlier step.
e. Verify that Inherit from Parent check box is selected in Authentication Settings.
f. Click OK.
9. Close the iSCSI Initiator Properties dialog.
Click the Rescan all storage adapters icon as shown in the following screenshot.
10. Navigate to the storage and verify that the datastore shows up on the newly added node.
Add expanded node to Management UI
NOTE: The following feature is not supported in the Management UI on vSphere 6.5: SPP upgrades cannot be applied using the Management UI.
Use the following procedure to add the node to the Management UI if the events need to be monitored using HPE OneView Global Dashboard.
System in TLS 1.1
Procedure
The following procedures need to be applied to each expanded node, one after the other.
1. Update DRS Settings in Remote vCenter
2. Create DeploymentXX.xml file on Management VM
3. Execute PostDeployment PowerShell script on Management VM
System in TLS 1.2
Procedure
The following procedures need to be applied to each expanded node, one after the other.
1. Configure the iLO Cert
2. Configure custom certificates of the nodes
3. Rename the host with its FQDN.
4. Update DRS Settings in Remote vCenter.
5. Create Deploymentxx.xml file on Management VM
6. Lock iLO to TLS 1.2 (Enable FIPS)
7. Execute PostDeployment PowerShell script on Management VM
Update DRS settings in Remote vCenter
The following are the steps to manually configure DRS for the HC 380 at the remote vCenter after migrating the nodes.
1. Launch the vSphere Web Client and connect to the remote vCenter.
2. Right-click the cluster and select Settings.
3. Go to Services and select vSphere DRS.
4. Verify vSphere DRS is turned on.
5. Verify that the Automation Level is Fully automated.
6. Select Cluster > Configure tab > Configuration pane and select VM Groups.
7. Click Edit… on VM Groups.
8. Select the DRS group HPE-HC Management VM Group.
9. Verify the management VM is assigned to the DRS group.
10. Click OK on the DRS Group dialog to close it.
11. Go to DRS VM/Host Groups.
12. Click Edit… on the Host DRS Groups.
13. Select the DRS group HPE-HC Management VM Hosts.
14. Assign all the expanded ESXi hosts to the DRS group.
15. Click OK on the DRS Group dialog to close it.
Create Deploymentxx.xml file on Management VM
There are two ways to create the Deploymentxx.xml file:
Create the Deploymentxx.xml file manually.
Create the Deploymentxx.xml file with an automated script.
Create Deploymentxx.xml file manually
1. Remotely access the Management VM.
2. Using File Explorer, navigate to the following location:
C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION>\
where <DESTINATION> is the cluster-datacenter name, e.g. hpe-hc-clus_hpe-hc-dc.
3. The folder may contain one or more Deployment<NUM>.xml files, where <NUM> is an incrementing number.
NOTE: Deployment1.xml is always the initial HC 380 deployment. Deployment numbers 2 and above are for the expansion nodes. Do not overwrite or delete the Deployment1.xml file.
4. Create a new XML file named Deployment<NUM_Max>.xml, where <NUM_Max> is the next available number.
For example, if the folder already contains Deployment1.xml, Deployment2.xml, and Deployment3.xml, the newly created XML file would be named Deployment4.xml.
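The folder and file naming above can be scripted. A minimal sketch, assuming (from the example) that the <DESTINATION> folder is named <cluster>_<datacenter> and that existing files follow the Deployment<NUM>.xml pattern; the function name is illustrative:

```shell
# Sketch: derive the example <DESTINATION> folder name and compute the next
# free Deployment<NUM>.xml file name in a given directory.
CLUSTER="hpe-hc-clus"
DATACENTER="hpe-hc-dc"
DEST="${CLUSTER}_${DATACENTER}"   # hpe-hc-clus_hpe-hc-dc, as in the example

next_deployment_name() {
  dir="$1"
  max=0
  for f in "$dir"/Deployment*.xml; do
    [ -e "$f" ] || continue
    # Strip the "Deployment" prefix and ".xml" suffix to get the number.
    num=$(basename "$f" | sed -e 's/^Deployment//' -e 's/\.xml$//')
    case "$num" in (*[!0-9]*|'') continue ;; esac
    [ "$num" -gt "$max" ] && max="$num"
  done
  echo "Deployment$((max + 1)).xml"
}
```

With Deployment1.xml through Deployment3.xml present, the function prints Deployment4.xml, matching the example above; on an empty folder it prints Deployment1.xml (which this procedure should never need, since Deployment1.xml always exists).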
5. Copy the XML file content below into the newly created Deployment<NUM_Max>.xml file.
XML file content:

<?xml version="1.0" encoding="utf-8"?>
<Config>
  <VCENTERIP>10.100.12.14</VCENTERIP>
  <VCENTERUsername>[email protected]</VCENTERUsername>
  <VCENTERClusterName>hpe-hc-clus</VCENTERClusterName>
  <VCENTERDatacenter>hpe-hc-dc</VCENTERDatacenter>
  <VCenterLocation>Remote</VCenterLocation>
  <VCenterRemoteIP>10.100.12.14</VCenterRemoteIP>
  <EULA>Accepted</EULA>
  <OV4VCEULA>Accepted</OV4VCEULA>
  <VCENTEREULA>Accepted</VCENTEREULA>
  <VSPHEREEULA>Accepted</VSPHEREEULA>
  <VSPHERECLIEULA>Accepted</VSPHERECLIEULA>
  <vCenterMD5>79-36-4C-EF-F3-72-4E-00-6A-08-2F-18-7B-32-D5-BB</vCenterMD5>
  <vSphereMD5>FF-F2-31-20-3C-72-68-EE-A4-E5-11-43-3A-A2-F4-46</vSphereMD5>
  <HPMD5>84-E0-37-D2-BB-E7-47-3D-B8-FB-44-81-26-F9-93-9E</HPMD5>
  <OV4VCMD5>BD-62-26-1F-16-06-0F-1A-33-AE-56-AC-AA-B2-C8-03</OV4VCMD5>
  <vSphereCliMD5>0D-C3-D5-C1-CC-82-68-42-C6-7D-73-0F-60-AF-12-73</vSphereCliMD5>
  <ESXSubnet>255.255.192.0</ESXSubnet>
  <ESXGateway>10.100.0.1</ESXGateway>
  <ESXVlan></ESXVlan>
  <VMotionSubnet>255.255.192.0</VMotionSubnet>
  <VMotionVlan></VMotionVlan>
  <VSASubnet>255.255.192.0</VSASubnet>
  <VSAGateway>10.100.0.1</VSAGateway>
  <VSAVlan></VSAVlan>
  <VSAUsername>LHNUser</VSAUsername>
  <ESXIP1>10.100.12.13</ESXIP1>
  <ESXIP2>192.168.42.102</ESXIP2>
  <ESXIP3>192.168.42.103</ESXIP3>
  <ESXIP4>192.168.42.104</ESXIP4>
  <ESXUsername>root</ESXUsername>
  <WarningMsgCount>0</WarningMsgCount>
  <ShowCluster>Visible</ShowCluster>
  <ReqSystemType>ProLiant DL</ReqSystemType>
  <VSAQuorumRequired>false</VSAQuorumRequired>
  <ShowVCCredentials></ShowVCCredentials>
  <ShowConnectivity></ShowConnectivity>
  <ShowRemoteVC></ShowRemoteVC>
  <ESXIPStart>10.100.12.13</ESXIPStart>
  <ESXIPAddress1>10.100.12.13</ESXIPAddress1>
  <VMotionIPStart>10.100.12.23</VMotionIPStart>
  <VMotionIPAddress1>10.100.12.23</VMotionIPAddress1>
  <VSAIPStart>10.100.12.42</VSAIPStart>
  <VSAVip>10.100.12.42</VSAVip>
  <VSAIPAddress1>10.100.12.43</VSAIPAddress1>
  <VSAIPAddress2>10.100.12.44</VSAIPAddress2>
  <VSAIPAddress3>10.100.12.45</VSAIPAddress3>
  <VSAIPAddress4></VSAIPAddress4>
  <ConfigurationStatus>Complete</ConfigurationStatus>
  <VipSubnet>255.255.192.0</VipSubnet>
  <VipIPAddress>10.100.12.31</VipIPAddress>
  <InstallType>Expand</InstallType>
  <LocalChassis>MXQ55200C5</LocalChassis>
  <VCenterVersion>6.0.0</VCenterVersion>
  <NodesSelected>1</NodesSelected>
  <Locked>Locked</Locked>
  <GroupVersion>12.6</GroupVersion>
  <MACAddress1>00:50:56:BF:05:BC</MACAddress1>
  <VSAVersion1>12.6</VSAVersion1>
  <OrigVSA1>SVVSA-MXQ55200C2-</OrigVSA1>
  <ESXMoref1>host-38</ESXMoref1>
  <VCENTERClusterId>domain-c7</VCENTERClusterId>
  <ESXName1>srvr-2278.lhn.com</ESXName1>
  <QuorumWitness></QuorumWitness>
  <LocalManagementVMName>HPE-HC-mgmt-MXQ55200C5</LocalManagementVMName>
  <OV4VCURI>https://hpe-hc-mgmt:3504</OV4VCURI>
  <NodeChassis1>MXQ55200C2</NodeChassis1>
  <NodePosition1>Node01</NodePosition1>
  <NodeVM1>HPE-HC-mgmt-MXQ55200C2</NodeVM1>
  <ESXV6IP1>fe80::250:56ff:fe6b:1092</ESXV6IP1>
  <SystemType1>ProLiant DL380 Gen9</SystemType1>
  <ProductID1>P9D74A</ProductID1>
  <EONNumber1>PRPHNXVSA1</EONNumber1>
  <FullPSF1>$PSF?W=P9D74A#001:H261NP0004+L=PRPHNXVSA1+S=P9D85A</FullPSF1>
  <VSACapacity1>7994843992</VSACapacity1>
  <VSACluster1>HPE-HyperConv-274-Storage</VSACluster1>
  <VSA1>SVVSA-MXQ55200C2</VSA1>
  <ChassisESXVersion1>6.0.0</ChassisESXVersion1>
  <ChassisVSAVersion1>12.6</ChassisVSAVersion1>
  <ChassisGenVersion1>9.0</ChassisGenVersion1>
</Config>
6. Replace the XML node data as per the following table (except for item 19, ESXMoref1):
1 vCenter IP Address
2 vCenter Username
3 vCenter ClusterName
4 vCenter Datacenter
5 vCenter Location
6 ESXi Subnet
7 ESXi Gateway
8 ESXi Vlan
9 VMotion Subnet
10 VMotion Vlan
11 VSA Subnet
12 VSA Gateway
13 VSA Vlan
14 VSA Username
15 Expanded Node IP Address
16 ESXi Username
17 Expanded Node IP Address
18 Expanded Node IP Address
19 ESXMoref1 (see step 7 below for how to get this value)
20 Server number from iLO
21 Server Serial number from iLO
22 VSA Name
7. The following steps capture the ESXMoref1 host information of the expansion node using the vCenter managed object browser (MOB).
a. By default the MOB is disabled on the host. Enable it with these steps:
i. Browse to the host in the vSphere Web Client navigator's Hosts and Clusters view.
ii. Select the Configure tab.
iii. Under System, select Advanced System Settings.
iv. Click Edit…
v. Select the Config.HostAgent.plugins.solo.enableMob option to enable the MOB. You can find the setting quickly by pasting the option name into the Filter field.
b. Get the local datastore name associated with the expansion node.
i. With the expansion host selected, go to the Datastores tab.
ii. Take note of the name of the local datastore. It will be named "datastore1 (n)".
c. Get the host number information from the mob of vCenter.
i. Point the browser to https://<vcenter_IPaddress>/mob.
ii. Log in with the vCenter administrator user.
iii. Select the content value hyperlink.
iv. Select the rootFolder value hyperlink, i.e. group-d1.
v. Under childEntity property, select the hyperlink value associated with the datacenter used by the HC380 cluster.
vi. Under the datastore property, select the hyperlink value associated with the local datastore of the expansion host you obtained in step 7b above.
vii. On the host property, the hyperlink value "host-n" is the value for ESXMoref1 of the expansion host in the deployment XML file.
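The MOB navigation in step 7 can be partly scripted. As a hedged sketch (not an HPE-documented method): save the datastore's MOB page HTML, for example with curl against https://<vcenter_IPaddress>/mob, and pull out the host-<n> reference. The only assumption is that the host property appears as a host-<n> link in the page source:

```shell
# Sketch: extract the first "host-<n>" managed object reference from saved
# vCenter MOB page HTML. The HTML shape is an assumption for illustration.
extract_host_moref() {
  # Reads MOB page HTML on stdin; prints e.g. "host-38".
  grep -o 'host-[0-9][0-9]*' | head -n 1
}
```

Example use (the ?moid= query parameter and credentials are placeholders): curl -sk -u '<vcenter_admin>' 'https://<vcenter_IPaddress>/mob/?moid=<datastore_moid>' | extract_host_moref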
Create Deploymentxx.xml file with an automated script
Procedure
1. SSH to the node and capture host information required for the XML file.
a. A sample shell script, esx-cmd.sh, is provided in the Appendix.
b. Paste the script into Notepad and save it as esx-cmd.sh.
c. Copy the file to the root directory of the node.
d. Execute the file:
i. Run the command chmod 777 esx-cmd.sh
ii. Run the command ./esx-cmd.sh
This generates the file "myvalues.txt".
e. Capture the content of the script's output file, myvalues.txt.
2. Capture the ESXi host information of the expansion node from vCenter managed object browser (mob).
a. By default the MOB is disabled on the host. Enable it with these steps:
b. Browse to the host in the vSphere Web Client navigator’s Host and Clusters view.
c. Select the Configure tab.
d. Under System, select Advanced System Settings.
e. Click Edit…
f. Select the Config.HostAgent.plugins.solo.enableMob option to enable the MOB. You can find the setting quickly by pasting the option name into the Filter field.
g. Get the local datastore name associated with the expansion node.
i. With the expansion host selected, go to the Datastores tab.
ii. Take note of the name of the local datastore. It will be named "datastore1 (n)".
h. Get the host number information from the mob of vCenter
i. Point the browser to https://<vcenter_IPaddress>/mob
ii. Log in with the vCenter administrator user
iii. Select the content value hyperlink
iv. Select the rootFolder value hyperlink, i.e. group-d1
v. Under childEntity property, select the hyperlink value associated with the datacenter used by the HC380 cluster.
vi. Under the datastore property, select the hyperlink value associated with the local datastore of the expansion host you obtained in step 2g above.
vii. On the host property, the hyperlink value “host-n” is the value associated with ESXMoref1 of the expansion host on the deployment xml file.
3. Create a PowerShell script named Create-XML.ps1; its content is provided in the Appendix.
4. Edit the Create-XML.ps1 script.
5. Cut and paste the contents of myvalues.txt from step 1(e).
6. Provide the value obtained in step 2(h)(vii) for ESXMoref1 in the Create-XML.ps1 file.
7. Run the Create-XML.ps1 script. It creates a Deploymentxx.xml file in the <DESTINATION> folder under the config directory:
C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION-NAME>\Deploymentxx.xml
8. Verify that Deploymentxx.xml was created.
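The transformation that Create-XML.ps1 performs (turning captured values into Deployment<n>.xml elements) can be illustrated with a minimal sketch. The real script is provided in the Appendix; the KEY=VALUE input format assumed here is only for illustration, not the actual myvalues.txt format:

```shell
# Illustrative only: assumes input lines of the form KEY=VALUE and emits
# one <KEY>VALUE</KEY> element per line, the shape used in Deployment<n>.xml.
values_to_xml() {
  sed -n 's/^\([A-Za-z0-9]*\)=\(.*\)$/<\1>\2<\/\1>/p'
}

# Example: printf 'ESXGateway=10.100.0.1\n' | values_to_xml
# prints: <ESXGateway>10.100.0.1</ESXGateway>
```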
Execute PostDeployment PowerShell script on Management VM
Procedure:
1. Open a PowerShell session.
2. Change directory to:
cd C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment
3. Run the PostDeployment script:
.\PostDeployment -manifestFilename C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\<DESTINATION-NAME>\Deployment<n>.xml
4. Close the PowerShell session before proceeding with the next expanded node.
NOTE: If any nodes were expanded before migration, back up the Deployment<n>.xml files and execute the PostDeployment script for each Deployment<n>.xml file, one after another. Do not attempt to use the same PowerShell session for multiple expansion XML files; doing so may result in an incorrect configuration, and the expanded node will not be reflected in the Management UI.
5. The HC 380 UI pop-up will launch.
6. Provide the Management UI VM-related information like IP Address, User Name & Password.
7. Once the node is added successfully, the following pop-up screen will appear.
8. Click OK.
9. Check expanded node in the Management UI Settings tab.
10. Configure the iLO of the expanded HC 380 nodes on the Management UI.
11. Click the Submit button to complete the configuration.
12. Click the Dashboard button to log in to the Management UI.
13. Log in to iLO and verify that the HPE OneView tab is visible in iLO.
Migrate NVIDIA Grid Manager Post ESXi Upgrade
NOTE: Applicable only to VDI setups with NVIDIA GRID GPUs.
Procedure:
NOTE: The following steps 1 to 9 should be performed on one node at a time. Do not power off or upgrade the next node until the current node completes successfully. These steps can be ignored if you continue using ESXi 6.0 on the cluster nodes.
1. Migrate the Management VM to a Different Host.
2. Migrate User VMs to a Different Host.
3. Power off the StoreVirtual VSA VM using HPE StoreVirtual CMC.
Important: Hewlett Packard Enterprise strongly recommends that you cleanly shut down the StoreVirtual VSA on the VMware vSphere ESXi host prior to node migration.
4. Put the ESXi host in maintenance mode.
5. Uninstall the NVIDIA GPUModeSwitch utility, if it was installed, using the following command:
esxcli software vib remove -n NVIDIA_VMware_ESXi_6.0_GpuModeSwitch_Drive
6. Reboot the system once the VIB uninstall completes successfully.
7. Uninstall the 6.0 NVIDIA GRID manager VIB with the following command:
esxcli software vib remove -n NVIDIA_vGPU_VMware_ESXi_6.0_Host_Driver
8. Reboot the system once the VIB uninstall completes successfully.
9. Copy the 6.5 NVIDIA GRID manager host driver VIB named below to the targeted ESXi host via WinSCP: VMware_ESXi_6.5_Host_Driver_3xx.xx-1OEM.650.0.0.xxxxxxx.vib
10. Install the new VIB using the esxcli command:
esxcli software vib install -v <path_to_the_nvidia_host_driver_for_VMware_ESXi_6.5>
11. Once the installation is complete, reboot the host.
12. Run the following command to verify that the correct VIB was installed on the host.
13. Verify the Installation of the NVIDIA GRID Package.
14. After the ESXi host has rebooted, verify the installation of the GRID package for vSphere by performing the following steps:
a. Verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules:
[root@esxi:~] vmkload_mod -l | grep nvidia
nvidia 5 8420
b. If the NVIDIA driver is not listed in the output, check dmesg for any load-time errors reported by the driver.
c. Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command.
d. The nvidia-smi command is described in more detail in the NVIDIA System Management Interface (nvidia-smi) documentation.
e. Running the nvidia-smi command should produce a listing of the GPUs in your platform.
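Steps 14a and 14c above can be wrapped into a single scripted check run on the ESXi host over SSH. The helper function name is illustrative; vmkload_mod and nvidia-smi are the commands the guide itself uses:

```shell
# Sketch of the verification in steps 14a and 14c. The helper only parses
# text, so it can also be exercised against sample output off the host.
nvidia_module_loaded() {
  # Reads `vmkload_mod -l` output on stdin; succeeds if the nvidia
  # kernel module appears in the loaded-module list.
  grep -q '^nvidia[[:space:]]'
}

# On the ESXi host you would run:
#   vmkload_mod -l | nvidia_module_loaded && nvidia-smi
```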
15. Take the VMware vSphere ESXi server out of maintenance mode.
16. Power on the StoreVirtual VSA VM.
Refer to the link below for the NVIDIA GRID Manager User Guide, which explains the steps to configure the graphics cards:
http://images.nvidia.com/content/grid/pdf/GRID-vGPU-User-Guide.pdf
System Recovery
A system migrated to the HC 380 1.1 Update 2 vSphere 6.5 version cannot be directly reset back to that version. You must first follow the steps outlined in the "USB-based node recovery or system reset" procedure in the HPE Hyper Converged 380 User Guide (http://www.hpe.com/support/hc380ugen).
The "USB-based node recovery or system reset" procedure restores the system to vSphere 6.0. Once restored, follow the steps outlined in this guide before restoring any tenant VMs to the system.
If you have concerns about performing this process, contact your HPE sales representative and
ask about HPE Pointnext services assistance.
If HPE Hyper Converged 380 1.1 or 1.0 is configured on your system it must first be upgraded
to 1.1 Update 2 before following the steps in the “VMware vSphere 6.5 Migration Guide for HPE
Hyper Converged 380 1.1 Update 2”.
Troubleshooting
The following section describes post-migration troubleshooting topics:
1. Fetching of support bundle fails from HPE OneView InstantOn
2. Accessing VM Console from Management UI fails
3. Management UI unable to configure HPE OneView IP after migration
4. VM templates are not visible in vCenter after migrating hosts from vCenter 6.0 to 6.5
5. VMs are not visible in the Management UI VM after redeployment of the Management UI and HPE OneView VM
6. After the ESXi 6.5 upgrade, the host is red flagged and vCenter does not communicate with the host
7. Error message on the ESXi host after the ESXi 6.5u1 upgrade: "System logs on host are not stored in persistent storage"
8. HPE OneView InstantOn summary page is not functional after migration
9. Datastore expansion operation will take a while
10. ADU log collection command is no longer functional
11. VM vending operations from the Management UI may fail in case of vCenter failover in a vCenter High Availability environment
12. Issue downloading the upgrade manifest or all upgrade files while updating HPE StoreVirtual CMC and VSA to 12.7
13. HPE OneView InstantOn takes a long time to complete installation on the Management VM
Fetching of support bundle fails from HPE OneView
InstantOn
While extracting a support bundle from HPE OneView InstantOn, the vCenter support bundle extraction operation fails with errors as shown in the following screenshots:
Workaround
The logs can be collected manually using the following procedure.
Manual Log Collection:
Always generate the log file from the Management VM.
Downloading the HPE OneView InstantOn log file
NOTE: These files should be zipped into a single file for easier collection and transmission.
Procedure
1. Open File Explorer and select the C:\ drive in the left navigation pane.
2. Select the View menu and then select Hidden items in the Show/hide section.
3. In the left navigation pane, navigate to the C:\ProgramData\Hewlett-Packard\StoreVirtual directory.
4. Select both the InstantOn directory and the HPCS200_management_version.txt file by pressing and holding the Shift key while selecting both items.
5. Right-click the selection and select Send to > Compressed (zipped) folder.
A new ZIP file appears in the C:\ProgramData\Hewlett-Packard\StoreVirtual directory. This ZIP file contains the HPE OneView InstantOn log files.
Downloading the StoreVirtual CMC Service Bundle logs
Procedure
1. Select the HP StoreVirtual Centralized Management Console (CMC) icon from the Windows start menu.
2. In the CMC navigation window, select the management group with the prefix HP-HyperConv and log in using any of the following methods:
a. Double-click the management group.
b. Open the Management Group Tasks menu and select Log in to Management Group. (You can also open this menu by right-clicking the management group.)
c. Click any of the Log in to view links on the Details tab.
3. Enter the user name and password established during installation of HPE OneView InstantOn and click Log In.
4. Once logged in, select the management group with the prefix HP-HyperConv again. Right-click the management group to show the management group task menu.
5. Select Export Management Group Support Bundle.
6. You will be prompted to select the directory for saving the support bundle ZIP file. The default location is C:\Users\Administrator\Documents; you may also enter a different location.
7. To start exporting the support bundle to the designated location, click Save. The CMC will confirm a successful export when complete.
Downloading HPE OneView for vCenter logs for 7.8.X
Procedure
1. In the Windows Apps menu, select the Support Tool icon in the HPE OneView for VMware vCenter section. The HPE OneView for VMware vCenter Support Tool window opens.
2. Click Run Tests.
3. Click Zip Logs. Enter the desired location to save the SupportLogs.zip file.
Downloading HPE OneView for vCenter logs for 8.3.X
Procedure
1. Open the IP address of HPE OneView for vCenter 8.x in a web browser and log in with the credentials you provided during the installation of HPE OneView for vCenter.
2. Click HPE OneView for VMware vCenter dropdown and select Settings tab under Administration.
3. Click Log Collection.
4. Provide the Name and Description, and click Generate.
5. Once it is generated, click the Download button.
Downloading VMware vCenter logs
Procedure
1. From the Windows Start menu, select the Generate vCenter Server log bundle – Extended icon.
2. A DOS command window titled Generate vCenter Server log bundle – Extended opens and remains open while the support bundle is generated. It closes automatically when the bundle completes.
3. A folder icon appears on your Windows desktop. The folder is titled vcsupport-<date/time>.
When the support bundle is complete, this file contains the necessary log files.
Manually collecting Virgo logs
NOTE: Virgo logs only need to be collected from a remote vCenter setup.
Procedure
1. Open File Explorer and select the C:\ drive in the left navigation pane.
2. From the File or Organize menu, select Folder and search options.
3. Click the View tab.
4. Select Show hidden files, folders, and drives.
5. In the left navigation pane, navigate to one of the following directories.
a. vCenter 6.0:
C:\ProgramData\VMware\vCenterServer\logs\vsphere-client
b. vCenter 6.5:
C:\ProgramData\VMware\vCenterServer\logs\vsphere-client
6. Right-click the logs directory and select Send to > Compressed (zipped) folder.
A ZIP file, which contains the Virgo log files, is created. The new ZIP file appears in the
C:\ProgramData\VMware\vCenterServer\logs\vsphere-client directory for vCenter 6.5.
Manually collecting Management UI logs
1. Open the IP address of the Management UI VM in a web browser and log in with the credentials you provided during the installation of the Management UI VM.
2. Go to Settings and select Create support dump.
3. Once the support dump is created, click Download Support dump.
Accessing VM Console from Management UI fails
If the VM console does not open via the Management UI, perform the following steps.
Workaround
1. The VM console can be accessed using the vCenter Web client.
a. Right-click the VM.
b. Select Open Console (as shown in the screenshots below).
Management UI unable to configure HPE OneView IP after migration
The HPE OneView IP cannot be configured in the Identity settings during the initial setup of the HC 380 UI. As a result, HC 380 cannot be configured and added in the Management UI.
Workaround
1. While configuring the IP for HPE OneView, use a different available (free) IP.
2. Once properly configured, this IP can later be changed back to the original IP.
VM templates are not visible in vCenter after migrating hosts from vCenter 6.0 to 6.5
1. Launch the vSphere Web Client and connect to the remote vCenter.
2. Go to Templates.
3. Verify that the VM template deployed using vCenter 6.0 is visible.
4. If it is not visible, follow the workaround:
Workaround:
1. Launch the vSphere Web Client and connect to the remote vCenter.
2. Go to Datastore and select the datastore on which the template is deployed.
3. Go to Files > <Template Name>.
4. Right-click <Template Name>.vmtx.
5. Select Register VM.
6. When the following pop-up displays, click Finish.
7. The VM template is re-registered and visible under the template icon.
8. Verify that the VM template is visible under Images in the Management UI VM.
VMs are not visible in Management UI VM after redeployment of Management UI and HPE OneView VM
NOTE: Deployed VMs should not reside on the local datastore or the VSAManagement datastore (except the Mgmt VM, Management UI, and HPE OneView VM).
Cause:
The VM gets moved from the root directory to the "Discovered Virtual Machine" folder.
How to verify:
1. Log in to the redeployed Management UI VM.
2. Go to Virtual Machines.
3. Verify that previously deployed VMs are visible.
4. If not, follow the workaround below.
Workaround:
1. Launch the vSphere Web Client and connect to the remote vCenter.
2. Go to Templates > Discovered virtual machine.
3. Select a virtual machine that was deployed previously.
4. Right-click the virtual machine and select Move To.
5. Select the datacenter.
6. Click OK.
7. The VM is moved to the root directory.
8. Log in to the Management UI VM.
9. Go to Virtual Machines.
10. Verify that the previously deployed VM is visible.
11. Repeat steps 3 through 10 for every other virtual machine that is present under the "Discovered virtual machine" folder and is not visible in the redeployed Management UI VM.
After ESXi 6.5 upgrade, host is red-flagged and vCenter does not communicate with the host
Cause:
The host is in the Disconnected state after the ESXi 6.5 U1 upgrade.
1. Log in to the vCenter Web Client.
2. Go to Hosts and Clusters.
3. Verify whether the host is in the Disconnected state.
4. If it is, follow the workaround.
Workaround:
1. Log in to the vCenter Web Client.
2. Go to Hosts and Clusters.
3. Select the host that is in the Disconnected state.
4. Right-click the host and select Connection > Connect.
Error message on the ESXi host after ESXi 6.5 U1 upgrade
"System logs on host are not stored in persistent storage" error
Cause:
The host shows an error message after the ESXi 6.5 U1 upgrade.
1. Log in to the vCenter Web Client.
2. Go to Hosts and Clusters.
3. Select Host > Summary tab.
4. Verify whether the error message "System logs on host are not stored in persistent storage" displays, as shown in the screenshot below.
5. If it does, follow the workaround.
Workaround:
1. SSH to the host.
2. List the VIB elx-esx-libelxima.so.
3. Remove the VIB from the host by running the command:
esxcli software vib remove -n elx-esx-libelxima.so
4. Reboot the host.
5. Verify that the error message no longer displays on the ESXi host.
HPE OneView InstantOn summary page is not functional
After the XML files are modified for expansion, do not use the HPE OneView InstantOn summary page; it is not functional after migration.
Datastore expansion operation takes a while
A datastore expansion operation may take up to 3 minutes to update. Wait for the expansion operation to complete.
ADU log collection command is no longer functional
The ADU log collection command is not functional, so ADU logs cannot be collected from the HPE-HC Management VM. Use iLO to collect ADU logs.
VM vending operations from Management UI may fail after vCenter failover in a vCenter High Availability environment
VM vending operations, such as VM creation and deletion, may fail from the Management UI in the event of a vCenter failover in a vCenter High Availability environment. A Management UI restart is required after the vCenter failover to resolve VM vending issues.
1. Log in to the Management UI.
2. Go to Settings.
3. Click Restart, located at the right side of the panel.
Issue downloading the upgrade manifest or all upgrade files while updating HPE StoreVirtual CMC and VSA to 12.7
StoreVirtual upgrades and patches can also be accessed using the HTTP protocol. Use the following workaround to re-enable Online Upgrades over an HTTP connection.
Workaround 1
Procedure
1. Identify the CMC downloads folder. This can be found in the CMC Help menu under Preferences > Upgrades.
The default download folder is:
C:\Program Files (x86)\HP\StoreVirtual\UI\downloads
2. Update the CmcUpgradePreference.upgradeUrl setting in the preferences.txt file:
a. Close your StoreVirtual Centralized Management Console (CMC).
NOTE: This setting change will not take effect if the CMC remains open during the following procedure.
b. Locate the preferences.txt file in the .storage_system folder of the user’s home directory.
C:\Users\user_name\.storage_system\preferences.txt
c. Make a backup copy of the preferences.txt file (for example, to preferences_old.txt).
d. Open preferences.txt in a text editor.
e. Find the line for the CmcUpgradePreference.upgradeUrl setting.
f. Replace the entire line with the following single line:
CmcUpgradePreference.upgradeUrl=http://ftp.hp.com/pub/hp_LeftHandOS/
g. Save the changes and close the file.
This new setting will apply only to your user profile.
NOTE: Repeat this procedure to edit the preferences.txt file in the user’s home directory for each CMC user that must perform upgrades.
3. With the CMC still closed, update the upgrade manifest:
a. Delete all files in the CMC downloads folder identified above.
b. Start the CMC.
A new upgrades.xml manifest file is automatically downloaded to the CMC downloads folder.
c. Using a text editor, open the upgrades.xml file and edit the FTP section as follows:
<DownloadURLs>
  <URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>
  <URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>
  <URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>
  <URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>
  <URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>
  <URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>
</DownloadURLs>
d. Save the changes and close the upgrades.xml file.
e. Close the CMC and reopen it.
f. Open the upgrades.xml file again and verify the above changes. Close the file.
g. Using the CMC, log into the desired management group. From the Tasks menu, select Download All Upgrade Files to begin the upgrade process.
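If you copy the downloaded manifest to a Linux or macOS workstation, steps c through f can also be sanity-checked with a small script. The sketch below assumes the <DownloadURLs> layout shown above and builds an illustrative sample file; the real upgrades.xml downloaded by the CMC is larger:

```shell
cd "$(mktemp -d)"

# Hypothetical sample manifest for illustration only; the real file
# downloaded by the CMC contains more sections.
printf '<DownloadURLs>\n  <URL>ftp://ftp.hp.com/pub/hp_LeftHandOS/</URL>\n</DownloadURLs>\n' > upgrades.xml

# Rewrite any ftp:// download URL to the HTTP mirror from the workaround
# above, keeping a .bak copy of the original file.
sed -i.bak 's|<URL>ftp://[^<]*</URL>|<URL>http://ftp.hp.com/pub/hp_LeftHandOS/</URL>|' upgrades.xml

# Confirm the edit took effect
grep 'http://ftp.hp.com/pub/hp_LeftHandOS/' upgrades.xml
```

This only automates the text edit; opening, verifying, and downloading the upgrade files still happens in the CMC as described above.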
Workaround 2
Download the CMC and VSA updates component from the HPE Software Depot.
HPE OneView InstantOn takes a long time to complete installation on the Management VM
The HPE OneView InstantOn InstallShield installer takes a long time to complete on a Management VM that does not have access to the Internet.
Cause:
Microsoft Windows tries to validate the HPE publisher certificate's Certificate Revocation List (CRL), located at http://crl.comodoca.com/COMODORSACodeSigningCA.crl. On a system that cannot access that CRL URL, the validation process blocks the installer until it times out after 10 minutes.
Workaround:
1. Go to Control Panel > Internet Options > Advanced.
2. Clear Check for Publisher's certificate revocation, and then run the OVIO installer.
Appendix
Sample Deployment XML Automation Scripts
esx-cmd.sh
This is a sample shell script that you can run on the ESXi host to collect the data used in the new deployment XML file for the expansion node. The results of this script are used by the sample PowerShell script to generate the deployment XML file.
#!/bin/sh
# ************************* SAMPLE SHELL SCRIPT *************************
entity=$(esxcli network ip interface ipv4 get | grep vmk3 | awk '{print $2}')
echo "\$ESXIP1=\"$entity\"" > myvalues.txt
entity=$(esxcli network ip interface ipv4 get | grep vmk2 | awk '{print $2}')
echo "\$VMotionIPStart=\"$entity\"" >> myvalues.txt
entity=$(esxcli network ip interface ipv4 get | grep vmk1 | awk '{print $2}')
echo "\$VSAIPStart=\"$entity\"" >> myvalues.txt
entity=$(esxcli network ip interface ipv6 address list | grep vmk3 | awk '{print $2}')
echo "\$ESXV6IP1=\"$entity\"" >> myvalues.txt
entity=$(/bin/smbiosDump | grep -m 1 Serial: | awk '{print $2}')
# returns with quotes
echo "\$NodeChassis1=$entity" >> myvalues.txt
entity1=$(/bin/smbiosDump | grep PSF | awk -F'+' '{print $2}' | awk -F'=' '{print $2}')
echo "\$VSALicense=\"$entity1\"" >> myvalues.txt
myid=$(vim-cmd vmsvc/getallvms | grep VSA | awk '{print $1}')
entity=$(vim-cmd vmsvc/device.getdevices $myid | grep -i macAddress | awk -F'=' '{print $2}' | sed 's/,//')
# returns with quotes
echo "\$MACAddress1=$entity" >> myvalues.txt
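To sanity-check the grep/awk pipelines in esx-cmd.sh without an ESXi host, they can be run against canned command output. The sketch below uses a hypothetical esxcli output layout (the columns and addresses are illustrative assumptions; verify them on your own host) to show how the vmk3 IPv4 address is extracted and written to myvalues.txt:

```shell
# Canned text resembling `esxcli network ip interface ipv4 get` output
# (illustrative only; the layout on a real host may differ).
sample='Name  IPv4 Address    IPv4 Netmask   Type
vmk0  10.10.1.5       255.255.255.0  STATIC
vmk3  192.168.42.101  255.255.255.0  STATIC'

# Same grep/awk reduction used in esx-cmd.sh, applied to the canned output
entity=$(printf '%s\n' "$sample" | grep vmk3 | awk '{print $2}')
echo "\$ESXIP1=\"$entity\"" > myvalues.txt
cat myvalues.txt
```

The resulting myvalues.txt line is already in the `#$ESXIP1="x.x.x.x"` form expected by the Create-VM.ps1 header below, so it can be pasted in directly.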
Create-VM.ps1
This sample PowerShell script creates the deployment XML file for an expansion node.
# ********************************* SAMPLE POWERSHELL SCRIPT *********************************
#
# Place this PowerShell script in the PostDeployment directory:
# C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\PostDeployment
# =============================== REQUIRED VALUES FOR XML FILE ===============================
# A sample shell script is provided to capture the information. SSH to the node,
# then run the shell script. Cut and paste the results here for quick and accurate entries.
#$ESXIP1="x.x.x.x"
#$VMotionIPStart="y.y.y.y"
#$VSAIPStart="z.z.z.z"
#$ESXV6IP1="xxxx::xxx:xxxx:xxxx:xxx"
#$NodeChassis1="SERIALNUM"
#$VSALicense="SAMPLEVSA1"
#$MACAddress1="xx:xx:xx:xx:xx:xx"
# Use value of ZERO to use the same value on Deployment1.xml
$VSACapacity1=0
# Optional field, only required by HPE OneView InstantOn
$ESXMoref1="host-380"
# ================================ NO EDITS BEYOND THIS POINT ================================
# Required values derived from the values above
$FullPSF1="$" + "PSF?W=P9D74A#001:" + $NodeChassis1 + "+L=" + $VSALicense + "+S=P9D85A"
$ESXIPStart=$ESXIP1
$ESXIPAddress1=$ESXIP1
$VMotionIPAddress1=$VMotionIPStart
$ESXName1="HPE-HC-" + $NodeChassis1
$NodeVM1="HPE-HC-mgmt-" + $NodeChassis1
$VSAVip=$VSAIPStart
$OrigVSA1="SVVSA-" + $NodeChassis1 + "-"
$VSA1="SVVSA-" + $NodeChassis1
#Set the VSA IP Address based on the VSAIPStart value above
$fixed=$VSAIPStart.Split('.')[0..2]
$fixed=$fixed -Join "."
$last=[int]($VSAIPStart.Split('.')[3])
$last=$last+1
$VSAIPAddress1=$fixed + "." + $last
$last=$last+1
$VSAIPAddress2=$fixed + "." + $last
$last=$last+1
$VSAIPAddress3=$fixed + "." + $last
# Get the Destination path value
$destpath="C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\DirectoryMap.xml"
$destxml= [xml] (Get-Content $destpath)
$Destination= $destxml.Config.InnerText
# Read the first deployment XML file
$cfgpath="C:\ProgramData\Hewlett-Packard\StoreVirtual\InstantOn\config\" + "$Destination" + "\"
$oldxmlpath=$cfgpath + "Deployment1.xml"
if (Test-Path $oldxmlpath) {
$oldxml= [xml] (Get-Content $oldxmlpath)
}
else {
Write-Host ""
Write-Host "FILE NOT FOUND! Deployment1.xml"
Write-Host ""
exit
}
# Set the Deployment file number based on the number of Deployment XML files in the directory
$xmlfilepath=$cfgpath + "Deployment*.xml"
$fileCount=(Get-ChildItem -Path $xmlfilepath | Measure-Object).Count
$Deploy_Number=$fileCount + 1
# Create the new deployment XML file
$encoding = [System.Text.Encoding]::UTF8
$xmlpath=$cfgpath + "Deployment" + $Deploy_Number + ".xml"
if (Test-Path $xmlpath) {
Write-Host ""
Write-Host "ERROR! $xmlpath file already exists. XML file number is out of sequence."
Write-Host ""
exit
}
$xmlWriter = New-Object System.Xml.XmlTextWriter($xmlpath,$encoding)
$xmlWriter.Formatting = 'Indented'
$xmlWriter.Indentation = 2
$xmlWriter.WriteStartDocument()
$xmlWriter.WriteStartElement('Config')
# Copy common content between the XML files
$entity = $oldxml.Config.VCENTERIP
$xmlWriter.WriteElementString("VCENTERIP",$entity)
$entity = $oldxml.Config.VCENTERUsername
$xmlWriter.WriteElementString("VCENTERUsername",$entity)
$entity = $oldxml.Config.VCENTERClusterName
$xmlWriter.WriteElementString("VCENTERClusterName",$entity)
$entity = $oldxml.Config.VCENTERDatacenter
$xmlWriter.WriteElementString("VCENTERDatacenter",$entity)
$entity = $oldxml.Config.VCenterLocation
$xmlWriter.WriteElementString("VCenterLocation",$entity)
$entity = $oldxml.Config.VCenterRemoteIP
$xmlWriter.WriteElementString("VCenterRemoteIP",$entity)
$entity = $oldxml.Config.EULA
$xmlWriter.WriteElementString("EULA",$entity)
$entity = $oldxml.Config.OV4VCEULA
$xmlWriter.WriteElementString("OV4VCEULA",$entity)
$entity = $oldxml.Config.VCENTEREULA
$xmlWriter.WriteElementString("VCENTEREULA",$entity)
$entity = $oldxml.Config.VSPHEREEULA
$xmlWriter.WriteElementString("VSPHEREEULA",$entity)
$entity = $oldxml.Config.VSPHERECLIEULA
$xmlWriter.WriteElementString("VSPHERECLIEULA",$entity)
$entity = $oldxml.Config.vCenterMD5
$xmlWriter.WriteElementString("vCenterMD5",$entity)
$entity = $oldxml.Config.vSphereMD5
$xmlWriter.WriteElementString("vSphereMD5",$entity)
$entity = $oldxml.Config.HPMD5
$xmlWriter.WriteElementString("HPMD5",$entity)
$entity = $oldxml.Config.OV4VCMD5
$xmlWriter.WriteElementString("OV4VCMD5",$entity)
$entity = $oldxml.Config.vSphereCliMD5
$xmlWriter.WriteElementString("vSphereCliMD5",$entity)
$entity = $oldxml.Config.ESXSubnet
$xmlWriter.WriteElementString("ESXSubnet",$entity)
$entity = $oldxml.Config.ESXGateway
$xmlWriter.WriteElementString("ESXGateway",$entity)
$entity = $oldxml.Config.ESXVlan
$xmlWriter.WriteElementString("ESXVlan",$entity)
$entity = $oldxml.Config.VMotionSubnet
$xmlWriter.WriteElementString("VMotionSubnet",$entity)
$entity = $oldxml.Config.VMotionVlan
$xmlWriter.WriteElementString("VMotionVlan",$entity)
$entity = $oldxml.Config.VSASubnet
$xmlWriter.WriteElementString("VSASubnet",$entity)
$entity = $oldxml.Config.VSAGateway
$xmlWriter.WriteElementString("VSAGateway",$entity)
$entity = $oldxml.Config.VSAVlan
$xmlWriter.WriteElementString("VSAVlan",$entity)
$entity = $oldxml.Config.VSAUsername
$xmlWriter.WriteElementString("VSAUsername",$entity)
$entity = $oldxml.Config.VSAVersion1
$xmlWriter.WriteElementString("VSAVersion1",$entity)
$entity = $oldxml.Config.VSACluster1
$xmlWriter.WriteElementString("VSACluster1",$entity)
if ($VSACapacity1 -eq 0) {
$entity = $oldxml.Config.VSACapacity1
$xmlWriter.WriteElementString("VSACapacity1",$entity)
} else {
$xmlWriter.WriteElementString("VSACapacity1",$VSACapacity1)
}
$entity = $oldxml.Config.ESXUsername
$xmlWriter.WriteElementString("ESXUsername",$entity)
$entity = $oldxml.Config.WarningMsgCount
$xmlWriter.WriteElementString("WarningMsgCount",$entity)
$entity = $oldxml.Config.ShowCluster
$xmlWriter.WriteElementString("ShowCluster",$entity)
$entity = $oldxml.Config.ReqSystemType
$xmlWriter.WriteElementString("ReqSystemType",$entity)
$entity = $oldxml.Config.ShowVCCredentials
$xmlWriter.WriteElementString("ShowVCCredentials",$entity)
$entity = $oldxml.Config.ShowConnectivity
$xmlWriter.WriteElementString("ShowConnectivity",$entity)
$entity = $oldxml.Config.ShowRemoteVC
$xmlWriter.WriteElementString("ShowRemoteVC",$entity)
$entity = $oldxml.Config.ConfigurationStatus
$xmlWriter.WriteElementString("ConfigurationStatus",$entity)
$entity = $oldxml.Config.VipSubnet
$xmlWriter.WriteElementString("VipSubnet",$entity)
$entity = $oldxml.Config.VipIPAddress
$xmlWriter.WriteElementString("VipIPAddress",$entity)
$entity = $oldxml.Config.LocalChassis
$xmlWriter.WriteElementString("LocalChassis",$entity)
$entity = $oldxml.Config.VCenterVersion
$xmlWriter.WriteElementString("VCenterVersion",$entity)
$entity = $oldxml.Config.Locked
$xmlWriter.WriteElementString("Locked",$entity)
$entity = $oldxml.Config.GroupVersion
$xmlWriter.WriteElementString("GroupVersion",$entity)
$entity = $oldxml.Config.VCENTERClusterId
$xmlWriter.WriteElementString("VCENTERClusterId",$entity)
$entity = $oldxml.Config.LocalManagementVMName
$xmlWriter.WriteElementString("LocalManagementVMName",$entity)
$entity = $oldxml.Config.OV4VCURI
$xmlWriter.WriteElementString("OV4VCURI",$entity)
$entity = $oldxml.Config.NodePosition1
$xmlWriter.WriteElementString("NodePosition1",$entity)
$entity = $oldxml.Config.SystemType1
$xmlWriter.WriteElementString("SystemType1",$entity)
$entity = $oldxml.Config.ProductID1
$xmlWriter.WriteElementString("ProductID1",$entity)
$entity = $oldxml.Config.EONNumber1
$xmlWriter.WriteElementString("EONNumber1",$entity)
$entity = $oldxml.Config.ChassisESXVersion1
$xmlWriter.WriteElementString("ChassisESXVersion1",$entity)
$entity = $oldxml.Config.ChassisVSAVersion1
$xmlWriter.WriteElementString("ChassisVSAVersion1",$entity)
$entity = $oldxml.Config.ChassisGenVersion1
$xmlWriter.WriteElementString("ChassisGenVersion1",$entity)
# Fixed values for one node expansion
$xmlWriter.WriteElementString("InstallType","Expand")
$xmlWriter.WriteElementString("VSAQuorumRequired","false")
$xmlWriter.WriteElementString("ESXIP2","192.168.42.102")
$xmlWriter.WriteElementString("ESXIP3","192.168.42.103")
$xmlWriter.WriteElementString("ESXIP4","192.168.42.104")
$xmlWriter.WriteElementString("NodesSelected", 1)
$xmlWriter.WriteElementString("VSAIPAddress4","")
$xmlWriter.WriteElementString("QuorumWitness","")
# The unique values of the expansion node
$xmlWriter.WriteElementString("ESXIP1",$ESXIP1)
$xmlWriter.WriteElementString("ESXIPStart",$ESXIPStart)
$xmlWriter.WriteElementString("ESXIPAddress1",$ESXIPAddress1)
$xmlWriter.WriteElementString("VMotionIPStart",$VMotionIPStart)
$xmlWriter.WriteElementString("ESXMoref1",$ESXMoref1)
$xmlWriter.WriteElementString("ESXName1",$ESXName1)
$xmlWriter.WriteElementString("NodeChassis1",$NodeChassis1)
$xmlWriter.WriteElementString("NodeVM1",$NodeVM1)
$xmlWriter.WriteElementString("ESXV6IP1",$ESXV6IP1)
$xmlWriter.WriteElementString("VMotionIPStart",$VMotionIPStart)
$xmlWriter.WriteElementString("VMotionIPAddress1",$VMotionIPAddress1)
$xmlWriter.WriteElementString("VSAIPStart",$VSAIPStart)
$xmlWriter.WriteElementString("VSAVip",$VSAVip)
$xmlWriter.WriteElementString("VSAIPAddress1",$VSAIPAddress1)
$xmlWriter.WriteElementString("VSAIPAddress2",$VSAIPAddress2)
$xmlWriter.WriteElementString("VSAIPAddress3",$VSAIPAddress3)
$xmlWriter.WriteElementString("MACAddress1",$MACAddress1)
$xmlWriter.WriteElementString("OrigVSA1",$OrigVSA1)
$xmlWriter.WriteElementString("VSACluster1",$VSACluster1)
$xmlWriter.WriteElementString("VSA1",$VSA1)
$xmlWriter.WriteElementString("FullPSF1",$FullPSF1)
# Close the document
$xmlWriter.WriteEndElement()
$xmlWriter.WriteEndDocument()
# Flush and close the writer (method calls require parentheses in PowerShell)
$xmlWriter.Flush()
$xmlWriter.Close()
Write-Host "$xmlpath file successfully created!"
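The VSA address derivation in Create-VM.ps1 (three consecutive addresses following $VSAIPStart) can be sanity-checked with an equivalent shell sketch; the starting address below is an illustrative placeholder, not a value from your deployment:

```shell
VSAIPStart="192.168.40.10"   # illustrative placeholder

# Split off the last octet and derive three consecutive addresses,
# mirroring the Split('.')/increment logic in Create-VM.ps1.
prefix=${VSAIPStart%.*}
last=${VSAIPStart##*.}
VSAIPAddress1="$prefix.$((last + 1))"
VSAIPAddress2="$prefix.$((last + 2))"
VSAIPAddress3="$prefix.$((last + 3))"
echo "$VSAIPAddress1 $VSAIPAddress2 $VSAIPAddress3"
```

Because the script simply increments the last octet, choose a $VSAIPStart whose last octet leaves room for three more addresses within the subnet.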
Support and other resources
Accessing Hewlett Packard Enterprise Support
For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website: http://www.hpe.com/assistance
To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website: http://www.hpe.com/support/hpesc
Information to collect
Technical support registration number (if applicable)
Product name, model or version, and serial number
Operating system name and version
Firmware version
Error messages
Product-specific reports and logs
Add-on products or components
Third-party products or components
Accessing updates
Some software products provide a mechanism for accessing software updates through the
product interface. Review your product documentation to identify the recommended software
update method.
To download product updates:
1. Hewlett Packard Enterprise Support Center
http://www.support.hpe.com
2. Hewlett Packard Enterprise Support Center: Software
http://www.hpe.com/support/downloads
3. Software Depot
http://www.hpe.com/support/softwaredepot
4. To subscribe to eNewsletters and alerts:
www.hpe.com/support/e-updates
5. To view and update your entitlements, and to link your contracts and warranties with your profile, go to the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT: Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise Support Center. You must have an HP Passport set up with relevant entitlements.
Customer self-repair
Hewlett Packard Enterprise customer self-repair (CSR) programs allow you to repair your
product. If a CSR part needs to be replaced, it will be shipped directly to you so that you can
install it at your convenience. Some parts do not qualify for CSR. Your Hewlett Packard
Enterprise authorized service provider will determine whether a repair can be accomplished by
CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
http://www.hpe.com/support/selfrepair
Remote support
Remote support is available with supported devices as part of your warranty or contractual
support agreement. It provides intelligent event diagnosis, and automatic, secure submission of
hardware event notifications to Hewlett Packard Enterprise, which will initiate a fast and
accurate resolution based on your product's service level. Hewlett Packard Enterprise strongly
recommends that you register your device for remote support.
If your product includes additional remote support details, use search to locate that information.
Remote support and Proactive Care information
HPE Get Connected: http://www.hpe.com/services/getconnected
HPE Proactive Care services: http://www.hpe.com/services/proactivecare
HPE Proactive Care service, supported products list: http://www.hpe.com/services/proactivecaresupportedproducts
HPE Proactive Care advanced service, supported products list: http://www.hpe.com/services/proactivecareadvancedsupportedproducts
Proactive Care customer information
Proactive Care central: http://www.hpe.com/services/proactivecarecentral
Proactive Care service activation: http://www.hpe.com/services/proactivecarecentralgetstarted
Warranty information
To view the warranty for your product, see the Safety and Compliance Information for Server,
Storage, Power, Networking, and Rack Products document, available at the Hewlett Packard
Enterprise Support Center:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts
Additional warranty information
HPE ProLiant and x86 Servers and Options: www.hpe.com/support/ProLiantServers-Warranties
HPE Enterprise Servers: www.hpe.com/support/EnterpriseServers-Warranties
HPE Storage Products: http://www.hpe.com/support/Storage-Warranties
HPE Networking Products: http://www.hpe.com/support/Networking-Warranties
Regulatory information
To view the regulatory information for your product, view the Safety and Compliance Information for
Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise
Support Center:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts
Additional regulatory information
Hewlett Packard Enterprise is committed to providing our customers with information about the chemical
substances in our products as needed to comply with legal requirements such as REACH (Regulation
EC No 1907/2006 of the European Parliament and the Council). A chemical information report for this
product can be found at:
www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance
data, including RoHS and REACH, see:
www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs,
product recycling, and energy efficiency, see:
www.hpe.com/info/environment
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To
help us improve the documentation, send any errors, suggestions, or comments to
Documentation Feedback ([email protected]). When submitting your feedback, include
the document title, part number, edition, and publication date located on the front cover of the
document. For online help content, include the product name, product version, help edition, and
publication date located on the legal notices page.