EMC VSPEX with EMC® VPLEX/VE™ for VMware® vSphere®
Design and Implementation Guide
Abstract
This document describes the EMC® VSPEX™ Proven Infrastructure solution for private cloud deployments with
EMC VPLEX/VE™, VMware® vSphere®, and EMC VNXe3200® Unified Storage System for up to 125 virtual
machines.
June, 2014
Copyright © 2014 EMC Corporation. All Rights Reserved. Published in the
USA.
Published June, 2014
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided “as is.” EMC Corporation
makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties
of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date
listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
VMware, vSphere, vCenter, ESXi, HA, DRS, and SRM are registered trademarks or trademarks of VMware, Inc. in the United States and/or
other jurisdictions. All other trademarks used herein are the property of their respective owners.
EMC® VSPEX™ with EMC® VPLEX/VE™ for VMware® vSphere™ Design
and Implementation Guide
Part Number H13193
Contents
1. EXECUTIVE SUMMARY
1.1 Audience
1.2 Business Needs
2. DOCUMENT PURPOSE
3. SOLUTION OVERVIEW
4. MOBILITY
5. AVAILABILITY
6. SOLUTION ARCHITECTURE
6.1 Overview
6.2 Solution configurations
7. VPLEX/VE KEY COMPONENTS
7.1 VPLEX/VE Management Extension for VMware vSphere
7.2 VPLEX/VE Configuration Manager Web Application
7.3 vSphere Deployment Concepts
7.4 VPLEX/VE Storage Concepts
7.5 VPLEX/VE Networking Concepts
8. SYSTEM REQUIREMENTS
8.1 VMware vSphere 5 requirements
8.2 Web interface support
8.3 Network requirements
8.4 Storage requirements
9. PRE-DEPLOYMENT PLANNING
9.1 vSphere stretched cluster requirements
9.2 ESXi Host Minimum requirements
9.3 Datastore minimum requirements
9.4 Networking requirements for deployment
9.5 Networking and Virtual Switch requirements
9.6 Storage Device Requirements
9.7 Security Certificate requirements
9.8 WAN COM (Inter-Site) Network requirements
10. VPLEX/VE PRE-CONFIGURATION WORKSHEET
10.1 Overview
10.2 OVF Deployment
10.3 Setup Wizard - Part One
10.4 ESXi Hosts
10.5 Network Setup - Virtual Switches and Ports
10.6 Security Certificates
10.7 Identify Storage Arrays for System Volumes
10.8 Setup Wizard - Part Two
10.9 WAN COM (Inter-Site) Network Configuration
10.10 Configuration Information for Cluster Witness
11. VPLEX/VE INSTALLATION AND CONFIGURATION
11.1 Deploying a VPLEX/VE vApp
11.2 Configuring VPLEX/VE using the Setup Wizard
11.3 Provisioning iSCSI Storage to vDirectors
11.4 Configuring VPLEX/VE Storage and WAN
11.5 Configuring the ESXi hosts to use the VPLEX/VE storage
11.6 Verifying the Solution
12. APPENDIX A - ADDITIONAL RECOMMENDATIONS
13. APPENDIX B - REFERENCES
13.1 EMC documentation
13.2 Other documentation
Table of Figures
Figure 1 - Application and Mobility Example
Figure 2 - VPLEX/VE Manager Solution Icon
Figure 3 - VPLEX/VE vCenter Inventory Action Menu
Figure 4 - VPLEX/VE Configuration Manager
Figure 5 - VPLEX/VE Initial OVF Deployment; One Host per Site
Figure 6 - VPLEX/VE Initial OVF Deployment onto Four Hosts per Site
Figure 7 - The VPLEX/VE Storage Virtualization Layers
Figure 8 - The VPLEX/VE Storage Virtualization Layers
Figure 9 - VPLEX/VE Sample Network Configuration
Figure 10 - Deploy OVF Template option
Figure 11 - The Select Source screen
Figure 12 - Review details screen
Figure 13 - Select Name and Folder
Figure 14 - Select Host and Cluster
Figure 15 - Select resource pool
Figure 16 - Select Datastore
Figure 17 - Select disk format
Figure 18 - Select networks
Figure 19 - Customize template
Figure 20 - Ready to complete
Figure 21 - vApp deployment completed
Figure 22 - Complete Configuration Worksheet
Figure 23 - Welcome to Phase I
Figure 24 - Enter vCenter credentials
Figure 25 - Enter site 2 vManagement server
Figure 26 - List of servers in the ESXi cluster
Figure 27 - Select hosts for VPLEX/VE servers
Figure 28 - Assign vDirectors to hosts
Figure 29 - Assign datastores
Figure 30 - Configure network
Figure 31 - Virtual switch connections at site 1
Figure 32 - Virtual switch connections at site 2
Figure 33 - Front-end IP configuration at site 1
Figure 34 - Back-end IP configuration at site 1
Figure 35 - Front-end IP configuration at site 2
Figure 36 - Back-end IP configuration at site 2
Figure 37 - Security certificates, site 1
Figure 38 - Security certificates, site 2
Figure 39 - Storage for VPLEX/VE
Figure 40 - Review and run
Figure 41 - Run Phase I
Figure 42 - Phase I completed successfully
Figure 43 - Unisphere for VNXe3200
Figure 44 - IQNs from VPLEX/VE
Figure 45 - Phase II Setup Wizard
Figure 46 - List of arrays
Figure 47 - Select Metavolumes
Figure 48 - Select Meta backup volumes
Figure 49 - Select logging volumes
Figure 50 - WAN COM configuration
Figure 51 - WAN COM configuration
Figure 52 - WAN COM configuration, site 2
Figure 53 - WAN COM configuration, site 2
Figure 54 - Review and Run
Figure 55 - Phase II completed successfully
Figure 56 - Create VMkernel virtual adapters
Figure 57 - Select VMkernel Network Adapter
Figure 58 - Select VPLEX/VE front-end port group
Figure 59 - Specify IP settings
Figure 60 - Two VMkernel virtual adapters added
Figure 61 - Bind iSCSI adapter to VMkernel adapters
Figure 62 - Select VMkernel adapter
Figure 63 - Bind storage adapter to both VMkernel adapters
Figure 64 - Dynamic discovery of iSCSI storage
Figure 65 - Enter iSCSI storage target
Figure 66 - Static addresses automatically discovered
Table of Tables
Table 1 - Solution hardware
Table 2 - Solution software
Table 3 - Profile characteristics
Table 4 - vDirector Naming Conventions
Table 5 - VPLEX/VE Pre-Config Information
Table 6 - Setup Wizard - Part One
Table 7 - ESXi Hosts
Table 8 - Network Setup
Table 9 - Security Certificates
Table 10 - Identify Arrays for System Volumes
Table 11 - System Volumes - site 1
Table 12 - System Volumes - site 2
Table 13 - WAN COM Network Configuration
Table 14 - VPLEX Witness Components
1. Executive Summary
Modern business growth requires increased application availability for continuous 24x7 operations. This requirement applies to large, datacenter-scale workloads as well as small, single-application workloads. By utilizing storage virtualization technologies, companies can lower costs and improve application availability and mobility. For smaller deployments where application availability is a requirement, EMC VPLEX Virtual Edition Standard (VPLEX/VE) is a new storage virtualization product in the EMC VPLEX family that provides increased availability and mobility to VMware virtual machines and the applications they run.
This document provides up-front software and hardware material lists, step-by-step guidance and worksheets, and verified deployment steps to implement a VPLEX/VE solution with a VSPEX Private Cloud built on the next-generation EMC VNXe3200 Unified Storage System, supporting up to 125 virtual machines.
EMC VPLEX/VE is a unique virtualized storage technology that federates
data located on heterogeneous storage systems, allowing the storage resources in multiple data centers to be pooled together and accessed
anywhere. Using the virtualization infrastructure provided by VMware vSphere, VPLEX/VE enables you to use the AccessAnywhere feature, which
provides cache-coherent active-active access to data across two VPLEX/VE sites within a geographical region, from a single management view.
VPLEX/VE requires four ESXi™ servers at each location. These servers host the vDirectors and other VPLEX/VE components. A vDirector is a Linux virtual machine running the storage virtualization software. The same servers may also run a portion or all of your Private Cloud, eliminating the need to purchase additional servers. Alternatively, you can leverage an existing virtualization infrastructure and the resource management capabilities provided by VMware by importing the VPLEX/VE servers into the same vCenter that manages your data center. After you deploy and configure VPLEX/VE, you can use its features from a plug-in installed on the VMware vCenter Server. This plug-in is the single gateway to all VPLEX/VE functionality.
VMware vSphere is simpler and more cost-effective than traditional environments and provides higher levels of availability for business-critical applications. With vSphere, organizations can easily increase the baseline level of availability provided for all applications, and can provide higher levels of availability more easily and cost-effectively. The revolutionary VMware vMotion™ (vMotion) capabilities in vSphere make it possible to perform planned maintenance with zero application downtime. VMware High Availability (HA) reduces unplanned downtime by leveraging multiple VMware ESX and VMware ESXi hosts configured as a cluster to provide automatic recovery from outages, as well as cost-effective high availability for applications running in virtual machines.
By leveraging distance bridging technology, VPLEX/VE builds on the
strengths of VMware HA to provide solutions that go beyond traditional “Disaster Recovery”. These solutions provide a new type of deployment
that achieves the absolute highest levels of continuous availability over
distance for today’s cloud environments. This white paper is designed to give technology decision-makers a deeper understanding of VPLEX/VE in
conjunction with VMware vSphere.
1.1 Audience
The readers of this document should have the necessary training and
background to install and configure VMware vSphere, EMC VNXe series storage systems, VPLEX/VE, and associated infrastructure as required by
this implementation. External references are provided where applicable, and readers should be familiar with these documents. After purchase, implementers of this solution should focus on the configuration guidelines of the solution validation phase and the appropriate references and appendices.
1.2 Business Needs
VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make informed decisions about the hypervisor, server, and networking layers.
Business applications are moving into consolidated compute, network, and
storage environments. EMC VSPEX Private Cloud using VMware reduces the complexity of configuring every component of a traditional deployment
model. The complexity of integration management is reduced while
maintaining the application design and implementation options.
Administration is unified, while process separation can be adequately controlled and monitored. The business needs for the VSPEX private cloud
for VMware architectures are listed as follows:
• Provide an end-to-end virtualization solution to use the capabilities of the unified infrastructure components.
• Provide a VSPEX Private Cloud Solution for VMware for efficiently virtualizing up to 125 reference virtual machines.
• Provide a reliable, flexible, and scalable reference design.
2. Document Purpose
This document is an initial introduction to using VPLEX/VE with the VSPEX proven architecture, an explanation of how to modify the architecture for specific engagements, and a set of instructions on how to effectively deploy and monitor the overall system.
This document applies to VSPEX deployed with EMC VPLEX/VE. The details
provided in this white paper are based on the following configurations:
• VPLEX/VE Version 2 or higher.
• ESXi clusters within 10 milliseconds (ms) round-trip latency of each other, for VMware HA.
• ESXi and vSphere 5.1 or later.
• A proven VSPEX solution with a pair of iSCSI EMC VNXe3200 Unified Storage Systems.
3. Solution Overview
The EMC VSPEX Private Cloud for VMware vSphere, coupled with EMC VPLEX/VE, represents the next-generation architecture for data mobility and information access. This architecture is based on EMC's 20+ years of expertise in designing, implementing, and perfecting enterprise-class intelligent cache and distributed data protection solutions. The combined VSPEX VPLEX/VE solution provides a complete system architecture capable of supporting up to 125 reference virtual machines with a redundant server and network topology and highly available storage within or across geographically dispersed datacenters. A reference virtual machine runs a representative customer reference workload; the characteristics are described in the EMC document EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 125 Virtual Machines.
VPLEX/VE federates data located on heterogeneous storage arrays to
create dynamic, distributed, highly available data centers. With VPLEX/VE, you can transform the delivery of IT to a flexible, efficient, reliable, and
resilient service. VPLEX/VE moves data non-disruptively between storage arrays without any downtime for the host. VPLEX/VE moves data
transparently and the virtual volumes retain the same identities and the same access points to the host. The host does not need to be reconfigured.
In a virtualized environment, VPLEX/VE delivers:
• Mobility: The ability to move applications and data across different storage installations, whether across a campus or within a geographical region.
• Availability: The ability to create a high-availability storage infrastructure across these same varied geographies with unmatched resiliency.
4. Mobility
EMC VPLEX/VE enables you to move data located on the storage arrays
non-disruptively. Using the VMware vMotion features, you can move the application virtual machines that use the virtual volumes of VPLEX/VE
between two sites. VPLEX/VE simplifies your data center management and eliminates outages when you migrate data or refresh technology. With
VPLEX/VE, you can:
• Provide virtualized workload mobility to your applications.
• Move your data and perform technology refresh tasks using two-way data exchange between locations.
• Create an active-active configuration for the active use of resources at both sites.
• Provide instant access to data between data centers.
• Combine VPLEX/VE with server virtualization to move and relocate virtual machines and their corresponding applications and data without downtime.
• Relocate, share, and balance resources between sites, within a campus or between data centers.
• Move data between sites, over distance, while the data remains online and available during the move.
• Use the storage and compute resources at either VPLEX/VE site to automatically balance loads.
• Move extents from a very busy storage volume shared by other busy extents.
• Defragment a storage volume to create more contiguous free space.
• Migrate data between dissimilar arrays.
Note: VPLEX/VE uses synchronous replication between sites; the sites may be separated by up to 10 ms of round-trip latency (RTT).
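Before committing to a two-site design, it is worth confirming that measured inter-site latency stays under the limit in the note above. The helper below is a minimal illustrative sketch (the function name and sample values are ours; only the 10 ms threshold comes from this guide):

```python
# Check measured inter-site round-trip times against the VPLEX/VE
# synchronous-replication limit of 10 ms RTT.
MAX_RTT_MS = 10.0

def sites_qualify(rtt_samples_ms):
    """Return True only if every measured RTT sample is within the limit."""
    return all(rtt <= MAX_RTT_MS for rtt in rtt_samples_ms)

print(sites_qualify([2.1, 3.4, 2.8]))   # True: all samples within limit
print(sites_qualify([2.1, 12.6]))       # False: one sample exceeds 10 ms
```

In practice, RTT samples would come from sustained ping or network-monitoring measurements between the two sites, including peak-load periods.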
During a VPLEX/VE Mobility operation, any jobs in progress can be paused or stopped without affecting data integrity. Data Mobility creates a mirror of the source and target devices, allowing the user to commit or cancel the job without affecting the actual data. A record of all mobility jobs is maintained until the user purges the list for organizational purposes.
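The lifecycle just described (pause or resume while mirroring, then commit or cancel) can be pictured as a small state machine. This is an illustrative model only, not VPLEX/VE's actual implementation; class and state names are ours:

```python
# Minimal state-machine sketch of a Data Mobility job's lifecycle:
# a job can be paused and resumed while in progress, and is finally
# committed or cancelled. Source data is untouched until commit.
class MobilityJob:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.state = "in-progress"   # mirroring source -> target

    def pause(self):
        if self.state == "in-progress":
            self.state = "paused"

    def resume(self):
        if self.state == "paused":
            self.state = "in-progress"

    def commit(self):
        # The target becomes the live leg; the job stays in the record list.
        self.state = "committed"

    def cancel(self):
        # The mirror is discarded; the source data is unaffected.
        self.state = "cancelled"

job = MobilityJob("array-1/vol-7", "array-2/vol-3")
job.pause()
job.resume()
job.commit()
print(job.state)   # committed
```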
Figure 1 - Application and Mobility Example
5. Availability
The AccessAnywhere feature of VPLEX/VE ensures cache-coherent active-active access to data across VPLEX/VE sites. The features of VPLEX/VE
allow the highest possible resiliency in the event of a site outage. The data is protected in the event of disasters or failure of components in your data
centers. With VPLEX/VE, the applications can withstand failures of storage arrays and site components. The VPLEX/VE components are not disrupted
by a sequential failure of up to two vDirectors in a site. The failure of a VPLEX/VE site or an inter-site partition is tolerated to the extent that the
site configured with site bias continues to access the storage infrastructure. This essentially means that if a storage array is unavailable, another
storage array in the system continues to serve the I/O.
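The site-bias behavior described above determines which site keeps serving I/O during an inter-site partition. The following is a simplified illustration of that rule (function and site names are ours, not VPLEX/VE terminology):

```python
# Sketch of the site-bias rule: during an inter-site partition, only the
# site configured with bias continues I/O, which avoids a split-brain
# where both sites write to the same distributed volume independently.
def site_continues_io(site, bias_site, partitioned):
    if not partitioned:
        return True           # both sites active (AccessAnywhere)
    return site == bias_site  # only the preferred site keeps access

print(site_continues_io("site-1", "site-1", True))    # True
print(site_continues_io("site-2", "site-1", True))    # False
print(site_continues_io("site-2", "site-1", False))   # True
```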
6. Solution Architecture
6.1 Overview
This VSPEX solution for VMware vSphere Private Cloud with EMC VNXe3200 Unified Storage System and EMC VPLEX/VE validates one configuration
with up to 125 virtual machines. The defined configuration forms the basis
of creating this custom solution.
VPLEX/VE leverages the virtualization capabilities of VMware vSphere, which enables you to federate your data storage in a virtualized
environment. VPLEX/VE provides storage federation for operating systems and applications that support clustered file systems in the virtual server
environments with VMware ESXi.
A VPLEX/VE deployment spans across two sites. Each site contains a vApp that is made up of four VPLEX/VE director virtual machines and a
management server virtual machine.
Each vDirector must be deployed on a unique ESXi host. The vManagement Server must also be installed on an ESXi host, and it can share an ESXi host with a vDirector. All of the ESXi hosts at both sites must be members of the same stretched ESXi cluster. Each vDirector of a VPLEX/VE system runs as a virtual machine hosted on an ESXi host.
All the ESXi hosts of a VPLEX/VE system must be part of one ESXi cluster. VPLEX/VE is a virtualized software application; the physical components to
run the system are provided by the underlying VMware infrastructure.
The communication between the components of a VPLEX/VE system happens through a set of virtual ports. These ports are created out of the
virtual network adapters of vDirectors and vManagement Server. A vDirector in a VPLEX/VE site has virtual ports as follows:
• Two ports for the back-end IP SAN interface, which facilitates communication with the arrays to which it is connected.
• Two ports for the front-end IP SAN interface, which facilitates communication with hosts that initiate the I/O.
• Two ports for the intra-site interface, which facilitates communication with other vDirectors in the same site.
• Two ports for the inter-site interface, which facilitates communication with vDirectors in the remote site.
• Two ports for the management interface, which facilitates communication with the vManagement Server in the same site.
A vManagement Server in a VPLEX/VE site has:
• Two virtual ports for communicating with the vDirectors in its site.
• One virtual port for communicating with the vManagement Server in the other site. This port is also used for configuring the system through a web browser.
• One virtual port for service console access.
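Tallying the lists above gives the port budget per component, which is useful when planning virtual switch capacity. The dictionary keys below are shorthand of ours, not official interface labels:

```python
# Tally of the virtual ports described above, per VPLEX/VE component.
VDIRECTOR_PORTS = {
    "back-end":   2,   # IP SAN to the storage arrays
    "front-end":  2,   # IP SAN from the I/O-initiating hosts
    "intra-site": 2,   # other vDirectors, same site
    "inter-site": 2,   # vDirectors at the remote site
    "management": 2,   # vManagement Server, same site
}
VMANAGEMENT_PORTS = {
    "vdirectors": 2,   # vDirectors in its own site
    "peer-mgmt":  1,   # vManagement Server at the other site / web config
    "service":    1,   # service console access
}
print(sum(VDIRECTOR_PORTS.values()))     # 10 virtual ports per vDirector
print(sum(VMANAGEMENT_PORTS.values()))   # 4 per vManagement Server
```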
Each of these virtual ports is connected to a virtual switch. The virtual
switch is connected to a physical switch in your environment. The physical switch carries the data across the components of a VPLEX/VE system.
This redundancy ensures the robust high availability of a VPLEX/VE system.
A proper deployment and configuration of the system ensures a highly available distributed storage that is resilient to single points of failure. The
vDirectors are placed across the ESXi hosts in such a way that multiple vDirectors are not hosted on a single ESXi host. This ensures the
availability of the vDirectors in the event of an ESXi host failure. This is achieved by pinning the vDirectors to the hosts by using the VMware DRS
affinity rules. The high availability design of VPLEX/VE ensures that
vSphere high availability features do not move the vDirectors to an ESXi host that already hosts a vDirector from the same vApp. Additionally, a
variety of integrated validation mechanisms help you verify the physical compatibility of your environment to use the high availability features of
VPLEX/VE. These integrated validation mechanisms are available through a few health check options.
The VMware vMotion features enable you to move the application virtual machines to another ESXi host in a stretched ESXi cluster without causing downtime. A properly configured VPLEX/VE system can leverage these features to ensure high availability.
Note: To ensure the high availability of the management server virtual machines and the mobility of the application virtual machines, ensure that the ESXi hosts on which VPLEX/VE is deployed are vMotion-compatible. Use the Enhanced vMotion Compatibility (EVC) feature in VMware vSphere to verify vMotion compatibility between the ESXi hosts. The VMware vMotion and CPU Compatibility document provides more information on vMotion compatibility.
Note: The EMC Simplified Support Matrix for VPLEX/VE, available on EMC Online Support, provides more information on the storage arrays that VPLEX/VE supports.
Note: VSPEX uses the concept of a reference workload to describe and
define a virtual machine. Therefore, one physical or virtual server in an existing environment may not be equal to one virtual machine in a VSPEX
solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. Please refer to the Private Cloud Proven
Infrastructure document for proper workload sizing guidelines.
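As a rough illustration of mapping an existing workload onto reference virtual machines, the sketch below assumes the reference characteristics used elsewhere in this guide (1 vCPU and 2 GB RAM per reference VM); proper sizing should follow the Private Cloud Proven Infrastructure document:

```python
import math

# Illustrative conversion of an existing server into reference-VM
# equivalents, assuming 1 vCPU and 2 GB RAM per reference VM.
REF_VCPU, REF_RAM_GB = 1, 2

def reference_vm_equivalents(vcpus, ram_gb):
    # The workload consumes whichever resource dimension is largest.
    return max(math.ceil(vcpus / REF_VCPU), math.ceil(ram_gb / REF_RAM_GB))

# A hypothetical database server with 4 vCPUs and 16 GB RAM:
print(reference_vm_equivalents(4, 16))   # 8 reference VMs (RAM-bound)
```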
6.2 Solution configurations
The solution architecture used to run 125 reference virtual machines
combines the requirements of VSPEX 125 VM Private Cloud for VMware vSphere with EMC VNXe and VPLEX/VE. The VSPEX Private Cloud
configuration is listed in the tables below.
Table 1 Solution hardware

VMware vSphere servers
  CPU: 1 vCPU per virtual machine; 4 vCPUs per physical core.
  For 125 virtual machines:
  • 125 vCPUs
  • Minimum of 32 physical CPUs
  Memory: 2 GB RAM per virtual machine; 2 GB RAM reservation per server.
  For 125 virtual machines:
  • Minimum of 250 GB RAM
  • Add 2 GB for each server
  Network: 2 x 10 GbE NICs per server; 2 HBAs per server.
  Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.

Network infrastructure
  Minimum switching capacity:
  • 2 physical switches
  • 2 x 10 GbE ports per VMware vSphere server
  • 1 x 1 GbE port per storage processor for management
  • 2 ports per VMware vSphere server, for the storage network
  • 2 ports per storage processor, for storage data

EMC VNXe series storage array
  • 2 x EMC VNXe3200
  • 2 x 10 GbE interfaces per storage processor (iSCSI)
  • 1 x 1 GbE interface per storage processor for management
  • System disks for VNXe OE
  For 125 virtual machines:
  • 40 x 600 GB 10k rpm 2.5-inch SAS drives
  • 2 x 600 GB 10k rpm 2.5-inch SAS drives as hot spares

Shared infrastructure
  In most cases, a customer environment already has infrastructure services such as Active Directory, DNS, and other services configured. The setup of these services is beyond the scope of this document.
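The compute figures in Table 1 follow from simple per-VM arithmetic: 1 vCPU and 2 GB RAM per reference virtual machine, a 4:1 vCPU-to-physical-core ratio, and a 2 GB reservation per server. The sketch below reproduces that arithmetic (the function name and the 4-server example are ours):

```python
import math

# Sizing arithmetic from Table 1.
def size_compute(num_vms, num_servers):
    vcpus = num_vms * 1                    # 1 vCPU per reference VM
    physical_cpus = math.ceil(vcpus / 4)   # 4 vCPUs per physical core
    ram_gb = num_vms * 2 + num_servers * 2 # 2 GB per VM + 2 GB per server
    return vcpus, physical_cpus, ram_gb

# The document's 125-VM configuration on a hypothetical 4-server site:
vcpus, cores, ram = size_compute(125, 4)
print(vcpus)   # 125 vCPUs
print(cores)   # 32 physical CPUs (ceiling of 125/4)
print(ram)     # 258 GB: 250 GB for the VMs plus 2 GB per server
```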
Table 2 lists the software used in this solution.

Table 2 Solution software

VMware vSphere 5.5
  vSphere Server: Minimum Enterprise Edition; Recommended Enterprise Plus Edition
  vCenter Server: Standard Edition
  Operating system for vCenter Server: Windows Server 2012 R2 Standard Edition
  Note: Any operating system that is supported for vCenter can be used.
  Microsoft SQL Server: Version 2012 Standard Edition
  Note: Any supported database for vCenter can be used.

EMC VNXe
  VNXe OE: 3.0

EMC VPLEX/VE
  VPLEX/VE: 2.1
This solution is built upon the Private Cloud solution highlighted in EMC
VSPEX Private Cloud: VMware vSphere 5.5 for up to 125 Virtual Machines. Additional requirements for VPLEX/VE are listed below:
VMware servers:
• Minimum 4 servers per site.
• VPLEX/VE requires 6 virtual CPU cores and 16 GB of virtual memory on each server to support vDirector and vManagement Server operations.

Storage:
• 4 datastores per site to host the VPLEX/VE virtual machines:
  o 1 datastore with 240 GB free capacity.
  o 3 datastores with 40 GB free capacity.
• 1 iSCSI storage array per site.
Additional details can be found in the EMC VPLEX/VE for VMware vSphere Product Guide.
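A quick pre-deployment check of the per-site datastore requirements above can be sketched as follows; the helper and the datastore names are illustrative, while the 240 GB and 40 GB figures come from the list above:

```python
# Check that a site's candidate datastores satisfy the VPLEX/VE
# free-capacity requirements: one datastore with at least 240 GB free
# and three more with at least 40 GB free each (four in total).
def datastores_ok(free_gb_by_name):
    sizes = sorted(free_gb_by_name.values(), reverse=True)
    if len(sizes) < 4:
        return False
    # Largest datastore takes the 240 GB requirement; next three need 40 GB.
    return sizes[0] >= 240 and all(s >= 40 for s in sizes[1:4])

site1 = {"ds-01": 500, "ds-02": 80, "ds-03": 45, "ds-04": 41}
site2 = {"ds-01": 200, "ds-02": 80, "ds-03": 45, "ds-04": 41}
print(datastores_ok(site1))   # True
print(datastores_ok(site2))   # False: no datastore has 240 GB free
```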
7. VPLEX/VE Key Components
This section introduces the components of the EMC VPLEX/VE solution, including:
• VPLEX/VE Management Extension for VMware vSphere
• VPLEX/VE Configuration Manager Web Application
7.1 VPLEX/VE Management Extension for VMware vSphere
The VPLEX/VE Management Extension for vSphere provides seamless integration with the vSphere environment for day-to-day management operations. The VPLEX/VE Management Extension is available for vSphere
versions 5.1 and above.
The VPLEX/VE Management solution icon can be found in the
Administration section of the vSphere Web Client home page. From here you can monitor system health, collect diagnostics, and manage
credentials. You can add and remove the backing storage arrays for VPLEX/VE. You can also upgrade the system software to a newer version.
Figure 2 - VPLEX/VE Manager Solution Icon
The VPLEX/VE vCenter Inventory module allows you to create and manage distributed datastores, which are backed by VPLEX/VE distributed virtual
volumes. VPLEX/VE Actions are integrated into the vSphere Actions menu. From here you can create new distributed datastores using a guided
wizard. You can manage the datastore capacity and migrate the backing
storage to a new array with no application downtime. From the vCenter inventory, you see the distributed device backing for a distributed
datastore, as well as the two physical backing devices on the storage arrays at each site.
Figure 3 - VPLEX/VE vCenter Inventory Action Menu
The vCenter Server module includes automated health checks with event
and alarm management. Automated health checks ensure that the system is configured and operating properly for high availability (HA). The
VPLEX/VE HA feature ensures that applications have optimal access to their data through redundancy in management control paths and data I/O paths.
When errors are detected in the VPLEX/VE System or the surrounding
vSphere infrastructure, the system will generate vCenter events. Those events and the associated alarms will appear in the vSphere client.
7.2 VPLEX/VE Configuration Manager Web Application
The VPLEX/VE Configuration Manager is a web application used to configure your VPLEX/VE system. The Setup Wizard guides you through the initial configuration settings for the system, connecting VPLEX/VE to the vCenter Server, the vSphere cluster, the virtual network, and the storage arrays.
Figure 4 - VPLEX/VE Configuration Manager
7.3 vSphere Deployment Concepts
VPLEX/VE is a virtual application that is deployed into a single vSphere Stretched Cluster. One VPLEX/VE virtual application (vApp) is deployed at
each site. This site-level redundancy is the first aspect of the high availability architecture; a single site failure will not cause a system
outage.
Each VPLEX/VE vApp contains five virtual machines:
Four virtual directors, or vDirectors.
One management console, or vManagement Server.
The initial OVF (Open Virtualization Format) deployment places one VPLEX/VE vApp on one ESXi host at each site. For each site, all the virtual
machines are deployed onto a single host and the related VMFS files are stored in a single datastore. These are referred to as the “deployment
host” and “deployment datastore”.
Figure 5 - VPLEX/VE Initial OVF Deployment; One Host per Site
After the initial deployment, you will run the Setup Wizard to configure the
system. The VPLEX/VE vDirectors will be distributed across 4 different ESXi hosts using 4 different datastores at each site. Note that the vManagement
Server can share an ESXi host with a vDirector.
This configuration provides the second aspect of the high availability
architecture; a single server failure will not cause a system outage. VPLEX/VE can continue to provide data access with one vDirector alive per
site, and one vManagement Server alive for both sites.
Figure 6 - VPLEX/VE Initial OVF Deployment onto Four Hosts per Site
VMware vMotion is used to move the vDirector virtual machines during the
configuration process. The process will create DRS affinity rules to
constrain the virtual machines to those specific hosts. Once configuration is
complete, automated vMotion of VPLEX/VE vDirectors is not allowed.
Note: Use the Enhanced vMotion Compatibility (EVC) feature in VMware vSphere to verify vMotion compatibility between the ESXi hosts. The
VMware vMotion and CPU Compatibility document provides more information on vMotion compatibility.
7.4 VPLEX/VE Storage Concepts
Traditionally, the ESXi hosts in a stretched cluster can only consume storage devices from physical arrays within the same physical site. The ESXi hosts and the virtual machines at one site cannot use the storage or
datastores from the other site. If you were to vMotion a virtual machine across sites, the datastore containing its VMFS files would not be available
to the ESXi host at the second site. This creates the need for Storage vMotion to move the data across sites before using vMotion to move the
virtual machine.
VPLEX/VE creates distributed virtual volumes with backing devices at both
sites. The VPLEX/VE distributed cache coherence feature ensures that the data is identical and available in an active-active configuration at both
sites.
VPLEX/VE enables the creation of vSphere distributed datastores, which are backed by distributed virtual volumes. These distributed datastores are
available to the ESXi Hosts at both sites within the vSphere cluster. Virtual machines provisioned with distributed datastores can vMotion across sites
with no limitation from the underlying physical storage.
The following illustration shows the data path from physical to virtual storage.
Figure 7 - The VPLEX/VE Storage Virtualization Layers
The recommended storage configuration is two iSCSI storage arrays per
site. This recommendation avoids any single point of failure and provides
optimal redundancy and resiliency for storage operations. The minimum supported configuration is one iSCSI array per
site. Array redundancy provides the fourth aspect of the high availability
architecture; a single array failure will not impact the availability of data to
the application. VPLEX/VE can continue I/O operations from the surviving storage array(s).
7.5 VPLEX/VE Networking Concepts
VPLEX/VE distributed cache coherence requires sophisticated communication among the individual vDirectors within a site, across the sites, with the storage arrays, and with the vSphere management client.
The following networking concepts are important to understand:
Front-end IP SAN (ESXi-facing Network) is the IP network that presents distributed virtual storage from the vDirectors to ESXi hosts. Distributed virtual storage is used to provision vSphere distributed datastores.
Back-end IP SAN (Array-facing Network) is the IP network that connects physical storage from the iSCSI storage arrays to the vDirectors. The vDirectors consume the physical storage from the array and produce distributed virtual volumes.
Local COM Network (Intra-Site Network) refers to the private IP networks connecting the VPLEX/VE virtual machines within one site. For high availability, there are two such networks at each site. The virtual switches for these networks are selected during installation, but the IP addresses are static and cannot be controlled by the installer.
WAN COM (Inter-Site Network) is the IP network connecting the vDirectors across sites. As a best practice, this should be a private network. For high availability, there are two such networks at each site.
Inter-Site Management Network (not pictured above) is the virtual private network (VPN) tunnel between the management servers across sites. For high availability, there are two such networks between sites. There is no manual configuration for VPN. It is configured automatically by the setup wizard during the configuration phase.
As a virtual application, VPLEX/VE has dependencies on the vSphere virtual network infrastructure and the physical network infrastructure. At
minimum, VPLEX/VE requires six virtual Distributed Switches (vDS) and six physical switches. These can be newly configured or existing switches in
your environments.
The diagram below shows an example of the minimum virtual network configuration.
Figure 9 - VPLEX/VE Sample Network Configuration
The minimum six virtual switch configurations are as follows:
vDS1 and vDS2 are used for WAN COM traffic for Site-1 and Site-2. These switches should be connected to vNICs from all ESXi hosts that are running VPLEX/VE components at both sites.
vDS3 and vDS4 are used for Site-1 Front-end SAN (ESXi-consumer-facing), Back-end SAN (Array-facing) and Local COM (VPLEX/VE Intra-Site). These switches should be connected to vNICs from all the ESXi hosts that are part of the site.
vDS5 and vDS6 (not shown) are used for Site-2, with the same setup as vDS3 and vDS4.
vSS1 is a standard switch that already exists in your environment for VMkernel traffic.
Each connection type requires two paths for redundancy and high availability. In the illustration, you see two connections for communication
within the site (boxes are blue and green) and two connections for inter-site or WAN COM (yellow and purple). This redundancy provides the third
important aspect of VPLEX/VE high availability; a single communication
path failure will not cause a system outage.
Each VPLEX/VE vDirector network connection requires an address on your
IP Network.
Each VPLEX/VE vDirector requires eight IP addresses, as follows:
Two Front-end IP SAN (ESXi-facing) ports.
Two Back-end IP SAN (Array-facing) ports.
Two VPLEX/VE Local COM (Intra-Site) ports.
Two WAN COM (Inter-Site) ports.
For each site, an IP address is assigned for management communication from a web browser.
One Management Server IP address per site.
Different subnets are required for Front-end, Back-end and WAN COM traffic.
Additional adapters and vDS can be added to isolate network traffic, as desired.
Ensure multicast traffic support is enabled on external switches and VLANs.
You can use an existing vDS with appropriate VLAN tagging.
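As a planning aid, the per-vDirector addressing above can be sketched with Python's standard ipaddress module. The subnet ranges and helper below are hypothetical examples, not values from this guide; substitute your own plan. Local COM addresses are omitted because VPLEX/VE assigns them statically:

```python
from ipaddress import IPv4Network

# Hypothetical Site-1 subnet plan; replace with your own ranges.
subnets = {
    "FE-1": IPv4Network("192.168.10.0/24"),
    "FE-2": IPv4Network("192.168.11.0/24"),
    "BE-1": IPv4Network("192.168.20.0/24"),
    "BE-2": IPv4Network("192.168.21.0/24"),
    "WAN-1": IPv4Network("192.168.30.0/24"),
    "WAN-2": IPv4Network("192.168.31.0/24"),
}

def plan_vdirector_ips(subnets, vdirectors=4):
    """Assign one address per vDirector on each FE/BE/WAN connection.

    Returns {vdirector_name: {connection: ip}}. Site-1 is hardcoded for
    this sketch; Local COM addresses are excluded because VPLEX/VE
    assigns those statically.
    """
    plan = {}
    for n in range(1, vdirectors + 1):
        pair, member = (n + 1) // 2, "AB"[(n + 1) % 2]
        name = f"vDirector-1-{pair}-{member}"
        # hosts() skips the network and broadcast addresses;
        # offset into each subnet by the vDirector index.
        plan[name] = {conn: str(list(net.hosts())[n])
                      for conn, net in subnets.items()}
    return plan

plan = plan_vdirector_ips(subnets)
# Each of the 4 vDirectors gets 6 customer-assigned addresses
# (2 FE + 2 BE + 2 WAN); the 2 Local COM addresses are static.
```
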
8. System Requirements
This section describes the prerequisites for configuring and using VPLEX/VE.
8.1 VMware vSphere 5 requirements
vSphere ESXi v5.0, v5.1, v5.5 or later, Enterprise edition (Enterprise Plus edition recommended).
vCenter Server v5.1 (update 1a or later), v5.5 or later.
vSphere High Availability (HA) enabled.
vSphere VMFS-5.
vSphere Distributed Resource Scheduler (DRS) enabled.
vSphere Update Manager enabled.
vSphere vMotion enabled.
Virtual machine guest operating systems: x64 or x86 OS releases as specified in the applicable release HCL.
Note: Non-ESXi physical servers are not supported.
8.2 Web interface support
One of the following web browsers, with Adobe Flash Player plug-in version 10 or later:
Internet Explorer version 10 or later.
Firefox 24 or later.
Chrome 30.0.1599.101 or later.
8.3 Network requirements
Physical network requirements:
EMC recommends using 10 GbE NICs or faster.
Using 1 GbE NICs can reduce IOPS rates.
New IP Network requirements:
50 new IP addresses – 2 Sites x 25 (1 Mgmt + 8 FE + 8 BE + 8 WAN).
12 subnet masks – 2 Sites x 6 (2 FE + 2 BE + 2 WAN).
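The address counts above can be double-checked with simple arithmetic; this sketch just restates the formulas from the list:

```python
# Restating the section 8.3 formulas:
# 50 IPs = 2 sites x (1 mgmt + 8 FE + 8 BE + 8 WAN)
# 12 subnets = 2 sites x (2 FE + 2 BE + 2 WAN)
SITES = 2
MGMT_PER_SITE = 1
# 4 vDirectors x 2 ports each, for FE, BE, and WAN COM
PORTS_PER_SITE = {"FE": 4 * 2, "BE": 4 * 2, "WAN": 4 * 2}
SUBNETS_PER_SITE = {"FE": 2, "BE": 2, "WAN": 2}

new_ips = SITES * (MGMT_PER_SITE + sum(PORTS_PER_SITE.values()))
new_subnets = SITES * sum(SUBNETS_PER_SITE.values())
print(new_ips, new_subnets)  # 50 12
```
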
8.4 Storage requirements
VPLEX/VE high availability features are based on redundancy, including redundant storage arrays. The recommended configuration is two iSCSI
storage arrays per site. This ensures that the meta-volume, backup meta-volume and logging volumes can be mirrored across different arrays to
lower the risk of data loss in an error recovery situation.
At a minimum, VPLEX/VE requires one iSCSI array per site. This ensures that the VPLEX/VE distributed datastores will be available even in the event
of a single array failure.
Recommended Best Practice:
o 2 iSCSI storage arrays per site.
o 4 iSCSI connections per array; 2 from each storage processor.
Minimum Requirement:
o 1 iSCSI storage array per site.
o 4 iSCSI connections per array; 2 from each storage processor.
The current release supports the EMC VNXe3200 Unified Storage System.
See the EMC Simple Support Matrix for VPLEX at support.emc.com for more information on supported storage array types.
9. Pre-deployment Planning
This section explains planning and preparation required prior to the
deployment.
Refer to the EMC VPLEX/VE Configuration Worksheet as you work through this section. The worksheet is provided in a Microsoft Word format and is
available on http://support.emc.com. You can save a copy and enter your configuration settings directly into the file. You can also print a copy and
write the settings down on paper. In either case, the worksheet is provided so that you can capture the details of your environment as you step
through the planning tasks.
As you complete the steps below, collect the details on the worksheet.
When you start the deployment and configuration tasks, refer back to the worksheet to make the process simple and easy.
9.1 vSphere stretched cluster requirements
This information is required for the OVF Deployment and Setup Wizard:
A VPLEX/VE system is deployed on a single vSphere Stretched Cluster with a minimum of 4 ESXi hosts per site.
Up to three VPLEX/VE systems can be deployed on a vCenter Server.
Up to three VPLEX/VE systems can be deployed on a single vSphere stretched cluster.
Note: In a VPLEX/VE configuration, operations are synchronous and the
two sites in the vSphere stretched cluster can be separated by up to 10 ms round trip time (RTT) latency.
9.2 ESXi Host Minimum requirements
The vSphere stretched cluster must contain a minimum of 8 ESXi hosts, with a minimum of 4 ESXi hosts per site to run the vDirector virtual
machines. For the OVF deployment, you have a choice:
Deploy to the cluster; vSphere will select an ESXi host on which to place the vApp.
Deploy to a specific ESXi Host; note the ESXi host on the worksheet.
For the Setup Wizard:
o Select 4 ESXi hosts per site, each with sufficient available resources to host an additional VM configured with 6 virtual CPU cores and 16 GB of virtual memory to run a vDirector and the vManagement Server.
o For any remaining ESXi hosts in the cluster, assign to Site-1 or Site-2.
o Only ESXi hosts can be configured to VPLEX/VE. Non-ESXi hosts will be ignored.
Note: Do not set the VMware HA admission control policy that configures
the ESXi hosts used for VPLEX/VE deployment as failover hosts. This admission control policy restricts the ESXi hosts from participating in a
VPLEX/VE system.
9.3 Datastore minimum requirements
VPLEX/VE virtual machines require storage on traditional datastores for their VMFS files. You will need a total of 8 datastores; 4 per site. For the
OVF deployment, configure the deployment datastore:
1 datastore per site with at least 240 GB free capacity.
For the Setup Wizard, configure the vDirector datastores:
3 datastores per site with at least 40 GB free capacity.
Within a site, all 4 datastores must be visible to the 4 ESXi hosts that are running vDirectors on that site. Resource pools are optional.
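A minimal capacity pre-check for the datastore requirements above might look like this (a sketch; the function name and input format are illustrative):

```python
GIB = 1024 ** 3  # bytes per GB, using the guide's 1 GB = 1024 MB convention

def check_site_datastores(free_bytes):
    """Check one site's candidate datastores against section 9.3.

    free_bytes: list of free capacity (in bytes) per candidate datastore.
    Returns True if one datastore has >= 240 GB free (the deployment
    datastore) and at least three more have >= 40 GB free (the vDirector
    datastores).
    """
    ordered = sorted(free_bytes, reverse=True)
    if len(ordered) < 4 or ordered[0] < 240 * GIB:
        return False
    return all(b >= 40 * GIB for b in ordered[1:4])

# Example: one 300 GB and three 50 GB datastores satisfy the minimums.
ok = check_site_datastores([300 * GIB, 50 * GIB, 50 * GIB, 50 * GIB])
```
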
9.4 Networking requirements for deployment
The OVF deployment will use LAN and WAN resources to deploy the vApp.
The following choices are required:
Select VPLEX/VE Local COM network.
Select Customer Inter-Site network.
The OVF deployment creates the vManagement Server for both sites and
creates the management IP connection on the LAN. For the vManagement Server at each site:
Assign a new IP address.
Assign a subnet mask.
Assign a gateway.
9.5 Networking and Virtual Switch requirements
This section describes the network configuration for the Setup Wizard Part One using vNetwork Distributed Switches (vDS). Configure 6 vDS
(minimum):
2 vDS for Site-1, shared by Front-end, Back-end, and Local COM traffic.
These must be connected to all ESXi hosts at Site-1.
2 vDS for Site-2, shared by Front-end, Back-end, and Local COM traffic.
These must be connected to all ESXi hosts at Site-2.
2 vDS for WAN COM, connected to all ESXi hosts at both sites.
VLANs can be used to isolate network traffic over the virtual LAN.
VLAN IDs are optional in a single VPLEX/VE deployment.
Configure 16 VLANs (optional):
2 VLAN IDs each for FE, BE, Local and WAN Traffic for Site-1.
2 VLAN IDs each for FE, BE, Local and WAN Traffic for Site-2.
Note: If you deploy multiple VPLEX/VE systems on a single vCenter
Server, then the VLAN IDs become mandatory to avoid cross-communication between systems.
A different subnet is required per COM type and connection. For each site, assign 2 subnets per COM type:
2 for Front-end IP SAN (ESXi- facing) subnets.
2 for Back-end IP SAN (Array-facing) subnets.
2 for WAN COM (Inter-Site) subnets.
Each COM type requires 2 connections on the IP network. For each of the 4 vDirectors at each site, assign 6 IP addresses:
2 for Front-end IP ports.
2 for Back-end IP ports.
2 for WAN COM (Inter-Site) ports.
This is a total of 48 IP Addresses: 2 Sites x 4 vDirectors x 6 IPs.
Note:
Different subnets are required for Front-end, Back-end and WAN COM traffic.
Additional adapters and vDS can be added to isolate network traffic, as desired.
Ensure multicast traffic support is enabled on external switches and VLANs.
You can use an existing vDS with appropriate VLAN tagging.
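The different-subnet rules above can be pre-validated before running the Setup Wizard. This sketch uses Python's standard ipaddress module; the labels and subnet values are hypothetical:

```python
from ipaddress import IPv4Network
from itertools import combinations

def validate_subnet_plan(subnets):
    """Check that a subnet plan has no overlaps.

    subnets: {label: IPv4Network}. Front-end, Back-end, and WAN COM
    connections must all use distinct, non-overlapping subnets.
    Returns the list of overlapping label pairs; empty means valid.
    """
    return [(a, b)
            for (a, na), (b, nb) in combinations(subnets.items(), 2)
            if na.overlaps(nb)]

# Hypothetical (invalid) plan: WAN-1 overlaps FE-1.
bad = validate_subnet_plan({
    "FE-1": IPv4Network("10.0.1.0/24"),
    "FE-2": IPv4Network("10.0.2.0/24"),
    "WAN-1": IPv4Network("10.0.1.0/25"),
})
```
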
Refer to the Networking Concepts section in the previous chapter for more information on network planning.
To learn more about how vSphere Distributed Switches are created, see KB article 1010555 at kb.vmware.com.
A note about vDirector default names: Internally, vDirectors are managed in pairs. For that reason, the default names use a pair notation. You can
change the vDirector virtual machine names in the vSphere client, but this does not change the internal naming. Some events will include references
using the internal names.
Table 3 below illustrates the default vDirector naming convention:
Table 3 - vDirector Naming Conventions
Default Name     Site    Pair Number  Pair Member
vDirector-1-1-A  Site-1  1            A
vDirector-1-1-B  Site-1  1            B
vDirector-1-2-A  Site-1  2            A
vDirector-1-2-B  Site-1  2            B
vDirector-2-1-A  Site-2  1            A
vDirector-2-1-B  Site-2  1            B
vDirector-2-2-A  Site-2  2            A
vDirector-2-2-B  Site-2  2            B
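The default naming convention in Table 3 can be reproduced programmatically, which is handy when scripting inventory checks against the internal names (a sketch, not a VPLEX/VE tool):

```python
def default_vdirector_names(sites=(1, 2), pairs=(1, 2), members=("A", "B")):
    """Reproduce the default vDirector-<site>-<pair>-<member> naming."""
    return [f"vDirector-{s}-{p}-{m}"
            for s in sites for p in pairs for m in members]

names = default_vdirector_names()
# 8 names, from vDirector-1-1-A through vDirector-2-2-B
```
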
9.6 Storage Device Requirements
VPLEX/VE uses SAN storage for meta-volumes, backups and logging operations. Meta-Volumes and Backup Meta-Volumes are essential for system recovery. Recommendations for provisioning storage:
Four 20 GB LUNs per site (note: 1 GB is 1024 MB).
Mirror the meta-volume across two or more back-end arrays to eliminate the possibility of data loss.
The physical spindles for meta-volumes should be isolated from application workloads.
Read caching should be enabled.
For a CLARiiON array, the meta-volumes must not be placed on the vault drives.
You can create a schedule for the meta-volume backup.
Logging volumes are critical to recovering from an inter-site link failure.
Recommendations for provisioning storage:
Two 20 GB LUNs per site (note: 1 GB is 1024 MB).
Stripe logging volumes across several disks to accommodate the high level of I/O that occurs during and after link outages.
Mirror logging volumes across two or more back-end arrays, as they are critical to recovery after the link is restored.
Note: To learn about iSCSI storage provisioning, go to the EMC Online
Support Site https://support.emc.com
9.7 Security Certificate requirements
Security certificates ensure authorized access to the resources in a VPLEX/VE system. These are self-signed certificates.
Passphrases require a minimum of 8 alphanumeric characters.
The certificate authority for both sites can have a validity of 1 to 5
years.
The host certificate for a VPLEX/VE site can have a validity of 1 to 2 years.
EMC recommends that you use the same values for both sites.
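The passphrase and expiration rules above are easy to pre-validate. This sketch mirrors the stated rules; it is not the product's own validator:

```python
def valid_passphrase(p):
    """Section 9.7 rule: minimum of 8 characters, alphanumeric only."""
    return len(p) >= 8 and p.isalnum()

def valid_expiration(years, kind):
    """CA certificate: 1-5 years; host certificate: 1-2 years."""
    ranges = {"ca": (1, 5), "host": (1, 2)}
    lo, hi = ranges[kind]
    return lo <= years <= hi

# Example checks against the stated rules.
ok_pass = valid_passphrase("abc12345")   # 8 alphanumeric chars: valid
ok_ca = valid_expiration(5, "ca")        # 5-year CA validity: allowed
```
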
9.8 WAN COM (Inter-Site) Network requirements
Notes for WAN COM configuration:
Both bridged (L2) and routed (L3) networks are supported.
The default values work in most cases for a bridged network.
Different subnets are required for the Front-end, Back-end, and WAN COM (Inter-Site) networks.
Connections 1 and 2 must be on different subnets, with static IP addresses.
MTU sizes must be the same for Site-1 and Site-2.
WAN COM ports do not support trunking.
VLANs are optional. If you use VLANs, ensure that all the ports for connection 1 are assigned to one VLAN and all ports for Connection 2 assigned to another VLAN.
VLAN IDs are required to avoid cross-communication between VPLEX/VE systems if multiple systems are deployed in a single vCenter.
10. VPLEX/VE Pre-Configuration Worksheet
10.1 Overview
The VPLEX/VE pre-deployment data gathering consists of the items listed below. The first step is to collect all appropriate site data and fill out
the configuration worksheets. These worksheets may be found in the Installation and Configuration section of the VPLEX/VE Procedure
Generator.
The information collection includes some or all of the following for each
cluster:
10.2 OVF Deployment
The following information is required to deploy the VPLEX/VE OVA File:
Table 4 - VPLEX/VE Pre-Config Information
vSphere Environment
  vCenter Server Credentials:
    IP Address:
    User:
    Password:
  VPLEX/VE OVA File Location:
  vSphere Stretched Cluster (VPLEX/VE not previously deployed):
    Cluster Name:

Deploy Site-1
  VPLEX/VE vApp Name:
  Deploy to the vSphere Cluster or to a specific ESXi Host:
    ESXi Host (optional):
  Deployment Datastore (at least 240 GB free capacity):
  Resource Pool (optional):
  Networks for relocating vDirectors (use the defaults if unsure):
    VPLEX/VE Local:
    Customer LAN:
  Custom deployment attributes for the vManagement Server:
    IP Address:
    Subnet Mask:
    Gateway:

Deploy Site-2
  VPLEX/VE vApp Name:
  Deploy to the vSphere Cluster or to a specific ESXi Host:
    ESXi Host (optional):
  Deployment Datastore (at least 240 GB free capacity):
  Resource Pool (optional):
  Networks for relocating vDirectors (use the defaults if unsure):
    VPLEX/VE Local:
    Customer LAN:
  Custom deployment attributes for the vManagement Server:
    IP Address:
    Subnet Mask:
    Gateway:
10.3 Setup Wizard – Part One
Table 5 - Setup Wizard - Part One
Launch the Setup Wizard
  Site-1 vManagement Server IP Address:
  User ID: service
  Password: Mi@Dim7T

vCenter Server Credentials
  IP Address:
  Username:
  Password:

Site-2 vManagement Server
  Site-2 vManagement Server IP Address:
10.4 ESXi Hosts
Table 6 - ESXi Hosts
Assign Hosts to Sites (Site-1 / Site-2)
  Assign all ESXi hosts in the deployment cluster to a VPLEX/VE site.
  There must be at least 4 hosts per site.
  Non-ESXi servers are not supported.

Assign vDirectors to ESXi Hosts (Site-1 / Site-2)
  Each vDirector requires at least 6 virtual CPU cores and 16 GB of virtual memory.

Select Datastores (Site-1 / Site-2)
  Select three datastores with 40 GB free capacity (minimum), shared by the
  4 ESXi hosts running vDirectors within the site.
10.5 Network Setup - Virtual Switches and Ports
Use the table below if the switch names are the same for all ESXi Hosts, in
the following cases:
vSphere Distributed Switches (vDS) are used.
vSphere Standard Switches (vSS) are used.
Table 7 - Network Setup
Virtual Switch Connections for Site-1 (Connection-1 / Connection-2)
  VPLEX/VE Local COM (Intra-Site): Switch Name; VLAN ID (optional)
  WAN COM (Inter-Site): Switch Name; VLAN ID (optional)
  Front-end IP SAN (ESXi-facing): Switch Name
  Back-end IP SAN (Array-facing): Switch Name

Virtual Switch Connections for Site-2 (Connection-1 / Connection-2)
  VPLEX/VE Local COM (Intra-Site): Switch Name; VLAN ID (optional)
  WAN COM (Inter-Site): Switch Name; VLAN ID (optional)
  Front-end IP SAN (ESXi-facing): Switch Name
  Back-end IP SAN (Array-facing): Switch Name
Front-end IP Configuration for Site-1 (Connection-1 / Connection-2)
  Front-end IP Ports (ESXi-facing). Use different subnets for Front-end,
  Back-end and WAN (Inter-Site); Connections 1 and 2 must be on different subnets.
    Subnet Mask:
    VLAN ID (optional):
    vDirector-1-1-A:
    vDirector-1-1-B:
    vDirector-1-2-A:
    vDirector-1-2-B:

Front-end IP Configuration for Site-2 (Connection-1 / Connection-2)
  Front-end IP Ports (ESXi-facing). Use different subnets for Front-end,
  Back-end and WAN (Inter-Site); Connections 1 and 2 must be on different subnets.
    Subnet Mask:
    VLAN ID (optional):
    vDirector-2-1-A:
    vDirector-2-1-B:
    vDirector-2-2-A:
    vDirector-2-2-B:

Back-end IP Configuration for Site-1 (Connection-1 / Connection-2)
  Back-end IP Ports (Array-facing). Use different subnets for Front-end,
  Back-end and WAN (Inter-Site); Connections 1 and 2 must be on different subnets.
    Subnet Mask:
    VLAN ID (optional):
    vDirector-1-1-A:
    vDirector-1-1-B:
    vDirector-1-2-A:
    vDirector-1-2-B:

Back-end IP Configuration for Site-2 (Connection-1 / Connection-2)
  Back-end IP Ports (Array-facing). Use different subnets for Front-end,
  Back-end and WAN (Inter-Site); Connections 1 and 2 must be on different subnets.
    Subnet Mask:
    VLAN ID (optional):
    vDirector-2-1-A:
    vDirector-2-1-B:
    vDirector-2-2-A:
    vDirector-2-2-B:
10.6 Security Certificates
Each passphrase must be a minimum of eight alphanumeric characters.
Table 8 - Security Certificates
Security Certificates for Site-1
  Certificate Authority for both sites (expiration can be 1-5 years):
    Passphrase:
    Expiration:
  Host Certificates for Site-1 (expiration can be 1-2 years):
    Passphrase:
    Expiration:

Security Certificates for Site-2
  Host Certificates for Site-2 (EMC recommends using the same values for both sites):
    Passphrase:
    Expiration:
10.7 Identify Storage Arrays for System Volumes
Table 9 - Identify Arrays for System Volumes
Site    Target Port #  IP Address  Port  CHAP Username (optional)  CHAP Secret (optional)
Site-1  1
        2
        3
        4
Site-2  1
        2
        3
        4
10.8 Setup Wizard - Part Two
Assign storage system volumes for Site-1.
Table 10 - System Volumes - Site-1

Meta-volume on Site-1 (Array Name / Volume Name)
  Select two volumes with at least 20 GB (1 GB = 1024 MB).
  Use two arrays per site, if available.

Meta-volume Backup on Site-1 (Array Name / Volume Name)
  Select two volumes with at least 20 GB.
  Use two arrays per site, if available.

Logging Volume on Site-1 (Array Name / Volume Name)
  Select two RAID 1 volumes with at least 20 GB, or one RAID 0 volume
  with at least 20 GB.
  Use two arrays per site, if available.
Table 11 - System Volumes - Site-2

Meta-volume on Site-2 (Array Name / Volume Name)
  Select two volumes with at least 20 GB.
  Use two arrays per site, if available.

Meta-volume Backup on Site-2 (Array Name / Volume Name)
  Select two volumes with at least 20 GB.
  Use two arrays per site, if available.

Logging Volume on Site-2 (Array Name / Volume Name)
  Select two RAID 1 volumes with at least 20 GB, or one RAID 0 volume
  with at least 20 GB.
  Use two arrays per site, if available.
10.9 WAN COM (Inter-Site) Network Configuration
Table 12 - WAN COM Network Configuration
WAN COM Network for Site-1
  Network type is Bridged or Routed. Default values, shown in parentheses,
  should work in most cases for a bridged network.
    Network Type:
    Discovery Address (224.100.100.100):
    Discovery Port (10000):
    Listening Port (11000):

Subnet Attributes for Site-1 (Connection-1 / Connection-2)
  WAN COM must use a different subnet than the Front-end and Back-end
  networks. Connections 1 and 2 must use different subnets. Gateway is
  used for routed networks only.
    Subnet Prefix:
    Subnet Mask:
    Site Address:
    MTU (1500):
    Gateway (optional):

Inter-Site IP Connections for Site-1 (Connection-1 / Connection-2)
  These are connections to the virtual ports on the vDirectors.
    vDirector-1-1-A:
    vDirector-1-1-B:
    vDirector-1-2-A:
    vDirector-1-2-B:

WAN COM Network for Site-2
  Network type is Bridged or Routed. Default values, shown in parentheses,
  should work in most cases for a bridged network.
    Network Type:
    Discovery Address (224.100.100.100):
    Discovery Port (10000):
    Listening Port (11000):

Subnet Attributes for Site-2 (Connection-1 / Connection-2)
  WAN COM must use a different subnet than the Front-end and Back-end
  networks. Connections 1 and 2 must use different subnets. Gateway is
  used for routed networks only.
    Subnet Prefix:
    Subnet Mask:
    Site Address:
    MTU (1500):
    Gateway (optional):

Inter-Site IP Connections for Site-2 (Connection-1 / Connection-2)
  These are connections to the virtual ports on the vDirectors.
    vDirector-2-1-A:
    vDirector-2-1-B:
    vDirector-2-2-A:
    vDirector-2-2-B:
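For scripted pre-checks, the bridged-network defaults from Table 12 can be captured in a small map. The key names are illustrative, not a VPLEX/VE API:

```python
# Default WAN COM settings for a bridged network (values from Table 12).
# Key names are illustrative; they are not a VPLEX/VE API.
WAN_COM_DEFAULTS = {
    "discovery_address": "224.100.100.100",  # multicast discovery group
    "discovery_port": 10000,
    "listening_port": 11000,
    "mtu": 1500,  # must match between Site-1 and Site-2
}

def mtu_consistent(site1_mtu, site2_mtu):
    """MTU sizes must be the same for Site-1 and Site-2."""
    return site1_mtu == site2_mtu
```
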
10.10 Configuration Information for Cluster Witness
VPLEX/VE supports the VPLEX Witness feature, which is implemented through the Cluster Witness function.
Table 13 - VPLEX Witness Components
Account and password to log in to the ESXi server where the Cluster
Witness Server VM is deployed
  This password allows you to log in to the Cluster Witness Server VM.

Host certificate passphrase for the Cluster Witness certificate
  Must be at least eight characters (including spaces).

Cluster Witness requires the management IP network to be separate from
the inter-cluster network:
  Class-C subnet mask for the ESXi server where the Cluster Witness Server guest VM is deployed:
  IP address for the ESXi server where the Cluster Witness Server guest VM is deployed:
  Cluster Witness Server guest VM Class-C subnet mask:
  Cluster Witness Server guest VM IP address:
  Public IP address for the management server in Cluster 1:
  Public IP address for the management server in Cluster 2:

Cluster Witness functionality requires these protocols to be enabled by
the firewalls configured on the management network. Any firewall between
the Cluster Witness Server and the management servers must allow traffic
on the following protocols and ports:
  IKE: UDP port 500
  ESP: IP protocol number 50
  IP protocol number 51
  NAT Traversal in the IKE (IPsec NAT-T): UDP port 4500
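The firewall requirements above can be expressed as a small rule list for audit scripts. The entries restate Table 13 (IP protocol 51 is the IPsec Authentication Header); the data structure itself is illustrative:

```python
# Firewall allowances required between the Cluster Witness Server and
# the management servers, restating Table 13. Structure is illustrative.
WITNESS_FIREWALL_RULES = [
    {"name": "IKE", "protocol": "UDP", "port": 500},
    {"name": "ESP", "ip_protocol": 50},
    {"name": "AH", "ip_protocol": 51},  # IP protocol 51 (Authentication Header)
    {"name": "IPsec NAT-T", "protocol": "UDP", "port": 4500},
]

udp_ports = sorted(r["port"] for r in WITNESS_FIREWALL_RULES if "port" in r)
ip_protocols = sorted(r["ip_protocol"] for r in WITNESS_FIREWALL_RULES
                      if "ip_protocol" in r)
```
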
11. VPLEX/VE Installation and Configuration
The VPLEX/VE deployment process consists of several tasks that are listed below. The first task is to use the data collected in the previous section to
configure VPLEX/VE for use.
Pre-deployment Planning – Prepare the vSphere cluster for the
deployment.
Deploy the OVF Template for both Sites – Run the OVF Deployment
Wizard in the vSphere Client.
Configure each VPLEX/VE Site to the vSphere Cluster – Run the
Setup Wizard web application to configure VPLEX/VE Sites to the vSphere cluster, ESXi hosts and the virtual network.
Provision iSCSI Storage to VPLEX/VE vDirectors.
Configure the VPLEX/VE System to Storage and WAN – Run the
Setup Wizard to create system volumes, inter-site network and install the VPLEX/VE Management Extension for the vSphere Web
Client.
Update ESXi Settings – Configure ESXi hosts to see VPLEX/VE virtual storage and review ESXi hardware reservation settings.
Configuration is complete.
Note: For detailed installation instructions, see the EMC VPLEX/VE for VMware vSphere Product Guide.
11.1 Deploying a VPLEX/VE vApp
This section describes the tasks required to deploy the VPLEX/VE vApp into a VMware ESXi cluster.
1. Perform preliminary tasks.
2. Ensure the server, storage, and network requirements are met.
3. Deploy VPLEX/VE vApps for Site-1 and Site-2:
From the vCenter Server, using the vSphere Web Client, select Deploy OVF Template.
To log in to vCenter Server using the VMware vSphere Web Client, open a Web browser and type the following in the address bar: https://address:9443/vsphere-client.
Where address is one of the following:
The IP address of the host on which the vSphere Web Client server component is installed.
The name of the host on which the vSphere Web Client server component is installed.
In the Username field, type the user name for vCenter Server.
In the Password field, type the password for the vCenter Server.
Click Login.
In the Home screen, click Hosts and Clusters.
To deploy the vApp on the cluster, from the left panel, right-click on the cluster where you want to deploy VPLEX/VE, and select Deploy OVF Template (Figure 10).
Figure 10 - Deploy OVF Template option
4. In the Select Source screen, click Browse to navigate to the VPLEX/VE OVA file in your local folder or on the DVD. Ensure that you choose to
show All Files in the folder.
5. Select the vApp and click Open (Figure 11).
Figure 11 - The Select Source screen
6. In the Select Source screen, click Next.
7. In the Review details screen, verify the OVA template details such as
the name and the version of the product, the vendor (EMC Corporation), the size, and the description. Select the Accept extra configuration
options checkbox if you see a warning that the OVF package contains extra configuration options.
8. In the Accept EULAs screen, read the EULA and click Accept.
9. Click Next.
Figure 12 - User license agreement
10. In the Select Name and Folder screen, do the following:
In the Name field, type a name for the vApp that you are deploying. The name can contain up to 80 characters and it must be unique in the inventory folder. The name of the vApp helps you identify the deployment at a later stage. A sample name for a VPLEX/VE vApp is as follows: VPLEXVE_Instance_Site1.
In the Select a folder or datacenter pane, click the datacenter to navigate to the folder where you want to place the vApp.
Figure 13 - Select Name and Folder
Figure 14 - Select Host and Cluster
Figure 15 - Select resource pool
11. In the Select storage screen, select a datastore (one that is visible to all the ESXi hosts in Site-1 and has a capacity of 240 GB) to store the files of
the vApp. From the Select virtual disk format drop-down, select a format
for the virtual disk. Select one of the following options (your selection will not have a significant impact on the performance of VPLEX/VE):
Thick Provision Eager Zeroed – This is the recommended virtual disk format. A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time, and the data remaining on the physical device is zeroed out when the virtual disk is created. An eager-zeroed thick disk therefore takes longer to create than other types of disks, but delivers the best performance, even on the first write to each block.
Thick Provision Lazy Zeroed – Creates a virtual disk in the default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation; it is zeroed out on demand, on the first write from the virtual machine. The default thick format does not eliminate the possibility of recovering deleted files or restoring old data that might be present on this allocated space. You cannot convert a thick disk to a thin disk.
Figure 16 - Select Datastore
Figure 17 - Select disk format
12. In the Setup Networks screen, do the following:
From the Destination drop-down for VPLEX/VE Private, select the destination network, preferably one that is not used by any other virtual machine; alternatively, choose the same destination network that you select for Customer LAN.
From the Destination drop-down for Customer LAN, select a network in your inventory that has external connectivity. This network is used for the management of the VPLEX/VE system through the vManagement Server.
Figure 18 - Select networks
13. In the Customize template screen, provide the details for the
vManagement Server as follows:
Site – Select the VPLEX/VE site where you want to deploy the OVA template.
Management IP Address – Type an IP Address for the vManagement Server. This IP Address will be used to access the vManagement Server for configuring VPLEX/VE.
Netmask – Type the Netmask for the vManagement Server.
Gateway IP Address – Type the gateway IP address for the vManagement Server.
Figure 19 - Customize template
14. In the Ready to Complete screen, review the details that you have
provided for the deployment. To power on the virtual machine that contains the VPLEX/VE vManagement Server, select the Power on after
deployment checkbox.
Figure 20 - Ready to complete
15. Click Finish to start the deployment process. The deployment dialog box appears with the status of the vApp deployment. Depending on
your network, the deployment process can take a minimum of 10 minutes.
16. Repeat these steps for deploying the VPLEX/VE vApp on VPLEX/VE Site-
2. When you deploy the vApp on Site-2, ensure that you:
Select the datastores that are part of Site-2.
Type the IP address and other network details of the vManagement Server in Site-2.
Figure 21 - vApp deployment completed
11.2 Configuring VPLEX/VE using the Setup Wizard
17. If the vManagement Server is powered off after deployment, power it on manually. All the vDirectors must remain powered off during the VPLEX/VE configuration.
18. The VPLEX/VE Online Help provides detailed information about configuring VPLEX/VE using the Setup Wizard. Before starting the Setup Wizard, ensure that you have filled in the Configuration Worksheet; refer to the worksheet for the details that the Setup Wizard requires.
Figure 22 - Complete Configuration Worksheet
19. To access the VPLEX/VE Setup Wizard:
Open a Web browser.
In the address bar of the Web browser, type https://externalIP, where externalIP is the IP address of the vManagement Server of VPLEX/VE Site-1. You can ignore the security warning because there is no published security certificate for this application.
Note: To configure VPLEX/VE, you must access only the vManagement
Server of VPLEX/VE Site-1 using its IP address.
20. In the VPLEX/VE Configuration Manager login screen, type the details as follows:
In the User field, type service.
In the Password field, type Mi@Dim7T[default].
21. The Setup Wizard has two phases. At the end of the first phase, you must note the IQNs and pass them to your storage administrator. You must do this before you start the second phase of the Setup Wizard.
Figure 23 - Welcome to Phase I
22. To configure VPLEX/VE, you must connect the Setup Wizard to the
vCenter Server application on which you have configured your vSphere cluster.
Figure 24 - Enter vCenter credentials
23. To configure VPLEX/VE Site-2, you need the IP address of the vManagement Server of Site-2.
Figure 25 - Enter site 2 vManagement server
24. To assign ESXi hosts to the VPLEX/VE sites, you need the list of the ESXi hosts (four hosts per VPLEX/VE site). These ESXi hosts require at least 6 virtual CPU cores and 18 GB of virtual memory. You cannot assign non-ESXi hosts to VPLEX/VE. Select the ESXi hosts, then assign them to sites with the >> button.
Figure 26 - List of servers in the ESXi cluster
Figure 27 - Select hosts for VPLEX/VE servers
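The host requirements in this step can be expressed as a simple pre-flight check. The following Python sketch is illustrative only; the host records and field names are assumptions, not wizard data structures:

```python
# Illustrative pre-flight check for the host-assignment step. Thresholds
# come from this guide: four ESXi hosts per VPLEX/VE site, each with at
# least 6 virtual CPU cores and 18 GB of virtual memory; non-ESXi hosts
# cannot be assigned.
HOSTS_PER_SITE = 4
MIN_VCPU = 6
MIN_MEM_GB = 18

def eligible_hosts(hosts):
    """Return the hosts that meet the VPLEX/VE minimums."""
    return [h for h in hosts
            if h["type"] == "ESXi"
            and h["cpu_cores"] >= MIN_VCPU
            and h["mem_gb"] >= MIN_MEM_GB]

def can_form_site(hosts):
    """A VPLEX/VE site needs at least four eligible hosts."""
    return len(eligible_hosts(hosts)) >= HOSTS_PER_SITE

inventory = [
    {"name": "esx1", "type": "ESXi", "cpu_cores": 8, "mem_gb": 32},
    {"name": "esx2", "type": "ESXi", "cpu_cores": 8, "mem_gb": 32},
    {"name": "esx3", "type": "ESXi", "cpu_cores": 8, "mem_gb": 32},
    {"name": "esx4", "type": "ESXi", "cpu_cores": 4, "mem_gb": 32},  # too few cores
]
print(can_form_site(inventory))  # False: only three hosts qualify
```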
25. To assign the vDirectors to the ESXi hosts, you need the list of hosts that you have already assigned to the VPLEX/VE sites.
Figure 28 - Assign vDirectors to hosts
26. The ESXi hosts that host the director virtual machines in a VPLEX/VE site must share four datastores. When you deployed VPLEX/VE, you selected a datastore (the deployment datastore) to store the files for all the ESXi hosts. The Setup Wizard enables you to select three more datastores for the ESXi hosts in a site. All the ESXi hosts in a site must share these datastores.
27. The deployment datastore is already selected in the Setup Wizard; you cannot modify this selection. Each of the datastores that you select here must have a minimum of 40 GB of free space.
Figure 29 - Assign datastores
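The datastore rules in steps 26 and 27 can likewise be checked up front. This is an illustrative sketch; the datastore records are assumptions (sizes come from this guide: 240 GB for the deployment datastore, at least 40 GB for each additional datastore, all shared by every host in the site):

```python
# Pre-flight check for the shared-datastore rule in steps 26-27.
MIN_DEPLOY_GB = 240
MIN_EXTRA_GB = 40

def datastores_ok(deploy_ds, extra_ds, site_hosts):
    if len(extra_ds) != 3:                       # four datastores in total
        return False
    if deploy_ds["capacity_gb"] < MIN_DEPLOY_GB:
        return False
    if any(ds["capacity_gb"] < MIN_EXTRA_GB for ds in extra_ds):
        return False
    # every datastore must be visible to every ESXi host in the site
    hosts = set(site_hosts)
    return all(hosts <= set(ds["visible_to"])
               for ds in [deploy_ds] + extra_ds)

hosts = ["esx1", "esx2", "esx3", "esx4"]
deploy = {"capacity_gb": 250, "visible_to": hosts}
extras = [{"capacity_gb": 50, "visible_to": hosts} for _ in range(3)]
print(datastores_ok(deploy, extras, hosts))  # True
```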
28. To set up the network communication interfaces of a VPLEX/VE system, you must configure the virtual switches for the interfaces in Site-1 and Site-2. VPLEX/VE requires either vSphere Distributed Switches (vDS) or vSphere Standard Switches (vSS).
29. Using the Setup Wizard, you can configure:
vDS and vSS that have the same names on all the ESXi hosts.
vSS that have different names on the ESXi hosts.
Figure 30 - Configure network
30. To set up the virtual interfaces, you require the following information:
The name of the virtual switch and the VLAN ID (optional) for the VPLEX/VE Local COM (Intra-Site) network.
The name of the virtual switch and the VLAN ID (optional) for the WAN COM (Inter-Site) network.
The name of the virtual switch and the VLAN ID (optional) for the Front-end IP SAN (ESXi- facing) network.
The name of the virtual switch and the VLAN ID (optional) for the Back-end IP SAN (Array-facing) network.
Configure virtual switch connections for site 1 and site 2.
Figure 31 - Virtual switch connections at site 1
Figure 32 - Virtual switch connections at site 2
31. The first part of the Setup Wizard enables you to configure the Front-end IP SAN (ESXi-facing) network ports of the vDirectors in VPLEX/VE Site-1. The network ports of each vDirector must be configured in different subnets, and each connection of the network interface must also be in a separate subnet. To configure the IP network ports, you need the IP addresses and subnet mask details of these interfaces. VLAN IDs are optional.
Figure 33 - Front-end IP configuration at site 1
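The subnet-separation rule above can be verified mechanically with Python's standard ipaddress module. The addresses below are made-up examples, not values from this solution:

```python
# Check that a planned set of interface addresses lands in distinct
# subnets, as the wizard requires for the vDirector ports.
import ipaddress

def in_distinct_subnets(cidrs):
    """True if every interface address is in a different subnet."""
    nets = [ipaddress.ip_network(c, strict=False) for c in cidrs]
    return len(set(nets)) == len(nets)

# two front-end ports of one vDirector, each on its own subnet
print(in_distinct_subnets(["192.168.10.21/24", "192.168.11.21/24"]))  # True
# the same subnet twice violates the rule
print(in_distinct_subnets(["192.168.10.21/24", "192.168.10.22/24"]))  # False
```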
32. Configure the Back-end IP SAN (Array-facing) network ports of the vDirectors in Site-1. The network ports of each vDirector must be configured in different subnets, and each connection of the network interface must also be in a separate subnet. To configure the IP network ports, you need the IP addresses and subnet mask details of these interfaces. VLAN IDs are optional.
Figure 34 - Back-end IP configuration at site 1
33. Configure the Front-end IP SAN (ESXi-facing) network ports of the vDirectors in Site-2. The network ports of each vDirector must be configured in different subnets, and each connection of the network interface must also be in a separate subnet. To configure the IP network ports, you need the IP addresses and subnet mask details of these interfaces. VLAN IDs are optional.
Figure 35 - Front-end IP configuration at site 2
34. Configure the Back-end IP SAN (Array-facing) network ports of the vDirectors in Site-2. The network ports of each vDirector must be configured in different subnets, and each connection of the network interface must also be in a separate subnet. To configure the IP network ports, you need the IP addresses and subnet mask details of these interfaces. VLAN IDs are optional.
Figure 36 - Back-end IP configuration at site 2
35. Using the Setup Wizard, you can create the security certificates that ensure authorized access to the resources in a VPLEX/VE system. These are self-signed certificates. Passphrases require a minimum of 8 alphanumeric characters. The certificate authority for both sites can have a validity of 1 to 5 years; the host certificate for a VPLEX/VE site can be valid for 1 to 2 years. EMC recommends using the same values for both sites.
Figure 37 - Security certificates, site 1
Figure 38 - Security certificates, site 2
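The certificate constraints in step 35 can be sketched as a quick validation, reading "a minimum of 8 alphanumeric characters" as at least 8 characters consisting of letters and digits only. That reading, and the function names, are assumptions for illustration:

```python
# Validate certificate inputs against the constraints stated in step 35.
import re

def passphrase_ok(passphrase):
    """At least 8 characters, letters and digits only (assumed reading)."""
    return (len(passphrase) >= 8
            and re.fullmatch(r"[A-Za-z0-9]+", passphrase) is not None)

def validity_ok(ca_years, host_years):
    # CA certificate: 1-5 years; host certificate: 1-2 years
    return 1 <= ca_years <= 5 and 1 <= host_years <= 2

print(passphrase_ok("vplexve2014"))  # True
print(passphrase_ok("short1"))       # False: fewer than 8 characters
print(validity_ok(5, 2))             # True
```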
36. Using the Setup Wizard, you can assign the storage arrays that provide the operational storage for the system volumes in VPLEX/VE. Optionally, you can add storage arrays to be used for VPLEX/VE provisioning. To add storage, you require:
The iSCSI target IP addresses.
The CHAP credentials (if CHAP is enabled on the arrays).
37. EMC recommends assigning a minimum of two iSCSI targets per array and two storage arrays per site.
Figure 39 - Storage for VPLEX/VE
38. Before you finish the first part of the Setup Wizard, review the settings that are displayed on the screen, and take a screenshot before you start running the configuration. Running the configuration commands can take 30 to 35 minutes. Do not close the Web browser until the configuration is complete.
Figure 40 - Review and run
Figure 41 - Run Phase I
39. When the configuration is complete, export the vDirector back-end IQNs. Your storage administrator will need this information to provision
storage for VPLEX/VE.
Figure 42 - Phase I completed successfully
11.3 Provisioning iSCSI Storage to vDirectors
40. Before you can continue with the second part of the Setup Wizard, your storage administrator must provision iSCSI storage from the VNXe3200 to the vDirectors for the meta-volumes, meta-volume backups, and logging volumes. Give the storage administrator the list of VPLEX/VE vDirector back-end virtual port IQNs exported from the first part of the Setup Wizard. See the Storage requirements section for details on the system volume requirements.
In addition, the storage administrator can provision iSCSI storage for
VPLEX/VE to use as backing devices for distributed datastores. This storage is provisioned to the vDirector back-end port IQNs. Although
the task can be completed at a later time, you will not be able to provision distributed datastores until this task is completed.
To learn about iSCSI storage provisioning for EMC storage arrays, go to the EMC Online Support Site https://support.emc.com.
Figure 43 - Unisphere for VNXe3200
Figure 44 - IQNs from VPLEX/VE
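As a sanity check before the handoff, the exported IQNs can be split into their standard parts (type, date, naming authority, and name). This is a hypothetical helper, not a VPLEX/VE tool; the sample IQN is the one that appears in the syslog excerpt later in this guide:

```python
# Split an iSCSI qualified name into its standard parts so the exported
# vDirector back-end IQN list can be sanity-checked before registration.
def parse_iqn(iqn):
    prefix, _, name = iqn.partition(":")
    tag, date, authority = prefix.split(".", 2)
    if tag != "iqn":
        raise ValueError("not an iSCSI qualified name: " + iqn)
    return {"date": date, "authority": authority, "name": name}

sample = "iqn.1992-04.com.emc:vplex-00000000168f3a11-0000000000000006"
parsed = parse_iqn(sample)
print(parsed["authority"])  # com.emc
print(parsed["name"])       # vplex-00000000168f3a11-0000000000000006
```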
11.4 Configuring VPLEX/VE Storage and WAN
41. The second part of the VPLEX/VE Setup Wizard enables you to configure the system volumes and the Inter-site network.
42. To launch Phase II of the VPLEX/VE Setup Wizard, refresh the browser connection to the vManagement Server of VPLEX/VE Site-1.
Figure 45 - Phase II Setup Wizard
43. The Setup Wizard enables you to review and modify the array information that you entered in the first part. You can also rediscover any arrays that do not appear on the screen after you added them.
Figure 46 - List of arrays
44. The meta-volumes store the metadata of a VPLEX/VE system. The storage for a meta-volume in a VPLEX/VE site must have a minimum capacity of 20 GB. Select a minimum of two meta-volumes per site. If you have multiple arrays, select the volumes from two different arrays.
Figure 47 - Select Metavolumes
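The meta-volume selection rule can be captured in a short check. The volume records below are illustrative, not values from this solution:

```python
# Check a meta-volume selection against step 44: at least two volumes of
# 20 GB or more per site, drawn from two different arrays when more than
# one array is available.
MIN_META_GB = 20

def meta_selection_ok(selected, array_count):
    usable = [v for v in selected if v["capacity_gb"] >= MIN_META_GB]
    if len(usable) < 2:
        return False
    if array_count > 1 and len({v["array"] for v in usable}) < 2:
        return False
    return True

vols = [{"array": "VNXe-A", "capacity_gb": 20},
        {"array": "VNXe-B", "capacity_gb": 25}]
print(meta_selection_ok(vols, array_count=2))  # True
```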
45. The storage for the meta-volume backup in a VPLEX/VE site must have a minimum capacity of 20 GB. Select a minimum of two meta-volume backups per site. If you have multiple arrays, select the volumes from two different arrays.
46. You can create a schedule for the meta-volume backup.
Figure 48 - Select Meta backup volumes
47. Logging volumes record I/O during a site outage. The data in the logging volumes is used to synchronize the sites after the site recovers. To create a logging volume for a RAID-1 device in a site, assign two volumes with a capacity of 20 GB each. For a single-extent device, assign one 20 GB volume.
Figure 49 - Select logging volumes
48. You must configure the VPLEX/VE WAN COM (Inter-Site) communication on two different networks, and on different subnets from the Front-end IP (ESXi-facing) and Back-end (Array-facing) traffic. Before you configure the WAN COM (Inter-Site) network, note the following:
VLANs are optional. If you use VLANs, ensure that all the ports for Connection 1 are assigned to one VLAN and all the ports for Connection 2 to another.
WAN COM ports do not support trunking.
Separate WAN COM IP subnets must be used for Connections 1 and 2, with static IP addresses.
Different subnets are required for the Front-end, Back-end, and WAN COM (Inter-Site) networks.
Connections 1 and 2 must be on different subnets.
MTU sizes must be the same for Site-1 and Site-2.
Both bridged (L2) and routed (L3) networks are supported. The default values work in most cases for a bridged network.
Figure 50 - WAN COM configuration
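The WAN COM bullets above reduce to a short pre-flight check. The subnets and MTU values in this sketch are illustrative:

```python
# Check a WAN COM plan against the bullets in step 48: Connections 1 and
# 2, Front-end, and Back-end must all be distinct subnets, and the MTU
# must match between the two sites.
import ipaddress

def wan_com_ok(conn1, conn2, front_end, back_end, mtu_site1, mtu_site2):
    nets = [ipaddress.ip_network(c) for c in (conn1, conn2, front_end, back_end)]
    all_distinct = len(set(nets)) == len(nets)
    return all_distinct and mtu_site1 == mtu_site2

print(wan_com_ok("10.1.1.0/24", "10.1.2.0/24",
                 "192.168.10.0/24", "192.168.20.0/24",
                 1500, 1500))  # True
print(wan_com_ok("10.1.1.0/24", "10.1.1.0/24",   # Connections 1 and 2 collide
                 "192.168.10.0/24", "192.168.20.0/24",
                 1500, 1500))  # False
```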
49. Configure WAN COM ports of the vDirectors in site 1.
Figure 51 - WAN COM configuration
50. Configure WAN COM subnet attributes for site 2.
Figure 52 - WAN COM configuration, site 2
51. Configure WAN COM for vDirectors for site 2.
Figure 53 - WAN COM configuration, site 2
52. Before you finish this part of the Setup Wizard, review the settings that
are displayed on the screen.
Figure 54 - Review and Run
53. After the review, run the configuration. Do not close the Web browser until the configuration is complete.
Figure 55 - Phase II completed successfully
11.5 Configuring the ESXi hosts to use the VPLEX/VE storage
54. After completing the configuration, you must configure the ESXi hosts to consume the VPLEX/VE storage.
55. To use the VPLEX/VE features, you must connect the ESXi hosts to the storage that is presented by VPLEX/VE. Before doing this, ensure that:
You have configured the product using the VPLEX/VE Setup Wizard successfully.
The ESXi hosts that you want to connect to the VPLEX/VE storage belong to the vSphere cluster where the VPLEX/VE system is running.
The ESXi hosts connect to the VPLEX/VE storage using only the built-in software iSCSI initiator. The initiator must be enabled on each ESXi host.
There are IP addresses available on the VPLEX/VE front-end (FE) network (two per ESXi host) to assign to the VMkernel ports bound to each ESXi host's software iSCSI initiator.
56. Connecting ESXi hosts to the VPLEX/VE storage involves the following
tasks:
Create VMkernel virtual adapters.
Bind the iSCSI initiator software to the VMkernel adapters.
Add VPLEX/VE iSCSI targets to the initiator.
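The same three tasks can also be performed from the ESXi shell with esxcli. The helper below only assembles the command lines; the adapter name, port group, and target address are placeholders, and you should verify the esxcli syntax against your ESXi release before use:

```python
# Assemble the esxcli equivalents of the three tasks above. All values
# are placeholder examples, not values from this solution.
def iscsi_setup_commands(vmk, portgroup, adapter, target):
    return [
        # 1. create a VMkernel adapter on the front-end port group
        f"esxcli network ip interface add "
        f"--interface-name={vmk} --portgroup-name={portgroup}",
        # 2. bind the software iSCSI adapter to the VMkernel port
        f"esxcli iscsi networkportal add --adapter={adapter} --nic={vmk}",
        # 3. add a VPLEX/VE FE port as a dynamic-discovery send target
        f"esxcli iscsi adapter discovery sendtarget add "
        f"--adapter={adapter} --address={target}",
    ]

for cmd in iscsi_setup_commands("vmk2", "VPLEX-FE-1", "vmhba32",
                                "192.168.10.5:3260"):
    print(cmd)
```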
Figure 56 - Create VMKernel virtual adapters
57. Select the VMkernel network adapter.
Figure 57 - Select VMkernel Network Adapter
58. Select VPLEX/VE front end port group.
Figure 58 - Select VPLEX/VE front-end port group
59. Specify IP settings.
Figure 59 - Specify IP settings
60. Create two VMkernel virtual adapters.
Figure 60 - Two VMkernel virtual adapters added
61. To enable the software iSCSI initiator on the ESXi hosts to
communicate to VPLEX/VE, you must bind the initiators to the VMkernel adapters that you have created.
62. In the VMware vSphere Web Client, go to the vCenter > Hosts and Clusters.
63. On the left panel, under the vSphere cluster, navigate to the ESXi host.
64. Click the Manage tab on the top of the window.
65. Click Storage and then click Storage Adapters on the left.
66. In the list of storage adapters, under iSCSI Software Adapters, select the software iSCSI initiator. Normally, the iSCSI initiator is named
‘vmhba32’.
67. In the Storage Adapter Details section on the bottom, click Network
Port Binding.
68. Click “+” to add a new network port binding.
Figure 61 - Bind iSCSI adapter to VMkernel adapters
69. In the VMkernel network adapter screen, select the VMkernel adapter
for your VPLEX/VE Front-End Connection 1 network.
Figure 62 - Select VMkernel adapter
70. Repeat steps for binding the iSCSI initiator software to the VMkernel
adapters for the VPLEX/VE Front-End Connection 2 network.
Figure 63 - Bind storage adapter to both VMkernel adapters
71. To enable the iSCSI initiator to find the VPLEX/VE storage, you must
add the iSCSI send targets on the VPLEX/VE storage to the software
iSCSI initiator on the ESXi host.
72. In the Storage Adapter Details section on the bottom, click Targets.
73. Click Dynamic Discovery.
74. Click Add... to add a new iSCSI target.
Figure 64 - Dynamic discovery of iSCSI storage
75. In the Add Send Target Server dialog box, type the IP address of a VPLEX/VE FE port. Keep the default port number 3260. Ensure that the Inherit settings from parent checkbox is selected.
Figure 65 - Enter iSCSI storage target
76. When you add a single port to Dynamic Discovery, all the front-end ports of the VPLEX/VE system are added to Static Discovery automatically.
Figure 66 - Static addresses automatically discovered
77. Now, you can begin using the VPLEX/VE features.
Note: For detailed installation instructions use the EMC VPLEX/VE Product Guide.
Once VPLEX/VE has been configured for use, you can log in to the vSphere Web Client, where all day-to-day operations can be performed.
11.6 Verifying the Solution
The following post-install configuration items are critical to the functionality of the solution. On each vSphere server, verify the following items prior to deployment into production:
The vSwitch that hosts the client VLANs is configured with sufficient ports to accommodate the maximum number of virtual machines it may host.
All required virtual machine port groups are configured, and each server
has access to the required VMware datastores.
An interface is configured correctly for vMotion using the material in the
vSphere Networking guide.
12. Appendix-A – Additional Recommendations
A VMware Distributed Resource Scheduler (DRS) affinity rule is used to specify affinity relationships between the vDirector VMs and their ESXi servers. The non-VE VMs can be grouped into DRS VM groups that can roam or have an affinity relationship with a host or DRS host group within the ESXi stretched cluster. In our test, we specified affinity between the 125 reference VMs and the four non-VE servers in the ESXi cluster. In this way, we were able to isolate the performance data collection to a smaller group of servers, and using esxtop we were able to determine how the load affected the servers.
The NTP service should be enabled on VPLEX/VE, the ESXi servers, and the storage, because time disparity can cause problems with nondisruptive upgrades (NDU) and troubleshooting. The vDirector clocks are synced with the vManagement server (vSMS) in their site, and vSMS2 is synced to vSMS1; therefore, vSMS1 must sync with the NTP servers. The VPLEX/VE install wizard does not configure external NTP servers, so customers need to use the VPlexcli command "configuration sync-time" to configure them.
VPlexcli:/> configuration sync-time [<options>] This command will
synchronize the clock of the management server via NTP.
options:
-h | --help
Displays the usage for this command.
--verbose
Provides more output during command execution. This may not
have any effect for some commands.
-i | --remote-ip= <IP of the remote server>
The IP address of the remote NTP server.
-f | --force
Skip verification prompt.
A VM locking error can prevent power-on; in this case, manually remove the VM and re-provision it. The locking error can be found in the ESXi server /var/log/vmkernel log. Storage vMotion of the powered-off VM will work, but takes a very long time, and Storage vMotion back to the original datastore will fail. When powered on, the VMs hang at 15% and eventually error out, and removing the VMs using the vSphere Web Client hangs at 95%. The lock error is associated with one ESXi server: identify the server, unregister the VM, and then remove the VM. Contact VMware support for assistance. The commands used are:
less /var/log/vmkernel.log
cd /vmfs/volumes/<datastore>/<VM>
touch *
vim-cmd vmsvc/getallvms | grep <number>
vim-cmd vmsvc/unregister <number>
rm -rf <VM>
Removing VPLEX/VE does not remove the VPLEX/VE plugin from the vSphere Web Client. The icon just indicates that the plugin is installed on the vCenter Server. It should not cause any problems when doing a fresh deployment; the installation wizard will upgrade the plugin.
In our experience, network issues can be very complex to resolve. We recommend taking a holistic approach when installing VPLEX/VE: verify that the network is configured properly before installing VPLEX/VE; for example, ping between VE servers and non-VE servers, and test the connection from the servers to storage. In this way, troubleshooting is easier, without another layer of complexity. Network diagrams, both logical and physical, are essential.
Whether the problem is servers not being able to ping bidirectionally or servers not being able to connect to a newly provisioned distributed datastore from VPLEX/VE, the network may be the cause. In our case, we saw a 2% packet drop on the physical network that triggered network timeout errors; the ESXi server /var/log/syslog.log shows Nop-out timeout errors.
For example,
2014-04-15T16:49:37Z iscsid: connection 1:0 (iqn.1992-04.com.emc:vplex-
00000000168f3a11-0000000000000006 if=iscsi_vmk@vmk2
addr=192.168.203.10:3260 (TPGT:6 ISID:0x1) (T0 C0)) Nop-out timeout
after 10 sec in state (3).
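When chasing this kind of packet loss, it helps to pull the target fields out of such syslog lines. The regular expression below is written against the sample line from this section; the helper itself is illustrative:

```python
# Extract the target IQN, VMkernel interface, and portal address from a
# Nop-out timeout line like the one quoted above.
import re

line = ("2014-04-15T16:49:37Z iscsid: connection 1:0 "
        "(iqn.1992-04.com.emc:vplex-00000000168f3a11-0000000000000006 "
        "if=iscsi_vmk@vmk2 addr=192.168.203.10:3260 (TPGT:6 ISID:0x1) "
        "(T0 C0)) Nop-out timeout after 10 sec in state (3).")

m = re.search(r"(iqn\.[\w.:-]+)\s+if=(\S+)\s+addr=([\d.]+):(\d+)", line)
print(m.group(1))               # the target IQN
print(m.group(3), m.group(4))   # portal IP and port
```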
Check the configuration of the VLANs for the Fabric Interconnect (FI) ports on the switches. This applies to both private and public VLANs. To prevent a disjointed network, assign VLANs to the uplink ports on the switches from the FIs so that each type of traffic takes the proper path (public management and private iSCSI I/O). In our case, these paths exist on two entirely different switches.
The end-to-end iSCSI MTU should be the same. By default, iSCSI uses jumbo frames; they should be set up on the switches, servers, FIs, and VNXe ports. The change helps improve network performance end to end.
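One common way to verify the end-to-end MTU is vmkping with the don't-fragment flag and a payload that fills the frame: a 9000-byte MTU leaves 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes of payload. The sketch below just assembles that command; the peer address is an example:

```python
# Build a vmkping command that tests a jumbo-frame path without
# fragmentation. Subtract the IP (20 bytes) and ICMP (8 bytes) headers
# from the MTU to get the maximum payload size.
def vmkping_cmd(peer_ip, mtu=9000):
    payload = mtu - 20 - 8
    return f"vmkping -d -s {payload} {peer_ip}"

print(vmkping_cmd("192.168.203.10"))  # vmkping -d -s 8972 192.168.203.10
```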
Confirm that the network drivers are compatible with the ESXi version. Review the network vendor documentation for requirements.
To find the ENIC and FNIC version, login to the Host via SSH (enable SSH first) and issue the following commands:
ENIC: ethtool -i vmnic0
FNIC: vmkload_mod -s fnic
Or you can run
esxcli software vib list | egrep 'enic|fnic'
FNIC driver download:
https://my.vmware.com/web/vmware/details?productId=327&downloadGroup=DT-ESXI5X-CISCO-FNIC-15045#dt_version_history
ENIC driver download:
https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI5X-CISCO-ENIC-21238&productId=285
Driver upgrade procedure:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2005205
vMotion of VMs between servers is possible after the vMotion ports on the switch are set to trunk mode. Also ensure that the server virtual switch (vSwitch) promiscuous mode is set to Accept.
A faulted BE device may prevent meta-data backups. The VPlexcli cluster status command does not check meta-data backups, but the VPLEX/VE health-check command shows the error. All operations are functional, but NDU will fail if there are no recent backups.
VPLEX/VE supports distances of up to 10 ms RTT between datacenters, but you must validate connectivity with VPlexcli:
VPlexcli> connectivity validate-local-com
VPlexcli> connectivity validate-wan-com
VPlexcli> connectivity validate-be-com
VPlexcli> cluster status
In the vSphere Web Client, if a VMkernel adapter is not showing as "Link Up", it is not participating on the network. SSH into the server and run "services.sh restart". After a storage rescan, the distributed datastores should be listed under the server.
After completing the install wizard, four vDirectors showed the warning "VPLEX-VE HA Violation: Invalid virtual network configuration". The detailed description indicated issues with management communication. When we looked at the mgmtcom port group, the ports to the four vDirectors were not up, although the vSphere GUI showed them as connected. The problem was fixed by disconnecting and reconnecting the mgmtcom virtual network adapters to the port group for the four vDirectors.
Stale distributed port state is a known issue for the HA validation check. Previously, the check directly refreshed the port state before doing comparisons, but this resulted in excessive vSphere tasks reported in the interface. The workaround depends on the actual port state: if you refresh and the port state is actually link-up, there is no further action to take; if the port state is link-down, reconnecting the virtual adapter triggers a port refresh that either reports the good state or corrects the invalid port state.
If the distributed datastore was not deleted before removing VPLEX/VE, it becomes a "zombie" datastore. vCenter will remove it from inventory after 24 hours; to remove it immediately, remove the iSCSI VMkernel ports and reboot the ESXi server.
A storage rescan is needed to reconnect VPLEX/VE storage after any outage.
Ensure that every host in the ESXi cluster is configured to consume the VE storage, since all distributed datastores are provisioned to the cluster as a whole (not to individual hosts). Provisioning of a distributed datastore may error out if any host in the ESXi cluster is not configured for software iSCSI. You will see a line like this at the end of the Virgo (web client) log:
[Found software-based initiator port for host <dsveg187.lss.emc.com>: <[]>]
When a new ESXi host is added, one manual step is required after adding the software iSCSI adapter: you must tell VPLEX/VE which site the host is part of (normally, this is handled during the initial configuration). Once that is done, the plugin automatically registers the host's initiator and adds it to the storage view, so that provisioning works for that host. The command to use is 'configuration virtual hosts-on-physical-sites add'. Run 'configuration virtual hosts-on-physical-sites list' first to see the listing as it stands today.
For the VNXe, to register IQNs, simply add them to a storage group. Unisphere for the VNXe is different from that of other VNX arrays.
13. Appendix-B -- References
13.1 EMC documentation
The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.
PowerPath VE for VMware vSphere Installation and Administration Guide
VNXe FAST Cache: Introduction to the EMC VNXe3200 FAST Suite
EMC VNX Virtual Provisioning Applied Technology
EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 125 Virtual
Machines
EMC VSPEX with EMC XtremSF and EMC XtremCache Design Guide
Using EMC VNX Storage with VMware vSphere
EMC VPLEX/VE Site Preparation Guide
EMC Best Practices Guide for AC Power Connections in Two-PDP Bays
EMC AC Power Configuration Worksheet
40U-C Unpacking and Setup Guide
EMC VPLEX/VE Hardware Installation Guide
Implementation and Planning Best Practices for EMC VPLEX/VE Technical Notes
EMC VPLEX/VE Release Notes
EMC VPLEX/VE Security Configuration Guide
EMC VPLEX/VE Configuration Worksheet
EMC VPLEX/VE CLI Guide
EMC VPLEX/VE Product Guide
VPLEX/VE Procedure Generator
EMC VMware ESXi Host Connectivity Guide
13.2 Other documentation
The following documents, located on the VMware website, provide additional and relevant information:
vSphere Networking
vSphere Storage Guide
vSphere Virtual Machine Administration
vSphere Installation and Setup
vCenter Server and Host Management
vSphere Resource Management
Installing and Administering VMware vSphere Update Manager
vSphere Storage APIs for Array Integration (VAAI) Plug-in
Interpreting esxtop Statistics
Understanding Memory Resource Management in VMware vSphere 5.1