
EMC® ViPR® Controller
Version 2.4

Virtual Data Center Requirements and Information Guide
302-002-415

REV 01


Copyright © 2013-2015 EMC Corporation. All rights reserved. Published in USA.

Published November, 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).

EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.EMC.com


CONTENTS

Chapter 1: ViPR Controller VDC Requirements and Information Overview

Chapter 2: ViPR Controller user role requirements

Chapter 3: Physical Asset Requirements and Information
    Storage systems
    Hitachi Data Systems
        Known issues with provisioning
        Enable performance data logging of HDS storage arrays in Storage Navigator
        Create a new user using the Storage Navigator on HDS
        Host mode and host mode options
    EMC VMAX
        SMI-S provider configuration requirements for VMAX
        SMI-S provider version requirements for SRDF
        Upgrade requirements
        VMAX storage system
    EMC VNX for Block
        SMI-S provider configuration requirements for VNX for Block
        VNX for Block storage system
    VPLEX
    Third-party block storage (OpenStack)
        Third-party block storage provider installation requirements
        Third-party block storage system support
        Supported ViPR Controller operations
        OpenStack configuration
        Cinder service configuration requirements
        Storage system (backend) configuration settings
        Volume setup requirements for OpenStack
    ViPR Controller configuration for third-party block storage ports
    Stand-alone EMC ScaleIO
    EMC XtremIO
    EMC VNXe
    General file storage system requirements
    EMC® Data Domain®
    EMC Isilon
    NetApp 7-mode
    NetApp Cluster-mode
    EMC VNX for File
    EMC Elastic Cloud Storage
    RecoverPoint systems
    Fabric Managers (switches)
        ViPR Controller switch and network terminology
        Brocade
        ViPR Controller support for Fibre Channel routing with Brocade switches
        Brocade Fibre Channel routing configuration requirements
        Cisco
        ViPR Controller support for Inter-VSAN Routing with Cisco switches
        Cisco Inter-VSAN routing configuration requirements
    Vblock system requirements
        Compute image server requirements
        Compute image server network requirements
        Compute images
        Vblock compute systems
    Hosts
        AIX hosts and AIX VIO Servers
        HP-UX
        Linux
        Windows
    VMware® vCenter
        Required VMware® VirtualCenter role privileges

Chapter 4: Virtual Asset Requirements and Information
    Overview
    Plan to build your virtual array
        SAN zoning requirements
        Plan how to add the physical assets to the virtual array
        Virtual array configuration considerations for ECS systems
        Virtual array requirements for Vblock system services
    Block storage configuration considerations
        Hitachi Data Systems
        VMAX
        VMAX3
        EMC VNX for Block
        EMC VNXe for Block
        EMC VPLEX
        Third-party block (OpenStack) storage systems
        Block storage systems under ViPR Controller management
    File storage configuration considerations
        EMC® Data Domain®
        EMC VNX for File
    Object storage configuration considerations
        ECS
    ViPR requirements for service profile templates

Chapter 5: Selecting Storage Pools for Provisioning
    Overview
    Understanding pool utilization and subscription
    Selection process for block storage
    Selection process for file storage

Appendix A: Administering Vblock Systems with ViPR Controller
    Administering Vblock systems in your ViPR Controller virtual data center
    How Vblock systems are discovered by ViPR Controller
        Add Vblock system components to ViPR Controller physical assets
        ViPR Controller discovery of Vblock system components
        ViPR Controller registration of added and discovered physical assets
        Additional resources required by ViPR Controller for OS installation on Vblock systems
    How Vblock system components are virtualized in ViPR Controller
        Vblock compute systems
        Vblock storage systems
        Vblock networks in the virtual array
    Examples of virtual arrays for Vblock systems
        Traditional Vblock system
        Multiple Vblock systems configured in a single ViPR Controller virtual array
        Tenant isolation through virtual array networks
        Tenant isolation through compute virtual pools

Appendix B: ViPR Controller Support for EMC Elastic Cloud Storage (ECS)
    ViPR Controller support for EMC Elastic Cloud Storage overview
    Discovery of ECS systems in ViPR Controller
    Mapping an ECS Namespace to a ViPR Controller Tenant
    Virtualizing ECS storage in ViPR Controller


CHAPTER 1

ViPR Controller VDC Requirements and Information Overview

This guide provides ViPR Controller System Administrators and Tenant Administrators with the information needed to configure the physical assets that are added to the ViPR Controller Virtual Data Center (VDC), as well as the requirements and information needed to convert the physical assets discovered by ViPR Controller into the ViPR Controller networks, virtual arrays, and virtual pools of the ViPR Controller VDC.

Related documents

These guides explain how to configure the VDC:

- ViPR Controller User Interface Virtual Data Center Configuration Guide.
- ViPR Controller REST API Virtual Data Center Configuration Guide.

Access both documents from the ViPR Controller Product Documentation Index.

The ViPR Controller Support Matrix provides the version requirements for the physical assets. You can find this document on the EMC Community Network at community.emc.com.


CHAPTER 2

ViPR Controller user role requirements

ViPR Controller roles fall into two groups: roles that exist at the ViPR Controller virtual data center level, and roles that exist at the tenant level.

Note

Access to different areas of the ViPR Controller UI is governed by the actions permitted to the role assigned to the user. The actions authorized when you access ViPR Controller from the UI can differ (be more constrained) from those available when you use the REST API or CLI.

Virtual data center-level roles

VDC roles are used to set up the ViPR Controller environment, which is shared by all tenants. The following table lists the authorized actions for each user role at the virtual data center level.

Table 1 VDC roles

Security Administrator

- Manages the authentication provider configuration for the ViPR Controller virtual data center to identify and authenticate users. Authentication providers are configured to use Active Directory/Lightweight Directory Access Protocol (AD/LDAP) user accounts/domains to add specified users into ViPR Controller.
- Creates ViPR Controller User Groups.
- Assigns VDC and Tenant roles.
- Sets ACL assignments for Projects and the Service Catalog.
- Sets ACL assignments for virtual arrays and virtual pools, from the ViPR Controller API and CLI.
- Updates vCenter Tenants (ACLs) and Datacenter Tenant from the ViPR Controller REST API and CLI. (Only System Administrators can perform these functions from the ViPR Controller UI.)
- Creates, modifies, and deletes sub-tenants.
- Assigns the tenant quotas and user mappings.
- Manages ViPR Controller virtual data center software and license updates.
- Configures the repository from which ViPR Controller upgrade files will be downloaded and installed.
- Manages SSL and trusted certificates.
- Can change IPs for ViPR Controller nodes deployed on VMware without a vApp, and on Hyper-V.
- Schedules backups of ViPR Controller instances.

- Resets local user passwords.
- Configures ACLs.
- Restores access to tenants and projects, if needed. (For example, if the Tenant Administrator locks himself/herself out, the Security Administrator can reset user roles to restore access.)
- Can add or change ViPR Controller node names.
- Initiates a minority node recovery from the ViPR Controller REST API and CLI.
- Views the minority node recovery status from the ViPR Controller CLI.
- Makes changes to the ViPR Controller General Configuration Security settings.
- Shuts down, reboots, and restarts ViPR Controller services from the ViPR Controller REST API/CLI.

The Security Administrator must also be assigned a System Administrator role to perform the following operations from the ViPR Controller UI:

- Shut down, reboot, and restart ViPR Controller nodes or services.
- Set ACL assignments for virtual arrays and virtual pools.
- Initiate a minority node recovery.

In a Geo-federated environment:

- Adds a VDC to create a Geo-federated environment.
- Has Security Administrator privileges on authentication providers, which are global resources.

System Administrator

- Performs system upgrades.
- Adds ViPR Controller licenses.
- Sends support requests.
- Sets up the physical storage infrastructure of the ViPR Controller virtual data center and configures the physical storage into two types of virtual resources: virtual arrays and virtual pools. Authorized actions include:
  - Adding, modifying, and deleting physical storage resources in ViPR Controller, such as storage systems, storage ports, storage pools, data protection systems, fabric managers, networks, compute images, Vblock compute systems, and vCenters.

    Note: System Administrators cannot add, delete, or modify hosts or clusters.

  - Updating vCenter cascade tenancy and vCenter tenants (ACLs) and Datacenter Tenant from the ViPR Controller REST API, UI, and CLI.
  - Creating virtual pools.
  - Creating virtual arrays.
- Manages the ViPR Controller virtual data center resources that tenants do not manage.

- Retrieves ViPR Controller virtual data center status and health information.
- Retrieves bulk event and statistical records for the ViPR Controller virtual data center.
- Views the Database Housekeeping Status.
- Views the minority node recovery status from the ViPR Controller CLI.

In a Geo-federated environment:

- Has System Administrator privileges on global virtual pools, which are global resources.
- Sets ACL assignments for virtual arrays and virtual pools, from the ViPR Controller API.

System Monitor

- Has read-only access to all resources in the ViPR Controller virtual data center. Has no visibility into security-related resources, such as authentication providers, ACLs, and role assignments.
- Retrieves bulk event and statistical records for the ViPR Controller virtual data center.
- Retrieves ViPR Controller virtual data center status and health information.
- (API only) Can create an alert event, with error logs attached, as an aid to troubleshooting. The alert event is sent to ConnectEMC.
- Views the Database Housekeeping Status.
- Views the minority node recovery status from the ViPR Controller UI and CLI.

System Auditor

- Has read-only access to the ViPR Controller virtual data center audit logs.

Tenant-level roles

Tenant roles are used to administer the tenant-specific settings, such as the service catalog and projects, and to assign additional users to tenant roles. The following table lists the authorized actions for each user role at the tenant level.

Table 2 Tenant roles

Tenant Administrator

- Becomes Tenant Administrator of the created tenant.
- A single-tenant enterprise private cloud environment has only one tenant, the Provider Tenant, and Tenant Administrators have access to all projects.
- Modifies the name and description of the tenants.
- Adds vCenters to ViPR Controller physical assets in their own tenant.
- Manages tenant resources, such as Hosts, Clusters, vCenters, and Projects.
- Configures ACLs for projects and the Service Catalog in their tenant.

- Assigns roles to tenant users. (Can assign Tenant Administrator or Project Administrator roles to other users.)

In a Geo-federated environment:

- Has Tenant Administrator privileges on tenants, which are global resources.

Tenant Approver

- Approves or rejects Service Catalog orders in their tenant.
- Views all approval requests in their tenant.

Project Administrator

- Creates projects in their tenant and obtains an OWN ACL on the created project.


CHAPTER 3

Physical Asset Requirements and Information

This chapter contains the following topics:

- Storage systems
- Hitachi Data Systems
- EMC VMAX
- EMC VNX for Block
- VPLEX
- Third-party block storage (OpenStack)
- ViPR Controller configuration for third-party block storage ports
- Stand-alone EMC ScaleIO
- EMC XtremIO
- EMC VNXe
- General file storage system requirements
- EMC® Data Domain®
- EMC Isilon
- NetApp 7-mode
- NetApp Cluster-mode
- EMC VNX for File
- EMC Elastic Cloud Storage
- RecoverPoint systems
- Fabric Managers (switches)
- Vblock system requirements
- Hosts
- VMware® vCenter


Storage systems

Review the ViPR Controller requirements and information necessary for entering user credentials and setting up port-based metrics for storage systems, as well as the information needed for adding and configuring a storage system in ViPR Controller.

Storage system user credentials

When adding storage systems to ViPR Controller, you specify the credentials of the user who will access the storage system from ViPR Controller. These credentials are independent of the user who is currently logged in to ViPR Controller. All ViPR Controller operations performed on a storage system are executed as the user who was given access to the storage system.

ViPR Controller operations require that the ViPR Controller user has administrative privileges. If there are additional credential requirements for a specific type of storage system, they are explained in more detail in the storage-specific sections of this guide.

Hitachi Data Systems

Before you add Hitachi Data Systems (HDS) storage to ViPR Controller, configure the storage as follows.

Gather the required information

Hitachi HiCommand Device Manager (HiCommand) is required to use HDS storage with ViPR Controller. You need to obtain the following information to configure and add the HiCommand manager to ViPR Controller.

Table 3 Hitachi required information

Record a value for each of the following settings:

- A host or virtual machine for HiCommand Device Manager setup
- HiCommand Device Manager license
- HiCommand Device Manager host address
- HiCommand Device Manager user credentials
- HiCommand Device Manager host port (default is 2001)

General configuration requirements

- HiCommand Device Manager software must be installed and licensed.
- Create a HiCommand Device Manager user for ViPR Controller to access the HDS storage. This user must have administrator privileges to the storage system to perform all ViPR Controller operations.
- HiCommand Device Manager must discover all Hitachi storage systems (that ViPR Controller will discover) before you can add them to ViPR Controller.
- When you add the HiCommand Device Manager as a ViPR Controller storage provider, all the storage systems that the storage provider manages will be added to ViPR Controller. If you do not want ViPR Controller to manage all the storage systems, configure the HiCommand Device Manager to manage only the storage systems that will be added to ViPR Controller before you add the HiCommand Device Manager.

Note

After you add the storage provider to ViPR Controller, you can deregister or delete storage systems that you will not use in ViPR Controller.

Configuration requirements for auto-tiering

ViPR Controller provides auto-tiering for the six standard HDS auto-tiering policies for Hitachi Dynamic Tiering (HDT) storage pools.

HDS auto-tiering requires the Hitachi Tiered Storage Manager license on the HDS storage system.

Configuration requirements for data protection features

HDS protection requires:

- Hitachi Thin Image Snapshot software for snapshot protection.
- Hitachi Shadow Image Replication software for clone and mirror protection.

ViPR Controller requires the following to be configured before you can use the Thin Image and ShadowImage features:

- Hitachi Replication Manager must be installed on a separate server.
- The HiCommand Device Manager agent must be installed and running on a pair management server.
- To enable ViPR Controller to use Shadow Image pair operations, create a ReplicationGroup on the HDS, named ViPR-Replication-Group, using either the HiCommand Device Manager or Hitachi Storage Navigator.
- To enable ViPR Controller to use Thin Image pair operations, create a SnapshotGroup on the HDS, named ViPR-Snapshot-Group, using either the HiCommand Device Manager or Hitachi Storage Navigator.

Configuration requirements for ViPR Controller to collect HDS port metrics

Before ViPR Controller can collect statistical information for HDS storage ports, you must perform the following operations in the HDS Storage Navigator to enable performance logging on the storage system:

- Enable performance data logging of HDS storage arrays in Storage Navigator.
- Create a new user using the Storage Navigator on HDS.
- Enable the certificate on the HDS to access SMI-S in secure mode. For details, refer to the Using the SMI-S function section of the Hitachi Command Suite Administrator Guide.

Known issues with provisioning

When provisioning storage from an HDS array using ViPR Controller, you may experience some issues. This section explains when these issues may occur and how to resolve them.

Placing multiple provisioning orders in the Service Catalog

If multiple orders are placed quickly in ViPR Controller, or a user with modify permissions is active on the HiCommand Suite at the same time as the ViPR Controller provisioning orders, the following exception is generated:

Error 25005: Error occurred running HiCommand Device Manager command. An error occurred executing operation deleteSingleVolumeSnapshot.


Caused by: Error response received with error code 6,400 and message An error occurred during storage system processing. (error code 1 = "1", error code 2 = "5132", meaning = *"The specified user ID is already logged on, or the previous logon was not properly terminated. Please try again later. (By default, the session/RMI time-out is one minute.)") If the configuration was changed, refresh the storage system information. If you do not have the permissions required to refresh the information, ask the storage system administrator to do it for you.

HDS automatic refresh

When the Hitachi device manager automatically refreshes its information from the array, it can interrupt ViPR Controller provisioning orders and generate this error message:

Operation failed due to the following error: Error response received with error code 7,473 and message The storage system information ("[email protected]") is being updated.

In the HiCommand suite, you can turn off this automatic refresh option.

What to do

If you experience either of these two errors, rerun the order after a short period of time. If you continually experience these issues, review the HDS user account setup; separating out the HDS user accounts that do not have modify permissions may resolve these issues.

Enable performance data logging of HDS storage arrays in Storage Navigator

Before you can set up metrics-based port selection for HDS, you must enable performance data logging for the HDS storage arrays in Storage Navigator so that ViPR Controller can collect statistical information about the storage ports.

Procedure

1. Log into the Storage Navigator.

2. Select Performance Monitor.

3. Click Edit Monitor Switch.

4. Enable Monitor Switch.

5. Set the Sample Interval.

Create a new user using the Storage Navigator on HDS

To collect statistics using the embedded SMI-S provider, you must create a new user using the Storage Navigator on the Hitachi storage array.

Procedure

1. Log into the Storage Navigator.

2. Select Administrator.

3. Click User Groups.

4. Select Storage Administrator (View Only) User Group.

5. Click Create User.


Note

The username and password must be the same as the HiCommand Device Manager credentials that ViPR Controller uses to discover the storage provider.

6. Type a User Name.

7. Type a Password.

8. Type the Account status.

9. Type the Authentication details.

Host mode and host mode options

ViPR Controller System Administrators can learn about the Hitachi Data Systems Host Mode and Host Mode Option, how ViPR Controller sets the Host Mode and Host Mode Option, the ViPR Controller configuration requirements for ViPR Controller to apply the settings, and the steps to customize the Host Mode Option using the ViPR Controller UI.

Host Mode and Host Mode Options

Host Modes are HDS flags which are set on HDS host groups when an HDS storage volume is exported to the host group. The Host Mode is used to optimize the connection and communication between HDS storage and the host to which the HDS volume has been exported.

The Host Mode Options are a set of flags which can be enabled or disabled to further optimize the Host Mode set on the HDS host groups.

Refer to the Hitachi Data Systems documentation for details about the HDS Host Mode and Host Mode Options.

How ViPR Controller sets the Host Mode

By default, when ViPR Controller is used to export an HDS volume to an AIX, ESX, HP-UX, Linux, or Windows host, ViPR Controller sets the following Host Mode on the host group by determining the host operating system details.

Table 4 ViPR Controller default settings for Host Modes

Operating System    Default Host Mode
AIX                 0F AIX
ESX                 21 VMware
HP-UX               03 HP
Linux               00 Standard
Windows             2C Windows Extension

Host Mode settings for "Other" hostsWhen ViPR Controller is used to export an HDS volume to a host that was added to ViPRController as "Other," or to a host of which ViPR Controller cannot determine the type, the00 Standard Host Mode is set on the host by the Hitachi HiCommand DM. 00 Standard isthe HiCommand DM default Host Mode.

Changing the Host Mode

The ViPR Controller Host Mode settings are automatically set by ViPR Controller during an export operation, and you cannot change the ViPR Controller Host Mode settings. Once ViPR Controller has set the Host Mode on a host, you can use Hitachi Storage Navigator to change the Host Mode on the host.

Prior to ViPR Controller 2.2

ViPR Controller did not set the Host Mode for hosts prior to ViPR Controller 2.2. However, HiCommand Device Manager sets the 00 Standard Host Mode as the default on all HDS host groups.

If a host to which ViPR Controller exported an HDS volume prior to ViPR Controller 2.2 is re-used by ViPR Controller 2.2 or higher to export another HDS volume, the Host Mode will not be set by ViPR Controller on the host.

How ViPR Controller sets the Host Mode Option

By default, when ViPR Controller is used to export an HDS volume to an AIX, ESX, HP-UX, Linux, or Windows host, ViPR Controller sets the following Host Mode Options on the host group.

Table 5 ViPR Controller default settings for Host Mode Options

Operating System       Default Host Mode Options
AIX, Linux, Windows    2 => VERITAS Database Edition/Advanced Cluster; 22 => Veritas Cluster Server
ESX                    54 => Support option for the EXTENDED COPY command; 63 => Support option for the vStorage APIs based on T10 standards
HP-UX                  12 => No display for ghost LUN

Note

Refer to the Hitachi storage system provisioning documentation for details about the Host Mode Options.

Host Mode Option settings for "Other," hostsViPR Controller does not set the Host Mode Option when ViPR Controller is used to exportan HDS volume to a host that was added to ViPR Controller as "Other," or to a host ofwhich ViPR Controller cannot determine the type.

Changing the Host Mode OptionThe Host Mode Option is set on the host group the first time you export an HDS volume tothe host. Once ViPR Controller has set the Host Mode Option on the host group, the HostMode Option cannot be changed by ViPR Controller for example:

l If an HDS volume was exported to a host that was added to ViPR Controller as“Other,” the Host Mode Option configured in ViPR Controller is not set on the host. Ifthe type of host is changed in ViPR Controller from Other, to AIX, ESX, HP-UX, Linux, orWindows, and ViPR Controller is used to export another HDS volume to the samehost, the Host Mode Options which are configured in ViPR Controller will not be seton the host by ViPR Controller, since it was not set on the host group the first time thevolume was exported from ViPR Controller to that same host.

l If an HDS volume was exported to an AIX, ESX, HP-UX, Linux, or host, the Host ModeOptions currently configured in ViPR Controller are set on the host. If the Host ModeOptions are changed in ViPR Controller after the export, and ViPR Controller is used toexport another HDS volume to the same host, the original Host Mode Options remainon the host, and are not configured with the new Host Mode Options.

Physical Asset Requirements and Information

18 EMC ViPR Controller 2.4 Virtual Data Center Requirements and Information Guide

Page 19: EMC® ViPR® Controller 2.4 Virtual Data Center Requirements and

Once the HDS volume has been exported to a host from ViPR Controller, you can changethe Host Mode Setting from Hitachi Storage Navigator. If ViPR Controller is used to exportanother HDS volume to the same host after the Host Mode Option has been changedfrom Storage Navigator, ViPR Controller will reuse the Storage Navigator settings in theexport.

Prior to ViPR Controller 2.2

- The Host Mode Options are only set on new host groups created using ViPR Controller 2.2 and higher.
- Prior to ViPR Controller 2.2, ViPR Controller did not set any Host Mode Options on HDS host groups.
- If a host group that was created prior to ViPR Controller 2.2 is re-used by ViPR Controller, the ViPR Controller Host Mode Options are not set by ViPR Controller on that host group.

EMC VMAX

ViPR Controller management of VMAX systems is performed through the EMC SMI-S provider. Your SMI-S provider and the VMAX storage system must be configured as follows before the storage system is added to ViPR Controller.

SMI-S provider configuration requirements for VMAX

You need specific information to validate that the SMI-S provider is correctly configured for ViPR Controller and to add storage systems to ViPR Controller.

Gather required information

- SMI-S provider host address
- SMI-S provider credentials (default is admin/#1Password)
- SMI-S provider port (default is 5989)

Enable properties

When using SMI-S provider 8.1, you must always enable these properties (an illustrative sketch of the corresponding file entries follows the daemon listing below):

- SYMAPI_USE_GNS, SYMAPI_USE_RDFD under /var/symapi/config/options
- GNS_REMOTE_MIRROR under /var/symapi/config/daemon_options

Start the daemon service

Before you begin the configuration:

- From the /opt/emc/SYMCLI/bin directory, start the daemon service:

  stordaemon start storrdfd

- List the daemons: /opt/emc/SYMCLI/bin/stordaemon list

  You can see which daemons are currently running:

  Available Daemons ('[*]': Currently Running):
  [*] storapid        EMC Solutions Enabler Base Daemon
      storgnsd        EMC Solutions Enabler GNS Daemon
      storrdfd        EMC Solutions Enabler RDF Daemon
  [*] storevntd       EMC Solutions Enabler Event Daemon
  [*] storwatchd      EMC Solutions Enabler Watchdog Daemon
      storsrmd64      EMC Solutions Enabler SRM Daemon, 64bit
      storstpd        EMC Solutions Enabler STP Daemon
      storsrvd        EMC Solutions Enabler SYMAPI Server Daemon
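The property names above come from this guide, but the value syntax does not; as a hedged sketch only (verify the exact format against the Solutions Enabler documentation for your release), the file entries might look like this:

  # /var/symapi/config/options -- assumed ENABLE syntax
  SYMAPI_USE_GNS = ENABLE
  SYMAPI_USE_RDFD = ENABLE

  # /var/symapi/config/daemon_options -- assumed ENABLE syntax
  GNS_REMOTE_MIRROR = ENABLE

After editing the files, start or restart the affected daemons (for example, stordaemon start storrdfd as shown above) so that the new settings take effect.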

Configuration requirements

The SMI-S provider cannot be shared between ViPR Controller and any other application requiring an SMI-S provider to VMAX, such as EMC ViPR SRM.

Note

The ViPR Controller Support Matrix has the most recent version requirements for all systems supported or required by ViPR Controller. For specific version requirements of the SMI-S provider, review the ViPR Controller Support Matrix before taking any action to upgrade or install the SMI-S provider for use with ViPR Controller.

Before adding VMAX storage to ViPR Controller, log in to your SMI-S provider to ensure the following configuration requirements are met:

- For VMAX storage systems, use either SMI-S provider 4.6.2 or SMI-S provider 8.1, but not both versions. However, you must use SMI-S provider 8.1 to use the new features provided with ViPR Controller 2.4. For upgrade requirements, see the EMC ViPR Controller Installation and Configuration Guide.
- For VMAX3 storage systems, always use SMI-S provider 8.1.
- The host server running Solutions Enabler (SYMAPI Server) and the SMI-S provider (ECOM) differs from the server where the VMAX service processors are running.
- The storage system is discovered in the SMI-S provider.
- When the storage provider is added to ViPR Controller, all the storage systems managed by the storage provider will be added to ViPR Controller. If you do not want all the storage systems on an SMI-S provider to be managed by ViPR Controller, configure the SMI-S provider to manage only the storage systems that will be added to ViPR Controller, before adding the SMI-S provider to ViPR Controller.

  Note: Storage systems that will not be used in ViPR Controller can also be deregistered or deleted after the storage provider is added to ViPR Controller. For steps to deregister or delete storage from ViPR Controller, see either the ViPR Controller User Interface Virtual Data Center Configuration Guide or the ViPR Controller REST API Virtual Data Center Configuration Guide.

- On the remote host, the SMI-S provider components (Solutions Enabler (SYMAPI Server) and EMC CIM Server (ECOM)) are configured to accept SSL connections.
- The EMC storsrvd daemon is installed and running.
- The SYMAPI Server and the ViPR Controller server hosts are configured in the local DNS server and their names are resolvable by each other, for proper communication between the two. If DNS is not used in the environment, be sure to use the hosts files for name resolution (/etc/hosts or c:/Windows/System32/drivers/etc/hosts); see the example entries after this list.
- The EMC CIM Server (ECOM) default user login, password expiration option is set to "Password never expires."
- The SMI-S provider host is able to see the gatekeepers (six minimum).
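If DNS is not available, the cross-resolution called for in the list above can be provided with static hosts file entries. A minimal sketch, assuming hypothetical host names and addresses (replace them with your own), added to /etc/hosts (or the Windows equivalent) on both sides:

  # On each ViPR Controller node: resolve the SYMAPI Server / SMI-S provider host
  10.10.10.20   symapi-host.example.com   symapi-host

  # On the SYMAPI Server host: resolve the ViPR Controller nodes
  10.10.10.31   vipr-node1.example.com    vipr-node1
  10.10.10.32   vipr-node2.example.com    vipr-node2
  10.10.10.33   vipr-node3.example.com    vipr-node3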


SMI-S provider version requirements for SRDF

When using SRDF in ViPR Controller, you must use specific SMI-S provider versions for data replication between the source and target sites.

VMAX to VMAX replication

SMI-S provider 4.6.2 or SMI-S provider 8.1 is required on both the source and the target sites when replicating data from a VMAX array to another VMAX array. You cannot have SMI-S provider 4.6.2 on one site and SMI-S provider 8.1 on the other site. They must be the same version on both sites.

VMAX3 to VMAX3 replication

SMI-S provider 8.1 is required on both the source and the target sites when replicating data from a VMAX3 array to another VMAX3 array.

VMAX to VMAX3 and VMAX3 to VMAX replication

SMI-S provider 8.1 is required on both the source and the target sites when replicating data from a VMAX array to a VMAX3 array or from a VMAX3 array to a VMAX array.

Upgrade requirements

The ViPR Controller Support Matrix has the most recent version requirements for all systems supported or required by ViPR Controller. For specific version requirements of the SMI-S provider, review the ViPR Controller Support Matrix before taking any action to upgrade or install the SMI-S provider for use with ViPR Controller.

VMAX storage system

Prepare the VMAX storage system as follows before adding it to ViPR Controller.

- Create a sufficient number of storage pools for storage provisioning with ViPR Controller (for example, SSD, SAS, NL-SAS).
- Define FAST policies. Storage Tier and FAST Policy names must be consistent across all VMAX storage systems.
- It is not required to create any LUNs, storage groups, port groups, initiator groups, or masking views.
- After a VMAX storage system has been added and discovered in ViPR Controller, the storage system must be rediscovered if administrative changes are made on the storage system using the storage system element manager.
- For configuration requirements when working with meta volumes, see the ViPR Controller Integration with VMAX and VNX Storage Systems Guide.


EMC VNX for Block

ViPR Controller management of VNX for Block systems is performed through the EMC SMI-S provider. The SMI-S provider and VNX for Block must meet certain configuration requirements before you can add this storage system into ViPR Controller.

SMI-S provider configuration requirements for VNX for Block

You need specific information to validate that the SMI-S provider is correctly configured for ViPR Controller and to add storage systems to ViPR Controller.

Gather the required information

- SMI-S provider host address
- SMI-S provider credentials (default is admin/#1Password)
- SMI-S provider port (default is 5989)

Configuration requirements

The SMI-S provider cannot be shared between ViPR Controller and any other application requiring an SMI-S provider to VNX for Block, such as EMC ViPR SRM.

Before adding VNX for Block storage to ViPR Controller, log in to your SMI-S provider to ensure the following configuration requirements are met:

- The host server running Solutions Enabler (SYMAPI Server) and the SMI-S provider (ECOM) differs from the server where the VNX for Block storage processors are running.
- The storage system is discovered in the SMI-S provider.
- When the storage provider is added to ViPR Controller, all the storage systems managed by the storage provider will be added to ViPR Controller. If you do not want all the storage systems on an SMI-S provider to be managed by ViPR Controller, configure the SMI-S provider to manage only the storage systems that will be added to ViPR Controller, before adding the SMI-S provider to ViPR Controller.

  Note: Storage systems that will not be used in ViPR Controller can also be deregistered or deleted after the storage provider is added to ViPR Controller. For steps to deregister or delete storage from ViPR Controller, see either the ViPR Controller User Interface Virtual Data Center Configuration Guide or the ViPR Controller REST API Virtual Data Center Configuration Guide.

- On the remote host, the SMI-S provider components (Solutions Enabler (SYMAPI Server) and EMC CIM Server (ECOM)) are configured to accept SSL connections.
- The EMC storsrvd daemon is installed and running.
- The SYMAPI Server and the ViPR Controller server hosts are configured in the local DNS server and their names are resolvable by each other, for proper communication between the two. If DNS is not used in the environment, be sure to use the hosts files for name resolution (/etc/hosts or c:/Windows/System32/drivers/etc/hosts).
- The EMC CIM Server (ECOM) default user login, password expiration option is set to "Password never expires."
- The SMI-S provider host needs IP connectivity over the IP network with connections to both VNX for Block storage processors.

VNX for Block storage system

Prepare the VNX for Block storage system as follows before adding it to ViPR Controller.

Configuration requirements

- Create a sufficient number of storage pools or RAID groups for storage provisioning with ViPR Controller.
- If volume full copies are required, install SAN Copy enabler software on the storage system.
- If volume continuous-native copies are required, create clone private LUNs on the array.
- Fibre Channel networks for VNX for Block storage systems require an SP-A and SP-B port pair in each network; otherwise, virtual pools cannot be created for the VNX for Block storage system.
- For configuration requirements when working with meta volumes, see the ViPR Controller Integration with VMAX and VNX Storage Systems Guide.

Configuration requirements for ViPR Controller to collect VNX for Block port metrics

You must enable performance data logging in EMC Unisphere so that VNX for Block sends the required metrics to ViPR Controller before you can set up metrics-based port selection for VNX for Block. For the steps to enable performance data logging in EMC Unisphere, see the following procedure.

Enable performance data logging for VNX for Block

You must enable performance data logging in EMC Unisphere so that VNX for Block sends the required metrics to ViPR Controller before you can set up metrics-based port selection for VNX for Block.

Procedure

1. Log into the EMC Unisphere.

2. Select System > Statistics for Block. Statistics for Block can be found in the Monitoring and Alerts section.

3. Click Performance Data Logging.

The Data Logging dialog is displayed.

4. If data logging is not already started:

Note

Do not change the default times for Real Time Interval and Archive Interval.

a. Click Start to start data logging.

b. Click Apply.

5. Click OK.


VPLEX

Before adding VPLEX storage systems to ViPR Controller, validate that the VPLEX environment is configured as follows:

- ViPR supports VPLEX in a Local or Metro configuration. VPLEX Geo configurations are not supported.
- Configure VPLEX metadata back-end storage.
- Create VPLEX logging back-end storage.
- Verify that the:
  - Storage systems to be used are connected to the networks containing the VPLEX back-end ports.
  - Hosts to be used have initiators in the networks containing the VPLEX front-end ports.
- Verify that logging volumes are configured to support distributed volumes in a VPLEX Metro configuration.
- It is not necessary to preconfigure zones between the VPLEX and storage systems, or between hosts and the VPLEX, except for those necessary to make the metadata backing storage and logging backing storage available.

Third-party block storage (OpenStack)

ViPR Controller uses the OpenStack Block Storage (Cinder) service to manage OpenStack-supported block storage systems. Your OpenStack block storage systems must meet the following installation and configuration requirements before the storage systems in OpenStack, and the storage systems' resources, can be managed by ViPR Controller.

Third-party block storage provider installation requirements

Use the OpenStack Block Storage (Cinder) service to add third-party block storage systems into ViPR Controller.

Supported OpenStack installation platforms

OpenStack installation is supported on the following platforms:

- Red Hat Enterprise Linux
- SUSE Enterprise Linux
- Ubuntu Linux

For a list of the supported platform versions, see the OpenStack documentation at http://docs.openstack.org.

OpenStack installation requirements

You must install the following two components, on either the same server or separate servers:

- OpenStack Identity Service (Keystone): Required for authentication.
- OpenStack Block Storage (Cinder): The core service that provides all storage information.

For complete installation and configuration details, refer to the OpenStack documentation at http://docs.openstack.org.
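As a quick sanity check after installation — a hedged sketch, since the credentials, tenant, and Keystone endpoint shown here are placeholders for your own values — you can authenticate through Keystone and list the Cinder services to confirm that both components respond:

  cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<keystone-host>:35357/v2.0 service-list

A cinder-scheduler service and at least one cinder-volume service should be reported with a state of "up" before you continue with the backend configuration described in the following sections.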


Third-party block storage system support

You configure third-party storage systems on the OpenStack Block Storage Controller node (Cinder service).

Supported third-party block storage systems

ViPR Controller operations are supported on any third-party block storage systems tested by OpenStack that use Fibre Channel or iSCSI protocols.

Non-supported storage systems

ViPR Controller does not support third-party storage systems using:

- Proprietary protocols, such as Ceph.
- Drivers for block over NFS.
- Local drivers such as LVM.

ViPR Controller does not support these OpenStack-supported storage systems and drivers:

- LVM
- NetAppNFS
- NexentaNFS
- RBD (Ceph)
- RemoteFS
- Scality
- Sheepdog
- XenAPINFS

Refer to www.openstack.org for information about OpenStack third-party storage systems.

ViPR Controller native driver support and recommendations

ViPR Controller provides limited support for third-party block storage. For full support of all ViPR Controller operations, it is recommended that you add and manage the following storage systems with ViPR Controller native drivers, and not use the OpenStack third-party block storage provider to add these storage systems to ViPR Controller:

- EMC VMAX
- EMC VNX for Block
- EMC VPLEX
- Hitachi Data Systems (with Fibre Channel only)

Add these storage systems directly in ViPR Controller using the storage system host address or the host address of the proprietary storage provider.

Supported ViPR Controller operations

You can perform ViPR Controller discovery and various service operations on third-party block storage systems.

You can perform these service operations on third-party block storage systems:

- Create Block Volume
- Export Volume to Host


- Create a Block Volume for a Host
- Expand Block Volume
- Remove Volume by Host
- Remove Block Volume
- Create Full Copy
- Create Block Snapshot
- Create Volume from Snapshot
- Remove Block Snapshot

Note

You cannot use the ViPR Controller Create VMware Datastore service to create a datastore from a block volume created by a third-party storage system. However, you can manually create datastores from third-party block volumes through the VMware vCenter.

OpenStack configuration

After installing the Keystone and Cinder services, you must modify the Cinder configuration file to include the storage systems to be managed by ViPR Controller. After modifying the configuration file, you must create the volume types to map to the backend drivers. These volume types are discovered as storage pools of a certain storage system in ViPR Controller.

OpenStack third-party storage configuration recommendations

Before adding the storage provider to ViPR Controller, configure the storage provider with only the storage systems to be managed by ViPR Controller through a third-party block storage provider. When the third-party block storage provider is added to the ViPR Controller Physical Assets, all storage systems managed by the OpenStack block storage service and supported by ViPR Controller are added to ViPR Controller.

If you included storage systems in the storage provider that will not be managed by ViPR Controller, you can deregister or delete those storage systems from ViPR Controller. However, it is a better practice to configure the storage provider for ViPR Controller integration before adding the storage provider to ViPR Controller.

Cinder service configuration requirements

You must add an entry in the Cinder configuration file for each storage system to be managed by ViPR Controller. This file is located at /etc/cinder/cinder.conf.

Cinder does not define any specific standards for backend driver attribute definitions. See the vendor-specific recommendations on how to configure the Cinder driver, which may involve installing a vendor-specific plugin or command line interface.

Storage system (backend) configuration settings

Cinder defines backend configurations in individual sections of the cinder.conf file to manage the storage systems. Each section is specific to a storage system type.

Edit the cinder.conf file:


Procedure

1. Uncomment enabled_backends, which is commented by default, and add the multiple backend names. In the following example, NetApp and IBM SVC are added as backend configurations.

   enabled_backends=netapp-iscsi,ibm-svc-fc

2. Near the end of the file, add these storage-system-specific entries.

   [netapp-iscsi]
   # NetApp array configuration goes here

   [ibm-svc-fc]
   # IBM SVC array configuration goes here

3. Restart the Cinder service.

#service openstack-cinder-volume restart
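What goes inside each backend section depends entirely on the vendor driver. As an illustrative sketch only — the driver path, backend name, and connection options below are assumptions drawn from the publicly documented NetApp Cinder driver, not values from this guide — a NetApp iSCSI section might look similar to the following; always use the option names from your vendor's Cinder driver documentation:

  [netapp-iscsi]
  # Hypothetical example values; replace with the settings your vendor documents.
  volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
  volume_backend_name=NetAppISCSI
  netapp_storage_family=ontap_cluster
  netapp_storage_protocol=iscsi
  netapp_server_hostname=<array-management-address>
  netapp_login=<array-user>
  netapp_password=<array-password>

The volume_backend_name value is what you later reference when you map a Cinder volume type to this backend, as shown in the next section.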

Volume setup requirements for OpenStack

ViPR Controller has specific setup requirements for volumes created in OpenStack.

ViPR Controller requires that you do the following for each volume in OpenStack:

- Map the volume to the backend driver.
- Indicate if the volume is set for thin or thick provisioning.

ViPR Controller-specific properties
You can create volume types through Cinder CLI commands or the Dashboard (OpenStack UI). The properties required by ViPR Controller in the Cinder CLI for the volume type are:

l volume_backend_name

l vipr:is_thick_pool=true

volume_backend_name
The following example demonstrates the Cinder CLI commands to create volume types (NetApp, IBM SVC) and map them to the backend driver.

cinder --os-username admin --os-password <password> --os-tenant-name admin type-create "NetAPP-iSCSI"
cinder --os-username admin --os-password <password> --os-tenant-name admin type-key "NetAPP-iSCSI" set volume_backend_name=NetAppISCSI
cinder --os-username admin --os-password <password> --os-tenant-name admin type-create "IBM-SVC-FC"
cinder --os-username admin --os-password <password> --os-tenant-name admin type-key "IBM-SVC-FC" set volume_backend_name=IBMSVC-FC
cinder --os-username admin --os-password <password> --os-tenant-name admin extra-specs-list

vipr:is_thick_pool=true
By default, ViPR Controller sets the provisioning type of OpenStack volumes to thin during discovery. If the provisioning type is thick, you must set the ViPR Controller-specific property for thick provisioning to true for the volume type. If the provisioning type of the volume is thin, you do not need to set the provisioning type for the volume in OpenStack.


The following example demonstrates the Cinder CLI commands to create a volume type (NetApp), and define the provisioning type of the volume as thick.

cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-create "NetAPP-iSCSI"
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-key "NetAPP-iSCSI" set volume_backend_name=NetAppISCSI
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 type-key "NetAPP-iSCSI" set vipr:is_thick_pool=true
cinder --os-username admin --os-password <password> --os-tenant-name admin --os-auth-url=http://<hostname>:35357/v2.0 extra-specs-list

Validate setup
Validate that OpenStack was configured correctly to create volumes for each of the added storage systems.

1. Create volumes for each type of volume created in OpenStack. You can create volumes in the OpenStack UI or with the Cinder CLI. The Cinder CLI command to create a volume is:

cinder --os-username admin --os-tenant-name admin create --display-name <volume-name> --volume-type <volume-type-id> <size>

2. Check that the volumes are being created on the associated storage system. For example, NetApp-iSCSI type volumes should be created only on the NetApp storage system. (A brief verification sketch follows this procedure.)
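
As a rough sketch of that check, you can list the volume and inspect which backend it landed on with the Cinder CLI. The volume name is a placeholder, and the os-vol-host-attr:host field reported by cinder show (visible to admin users) identifies the backend:

cinder --os-username admin --os-tenant-name admin list
cinder --os-username admin --os-tenant-name admin show <volume-name-or-id>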

ViPR Controller configuration for third-party block storage ports
The OpenStack API does not provide the storage port World Wide Port Name (WWPN) for Fibre Channel connected storage systems or the IQN for iSCSI connected storage systems. As a result, ViPR Controller cannot retrieve the storage port WWPNs or IQNs during discovery.

After ViPR Controller discovers a third-party block storage array, a default storage port is created for the storage system, and appears in the Storage Port page with the name Default and the storage port identifier of Openstack+<storagesystemserialnumber>+Port+Default.

Fibre Channel configured storage ports
ViPR Controller export operations cannot be performed on an FC connected storage system that was added to ViPR Controller without any WWPNs assigned to the storage port. Therefore, ViPR Controller system administrators must manually add at least one WWPN to the default storage port before performing any export operations on the storage system. WWPNs can be added to ViPR Controller through the ViPR Controller CLI and UI.

After the WWPN is added to the storage port, you can perform export operations on the storage system from ViPR Controller. At the time of the export, ViPR Controller reads the export response from the Cinder service. The export response includes the WWPN that was manually added by the system administrator from the ViPR Controller CLI, and any additional WWPNs listed in the export response. ViPR Controller then creates a storage port for each of the WWPNs listed in the export response during the export operation.

After a successful export operation is performed, the Storage Port page displays any newly created ports and the Default storage port.


Each time another export operation is performed on the same storage system, ViPR Controller reads the Cinder export response. If the export response presents WWPNs that are not present in ViPR Controller, then ViPR Controller creates new storage ports for every new WWPN.

For steps to add WWPNs to FC connected third-party block storage ports, see the ViPR Controller CLI Reference Guide, which is available from the ViPR Controller Product Documentation Index.

iSCSI configured storage ports
The default storage port is used to support the storage system configuration until an export is performed on the storage system. At the time of the export, ViPR Controller reads the export response from the Cinder service, which includes the iSCSI IQN. ViPR Controller then modifies the default storage port's identifier with the IQN received from the Cinder export response.

Each time another export operation is performed on the same storage system, ViPR Controller reads the Cinder export response. If the export response presents an IQN that is not present in ViPR Controller, then ViPR Controller creates a new storage port.

Stand-alone EMC ScaleIO
Your stand-alone ScaleIO system must meet the following requirements and be configured as follows before you add the storage to ViPR Controller.

l Protection domains are defined.

l All storage pools are defined.

In addition, you must have the following:

l The IP address of the ScaleIO Gateway host.

l The port used to communicate with the ScaleIO REST API service.

l The username and password of a user who can access the Primary MDM.

EMC XtremIO
Before adding XtremIO storage to ViPR Controller, ensure that there is physical connectivity between hosts, fabrics, and the array.

EMC VNXe
Before you add VNXe storage systems to ViPR Controller, review the following information.

Table 6 Gather the EMC Unisphere required information

Setting Value

Unisphere host IP address

Unisphere user credentials

Unisphere host port (default is 443)

Create a sufficient number of storage pools for storage provisioning with ViPR Controller.


General file storage system requirements
When configuring a file storage system for ViPR Controller, verify the system meets these requirements.

l An NFS server must be configured in the storage system for ViPR Controller to create the NFS exports.

l A CIFS share must be configured in the storage system for ViPR Controller to create the CIFS shares.

l NETBIOS configuration needs to be done to access the share with the NETBIOS name. If NETBIOS is present, the mount point is constructed with the NETBIOS name instead of the IP address.

For EMC Isilon, NetApp 7-Mode, and NetApp Cluster-Mode
To add domain users to a share, you must configure the AD server on the storage system (CIFS server).

EMC® Data Domain®

Before adding Data Domain storage to ViPR Controller, configure the storage as follows.

l The Data Domain file system (DDFS) is configured on the Data Domain system.

l Data Domain Management Center (DDMC) is installed and configured.

l Network connectivity is configured between the Data Domain system and DDMC.

While adding Data Domain storage to ViPR Controller, it is helpful to know that:

l A Data Domain Mtree is represented as a file system in ViPR Controller.

l Storage pools are not a feature of Data Domain. However, ViPR Controller uses storage pools to model storage system capacity. Therefore, ViPR Controller creates one storage pool for each Data Domain storage system registered to ViPR Controller. For example, if three Data Domain storage systems are registered to ViPR Controller, there are three separate Data Domain storage pools, one for each registered Data Domain storage system.

EMC Isilon
Before adding EMC Isilon storage to ViPR Controller, configure the storage as follows.

Gather the required information
The following information is needed to configure the storage and add it to ViPR Controller.

Table 7 Isilon required information

Setting Value

IPv4 or IPv6 address

Port (default is 8080)

Credentials for the user to add Isilon storage systems to ViPR Controller, and to own all of the file systems created on the Isilon through ViPR Controller. This user must have root or administrator privileges to Isilon.

Configuration requirements

l SmartConnect is licensed and configured as described in Isilon documentation. Be sure to verify that:

n The names for SmartConnect zones are set to the appropriate delegated domain.

n DNS is in use for ViPR Controller and provisioned hosts are delegating requests for SmartConnect zones to the SmartConnect IP.

l SmartQuota must be licensed and enabled.

l The Isilon cluster is configured with a minimum of three nodes.

l Isilon clusters and zones must be reachable from the ViPR Controller VMs.

l When adding an Isilon storage system to ViPR Controller, you need to use either the root user credentials or an account created for ViPR Controller that has administrative privileges to the Isilon storage system.

l The Isilon user is independent of the currently logged in ViPR Controller user. All ViPR Controller operations performed on the Isilon storage system are executed as the Isilon user that is entered when the Isilon storage system is added to ViPR Controller.

NetApp 7-mode
Before adding NetApp 7-mode storage to ViPR Controller, configure the storage as follows.

l ONTAP is in 7-mode configuration.

l MultiStore is enabled (ViPR Controller supports 7-mode only with MultiStore).

l Aggregates are created.

l NetApp licenses for NFS, CIFS, and snapshots are installed and configured.

l vFilers are created and the necessary interfaces and ports are associated with them.

l NFS and CIFS are set up on the vFilers.

NetApp Cluster-mode
Before adding NetApp Cluster-mode storage to ViPR Controller, configure the storage as follows.

l ONTAP is in Cluster-mode configuration.

l Aggregates are created.

l NetApp licenses for NFS, CIFS, and snapshots are installed and configured.

l Storage Virtual Machines (SVMs) are created and the necessary interfaces and ports are associated with them.

l NFS and CIFS are set up on the SVMs.


l When discovering NetApp Cluster-mode storage systems with ViPR Controller, you must use the management IP address. You cannot discover NetApp Cluster-mode storage systems using a LIF IP address.

EMC VNX for File
Before adding VNX for File storage to ViPR Controller, you must verify that VNX for File meets specific requirements.

These are the requirements for adding VNX for File storage to ViPR Controller:

l Storage pools for VNX for File are created.

l Control Stations are operational and reachable from the ViPR Controller VMs.

l VNX SnapSure is installed, configured, and licensed.

EMC Elastic Cloud Storage
Before you add EMC Elastic Cloud Storage (ECS) to ViPR Controller, configure the ECS storage as follows:

l You can use any of the ECS IP addresses when adding the ECS to ViPR Controller.

l When adding the ECS, you must enter user credentials. The user credentials:

n Must have been created in the ECS with ECS System Admin privileges.

n The ECS user can be an ECS local or domain user. If it is a domain user, the ECS user must share the same LDAP or AD domain as the user logged in to ViPR Controller.

Note

The user logged into the ViPR Controller must be a domain user.

n Will be used by ViPR Controller to log in to the ECS and to perform operations on the ECS, such as discovering the ECS, creating a bucket, and assigning a bucket owner.

l Replication Groups must be configured on the ECS prior to running discovery or rediscovery from ViPR Controller.

l ViPR Controller only supports management of ECS local Replication Groups when ECS is configured in a GEO (multi-site) environment.

Note

Discovery of ECS buckets (i.e. ViPR Controller ingestion of buckets) is not supported.

l You must map an ECS Namespace to a ViPR Controller Tenant.

n The ECS namespace must have been created on the ECS prior to mapping it to a ViPR Controller tenant.

n The namespace is not discovered by ViPR Controller; you must type the namespace into ViPR Controller exactly as it was configured for the ECS.

n A ViPR Controller Tenant can only be configured with one ECS namespace.

n An ECS namespace can only be mapped to one ViPR Controller Tenant. The same namespace cannot be shared among different ViPR Controller Tenants.


RecoverPoint systems
Review the ViPR Controller requirements and the information necessary to add and configure an EMC RecoverPoint system to ViPR Controller.

RecoverPoint site information
This information is required to add a RecoverPoint system to ViPR Controller:

l RecoverPoint site management IPv4 or IPv6 address or hostname

l Port

l Credentials for an account that has the RecoverPoint administrator role to access the RecoverPoint site

Configuration requirements
To add a RecoverPoint system to ViPR Controller, do the following:

l Install and license the RecoverPoint systems.

l If ViPR Controller is not managing the SAN network, zone the RecoverPoint systems to the storage arrays and attach the RecoverPoint splitters.

l Establish IP connectivity between RecoverPoint and the ViPR Controller virtual appliance.

Fabric Managers (switches)
ViPR Controller supports Brocade and Cisco switches. System Administrators can review the ViPR Controller requirements and information necessary to add and configure Fabric Managers (switches) to ViPR Controller.

ViPR Controller switch and network terminology
ViPR Controller has three external interfaces - a REST API, a Command Line Interface (CLI), and a User Interface. The scripting and programming interfaces (API and CLI) use slightly different terminology to refer to the network elements of your Storage Area Network (SAN).

The terminology used in each ViPR Controller Interface is as follows:

l Cisco® switches and Brocade CMCNE management stations are called fabric managers in the user interface.

l In the ViPR REST API and the ViPR CLI, Brocade CMCNE management stations and Cisco switches are called network-systems. Cisco VSANs and Brocade fabrics are called networks.

Brocade
Your Brocade switch should meet the following system requirements and be configured as follows before adding the switch to ViPR Controller.

Software requirements
ViPR Controller requires that EMC Connectrix Manager Converged Network Edition (CMCNE) is installed. For supported versions, see the EMC ViPR Controller Support Matrix on the EMC Community Network (community.emc.com).

CMCNE can be downloaded from EMC Support Zone (https://support.emc.com).


CMCNE installation and configuration requirements

l CMCNE must be installed on a different host than the storage system SMI-S provider. CMCNE uses the same port as the SMI-S provider, and therefore causes a port conflict if CMCNE and the SMI-S provider are installed on the same machine.

l The SMI Agent (SMIA) must be installed and started as part of the CMCNE installation. Validating CMCNE SMI Agent installation on page 34 provides instructions to validate that the SMI Agent was installed correctly with CMCNE.

l CMCNE must have access to the switches with administrator privileges, and the account must be configured with privileges to discover SAN topology, and to activate, create, and delete zones and zonesets.

l Use the CMCNE UI to prediscover the fabrics.

l You must have created the necessary fabrics that will be used by ViPR, assigned the ports to the fabrics, and configured any ISL links needed to connect a fabric between multiple switches.

l It is highly recommended that you implement redundant ISL connections, so that the failure of a single ISL link does not partition fabrics.

l ViPR Controller has no restrictions on the number of fabrics.

l You do not need to create zones.

Validating CMCNE SMI Agent installation
The SMI Agent (SMIA) must have been installed and started as part of the CMCNE installation.

To validate that the SMI Agent was installed with the CMCNE installation, enter the CMCNE IP address in a browser to go to the EMC Connectrix Manager Converged Network Edition start page.

l If the SMIA provider was installed with CMCNE, a link to start the CMCNE SMIA configuration tool appears on the start page.


l If a link to start the CMCNE SMIA configuration tool does not appear on the start page, the SMI Agent was not installed with CMCNE.

If the SMIA is installed, and not functioning as expected, check the C:\<installation path>\cimom\server\logs\smia-provider.log file for any errors.

ViPR Controller support for Fibre Channel routing with Brocade switches
ViPR Controller includes support for Fibre Channel routing configurations using Brocade switches. Fibre Channel Routing (FCR) is designed to allow devices from separate fabrics to communicate without merging the fabrics.

Fibre Channel Routing allows separate autonomous fabrics to maintain their own fabric-wide services and provides the following benefits:

l Faults in fabric services, fabric reconfigurations, zoning errors, and misbehaving devices do not propagate between fabrics.

l Devices that cause performance issues in a fabric do not propagate those issues to other fabrics.

This support expands ViPR Controller network and switch support to include configurations that use a Brocade router to move traffic from one physical data center to another. The following figure shows an example of a ViPR Controller-supported data center configuration that uses Fibre Channel routing. The DCX switch acts as the router between the devices in Fabric 15 and the devices in Fabric 10. In this simple example, Host 5 in Fabric 10 can consume storage on VNX 1 in Fabric 15.


Brocade Fibre Channel routing configuration requirements
The following guidelines apply to Brocade Fibre Channel routing configurations.

l As a part of the routing setup, at least one LSAN zone must be created between each pair of fabrics you expect ViPR Controller to consider. When ViPR Controller discovers the topology, it assumes that if an LSAN zone exists between two fabrics, then routing between the fabrics is allowed. If there is no LSAN zone between the fabrics, ViPR Controller assumes that routing is not allowed between the fabrics. (A hedged zoning sketch follows this list.)

l CMCNE must discover the router and a seed switch in each participating fabric. In this example, CMCNE running on Host 1 would have to discover Brocade_Switch1, the DCX Switch, and Brocade Switch 2.

l ViPR Controller must successfully discover CMCNE. Choose Physical Assets > Fabric Managers to add and discover CMCNE.
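
If you create the LSAN zone from the Brocade FOS CLI rather than from CMCNE, one possible form is sketched below. The zone name, member WWPNs, and zoning configuration name are placeholders; an LSAN zone is an ordinary zone whose name begins with the lsan_ prefix, and your FOS release or zoning tool may expect a different workflow.

zonecreate "lsan_host5_vnx1", "<host5-initiator-wwpn>; <vnx1-target-wwpn>"
cfgadd "<existing-zone-config>", "lsan_host5_vnx1"
cfgenable "<existing-zone-config>"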

For more information, refer to the following documents:

l EMC Connectrix Manager Converged Network Edition User Guide for more information on installing and configuring CMCNE

l CMCNE Release Notes for a list of supported switch platforms and the minimum firmware revisions.


Cisco
You will need the following information, and your Cisco switches must be configured as follows, before the switch can be added and used in ViPR Controller.

Cisco switch login credentials
Obtain the login credentials to use when adding the Cisco switch to ViPR Controller. The account must have privileges to provision (configure the mode of zone, zonesets, and VSAN) and to show the FCNS database.

Configuration requirements
Configure the switch as follows before adding it to ViPR Controller:

l Enable SSH.

l Create the VSANs that will be used by ViPR Controller on the Cisco switch.

l Assign the appropriate interfaces to the VSAN database.

l Configure any ISL links and port channels for multiple switch networks, and validate that the ISL links are operational. Doing so ensures that the FCNS database is correctly distributed and current on any switches that you add to ViPR Controller. (An illustrative VSAN configuration sketch follows this list.)
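
As a rough illustration of the VSAN-related steps above (the VSAN number, name, and interface are placeholders, and your MDS NX-OS release may use slightly different syntax), creating a VSAN and assigning an interface to it from the switch CLI looks roughly like this:

switch# configure terminal
switch(config)# feature ssh
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name ViPR_VSAN_10
switch(config-vsan-db)# vsan 10 interface fc1/1
switch(config-vsan-db)# end
switch# copy running-config startup-config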

Configuration recommendations
It is highly recommended, but not required, that you:

l Implement redundant connections between all ISL connections, so that the failure of a single ISL link does not partition VSANs.

l Enable Enhanced Zoning so that simultaneous modifications of the zoning database by ViPR Controller and other applications do not overwrite each other's changes. If Enhanced Zoning cannot be enabled (not a recommended configuration), it is advisable to limit the number of fabric managers in ViPR Controller to one fabric manager per VSAN.

ViPR Controller support for Inter-VSAN Routing with Cisco switches
ViPR Controller includes support for Inter-VSAN Routing (IVR) configurations using Cisco switches. IVR is designed to allow hosts, storage arrays, and other devices residing in separate VSANs to communicate without merging the VSANs.

IVR allows separate autonomous VSANs to maintain their own VSAN-wide services and provides the following benefits:

l Accesses resources across VSANs without compromising other VSAN benefits.

l Transports data traffic between specific initiators and targets on different VSANs without merging VSANs into a single logical fabric.

l Establishes proper interconnected routes that traverse one or more VSANs across multiple switches. IVR is not limited to VSANs present on a common switch.

l Shares valuable resources across VSANs without compromise. Fibre Channel traffic does not flow between VSANs, nor can initiators access resources across VSANs other than the designated VSAN.

l Provides efficient business continuity or disaster recovery solutions when used in conjunction with FCIP.

l Is in compliance with Fibre Channel standards.

The following figure shows an example of a ViPR Controller-supported data center configuration that uses Inter-VSAN routing. In this example, the IVR switch acts as the router between the devices in VSAN 15 and the devices in VSAN 10. Host 5 in VSAN 10 can consume storage on VNX 1 in VSAN 15.


Isolated VSANs
ViPR Controller can recognize and create IVR zones between isolated VSANs, whether they exist on a single switch or span multiple physical switches.

In order to accomplish and complete IVR zoning, ViPR Controller requires that the proper transit VSANs exist between the IVR routed VSANs. Additionally, ViPR Controller requires at least one IVR zone be created between each of the IVR routed VSANs. This allows ViPR Controller to associate and consider the IVR path as available and as an active provisioning path across VSANs.

When ViPR Controller discovers the topology, it assumes that when an IVR zone exists between the two VSANs, then routing between the VSANs is allowed and configured properly to allow host to storage access.

If there is not a pre-instantiated IVR zone between the IVR routed VSANs, ViPR Controller assumes routing is not allowed between the VSANs and does not create IVR zones.

For example, ViPR Controller can support this configuration.


Figure 1 VSAN configuration supported in ViPR Controller

(Graphic from Cisco MDS 9000 Family NX-OS Inter-VSAN Routing Configuration Guide)

Cisco Inter-VSAN routing configuration requirements
The following guidelines apply to all Fibre Channel routing configurations.

l For each VSAN, ViPR Controller must successfully discover the Fabric Manager (Cisco switch).

l As a part of the routing setup, at least one IVR zone must be created between each pair of VSANs you expect ViPR Controller to consider. When ViPR Controller discovers the topology, it assumes that if an IVR zone exists between two VSANs, then routing between the VSANs is allowed. If there is no IVR zone between the VSANs, ViPR Controller assumes that routing is not allowed between the VSANs.

l ViPR Controller must also discover the IVR-enabled switches that enable routing between ViPR Controller managed VSANs in your data center.

Refer to the Cisco MDS 9000 Family NX-OS Inter-VSAN Routing Configuration Guide (http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/ivr/nxos/nxos_ivr.html) for more information on installing and configuring Cisco switches with IVR enabled, and creating IVR zones.

Vblock system requirements
Review the information and requirements necessary to configure ViPR Controller to manage your Vblock system.

Vblock system components that must be added to ViPR Controller
At a minimum, Vblock system UCS, storage systems, and Cisco MDS switches must be added to the ViPR Controller physical assets for ViPR Controller to perform bare metal provisioning operations on a Vblock system. Review the following configuration requirements and information prior to adding the component to ViPR Controller:

l Vblock compute systems on page 42 for UCS requirements.

l The section of this guide which provides the requirements for the type of storage system configured in your Vblock system.


l Fabric Managers on page 33 for Cisco MDS switch requirements.

After the Vblock system components are added to ViPR Controller, they are automatically discovered and registered in ViPR Controller. For details about how Vblock system components are discovered and registered in ViPR Controller, refer to Administering Vblock Systems with ViPR Controller on page 68.

Requirements to perform OS installation during provisioning
If you are planning to use ViPR Controller to perform OS installation on Vblock compute systems during ViPR Controller Vblock service operations, you will need to configure your compute image server, compute images, and Vblock compute systems in the ViPR Controller environment as described below, prior to running the service operation.

1. Load your compute images, which contain the OS installation files, onto an FTP site.

2. Deploy each compute image server you wish to use.

3. Configure the compute image server networks for each compute image server you are deploying.

4. Add the compute image server to the ViPR Controller physical assets, and repeat this step for each compute image server you deployed in step 2.

5. Add the compute image to the ViPR Controller physical assets, and repeat this step for each compute image you are using.

6. Associate each Vblock compute system with a compute image server.

Prior to performing any of these operations, review the requirements and information described in the following sections:

l Compute image server on page 40

l Compute image server networks on page 41

l Compute images on page 42

l Vblock compute systems on page 42

Compute image server requirements
A compute image server is required by ViPR Controller to deploy the compute images when you run a ViPR Controller Vblock System provisioning service, which performs operating system installation on the Vblock compute systems.

You can add one or more compute image servers to ViPR Controller.

While deploying a compute image server:

l You can use either the compute image server OVF file provided with ViPR Controller, or build a custom compute image server. Steps to deploy the OVF file, and the requirements to build a custom compute image server, are both described in the ViPR Controller Installation, Upgrade, and Maintenance Guide.

l The management and OS Installation networks must be configured for the compute image server. For details see: Compute image server networks on page 41.

When adding the compute image server to ViPR Controller:

l The compute image server can only be added to ViPR Controller using the ViPR Controller REST API or CLI. You cannot use the ViPR Controller UI to view or add the compute image server to ViPR Controller.

l If you are upgrading to ViPR Controller version 2.4, and had previously deployed a compute image server, your compute image server settings will stay the same; however, you will only be able to view or edit the settings from the ViPR Controller REST API or CLI.


l You will need the following information to add the compute image server to ViPR Controller:

n FQDN or IP address of the compute image server.

n IP address of the OS Installation network.

n User credentials for ViPR Controller to access the compute image server.

n Path to TFTPBOOT directory on the compute image server. Default is /opt/tftpboot/.

n Timeout value for OS installation (in seconds). Default value is 3600.

After adding the compute image server to ViPR Controller:

l Provisioning with OS Installation requires that one compute image server is associated with each compute system.

l A compute image server can be associated with multiple compute systems; however, a compute system can only be associated with one compute image server.

l The compute image server can only be associated with a compute system using the ViPR Controller REST API or CLI. You cannot use the ViPR Controller UI to add, remove, or replace the compute image server associated with a compute system.

l For any Vblock compute systems added to ViPR Controller version 2.4, you will need to associate a compute image server prior to using the compute system for provisioning with OS installation.

l If you are upgrading to ViPR Controller version 2.4, and had previously configured ViPR Controller for provisioning with OS Installation, the association between the compute image server and the compute system will remain after you upgrade ViPR Controller. However, you will have to use the ViPR Controller REST API or CLI to make any changes to the association between the compute image server and the compute system.

For specific steps to perform the operations described above, see either the ViPR Controller REST API Virtual Data Center Configuration Guide or the ViPR Controller CLI Reference Guide, which are available from the ViPR Controller Product Documentation Index.

Compute image server network requirements
A network administrator must configure two networks for each compute image server you are deploying.

Management network
The management network is required for communication between ViPR Controller and the compute image server.

Private OS Installation Network
The OS Installation Network is a private network for operating system (OS) installation. The OS Installation Network is used by ViPR Controller during provisioning for communication between the hosts and the ViPR Controller compute image server. Once the hosts and the ViPR Controller compute image server are connected over the OS Installation Network, the operating system installation is performed over the OS Installation Network. Once installation is complete, the OS Installation Network is removed from the hosts.

The Private OS Installation Network must be:

l Configured with its own private DHCP server. No other DHCP server can be configured on the OS Installation Network.


Note

The OS Image Server, which is provided with ViPR Controller, contains a dedicated DHCP server.

l Isolated from other networks to avoid conflicts with other VLANs.

Compute images
Compute images are the operating system (OS) installation files that can be installed by ViPR Controller onto Vblock compute systems.

The compute image is not packaged with ViPR Controller; however, ViPR Controller can be used to install the operating system on Vblock compute systems during ViPR Controller Vblock system provisioning operations. To support this functionality:

l You must have access to OS installation files that are supported by ViPR Controller. For supported versions, see the ViPR Controller Support Matrix.

l The OS installation files must be placed on an HTTP or FTP site.

l The compute image server, and the required networks, must have been deployed and added to ViPR Controller before the compute images can be added to ViPR Controller. For details see: Compute image server on page 40.

l When adding the compute image to ViPR Controller, you will need the HTTP or FTP address where the compute image resides. If a user name and password are required to access the site, specify them in the URL, for example: ftp://username:password@hostname/ESX/VMware-Installer-5.0-20.x86_64.iso.

l You can add the compute images to ViPR Controller using the ViPR Controller UI, REST API, or CLI.

l Once a compute image is added to ViPR Controller, the compute image is automatically loaded onto each compute image server that has been added to ViPR Controller, and the compute image name is stored in the ViPR Controller database.

l If a compute image is removed from ViPR Controller, it is removed from all of the compute image servers.

For specific steps to perform the operations described above, see one of the following documents, depending on which interface you are using, which are available from the ViPR Controller Product Documentation Index:

l ViPR Controller User Interface Virtual Data Center Configuration Guide

l ViPR Controller REST API Virtual Data Center Configuration Guide

l ViPR Controller CLI Reference Guide

Vblock compute systems
Review the ViPR Controller requirements and information necessary to add and configure Vblock compute systems in ViPR Controller.

To prepare the Vblock compute systems for ViPR Controller discovery:

l Service Profile Templates must be configured in UCSM for ViPR Controller to apply to the compute virtual pools when a cluster is created by ViPR Controller. Discuss which service profile templates should be used to provision with your UCS administrator. For the ViPR Controller configuration requirements for UCS service profile templates, see ViPR Controller requirements for Service Profile Templates on page 58.


l While adding the compute system to ViPR Controller, you will need the IP address and user credentials with administrator privileges to the UCSM being used to manage the UCS.

To prepare the compute system for provisioning with OS Installation:

l If you have added the compute system to ViPR Controller 2.4, you will need to associate one compute image server with each compute system. The compute image server can only be associated with a compute system using the ViPR Controller REST API or CLI. You cannot use the ViPR Controller UI to add, remove, or replace the compute image server associated with a compute system.

l If you are upgrading to ViPR Controller version 2.4, and had previously configured ViPR Controller for provisioning with OS Installation, the association between the compute image server and the compute system will remain after you upgrade ViPR Controller. However, you will have to use the ViPR Controller REST API or CLI to make any changes to the association between the compute image server and the compute system.

l You will need the VLAN ID for the OS Install Network.

For specific steps to perform the operations described above, see one of the following documents, depending on which interface you are using, which are available from the ViPR Controller Product Documentation Index:

l ViPR Controller User Interface Virtual Data Center Configuration Guide

l ViPR Controller REST API Virtual Data Center Configuration Guide

l ViPR Controller CLI Reference Guide

Hosts
ViPR Controller Tenant Administrators can review the ViPR Controller requirements and information necessary to add and configure hosts in ViPR Controller.

There are two ways to add hosts to ViPR Controller:

l Discoverable - once you have added the host, ViPR Controller automatically discovers the host, its initiators, and Windows clusters, and registers them to ViPR Controller. Only AIX®, AIX VIO, Linux®, and Windows hosts can be added as discoverable. Only Windows clusters can be automatically discovered in ViPR Controller.

l Undiscoverable - ViPR Controller does not discover or register the host or its initiators. Any host that is not an AIX, AIX VIO, Linux, or Windows host is added to ViPR Controller as undiscoverable. Optionally, AIX, AIX VIO, Linux, and Windows hosts can be added as undiscoverable as well. When an undiscoverable host has been added to ViPR Controller, you must manually add and register the host initiators before using the host in a service operation.

AIX hosts and AIX VIO Servers
AIX hosts must be configured as follows to be discovered by ViPR Controller, and for ViPR Controller provisioning.

l Either EMC PowerPath or AIX default MPIO (not both) must be enabled.

l The EMC Inquiry (INQ) utility must be installed to match the volume World Wide Names (WWNs) to the host disks (hdisks).

l SSH must be installed and configured on the AIX hosts.


HP-UX
HP-UX hosts are added to ViPR Controller as undiscoverable. The HP-UX® operating system type option:

l Sets the Volume Set Addressing (VSA) flag, which is required for exporting EMC VMAX and VPLEX volumes to HP-UX hosts.

l Is required to use the Host Mode Option when provisioning with HDS storage systems.

When you add an HP-UX host to ViPR Controller, you will still need to manually add and register the host initiators in ViPR Controller.

Linux
Linux hosts must be configured as follows to be discovered by ViPR Controller, and used for ViPR Controller provisioning.

l SSH and LVM are enabled. ViPR Controller uses SSH for Linux hosts.

l EMC PowerPath or native Linux multipathing software is installed. Refer to the EMC PowerPath for Linux Installation and Configuration Guide and the SuSE Linux Enterprise Server (SLES): Storage Administration Guide, under "Configuring the System for Multipathing."

l Time synchronization is configured.

l In some cases, it may be necessary to install lsb_release. If host discovery fails due to compatibility, and logs indicate that the lsb_release command is not found, the package that includes that command must be installed. (An example is sketched after this list.)
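
The package that provides the lsb_release command varies by distribution. As an assumption-laden example (verify the package name for your distribution and release), it is typically installed as follows:

# RHEL or CentOS (package name assumed)
yum install redhat-lsb-core

# SLES (package name assumed)
zypper install lsb-release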

Linux user requirements
When ViPR Controller storage is attached to a Linux host, ViPR Controller needs to run commands on the host. To access the host, ViPR Controller uses the credentials entered at the time the host is added to ViPR Controller. These are usually the credentials for the root account. If you do not wish to give ViPR Controller root access to a Linux host, it is recommended to give the sudo user ALL privileges to run the commands ViPR Controller requires.

Using sudo for Linux host discovery
When using a sudo user for host discovery, do the following:

1. Make sure tty is disabled for this user in /etc/sudoers.

2. Modify /etc/sudoers by adding the line Defaults:<user1> !requiretty under Defaults requiretty. For example:

Defaults requiretty
Defaults:smith !requiretty
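
As a minimal sketch of the corresponding sudoers grant (using the same example user, smith; adapt the user name and command scope to your security policy), the ALL-privileges entry mentioned above could look like this:

smith ALL=(ALL) ALL
Defaults:smith !requiretty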

Windows
Windows hosts must be configured as follows to be discovered by ViPR Controller, and used for ViPR Controller provisioning.

Register the Windows host with a domain user account.
For each ViPR Controller node, add the machine domain name to the /etc/hosts file. For example, for a machine named lglap025 under the AD domain of provisioning.bourne.local, add an entry mapping the IP address of the machine to lglap025.provisioning.bourne.local in the /etc/hosts file.
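
As a sketch, the resulting /etc/hosts entry on each ViPR Controller node would look like the following, with the IP address as a placeholder:

<IP-address-of-the-machine>   lglap025.provisioning.bourne.local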


In this example, you would specify lglap025.provisioning.bourne.local in the host field and <DOMAIN>\user in the user name field when configuring a Windows host in ViPR Controller.

Windows Remote Management (WinRM) must be enabled.
Refer to Configuring WinRM on a Windows host for ViPR Controller on page 45.

Either EMC PowerPath or Microsoft MPIO (not both) must be enabled.
For details see either the EMC PowerPath and PowerPath/VE for Microsoft Windows Installation and Administration Guide or the Windows: Microsoft Multipath I/O Step-by-Step Guide.

Time synchronization is configured.
For host discovery, if using LDAP or Active Directory domain account credentials, the domain user credentials can be in the same domain or in a different domain (but within the same forest) from where the Windows host is located. For example, if a Windows host is in the child1.example.com domain and the user account is in the parent domain (example.com), ViPR Controller can discover the Windows host in the child1.example.com domain using the user account from the parent domain (example.com).

Configuring WinRM on a Windows host for ViPR Controller
Configures a Windows host to allow ViPR Controller to run commands on it.

Before you begin

l You must be logged in to the Windows host as administrator.

l For the ViPR Controller server to connect to Windows remote hosts, the host must accept remote Windows PowerShell commands. You can do this by enabling Windows remote access over HTTP.

The commands in the following procedure are executed in the Windows command prompt. If you are using Windows PowerShell to perform these commands, surround the values in single quotes. For example: winrm set winrm/config/service/auth '@{Basic="true"}'

Procedure

1. At an administrator command prompt on the Windows host, issue the following command:

winrm quickconfig

This starts up a listener on port 5985. The port on which you start the listener must be consistent with the port that you configure for the host in the host asset page.

2. You may need to make some configuration changes depending on how you want to connect to the host.

l If you want ViPR Controller to connect to the host as a local user, you need to:

a. Check the winrm settings by running:

winrm get winrm/config/service

Basic Authentication and AllowUnencrypted must be set to true.

b. If basic authentication is not set to true, run:

winrm set winrm/config/service/auth @{Basic="true"}

c. If AllowUnencrypted is not set to true, run:

winrm set winrm/config/service @{AllowUnencrypted="true"}


d. Add the host to the ViPR Controller, Physical Assets > Hosts page.

l If you want ViPR Controller to connect to the host as a domain user, you need to:

a. Ensure Kerberos is enabled. You can check using:

winrm get winrm/config/service

b. If you need to enable Kerberos, run:

winrm set winrm/config/service/auth @{Kerberos="true"}

c. A System Administrator must ensure that the domain has been configured as an authentication provider in ViPR Controller (Security > Authentication Providers).

d. A Tenant Administrator adds the host to ViPR Controller (Physical Assets > Hosts page). The credentials you supply for the host are of the form:

domain\username

3. Check that the host is displayed as valid in the table.

After you finish

After ViPR Controller is deployed, you can check that the host is displayed as valid in the Physical Assets > Hosts page. If you receive the following message, WinRM may not be enabled or configured properly, or there may be a network problem:

Failed to connect to host. Please ensure the connection details are correct. [Error connecting: Connection refused]

VMware® vCenter
Review the ViPR Controller requirements and information necessary to add and configure vCenter in ViPR Controller.

For the ViPR Controller user roles required to perform this operation, see ViPR Controller user role requirements on page 9.

vCenter role requirements
The role, which will be used by ViPR Controller to access vCenter, must have at least the following privileges enabled:

Datastore privileges:

l Allocate space

l Browse datastore

l Configure datastore

l Remove datastore

Host privileges:

l CIM

l CIM interaction

l Configuration


Required VMware® VirtualCenter role privileges
ViPR Controller requires that you have certain role privileges in VMware VirtualCenter to perform vCenter discovery and to create datastores.

Required datastore privileges

l Datastore.Allocate space

l Datastore.Browse datastore

l Datastore.Configure datastore

l Datastore.Remove datastore

Required host privileges

l Host.CIM.CIM Interaction

l Host.Configuration.Storage partition configuration


CHAPTER 4

Virtual Asset Requirements and Information

This chapter contains the following topics:

l Overview...............................................................................................................50

l Plan to build your virtual array...............................................................................50

l Block storage configuration considerations...........................................................53

l File storage configuration considerations..............................................................57

l Object storage configuration considerations.........................................................58

l ViPR requirements for service profile templates.....................................................58


Overview
ViPR Controller System Administrators can review required storage information before configuring the various storage systems in ViPR Controller virtual arrays and virtual pools, as well as understand how ViPR Controller works with the storage system element managers after the volumes or file systems are under ViPR Controller management.

Plan to build your virtual array
Review the following before you configure a virtual array.

l To learn about the components that make up a virtual array, and to see examples of different ways you can abstract from the storage systems and storage components in the virtual array, see the ViPR Controller Concepts article.

l To decide the type of SAN zoning to set on your virtual array, see SAN zoning requirements on page 50.

l To decide which is the best method to set up the virtual array, see Plan how to add the physical assets to the virtual array on page 50.

l If adding a Vblock system to ViPR Controller, see Virtual array requirements for Vblock system services on page 52.

SAN zoning requirements
In the virtual array, you define whether ViPR Controller automates SAN zoning at the time the storage is exported to the host from ViPR Controller, or whether the SAN zoning is handled manually outside of ViPR Controller operations. This discussion explains what to do when you choose manual SAN zoning.

If you choose manual SAN zoning:

l If there is an existing zone for the Host and Array: After the ViPR provisioning operation completes, check the Port Group within the Masking View to identify the FA ports that ViPR selected for the provisioning request. Compare the FA ports in the zone to the FA ports in the Port Group. If they match, no further action is required. If they do not match, reconfigure the zone to use the same FA ports. Alternatively, a new zone can be created.

l If there is no existing zoning for the Host and Array: After the ViPR provisioning operation completes, check the Port Group within the Masking View to identify the FA ports that ViPR selected for the provisioning request. Create a zone with the appropriate initiator and target ports.

Plan how to add the physical assets to the virtual array
At a minimum, a virtual array must include one network and one storage system connected to the network.

When configuring the virtual array, you have the option to create the virtual array by adding either:

l Storage systems to the virtual array

l Storage ports to the virtual array

For instructions on creating a virtual array, see the EMC ViPR Controller User Interface Virtual Data Center Configuration Guide.


Add storage systems to the virtual array
You may want to add an entire storage system to the virtual array if you are planning to manage an entire storage system, or multiple storage systems, in a single virtual array.

When you add an entire storage system to the virtual array, ViPR Controller automatically adds all of the registered networks and storage ports associated with the storage system to the virtual array. In the following example, when Storage System XYZ was added to the virtual array, all the storage ports, Network A, Network B, VSAN 1, VSAN 2, VSAN 3, and VSAN 4 are added to the virtual array.

Figure 2 Virtual array created by adding storage systems


When you add an entire storage system to a virtual array, you will need to go back into the virtual array and remove any resources you don't want ViPR Controller to use.

Add storage ports to the virtual array
If you want to partition a single storage system into multiple virtual arrays, for example, to allocate some of the storage system resources for testing and some for production, it may be more useful to add the storage ports first to create the virtual array. If you choose to create the virtual array by first adding the storage ports, ViPR Controller will add only the networks and storage systems associated with the storage ports, and you will start out with a more defined inventory in your virtual array. The following figure demonstrates how two virtual arrays were created from a single storage system by adding the ports first when creating the Production and Test virtual arrays.


Figure 3 Virtual array created by adding ports

The Production virtual array was created by adding SP2 and SP3, which automatically adds Storage System XYZ, VSAN 2, VSAN 3, Network A, and Network B to the virtual array. While VSAN 1 is part of Network A, it is not added to the virtual array, because no storage ports associated with VSAN 1 were selected to add to the virtual array.

The Test virtual array was created by adding SP4 and SP5, which automatically adds Storage System XYZ, VSAN 4, VSAN 5, Network B, and Network C to the virtual array. While VSAN 6 is part of Network C, it is not added to the virtual array, because no storage ports associated with VSAN 6 were selected to add to the virtual array.

Furthermore, this image demonstrates how a network can be shared across two different virtual arrays. Since a storage port associated with Network B was added to each of the virtual arrays, Network B was added to both virtual arrays.

Virtual array configuration considerations for ECS systems
When creating a virtual array for ECS object storage, you will need to configure an IP network and add the ECS ports to the network.

The network must be configured with the same management storage port that was created by ViPR Controller when the ECS was added to ViPR Controller.

If you will be adding multiple ECS systems to a single virtual array, you can use the same network by adding the management port for each ECS to the same network in the virtual array.

You can configure the network in the ViPR Controller physical assets prior to creating the virtual array or while creating the virtual array.

Virtual Array requirements for Vblock system services
For Vblock systems, storage must be accessible to compute systems through the virtual array. Vblock systems configured using the VCE logical build guide will have networks configured that connect the Cisco Unified Computing System™ (UCS) compute system to the storage via the SAN switches.

In ViPR Controller, virtual arrays should be created just as you would for any non-Vblock system. The networks that are defined in the virtual arrays will then determine whether the UCS systems have visibility to ViPR Controller storage.

The most effective approach is to discover all the Vblock system physical assets before defining virtual arrays. After discovering all components, consult with the UCS administrator to determine which networks (VSANs) will be used on a given Vblock system. Use those networks to define the ViPR Controller virtual arrays. On less complicated Vblock system configurations, for example, a single Vblock system, simply adding the storage system to the virtual array may be enough. Once the virtual arrays are defined, they will be used by ViPR Controller for the following:

l ViPR Controller will automatically determine which UCS compute systems are available to compute virtual pools based on the selection of virtual arrays.

l ViPR Controller will automatically determine which blades to use to provision hosts based on the virtual arrays and compute virtual pools.

ViPR Controller makes these determinations by calculating which UCS compute systems have visibility to storage through the networks in the virtual arrays.

If working with updating service profile templates
When using updating service profile templates, it is recommended to create a dedicated virtual array that:

l Includes only the specific storage arrays that are intended to be used with the updating service profile template.

l Includes only the specific storage ports that are intended to be used with the updating service profile template.

Block storage configuration considerationsBefore you create virtual arrays and virtual pools for block storage in ViPR Controller,review the following sections for storage system specific configuration requirements andrecommendations:

l Hitachi Data System (HDS) on page 53

l EMC VMAX on page 54

l EMC VMAX3 on page 55

l EMC VNX for Block on page 56

l EMC VNXe for Block on page 56

l EMC VPLEX on page 56

l Third-party block (OpenStack) storage systems on page 56

Hitachi Data Systems
Review the following configuration requirements and recommendations before virtualizing your Hitachi Data Systems (HDS) in ViPR Controller.

Virtual pool considerations
ViPR Controller provides auto-tiering for the six standard HDS auto-tiering policies for Hitachi Dynamic Tiering (HDT) storage pools.

The HDS auto-tiering policy options in ViPR Controller are:


l All (policy number 0, HDS level All): Places the data on all tiers.

l T1 (policy number 1, HDS level 1): Places the data preferentially in Tier 1.

l T1/T2 (policy number 2, HDS level 2): Places the data in both tiers when there are two tiers, and preferentially in Tiers 1 and 2 when there are three tiers.

l T2 (policy number 3, HDS level 3): Places the data in both tiers when there are two tiers, and preferentially in Tier 2 when there are three tiers.

l T2/T3 (policy number 4, HDS level 4): Places the data in both tiers when there are two tiers, and preferentially in Tiers 2 and 3 when there are three tiers.

l T3 (policy number 5, HDS level 5): Places the data preferentially in Tier 2 when there are two tiers, and preferentially in Tier 3 when there are three tiers.

VMAX
Review the following configuration requirements and recommendations before virtualizing your VMAX system in ViPR Controller.

Virtual Pool configuration requirements and recommendations
When VMAX is configured with Storage Tiers and FAST Policies:

l Storage Tier and FAST Policy names must be consistent across all VMAX storage systems.

l For more details about ViPR Controller management with FAST policies see: ViPR Controller Integration with VMAX and VNX Storage Systems Guide.

l Set these options when you build your virtual pool:

l RAID Level: Select which RAID levels the volumes in the virtual pool will consist of.

l Unique Auto-tiering Policy Names: VMAX only. When you build auto-tiering policies on a VMAX through Unisphere, you can assign names to the policies you build. These names are visible when you enable Unique Auto-tiering Policy Names. If you do not enable this option, the auto-tiering policy names displayed in the Auto-tiering Policy field are those built by ViPR.

l Auto-tiering Policy: The Fully Automated Storage Tiering (FAST) policy for this virtual pool. FAST policies are supported on VMAX, VNX for Block, and VNXe. ViPR chooses physical storage pools to which the selected auto-tiering policy has been applied. If you create a volume in this virtual pool, the auto-tiering policy specified in this field is applied to that volume.

l Fast Expansion: VMAX or VNX for Block only. If you enable Fast Expansion, ViPR creates concatenated meta volumes in this virtual pool. If Fast Expansion is disabled, ViPR creates striped meta volumes.

l Host Front End Bandwidth Limit: Set this value to 0 (unlimited). This field limits the amount of data that can be consumed by applications on the VMAX volume. Host front end bandwidth limits are measured in MB/s.

l Host Front End I/O Limit: Set this value to 0 (unlimited). This field limits the amount of I/O that can be consumed by applications on the VMAX volume. Host front end I/O limits are measured in IOPS.

VMAX3
Review the following configuration requirements and recommendations before virtualizing your VMAX3 system in ViPR Controller.

Set these options when you build your virtual pool:

Table 8 VMAX3 virtual pool settings

l Provisioning Type: Thin. VMAX3 does not support thick volumes.

l Protocols: FC.

l System Type: EMC VMAX.

l Thin Volume Preallocation: 0 or 100. Other values would filter out the VMAX3 SRP pools. 0 - volumes allocated using this pool are fully thin. 100 - volumes allocated using this pool are fully allocated.

l Unique Auto-tiering Policy Names: Enabled.

l Auto-tiering Policy: VMAX3 is delivered with pre-defined Service Level Objectives (SLOs) and workloads. You can specify the workload and SLO you want applied to your volume during provisioning.

l Expandable: Expanding VMAX3 volumes through ViPR Controller is not supported.

l Host Front End Bandwidth Limit: Set this value to 0 (unlimited). This field limits the amount of data that can be consumed by applications on the VMAX3 volume. Host front end bandwidth limits are measured in MB/s.

l Host Front End I/O Limit: Set this value to 0 (unlimited). This field limits the amount of I/O that can be consumed by applications on the VMAX3 volume. Host front end I/O limits are measured in IOPS.


EMC VNX for Block
Review the following configuration considerations before adding VNX for Block storage to the ViPR Controller virtual pools.

Virtual pool configuration considerations

l Fibre Channel networks for VNX for Block storage systems require an SP-A and SP-B port pair in each network, otherwise virtual pools cannot be created for the VNX for Block storage system.

l Prior to ViPR Controller version 2.2, if no auto-tiering policy was set on the virtual pool created from VNX for Block storage, ViPR Controller created volumes from the virtual pool with auto-tiering enabled. Starting with ViPR Controller version 2.2, if no policy is set on the virtual pool created for VNX for Block storage, ViPR Controller will create volumes from the virtual pool with "start high then auto-tier" enabled on the new volumes created in the same virtual pool.

EMC VNXe for Block
When exporting a VNXe for Block volume to a host using ViPR Controller, it is recommended that the host is configured with either Fibre Channel only or iSCSI connectivity to the storage.

EMC VPLEX
Review the following configuration requirements and recommendations before virtualizing your VPLEX storage in ViPR Controller.

Virtual array configuration requirements and recommendations
While creating virtual arrays, manually assign the VPLEX front-end and back-end ports of each cluster (cluster-1 or cluster-2) to a virtual array, so that each VPLEX cluster is in its own ViPR Controller virtual array.

Virtual pool configuration requirements and recommendations
When running VPLEX with VMAX, the Storage Tier and FAST Policy names must be consistent across all VMAX storage systems.

Third-party block (OpenStack) storage systems
Review the following configuration requirements and recommendations before virtualizing third-party block storage in ViPR Controller.

Virtual pool recommendations and requirements
If the discovered storage system is configured for multipathing, you can increase the values set in the virtual pool after the target ports are detected by ViPR Controller.

Block storage systems under ViPR Controller management
Once a volume is under ViPR Controller management and is provisioned or exported to a host through a ViPR Controller service, do not use the storage system element manager to provision or export the volume to hosts. Using only ViPR Controller to manage the volume prevents conflicts between the storage system database and the ViPR Controller database, and avoids concurrent lock operations being sent to the storage system. Some examples of failures that could occur when the element manager and the ViPR Controller database are not synchronized are:


l If you use the element manager to create a volume, and at the same time another user tries to run the "Create a Volume" service from ViPR Controller on the same storage system, the storage system may be locked by the operation run from the element manager, causing the ViPR Controller "Create a Volume" operation to fail.

l After a volume is exported to a host through ViPR Controller, the masking view that ViPR Controller used during the export is changed on the storage system through the element manager. When ViPR Controller attempts to use the masking view again, the operation fails because the masking view recorded in the ViPR Controller database is no longer the same as the actual masking view reconfigured on the storage system.

However, you can continue to use the storage system element manager to manage storage pools, add capacity, and troubleshoot ViPR Controller issues.

File storage configuration considerations
Review the following information before you add file storage systems to ViPR Controller virtual arrays and virtual pools, and before you use the file systems in a ViPR Controller service.

Virtual pool configuration settings for all file storage systems
File systems are only thinly provisioned. You must set the virtual pool to Thin when adding file storage to the virtual pool.

File storage systems under ViPR Controller management
Once a filesystem is under ViPR Controller management, and has been provisioned or exported to a host through a ViPR Controller service, you should no longer use the storage system element manager to provision or export the filesystem to hosts. Using only ViPR Controller to manage the filesystem will prevent conflicts between the storage system database and the ViPR Controller database, as well as avoid concurrent lock operations being sent to the storage system. You can, however, continue to use the storage system element manager to manage storage pools, add capacity, and troubleshoot ViPR Controller issues.

Specific storage system configuration requirements
Before you create virtual arrays and virtual pools for file storage in ViPR Controller, review the following sections for storage system specific configuration requirements and recommendations:

l EMC Data Domain on page 57

l EMC VNX for File on page 58

EMC® Data Domain®

Review the following information before virtualizing the Data Domain storage in the ViPR Controller virtual arrays and virtual pools.

Virtual pool configuration requirements and considerations
When creating the file virtual pool for Data Domain storage, the Long Term Retention attribute must be enabled.

While configuring the file virtual pools for Data Domain storage systems, it is helpful to know that:

l A Data Domain Mtree is represented as a file system in ViPR Controller.

l Storage pools are not a feature of Data Domain. However, ViPR Controller uses storage pools to model storage system capacity. Therefore, ViPR Controller creates one storage pool for each Data Domain storage system registered to ViPR Controller. For example, if three Data Domain storage systems were registered to ViPR Controller, there would be three separate Data Domain storage pools, one storage pool for each registered Data Domain storage system.

EMC VNX for File
When configuring a VNX file virtual pool that uses the CIFS protocol, there must be at least one CIFS server on any one of the physical data movers.

Object storage configuration considerations
Review the following information before you add object storage systems to ViPR Controller virtual arrays and virtual pools, and before you use the object storage systems in a ViPR Controller service.

ECS
While creating Object Virtual Pools for ECS systems:

l You must assign the pool to one or more virtual arrays which contain one or more ECS systems.

l The Maximum Retention Period sets the maximum retention period that the buckets created with this virtual pool cannot exceed.

l If you assign the virtual pool to a tenant, the ViPR Controller tenant must have been configured with an ECS namespace.

ViPR requirements for service profile templates
The following sections explain the requirements to configure a service profile template for ViPR Controller provisioning operations.

Note

If existing service profile templates do not match the following requirements, clone one of the service profile templates to create a new service profile template and alter the settings as required by ViPR Controller.

General properties

l The service profile template must not be associated with a server pool. Blade selection is performed by the ViPR Controller Compute Virtual Pools.

l UUID assignment must be from a valid UUID Suffix Pool set up in the UCS with available addresses.

Storage
ViPR Controller currently supports Fibre Channel boot for UCS servers. The following lists the Fibre Channel requirements:

l World Wide Node Name (WWNN) assignment must be from a valid WWNN pool set up in the UCS with available addresses.

l The Local Disk Configuration Policy must be set to a local disk configuration policy where the Mode is set to No Local Storage.

l There must be at least one vHBA interface.


l For each vHBA, the World Wide Port Name (WWPN) assignment must be from a valid WWPN pool set up in the UCS with available addresses.

l The VSAN set on each vHBA must be a valid network discovered by ViPR Controller. The VSAN must match one of the networks in a ViPR Controller virtual array.

l Policy settings on the vHBAs are not set by ViPR Controller provisioning and are at the administrator's discretion.

Network

l Policy settings on the vNICs are not set by ViPR Controller provisioning and are at the administrator's discretion.

l There must be at least one vNIC interface.

l For each vNIC, the MAC Address Assignment must be from a valid MAC pool that was set up in the UCS with available addresses.

l Each vNIC must have at least one VLAN.

Boot Policy and Boot Order
There are no Boot Policy requirements. ViPR Controller ignores all Boot Policy settings in the service profile template and overwrites any existing parameters when it creates service profiles.

Policies
ViPR Controller does not set any policies. The UCS administrator is responsible for setting the policies.

Updating service profile templates
If provisioning with updating service profile templates:

l The boot policy of the updating service profile template must specify SAN as the first boot device.

l If the boot policy of the updating service profile template enforces vNIC and vHBA names, the names of the vNICs and vHBAs in the service profile template must match those in its boot policy.

l The compute virtual pool with which the updating service profile template is being associated must be associated with a virtual array that has storage ports on the VSANs that the vHBAs of the template use.

l If the boot policy of the updating service profile template specifies SAN boot target WWPNs, then the compute virtual pool that the template is associated with must be associated with a virtual array that includes those storage ports on the appropriate VSANs.


CHAPTER 5

Selecting Storage Pools for Provisioning

This chapter contains the following topics:

l Overview............................................................................................... 62
l Understanding pool utilization and subscription................................... 62
l Selection process for block storage....................................................... 63
l Selection process for file storage........................................................... 65


Overview
Learn how ViPR Controller automatically selects block and file physical storage pools for provisioning.

ViPR Controller runs filters against a set of storage pools that cover the physical storage systems associated with the virtual pools in the virtual arrays. If the storage pool meets all the filter criteria, it becomes a candidate for provisioning.

Understanding pool utilization and subscription
ViPR Controller uses pool utilization and subscription when evaluating storage pools for provisioning.

The storage pool's capacity and subscription parameters are evaluated for pool utilization. Pool utilization is a calculation of the space that is currently being used compared to what is available, expressed as a percentage. If the storage pool is below the utilization threshold, it is considered a match for provisioning.

Thick pool allocations are straight percentage values from 0 to 100%. Thin pools are allocated on demand, and you can create volumes in excess of the pool size. A subscription percentage can be over 100%, implying that the aggregate size of the volumes provisioned from the pool exceeds the pool's capacity. This over-subscription is limited to a certain percentage of the pool.

In the ViPR Controller user interface, you set the pool utilization and thin pool subscription on the Configuration Properties panel, shown below. To access this panel, select the Settings icon and then Configuration > Controller.

If you use the default Pool Utilization of 75% and the default Thin Pool Subscription of 300% on the Configuration Properties panel, the following occurs:

l Thick and thin storage pools: If the space used is 75% or more of the available capacity, the physical storage pool is not a match for provisioning. If the thin storage pool is utilized more than the utilization limit, it is not a match for provisioning.

l Thin storage pools: If the space subscribed is more than 300%, the physical thin storage pool is not a match for provisioning. A sketch of these checks follows this list.
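The following Python sketch summarizes these two checks. It is illustrative only and is not ViPR Controller code; the function and field names are assumptions, and the thresholds default to the values described above (75% pool utilization, 300% thin pool subscription).

# Illustrative sketch of the pool utilization and thin pool subscription checks.
def pool_is_match(total_gb, used_gb, subscribed_gb, is_thin,
                  utilization_limit=75, thin_subscription_limit=300):
    """Return True if the pool passes the utilization and subscription checks."""
    utilization = (used_gb / total_gb) * 100
    if utilization >= utilization_limit:
        return False          # pool is too full to be a provisioning match
    if is_thin:
        subscription = (subscribed_gb / total_gb) * 100
        if subscription > thin_subscription_limit:
            return False      # thin pool is over-subscribed beyond the limit
    return True

# Example: a thin pool that is 60% utilized but 350% subscribed is filtered out.
print(pool_is_match(1000, 600, 3500, is_thin=True))   # prints False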


Figure 4 ViPR Controller Configuration Properties panel

Selection process for block storage
ViPR Controller runs filters at volume creation and volume provisioning time. If the storage pool meets all the filter criteria, it becomes a candidate for block volume placement.

Filtering process
The following table explains the filters involved in the selection process.

Table 9 Filters for creating block volumes

l activePoolMatcher (Active Storage Pools): Filters out all inactive pools, such as storage arrays that are unreachable.

l neighborhoodsMatcher (Virtual Array): Filters out all storage pools that are not part of the virtual array.

l protectionMatcher (Volume Protection): RecoverPoint and High Availability. If MetroPoint is enabled, looks for the High_availability_RP parameter value set on the pool. If MetroPoint is not enabled and the request has a mirror attribute but the pool does not support mirror capabilities, the pool is filtered out.

l maxResourcesMatcher (Virtual Pool Allocation): Checks the setting for the number of resources that can be created in the virtual pool, and filters out all storage pools exceeding this number.

l capacityMatcher (Storage Pool Capacity): Filters out the pools that do not have enough storage capacity.

l vnxBlockFastPolicyMatcher (VNX Array Tiering Policy): (VNX block volumes only) Verifies the FAST policy.

l storageSystemMatcher (Storage System Affinity): (Multi-volume consistency only) Checks if the pools in the virtual pool are in the virtual array. If not, filters them out. Returns the highest capacity storage pool for the consistency pool.

l vmaxFASTInitialTierSelectionMatcher (VMAX Array Tiering Policy): (VMAX only) Determines the storage pool on which to initially place the volume (initial placement tier). This initial placement tier is based on the drive type and RAID level settings in the virtual pool. If these parameters were set, retrieves the storage pool with the same drive type and RAID level. This filter runs at volume provisioning time.

l poolPreferenceBasedOnDriveMatcher (Storage Pool Drive Type): (VMAX only) Matches the drive type of the storage pool against the virtual pool drive type setting. If the drive type is not set in the virtual pool, uses the FC pool by default for volume provisioning. If unable to find the FC pool, uses the SAS, NL_SAS, SATA, and SSD pools (in that order). This filter runs at volume provisioning time.
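
The filters above are applied as a chain: a storage pool must pass every matcher to remain a candidate. The following Python sketch illustrates that chain-of-matchers pattern only; the matcher names mirror the table, but the data model and matching logic shown here are simplified assumptions, not ViPR Controller internals.

# Simplified illustration of running a chain of matchers over discovered pools.
def active_pool_matcher(pool, request):
    return pool["active"]                      # drop unreachable or inactive pools

def neighborhoods_matcher(pool, request):
    return request["virtual_array"] in pool["virtual_arrays"]

def capacity_matcher(pool, request):
    return pool["free_capacity_gb"] >= request["size_gb"]

MATCHERS = [active_pool_matcher, neighborhoods_matcher, capacity_matcher]

def candidate_pools(pools, request):
    """Return only the pools that pass every matcher in the chain."""
    candidates = list(pools)
    for matcher in MATCHERS:
        candidates = [p for p in candidates if matcher(p, request)]
    return candidates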

Candidate storage pools

After the filters are run, a list of candidate storage pools is created for volume placement. They are ordered according to these rules (a sketch of this ordering follows the list):

l Least busy arrays and highest capacity pools

l Ascending order of its storage system's average port usage metrics (first order)

l Descending order by free capacity (second order)

l Ascending order by ratio of pool's subscribed capacity to total capacity (suborder).
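
A multi-key sort captures the ordering rules above. This Python sketch assumes simple per-pool metrics (average port usage, free capacity, subscribed and total capacity); the field names are illustrative and are not ViPR Controller data structures.

# Order candidate pools: least busy arrays first, then most free capacity,
# then lowest ratio of subscribed capacity to total capacity.
def order_candidates(pools):
    return sorted(
        pools,
        key=lambda p: (
            p["avg_port_usage"],                                    # ascending (first order)
            -p["free_capacity_gb"],                                 # descending (second order)
            p["subscribed_capacity_gb"] / p["total_capacity_gb"],   # ascending (suborder)
        ),
    )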


ViPR Controller uses this ordered list of storage pools to place the volumes. If the number of volumes is more than one, the volumes may be spread across the storage pools, depending on the sizes and the available storage pool capacity.

As ViPR Controller places the volumes, it marks the volume capacity as reserved against the storage pool. This ensures that other provisioning operations that happen subsequent to or during a volume creation are correctly distributed based on capacity. After the volume is created, or if there is an error in creating the volume, the reserved space is removed from the storage pool and the actual allocated space takes its place.
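
The reservation behavior can be pictured as a simple reserve-then-commit (or release) cycle, sketched below in Python. This is an illustration of the idea only; the class and attribute names are assumptions, not ViPR Controller code.

# Illustrative reserve/commit/release cycle for pool capacity during placement.
class PoolCapacity:
    def __init__(self, free_gb):
        self.free_gb = free_gb
        self.reserved_gb = 0

    def reserve(self, size_gb):
        # Mark capacity as reserved so concurrent placements see reduced free space.
        self.reserved_gb += size_gb

    def commit(self, size_gb):
        # Volume created successfully: the reservation becomes allocated space.
        self.reserved_gb -= size_gb
        self.free_gb -= size_gb

    def release(self, size_gb):
        # Volume creation failed: return the reserved space to the pool.
        self.reserved_gb -= size_gb

    def available_gb(self):
        return self.free_gb - self.reserved_gb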

Selection process for file storage
ViPR Controller runs filters at file creation. If the storage pool meets all the filter criteria, it becomes a candidate for file placement.

The following explains the selection process for file storage.

l activePoolMatcher: Filters out all inactive storage pools, such as storage arrays that are unreachable.

l neighborhoodsMatcher: Eliminates all pools that are not part of the virtual array.

l Retrieves all storage pools that match the passed virtual pool parameters and protocols. In addition, the storage pool must have enough capacity to hold at least one resource of the requested size.

l maxResourcesMatcher: Checks the setting for the number of resources that can be created in the virtual pool, and filters out all storage pools exceeding this number.

l capacityMatcher: Filters out the storage pools that do not have enough storage capacity.

l After the filters are run, a list of candidate storage pools is created for file placement. They are ordered according to these rules:

n Least busy arrays and highest capacity pools

n Ascending order of its storage system's average port usage metrics (first order)

n Descending order by free capacity (second order)

n Ascending order by ratio of pool's subscribed capacity to total capacity (suborder)


APPENDIX A

Administering Vblock Systems with ViPR Controller

l Administering Vblock systems in your ViPR Controller virtual data center................ 68
l How Vblock systems are discovered by ViPR Controller........................................... 68
l How Vblock system components are virtualized in ViPR Controller.......................... 71
l Examples of virtual arrays for Vblock systems.......................................................... 72


Administering Vblock systems in your ViPR Controller virtual data center

This appendix provides an overview of how Vblock systems and the Vblock system components are administered in the ViPR Controller virtual data center (VDC).

To see how the Vblock compute and storage systems are provisioned using ViPR Controller, refer to the ViPR Controller Service Catalog Reference Guide, which is available from the ViPR Controller Product Documentation Index.

How Vblock systems are discovered by ViPR Controller
A Vblock system is a converged hardware system from VCE (VMware®, Cisco®, and EMC®) that is sold as a single unit consisting of the following components:

l Compute: Cisco Unified Computing System™ (UCS)

l Storage: EMC Storage System

l Network:

n Pair of Cisco SAN switches

n A pair of LAN switches when the UCS will not be plugged directly into a customer network

l Virtualization: VMware vSphere®


Figure 5 Vblock system

See the VCE Vblock System Release Certification Matrix for a list of ViPR Controller systems and system component support.

For the Vblock system component versions supported by ViPR Controller, see the ViPR Controller Support Matrix.

Add Vblock system components to ViPR Controller physical assets
For ViPR Controller to discover the Vblock system, ViPR Controller requires that you add each Vblock system component to the ViPR Controller physical assets as follows:

Table 10 Vblock system components in ViPR Controller (Vblock component: ViPR Controller physical asset page)

l Compute (Cisco Unified Computing System Manager): Vblock Compute Systems

l VMware vSphere vCenter systems: vCenters, or Hosts if adding individual ESX or ESXi hosts

l Network (SAN): Fabric Managers (Cisco MDS switch)

l Network (IP): Networks

l Storage: Storage Systems or Storage Providers

ViPR Controller discovery of Vblock system components
Once the Vblock system components are added to ViPR Controller, ViPR Controller automatically discovers the components and the component resources as follows:

Table 11 ViPR Controller discovery of Vblock system components (physical asset added to ViPR: what ViPR discovers)

l Vblock Compute System: Cisco Unified Computing System Manager; blades (referred to as Compute Elements in ViPR Controller); Service Profile Templates; Updating Service Profile Templates (uSPT).

l Fabric Managers (Cisco MDS and Nexus 5000 series switches): Cisco MDS switch; VSANs; Fibre Channel ports.

l VMware vSphere vCenter: Discovering vCenter discovers existing hosts and clusters, and provides a target into which new hosts and clusters provisioned by ViPR Controller are loaded. (ESX hosts not managed by a vCenter instance can be discovered individually as part of standalone host discovery.)

l Network (IP): For file storage, ViPR Controller can discover the ports of IP-connected storage systems and hosts, but it cannot discover the paths between them, so it is necessary to create IP networks in ViPR Controller and manually add the host and storage system ports, which will be provisioned together, to the same IP network.

l Storage: Storage system; storage pools; storage ports.

ViPR Controller discovers each Vblock system component as an individual ViPR Controller physical asset. The connectivity between the Vblock system components is determined within the context of the ViPR Controller virtual array. When virtual arrays are created, ViPR Controller determines which compute systems have storage connectivity through the virtual array definition. The virtual arrays are used when defining compute virtual pools and during provisioning to understand connectivity of the Vblock system components.

For more information, refer to How Vblock system components are virtualized in ViPR Controller on page 71.


Ingestion of compute elements
Upon discovery of an ESX host or cluster, ViPR Controller discovers the compute element UUID, which allows ViPR Controller to identify the linkage between the host or cluster and the compute elements (blades). When a host or cluster is then decommissioned through ViPR Controller, ViPR Controller identifies the compute element as available and makes it available to be used in other service operations.
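
The UUID-based linkage can be pictured with a short Python sketch. This is purely illustrative; the data structures are assumptions used to show the matching idea, not ViPR Controller internals.

# Match discovered ESX host UUIDs to UCS blade (compute element) UUIDs.
def link_hosts_to_blades(host_uuids, blade_uuids):
    """host_uuids: {host_name: uuid}; blade_uuids: {blade_id: uuid}.
    Returns a mapping of host_name -> blade_id for hosts whose UUID matches a blade."""
    blades_by_uuid = {uuid.lower(): blade_id for blade_id, uuid in blade_uuids.items()}
    return {
        host: blades_by_uuid[uuid.lower()]
        for host, uuid in host_uuids.items()
        if uuid.lower() in blades_by_uuid
    }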

ViPR Controller registration of added and discovered physical assets
After a physical asset is successfully added and discovered in ViPR Controller, ViPR Controller automatically registers the physical asset and all of its resources that are not in use. Physical assets that are registered in ViPR Controller are available to use as ViPR Controller resources for ViPR Controller service operations.

Compute elements, or blades, which are found to be in use are automatically set to unregistered to prevent ViPR Controller from disturbing them.

Optionally, you can deregister physical assets that you would like to see in ViPR Controller but do not want ViPR Controller to use as a resource, or some resources can be deleted from ViPR Controller entirely.

Additional resources required by ViPR Controller for OS installation on Vblock systems

In addition to the Vblock system components, ViPR Controller requires that a compute image server, a compute image, and networks for the compute image server are configured and added to the ViPR Controller physical assets before ViPR Controller can install operating systems on the UCS blades during ViPR Controller Vblock system provisioning operations.

While the compute image server and compute image are added to the ViPR Controller physical assets, neither is discovered or registered in ViPR Controller.

The compute images are operating system installation files that are added to ViPR Controller from an FTP site. The compute image server is used to serve the OS installation files over the compute image server networks during a ViPR Controller Vblock system provisioning operation that installs an operating system on the Vblock compute systems.

How Vblock system components are virtualized in ViPR Controller
Once the Vblock system components have been added to the ViPR Controller physical assets, the user can begin to virtualize the components into the virtual arrays and virtual pools.

Vblock compute systems
The Vblock compute system is virtualized in ViPR Controller in both the compute virtual pools and the virtual array networks.

Compute virtual pools
Compute virtual pools are a group of compute elements (UCS blades). ViPR Controller system administrators can manually assign specific blades to a pool, or define qualifiers, which allow ViPR Controller to automatically assign the blades to a pool based on the criteria of the qualifier.

Service profile templates are also assigned to a compute virtual pool. Service profiles are associated with blades to assign the required settings. Additionally, the UCS has the concept of service profile templates (SPTs) and updating service profile templates (uSPTs), which must be set up by UCS administrators. These service profile templates can be used by non-admin users to create the service profiles that turn a blade into a host.

ViPR Controller does not perform the functions of the UCS administrator; rather, ViPR Controller utilizes service profile templates to assign the required properties to blades. A UCS administrator will need to create service profile templates that ViPR Controller can use to provision servers and hosts.

When a Vblock system provisioning service is run, ViPR Controller pulls the resources from the compute virtual pool selected in the service, creates a cluster from the blades in the virtual pool, and applies the same service profile template settings to each of the blades in the virtual pool.

Vblock storage systems
Vblock storage systems are virtualized in the ViPR Controller block or file virtual pools, and in the virtual array.

Block or File virtual pools
Block and file virtual pools are storage pools grouped together according to the criteria defined by the ViPR Controller system administrator. Block and file virtual pools can consist of storage pools from a single storage system, or storage pools from different storage systems as long as the storage pool meets the criteria defined for the virtual pool. The block or file virtual pool can also be shared across different virtual arrays.

Vblock systems require the storage from block virtual pools for the boot LUN when ViPR Controller will be used to install an operating system on a Vblock compute system. Once the hosts are operational, ViPR Controller can use storage from any connected Vblock storage pools and export storage to those hosts.

Vblock networks in the virtual array
Connectivity between the Vblock storage system and Vblock compute system is defined in the networks in the virtual array. The storage system and Vblock compute systems that will be managed together must be on the same VSAN in the ViPR Controller virtual array.

Examples of virtual arrays for Vblock systems
ViPR Controller provides flexibility for how you can manage Vblock system resources in ViPR Controller, by how you configure the Vblock systems in ViPR Controller virtual arrays, or create virtual pools with Vblock system resources:

l Traditional Vblock system on page 73

l Multiple Vblock systems configured in a single virtual array for:

n Automatic resource placement managed by ViPR on page 73

n Manual resource placement managed through compute virtual pools or storage virtual pools on page 74

l Tenant isolation in the virtual array through:

n Virtual array networks on page 75

n Compute virtual pools on page 76


Traditional Vblock system
In this example, two Vblock systems are defined in two different virtual arrays.

Figure 6 Two Vblock systems configured in two virtual arrays

The two virtual arrays are isolated from each other by the physical connectivity.

l Virtual Array A is defined by VSAN 20 from SAN Switch A and VSAN 21 from SAN Switch B.

l Virtual Array B is defined by VSAN 20 from SAN Switch X and VSAN 21 from SAN Switch Y.

Note

While the UCS is not included in the virtual array, the networks that are defined in the virtual array will determine the UCS visibility to the ViPR Controller storage.

Multiple Vblock systems configured in a single ViPR Controller virtual array
In this example, two Vblock systems are configured in a single virtual array, and all the compute systems and storage systems are communicating across the same VSANs: VSAN 20 and VSAN 21.

With this architecture, you could allow ViPR Controller to automate resource placement during provisioning using:

l A single compute virtual pool to allow ViPR Controller to determine compute placement.

l A single block virtual pool to allow ViPR Controller to determine storage placement.


Figure 7 Virtual array configured for automatic resource placement by ViPR Controller

Manual resource management with ViPR Controller virtual pools
You can also more granularly manage resource placement during provisioning by:

l Creating multiple compute virtual pools, and manually assigning compute elements to each compute virtual pool.

l Creating multiple storage virtual pools, and manually assigning storage groups to each storage virtual pool.

During provisioning, the desired targets can be specified to ensure only the resources you need are used for your provisioning operation.


Figure 8 Manual resource management with virtual pools

Tenant isolation through virtual array networks
You can allocate Vblock system resources by assigning the virtual array VSANs to different tenants. In the following example:

l Tenant A will only have visibility to the resources on VSANs 20 and 21.

l Tenant B will only have visibility to the resources on VSANs 30 and 31.

When using tenant isolation of networks, separate service profile templates would need to be defined for the compute pools used for each network.


Figure 9 Resource management with tenant isolation of VSANs

For details about ViPR Controller tenant functionality, see the ViPR Controller User Interface Tenants, Projects, Security, Users and Multisite Configuration Guide, which is available from the ViPR Controller Product Documentation Index.

Tenant isolation through compute virtual pools
Tenant isolation using compute pools is achieved by creating compute virtual pools with the compute resources (blades) dedicated to a specific tenant such as HR or Finance.

Tenant ACLs allow ViPR Controller to restrict access and visibility to the compute virtual pools outside of a user tenancy.

When defining tenant isolation through compute virtual pools, the service profile templates can still be shared since network isolation is not an issue.


Figure 10 Resource management through tenant isolation of compute virtual pools


APPENDIX B

ViPR Controller Support for EMC Elastic Cloud Storage (ECS)

l ViPR Controller support for EMC Elastic Cloud Storage overview.............................. 80
l Discovery of ECS systems in ViPR Controller............................................................ 80
l Mapping an ECS Namespace to a ViPR Controller Tenant......................................... 80
l Virtualizing ECS storage in ViPR Controller............................................................... 81


ViPR Controller support for EMC Elastic Cloud Storage overview
ViPR Controller support for EMC Elastic Cloud Storage (ECS) includes the following:

ViPR Controller administration of ECS systems
ViPR Controller administrators can:

l Discover ECS systems

l Map ECS Namespace to a ViPR Controller Tenant

l Create virtual arrays which include ECS systems, ports, and networks

l Group ECS Replication Groups into ViPR Controller Object Virtual Pools

Refer to the relevant sections of this guide to review the requirements and information necessary to prepare to administer the ECS for ViPR Controller.

For the specific steps to configure the ECS in the ViPR Controller virtual data center, see the ViPR Controller User Interface Virtual Data Center Configuration Guide, which is available from the ViPR Controller Product Documentation Index.

ViPR Controller object storage services
ViPR Controller users can:

l Create a bucket

l Edit a bucket

l Delete a bucket

For information about ViPR Controller services, refer to the ViPR Controller Service Catalog Reference Guide, which is available from the ViPR Controller Product Documentation Index.

Discovery of ECS systems in ViPR Controller
ViPR Controller System Administrators add EMC Elastic Cloud Storage (ECS) systems to the ViPR Controller Physical Assets, Storage Systems page.

Once added, ViPR Controller:

l Discovers and registers the ECS storage system

l Discovers and registers the ECS Replication Groups, and adds the Replication Groups to the ViPR Controller storage pools

l Uses the ECS IP address to create the management storage port which will be used for communication between ViPR Controller and the ECS

Note

Discovery of ECS buckets (i.e. ViPR Controller ingestion of buckets) is not supported.

Mapping an ECS Namespace to a ViPR Controller Tenant
To use ViPR Controller to manage object storage in an ECS system, you must map a ViPR Controller Tenant to an ECS Namespace. When a bucket is created from ViPR Controller, it is added to the ViPR Controller Project within a given Tenant, and added to the ECS Namespace associated with the ViPR Controller tenant.


Virtualizing ECS storage in ViPR Controller
When creating a virtual array for ECS object storage, you will need to configure an IP network and add the ECS ports to the network.

Once the network is added to the virtual array, the storage pools (ECS Replication Groups) and the object storage system (ECS) will automatically be added to the virtual array.

Grouping of ECS Replication Groups into ViPR Controller virtual pools
ViPR Controller Object Virtual Pools are used to group ECS Replication Groups based on user-defined criteria. Once the criteria for creating the pool are defined by the user, ViPR Controller filters through the ECS Replication Groups which were discovered by ViPR Controller, and includes only the Replication Groups matching the criteria in the Object Virtual Pool.

Once the Replication Groups are put into the Object Virtual Pool, you can manually select, or allow ViPR Controller to automatically select, which Replication Group from the ViPR Controller virtual pool to use to create buckets when the ViPR Controller Create Bucket service is run.

If the Object Virtual Pool only contains one Replication Group, then all buckets created from that ViPR Controller Object Virtual Pool will be created from that Replication Group.
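
The grouping and selection behavior described above can be sketched as a filter step followed by a selection step. The following Python sketch is illustrative only; the criteria keys and field names are assumptions, not the ViPR Controller data model.

# Filter discovered ECS Replication Groups into an object virtual pool,
# then pick one replication group for bucket creation.
def build_object_virtual_pool(replication_groups, criteria):
    """Keep only the replication groups whose attributes match every criterion."""
    return [rg for rg in replication_groups
            if all(rg.get(key) == value for key, value in criteria.items())]

def pick_replication_group(pool, preferred_name=None):
    """Use the manually selected group if present; otherwise take the first candidate.
    A single-group pool always returns that group."""
    for rg in pool:
        if preferred_name and rg["name"] == preferred_name:
            return rg
    return pool[0] if pool else None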
