03 VMware VSphere VStorage [V5.0]


Page 1: 03 VMware VSphere VStorage [V5.0]

In the overall VMware Virtual Datacenter Operating System, vStorage falls under the infrastructure vServices layer and delivers an efficient way to use and manage storage in virtual environments.

VMware vSphere® Virtual Machine File System or VMFS is one of two file system types that are supported by vSphere on shared storage. VMFS is a high-performance cluster file system designed specifically for virtual machines.

Page 2: 03 VMware VSphere VStorage [V5.0]

Apart from VMFS, vStorage also works with Network File System or NFS shared storage to host a virtual machine.

VMware has built a storage interface into the vSphere software that provides a wide range of storage virtualization connectivity options. These options relate to networked storage; an internal storage disk is local storage.

vStorage vMotion technology enables the live migration of virtual machine disk files across storage arrays with no disruption in service.

vStorage thin provisioning reduces the storage requirements of virtual environments by allocating storage only when it is required. It also provides the reporting and alerting capabilities needed to track actual storage usage.

VMware vSphere® vStorage APIs provide third-party storage array and software vendors with a set of standardized interfaces that allow them to integrate their products with VMware vSphere.

Storage Distributed Resource Scheduler or Storage DRS provides virtual disk placement and load balancing recommendations for datastores in a Storage DRS-enabled datastore cluster. Storage DRS is a mechanism that initiates a Storage vMotion when a datastore exceeds a user-specified I/O latency or space utilization threshold. It manages storage resources across ESXi hosts and load balances for space and I/O latency. It also enables automated initial placement of virtual machine disks when new virtual machines are created.

A virtual machine monitor in the VMkernel is the interface between the guest operating system and applications in a virtual machine and the physical storage subsystem in the ESXi host.

Page 3: 03 VMware VSphere VStorage [V5.0]

A guest operating system sees only a virtual disk that is presented to the guest through a virtual SCSI controller. Depending on the virtual machine configuration, available virtual SCSI controllers are BusLogic Parallel, LSI Logic Parallel, LSI Logic SAS, or VMware Paravirtual SCSI controller.

Each virtual machine can be configured with up to four virtual SCSI controllers, each with up to 15 virtual SCSI disks. Each virtual disk accessed through a virtual SCSI controller is mapped to a physical datastore available to the ESXi host. This datastore can be formatted with either a VMFS or NFS file system.

The datastore can be located on either a local SCSI disk or on an FC, iSCSI, or NAS array.

Page 4: 03 VMware VSphere VStorage [V5.0]

A LUN is a Logical Unit Number. It simply means that a logical space has been carved from the storage array by the storage administrator. To ease the identification of this space, the storage administrator assigns a number to each logical volume. In the example illustrated in the diagram, the LUN is ten and it has twenty gigabytes of space. The term LUN can refer to an entire physical disk, a subset of a larger physical disk, or a disk volume. A single LUN can be created from the entire space on the storage disk or array, or from a part of that space called a partition. If a virtual machine needs to see a LUN directly, this can be achieved via Raw Device Mapping or RDM. You will learn about RDM later in this module. Although the word LUN is universally accepted, some storage vendors follow the concept of LUNs and MetaLUNs. In this course, a LUN means the highest level of logical grouping presented by the storage.

When a LUN is mapped to an ESXi host, it is referred to as a volume. The volume size can be less than or more than the size of a physical disk drive. When a LUN uses disk space on more than one physical disk or partition, it still presents itself as a single volume to an ESXi host.

When a volume is formatted with either the VMFS or NFS file system, it is referred to as a datastore. Datastores are logical containers, analogous to file systems, that hide the specifics of each storage device and provide a uniform model for storing virtual machine files. Therefore, a datastore is a partition of the volume that is formatted with a file system.

For best performance, a LUN should not be configured with multiple partitions and multiple VMFS datastores. Each LUN should have only a single VMFS datastore.
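
To make the LUN-volume-datastore chain concrete, here is a minimal sketch, not part of the original course, that lists every datastore a vCenter Server sees, with its file system type and capacity, using the pyVmomi Python SDK. The host name and credentials are placeholder assumptions you would replace with your own.

```python
# Minimal pyVmomi sketch: list datastores with their type and capacity.
# Host name and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator",
                  pwd="password",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        print("%s  type=%s  capacity=%.1f GB  free=%.1f GB"
              % (s.name, s.type, s.capacity / 2**30, s.freeSpace / 2**30))
    view.DestroyView()
finally:
    Disconnect(si)
```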

Page 5: 03 VMware VSphere VStorage [V5.0]

A datastore serves as a storage space for the virtual disks that store the virtual machine content.

As shown in the graphic, a virtual machine is stored as a set of files in its own directory in a datastore. A datastore is formatted either as a VMFS or NFS volume depending on the type of physical storage in the datacenter. A virtual machine can be manipulated, for example backed up, just like a set of files. In the next few slides, you will learn about the datastore types in detail.

Please note that VMFS5 allows up to 256 VMFS volumes per system with the minimum volume size of 1.3GB and maximum size of 64TB. By default, up to 8 NFS datastores per system can be supported and can be increased to 64 NFS datastores per system. In addition to virtual machine files, a datastore can also be used to store ISO images, virtual machine templates, and floppy disk images.

Page 6: 03 VMware VSphere VStorage [V5.0]

A virtual machine usually resides in a folder or subdirectory that is created by an ESXi host. When a user creates a new virtual machine, its files are automatically created on a datastore:

- First is a .vmx file. This is the primary virtual machine configuration file that stores the settings chosen in the New Virtual Machine Wizard or virtual machine settings editor.
- Second is a .vmxf file, an additional configuration file for the virtual machine.
- Third is a .vmdk file. This is an ASCII text file that stores information about the virtual machine's hard disk drive. There can be one or more virtual disk files.
- Fourth is a -flat.vmdk file. This is a single pre-allocated disk file containing the virtual disk data.
- Fifth is a .nvram file. This is non-volatile RAM that stores virtual machine BIOS information.
- Sixth is a .vmss file, the virtual machine suspended-state file that stores the state of a suspended virtual machine.
- Seventh is a .vmsd file. This is a centralized file for storing information and metadata about snapshots.
- Eighth is a .vmsn file. This is the snapshot state file that stores the running state of a virtual machine at the time you take the snapshot.
- Ninth is a .vswp file, the virtual machine swap file for memory allocation.
- And last there is a .log file, the virtual machine log file, which can be useful in troubleshooting when you encounter problems. It is stored in the directory that holds the configuration (.vmx) file of the virtual machine.
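
As an illustration, not part of the course, the small Python sketch below scans a virtual machine's directory and labels each file using the extensions just described. The directory path is a hypothetical example.

```python
# Illustrative sketch: classify the files in a VM directory by extension.
import os

FILE_TYPES = {
    ".vmx": "primary configuration file",
    ".vmxf": "additional configuration file",
    ".vmdk": "virtual disk descriptor",
    ".nvram": "BIOS state (non-volatile RAM)",
    ".vmss": "suspended-state file",
    ".vmsd": "snapshot metadata",
    ".vmsn": "snapshot state file",
    ".vswp": "swap file",
    ".log": "log file",
}

def classify(vm_dir):
    for name in sorted(os.listdir(vm_dir)):
        if name.endswith("-flat.vmdk"):
            label = "pre-allocated virtual disk data"
        else:
            label = FILE_TYPES.get(os.path.splitext(name)[1], "other")
        print("%-30s %s" % (name, label))

classify("/vmfs/volumes/datastore1/myvm")   # hypothetical path
```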

Page 7: 03 VMware VSphere VStorage [V5.0]

The type of datastore to be used for storage depends upon the type of physical storage devices in the datacenter. The physical storage devices include local SCSI disks and networked storage, such as FC SAN disk arrays, iSCSI SAN disk arrays, and NAS arrays.

Local SCSI disks store virtual machine files on internal or external storage devices attached to the ESXi host through a direct bus connection.

Networked storage stores virtual machine files on external shared storage devices or arrays located outside the ESXi host. The ESXi host communicates with the networked devices through a high-speed network.

Please note that local disks, FC SANs, and iSCSI SANs must be formatted with the VMFS file system for the ESXi host to access them, while NAS arrays are accessed by the ESXi host as NFS datastores.

Page 8: 03 VMware VSphere VStorage [V5.0]

A VMFS volume is a clustered file system that allows multiple hosts read and write access to the same storage device simultaneously.

The cluster file system enables key vSphere features, such as live migration of running virtual machines from one host to another. It also enables an automatic restart of a failed virtual machine on a separate host and the clustering of virtual machines across different hosts.

VMFS provides an on-disk distributed locking system to ensure that the same virtual machine is not powered-on by multiple hosts at the same time. If an ESXi host fails, the on-disk lock for each virtual machine can be released so that virtual machines can be restarted on other ESXi hosts.

Besides the locking functionality, virtual machines operate safely in a SAN environment even with multiple ESXi hosts sharing the same VMFS datastore. Please note that you can connect up to 128 hosts to a single VMFS5 volume.

The hard disk drive of a virtual machine is stored as a file on the VMFS datastore.

Page 9: 03 VMware VSphere VStorage [V5.0]

NFS is a file-sharing protocol that is used to establish a client-server relationship between ESXi hosts and a NAS device. As opposed to block storage, the NAS system itself is responsible for managing the layout and structure of the files and directories on physical storage. The ESXi host mounts the NFS volume and creates one directory for each virtual machine. NFS volumes provide shared storage capabilities that support ESXi features such as vMotion, DRS, VMware vSphere High Availability, ISO images, and virtual machine snapshots.

NFS allows volumes to be accessed simultaneously by multiple ESXi hosts that run multiple virtual machines.

The strengths of NFS datastores are similar to those of VMFS datastores. After the storage is provisioned to the ESXi hosts, the vCenter administrator is free to use the storage as needed. Additional benefits of NFS datastores include high performance and the storage savings provided by thin provisioning, which is the default format for VMDKs created on NFS. The NFS client built into ESXi uses NFS protocol version 3 for communicating with the NAS or NFS servers. An NFS datastore with VAAI hardware acceleration supports flat disk, thick provision, and thin provision formats.

Please note that NFS datastores are popular for deploying storage in VMware infrastructure.

Page 10: 03 VMware VSphere VStorage [V5.0]

Like host clusters, you can also create datastore clusters that support resource allocation policies. You can set a threshold on a datastore for space utilization. When the usage exceeds the threshold, Storage DRS recommends or performs Storage vMotion to balance space utilization across the datastores in the cluster. You can also set an I/O threshold for bottlenecks. When I/O latency exceeds the set threshold, Storage DRS either recommends or performs a Storage vMotion to relieve the I/O congestion.
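
To make the decision just described concrete, here is a simplified Python sketch of the Storage DRS logic; this models the idea only and is an assumption, not VMware's actual algorithm, with threshold values chosen arbitrarily.

```python
# Simplified sketch of the Storage DRS decision: when a datastore breaches
# the space-utilization or I/O-latency threshold, recommend migrating a
# disk to the least-utilized datastore in the cluster.
SPACE_THRESHOLD = 0.80       # 80% utilized (assumed value)
LATENCY_THRESHOLD_MS = 15.0  # assumed value

def recommend(datastores):
    """datastores: list of dicts with name, used, capacity, latency_ms."""
    recs = []
    for ds in datastores:
        util = ds["used"] / ds["capacity"]
        if util > SPACE_THRESHOLD or ds["latency_ms"] > LATENCY_THRESHOLD_MS:
            target = min(datastores, key=lambda d: d["used"] / d["capacity"])
            if target is not ds:
                recs.append("Storage vMotion a disk from %s to %s"
                            % (ds["name"], target["name"]))
    return recs

cluster = [
    {"name": "ds01", "used": 900, "capacity": 1000, "latency_ms": 22.0},
    {"name": "ds02", "used": 300, "capacity": 1000, "latency_ms": 4.0},
]
print("\n".join(recommend(cluster)))
```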

Page 11: 03 VMware VSphere VStorage [V5.0]

NFS and VMFS datastores cannot be combined in the same Storage DRS-enabled datastore cluster. The datastores in a cluster can be of different sizes and I/O capacities, and they can come from arrays of different vendors.

Please note that any host that connects to datastores in a datastore cluster must be ESXi 5.0 or later. Earlier versions of ESX or ESXi are not supported in a datastore cluster.

Apart from regular hard disk drives, ESXi also supports SSDs that are resilient and provide faster access to data. You will learn about SSD in the next slide.

SSDs use semiconductors to store data and have no spindles or rotating disks like traditional hard disk drives. An ESXi host can automatically distinguish SSDs from regular hard drives. SSDs provide several advantages. For improved performance, you can use SSDs for per-virtual machine swap areas. They provide high I/O throughput, which helps increase the virtual machine consolidation ratio.

Please note that a guest operating system can identify an SSD as a virtual SSD. A virtual SSD allows a user to create a virtual disk on the SSD device and allows the guest OS to see that it is an SSD.

You can use PSA SATP claim rules to tag SSD devices that are not detected automatically. A virtual SSD requires virtual hardware version 8, an ESXi 5.0 or later host, and a VMFS5 or later datastore.

Page 12: 03 VMware VSphere VStorage [V5.0]

RDM provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem. RDM is available only on block-based storage arrays. An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. It allows a virtual machine to directly access and use the storage device, and it contains metadata for managing and redirecting disk access to the physical device.

The mapping file gives some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access. Various terms are used to describe an RDM, such as mapping a raw device into a datastore, mapping a system LUN, or mapping a disk file to a physical disk volume.

You can use the vSphere Client to add raw LUNs to virtual machines. You can also use vMotion to migrate virtual machines with RDMs as long as both the source and target hosts have access to the raw LUN. Additional benefits of RDM include distributed file locking, permissions, and naming functionalities.

Please note that VMware recommends using VMFS datastores for most virtual disk storage.
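
For illustration, here is a hedged pyVmomi sketch, not from the course, of how an RDM virtual disk is described when adding it to a virtual machine: the backing is a raw-device mapping rather than a VMFS flat file. The LUN device path is a placeholder, and the compatibility modes shown are discussed on the next slide.

```python
# Hedged sketch: device spec for adding an RDM disk to a VM (pyVmomi).
from pyVmomi import vim

def rdm_disk_spec(lun_device_path, controller_key, unit_number):
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.deviceName = lun_device_path       # e.g. /vmfs/devices/disks/naa....
    backing.compatibilityMode = "virtualMode"  # or "physicalMode" (pass-through)
    backing.diskMode = "independent_persistent"
    backing.fileName = ""                      # mapping file placed automatically

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec
```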

Page 13: 03 VMware VSphere VStorage [V5.0]

There are two compatibility modes available for RDMs, virtual and physical.

The virtual compatibility mode appears exactly as a VMFS virtual disk to a virtual machine. It provides the benefits of VMFS, such as advanced file locking system for data protection and snapshots. However, the real hardware characteristics of the storage disk are hidden from the virtual machine.

In the physical compatibility mode, the VMkernel passes all SCSI commands directly to the device except for the Report LUNs command. Because of this, all characteristics of the underlying storage are exposed to the virtual machine. However, blocking Report LUNs prevents the virtual machine from discovering any other SCSI devices except for the device mapped by the RDM file. This SCSI command capability is useful when the virtual machine is running SAN management agents or SCSI target-based software.

Please note that for RDMs in physical compatibility mode, you can neither convert the RDM to a virtual disk nor perform operations such as Storage vMotion, migration, or cloning. Also, you can relocate such RDMs only to VMFS5 datastores. VMFS5 supports RDMs in physical compatibility mode that are larger than 2TB.

Page 14: 03 VMware VSphere VStorage [V5.0]

You might need to use raw LUNs with RDMs when SAN snapshots or other layered applications run in a virtual machine. RDM enables scalable backup off-loading systems by using features inherent to the SAN.

You may also need to use RDMs in any Microsoft Cluster Service or MSCS clustering scenario that spans physical hosts, in virtual-to-virtual and physical-to-virtual clusters. In this case, the cluster data and quorum disks should be configured as RDMs rather than as files on a shared VMFS.

A new LUN is required for each virtual machine with RDM.

Page 15: 03 VMware VSphere VStorage [V5.0]

The components of FC SAN can be divided into three categories, the host, the fabric, and the storage component.

The host components of SAN consist of hosts themselves. The host components also contain Host Bus Adapter or HBA components that enable hosts to be physically connected to SAN. HBAs are located in individual host servers. Each host connects to the fabric ports through its HBAs. HBA drivers running on hosts enable the server’s operating systems to communicate with the HBA.

In an FC SAN environment, the ESXi hosts access the disk array through a dedicated network called the fabric. All hosts connect to the storage devices on the SAN through the SAN fabric. The network portion of a SAN consists of the fabric components.

SAN switches can connect to hosts, storage devices, and other switches. Therefore, they provide the connection points for the SAN fabric. The type of SAN switch, its design features, and its port capacity contribute to its overall capacity, performance, and fault tolerance. The number of switches, types of switches, and manner in which the switches are connected define the fabric topology.

SAN cables are usually special fiber optic cables that connect all the fabric components. The type of SAN cable, the fiber optic signal, and switch licensing determine the maximum distances among SAN components and contribute to the total bandwidth rating of the SAN.

The fabric components communicate using the FC communications protocol. FC is the storage interface protocol used for most SANs. FC was developed as a protocol for transferring data

Page 16: 03 VMware VSphere VStorage [V5.0]

between two ports on a serial I/O bus cable at high speed. It supports point-to-point, arbitrated loop, and switched-fabric topologies.

The storage components of a SAN are the storage arrays. Storage arrays include the storage processors or SPs, which provide the front end of the storage array. SPs communicate with the disk array, which includes all the disks in the storage array, and provide Redundant Array of Independent Drives or RAID and volume functionality.

SPs provide front-side host attachments to the storage devices from the servers, either directly or through a switch. SPs provide internal access to the drives, which can use either a switch or a bus architecture. In high-end storage systems, drives are normally connected in loops. The back-end loop technology employed by the SP provides several benefits, such as high-speed access to the drives, ability to add more drives to the loop, and redundant access to a single drive from multiple loops when drives are dual-ported and attached to two loops.

Data is stored on disk arrays or tape devices (or both).

Disk arrays are groups of multiple disk devices and are the typical SAN disk storage devices. They can vary greatly in design, capacity, performance, and other features. The distance between the server and the disk array can also be greater than that permitted in a directly attached SCSI environment. Disk arrays are managed by an OEM vendor's proprietary operating system with built-in intelligence to manage the arrays.

Please note that the switched fabric topology is the basis for most current SANs. iSCSI is also considered a SAN technology.

Page 17: 03 VMware VSphere VStorage [V5.0]

iSCSI allows block-level data to be transported over IP networks. iSCSI builds on the SCSI protocol by encapsulating SCSI commands in IP datagrams. This allows the encapsulated data blocks to be transported over unlimited distances through TCP/IP packets on traditional Ethernet networks or the Internet.

iSCSI uses a client-server architecture. With an iSCSI connection, the ESXi host system, the initiator, communicates with a remote storage device, the target, in the same manner as it communicates with a local hard disk.

An initiator is typically a host, hosting an application that makes periodic requests for data to a related storage device. Initiators are also referred to as host computers. The iSCSI device driver that resides on a host may also be called an initiator.

The initiator begins an iSCSI data transport transaction when an application requests to send or receive data. The application request is converted into SCSI commands, which are then encapsulated into iSCSI packets, with headers added, for transport through TCP/IP over the Internet or traditional Ethernet networks.

There are two types of iSCSI initiators; both store data on remote iSCSI storage devices. First is the hardware initiator, which uses hardware-based iSCSI HBAs to access data. Second is the software initiator, which uses software-based iSCSI code in the VMkernel to access data; this type of initiator requires a standard network adapter for network connectivity.

Targets are the storage devices that reside on a network. Targets receive iSCSI commands from various initiators or hosts on a network. On the target side, these commands are converted back into their original SCSI format to allow the block data to be transported between the initiator

Page 18: 03 VMware VSphere VStorage [V5.0]

and the storage device. The target responds to a host data request by sending SCSI commands back to that host. These commands are again encapsulated through iSCSI for transport over the Ethernet or the Internet. A target can be any type of storage device, such as a storage array that is part of a larger IP SAN.

The NAS device attached to an existing network provides a standalone storage solution that can be used for data backup or additional storage capabilities for the virtual network clients. The primary difference between NAS and SAN depends on how they process communications. NAS communicates over the network using a network share, while SAN primarily uses the FC communication channel.

NAS devices transfer data from a storage device to a host in the form of files. They use file systems that they manage independently, and they also manage user authentication.

Page 19: 03 VMware VSphere VStorage [V5.0]

SAN storage provides multiple hosts with access to the same storage space. This capability means that all virtual machine templates and ISO images can be located on shared storage, and it also helps with vMotion because the virtual machine data is located on shared storage. It allows clusters of virtual machines across different ESXi hosts. SAN storage helps perform backups of machines and run those machines again quickly after a host failure. It also ensures that important data is not lost by minimizing or preventing downtime. SAN storage allows moving virtual machines from one ESXi host to another for regular maintenance or other issues. In addition, it provides the data replication technologies needed for disaster recovery from primary to secondary sites. Together with the Storage DRS technology, SAN storage improves datastore load balancing and performance by moving virtual disks from one datastore to another. It also provides backup solutions by mounting virtual disks with snapshot technology. Finally, SAN storage provides great redundancy to virtual machines with VMware clustering features, such as DRS, vSphere HA, and VMware Fault Tolerance or FT.

Local storage offers extremely high-speed access to data, depending on the type of SCSI controller used. Local storage is certainly more economical than a SAN infrastructure and is best suited for smaller environments with one or two hosts. Though SAN provides significant benefits over locally attached storage, sometimes these benefits do not outweigh the costs.

Page 20: 03 VMware VSphere VStorage [V5.0]

Shared storage is more expensive than local storage, but it supports a larger number of vSphere features. However, local storage might be more practical in a small environment with only a few ESXi hosts.

Shared VMFS partitions offer a number of benefits over local storage: the simple use of vMotion, which is a huge benefit to any environment, a fast and central repository for virtual machine templates, the ability to recover virtual machines on another host if a host fails, the ability to allocate large amounts of storage (terabytes) to the ESXi hosts, and many more. The real idea here is that a shared implementation offers a truly scalable and recoverable ESXi solution.

You can carry out ESXi maintenance without any disruption to virtual machines or users if the shared storage is a SAN. Once you have decided on local or shared storage, the next important decision is whether the storage is isolated or consolidated.

Page 21: 03 VMware VSphere VStorage [V5.0]

Isolated storage means limiting access to a single LUN to a single virtual machine. In the physical world, this is quite common. When using RDMs, such isolation is implicit because each RDM volume is mapped to a single virtual machine. The disadvantage of this approach is that as you scale the virtual environment, you will soon reach the upper limit of 256 LUNs. You also need to provision an additional disk or LUN each time you want to increase the storage capacity for a virtual machine, which can lead to significant management overhead. In some environments, the storage administration team may need several days' notice to provide a new disk or LUN.

Another consideration is that every time you need to grow the capacity for a virtual machine, the minimum commit size is the allocation of an entire LUN. Although many arrays allow LUNs of any size, the storage administration team may avoid carving up lots of small LUNs because this configuration makes it harder for them to manage the array.

Most storage administration teams prefer to allocate LUNs that are fairly large and have the system administration or application teams divide those LUNs into smaller chunks higher up in the stack. VMFS suits this allocation scheme perfectly and is one of the reasons VMFS is so effective in the virtualization storage management layer.

Page 22: 03 VMware VSphere VStorage [V5.0]

When using consolidated storage, you gain additional management productivity and resource utilization by pooling the storage resource and sharing it with many virtual machines running on several ESXi hosts. Dividing this shared resource between many virtual machines allows better flexibility, easier provisioning, and ongoing management of the storage resources for the virtual environment. Keeping all your storage consolidated allows you to use vMotion and DRS. This is so because when the virtual disks are located on shared storage and are accessible to multiple ESXi hosts, the virtual machines can be easily transferred from one ESXi host to another in case of failure or for maintenance as well as load balancing.

Compared to strict isolation, consolidation normally offers better utilization of storage resources. The cost is additional resource contention that, under some circumstances, can lead to a reduction in virtual machine I/O performance.

Please note that by including consolidated storage in your original design, you can save money in your hardware budget in the long run. Think about investing early in a consolidated storage plan for your environment.

Now, which option should you choose when you implement storage: isolated or consolidated? You will come to know about it in the next slide.

Page 23: 03 VMware VSphere VStorage [V5.0]

The answers to these questions will help you decide if you need isolated or consolidated storage. In general, it’s wise to separate heavy I/O workloads from the shared pool of storage. This separation helps optimize the performance of those high transactional throughput applications — an approach best characterized as “consolidation with some level of isolation.”

Due to varying workloads, there is no exact rule to determine the limits of performance and scalability for allocating the number of virtual machines per LUN. These limits also depend on the number of ESXi hosts sharing concurrent access to a given VMFS volume. The key is to recognize the upper limit of 256 LUNs and understand that this number can limit the consolidation ratio if you take the concept of “1 LUN per virtual machine” too far.

Many different applications can easily and effectively share a clustered pool of storage. After considering all these points, the best practice is to have a mix of consolidated and isolated storage.

Page 24: 03 VMware VSphere VStorage [V5.0]

Before you implement a virtualization environment, you must understand some of the common storage administration issues.

Common storage administration issues include:
- How frequently the storage administrator provisions new LUNs
- Monitoring current datastore utilization
- Configuring and maintaining proper LUN masking and zoning configuration
- Properly configuring multipath configurations for active/active or active/passive arrays

Page 25: 03 VMware VSphere VStorage [V5.0]

You must keep a few points in mind while configuring datastores and storage types. For VMFS volumes, make sure you have one VMFS volume per LUN and carve up the VMFS volume into many VMDKs.

Use spanning to add more capacity. When virtual machines running on a datastore require more space, you can dynamically increase the capacity of a VMFS datastore by adding a new extent. An extent is a partition on a storage device, or LUN. You can add up to 32 extents of the same storage type to an existing VMFS datastore. A spanned VMFS datastore can use any or all of its extents at any time; it does not need to fill up a particular extent before using the next one.

Keep the test and production environments on separate VMFS volumes, and use RDMs with virtual machines that use physical-to-virtual or cluster-across-boxes clustering.

For best performance, keep iSCSI and NAS traffic on separate, isolated IP networks.

Please note that you should have no more than eight NFS mounts per ESXi host, because this is the default number; the maximum number of NFS mounts is sixty-four.

Another point to remember is that ESXi 5.0 does not support VMFS 2 file systems. Therefore, you have to first upgrade VMFS 2 to VMFS 3 and then to VMFS 5.

Page 26: 03 VMware VSphere VStorage [V5.0]

This concludes the VMware vSphere vStorage Overview module. In summary:

- The VMware vStorage architecture consists of layers of abstraction that hide and manage the complexity and differences among physical storage subsystems. The virtual machine monitor in the VMkernel handles the storage subsystem for the applications and guest operating systems inside each virtual machine.

- VMware has built a storage interface into the vSphere software that gives you a wide range of storage virtualization connectivity options while providing a consistent presentation of storage to virtual machines. Connectivity options include FC, iSCSI SAN, NFS, NAS, and SAS.
- Virtual machines are stored in datastores. The datastore is located on a physical storage device and is treated as a VMFS, RDM, or NFS volume depending on the type of physical storage the datacenter has. The physical disk device could be SCSI, iSCSI, NAS, FC, or SAS.

Page 27: 03 VMware VSphere VStorage [V5.0]

To know how to optimize space, you should first know the storage-related requirements. Important requirements for customers today are avoiding unused disk space, reducing operational costs, bringing storage capacity upgrades in line with actual business usage, and optimizing storage space utilization. vStorage provides control over space optimization by enabling vCenter Server administrators to thin provision, grow a volume, add an extent, and increase the virtual disk size. You will learn about space optimization techniques in detail in this section.

Page 28: 03 VMware VSphere VStorage [V5.0]

vSphere thin provisioning can be done at the array level and the virtual disk level. Thin provisioning at the array level is done by the storage administrator. In this module, you will learn about thin provisioning at the virtual disk level only.

When you create a virtual machine, a certain amount of storage space on a datastore is provisioned, or

Page 29: 03 VMware VSphere VStorage [V5.0]

allocated, to the virtual disk files. By default, ESXi offers a traditional storage provisioning method. In this method, the amount of storage the virtual machine will need for its entire lifecycle is estimated, a fixed amount of storage space is provisioned to its virtual disk, and the entire provisioned space is committed to the virtual disk during its creation. A virtual disk that occupies the entire provisioned space in this way is said to be in the thick disk format.

A virtual disk in the thick format does not change its size. From the beginning, it occupies its entire space on the datastore to which it is assigned. However, creating virtual disks in the thick format can lead to underutilization of datastore capacity, because large amounts of storage space that are pre-allocated to individual virtual machines might remain unused.

To avoid over-allocating storage space and minimize stranded storage, vSphere supports storage over-commitment in the form of thin provisioning. When a disk is thin provisioned, the virtual machine thinks it has access to a large amount of storage. However, the actual physical footprint is much smaller. Disks in thin format look just like disks in thick format in terms of logical size. However, the VMware vSphere® Virtual Machine File System or VMFS drivers manage the disks differently in terms of physical size. The VMFS drivers allocate physical space for the thin-provisioned disks on first write and expand the disk on demand, if and when the guest operating system needs it. This capability enables the vCenter Server administrator to allocate the total provisioned space for disks on a datastore at a greater amount than the actual capacity.
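
For illustration, here is a minimal pyVmomi sketch, an assumption rather than course material, of the backing description that makes a virtual disk thin-provisioned: the logical size is set by capacityInKB, but physical blocks are allocated only on first write.

```python
# Hedged sketch: device spec for adding a thin-provisioned disk (pyVmomi).
from pyVmomi import vim

def thin_disk_spec(size_gb, controller_key, unit_number):
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.thinProvisioned = True             # thin: allocate on first write
    backing.diskMode = "persistent"
    backing.fileName = ""                      # let vSphere place the file

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.capacityInKB = size_gb * 1024 * 1024  # logical size seen by the guest
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec
```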

Page 30: 03 VMware VSphere VStorage [V5.0]

If the VMFS volume is full and a thin disk needs to allocate more space for itself, the virtual machine prompts the vCenter Server administrator to provide more space on the underlying VMFS datastore.

vSphere also provides alarms and reports that specifically track allocation versus current usage of storage capacity so that the vCenter Server administrator can optimize allocation of storage for the virtual environment.

A virtual machine can be assigned the thin disk format when you create the virtual machine, clone templates and virtual machines, or migrate virtual machines. When you migrate a virtual machine to a different datastore, or to a different host and datastore, disks can be converted from thin to thick format or from thick to thin format. If you choose to leave a disk in its original location, the disk format does not change. Thin provisioning is supported only on VMFS3 and later.

Page 31: 03 VMware VSphere VStorage [V5.0]

VMware vSphere® Storage APIs - Array Integration enable you to monitor the use of space on a thin-provisioned LUN to avoid running out of physical space. As your datastore grows, or if you use VMware vSphere® vMotion® to migrate virtual machines to a thin-provisioned LUN, the host communicates with the LUN and warns you about breaches in physical space and out-of-space conditions. It also informs the array about the free datastore space that is created when files and Raw Device Mappings or RDMs are deleted or removed from the datastore by vSphere Storage vMotion. The array can then reclaim the freed blocks of space.

Page 32: 03 VMware VSphere VStorage [V5.0]

When virtual machines running on a VMFS datastore require more space, you can dynamically increase the capacity of the datastore by using the add extent method. This method enables you to expand a VMFS datastore by attaching available hard disk space as an extent. The datastore can span up to 32 physical storage extents, to a maximum size of 64TB.

Page 33: 03 VMware VSphere VStorage [V5.0]

Certain storage arrays enable you to dynamically increase the size of a LUN within the array. After the LUN size is increased, the VMFS volume grow method can be used to increase the size of the VMFS datastore up to the 64TB limit.

Another use of volume grow is that if the original LUN was larger than the VMFS volume created, then you can use the additional capacity of the LUN by growing the VMFS volume.

Please note that volume grow for RDM is not supported.

The criteria applicable to volume grow and extent grow are listed in the table. For both methods, there is no need to power off virtual machines. Both methods can be used on an existing array that has an expanded LUN.

Additionally, you can grow a volume any number of times, up to a limit of 64TB. A datastore can have a maximum of 32 extents, but the maximum datastore size is still 64TB. No new partition is added for volume grow, but a new partition is added when performing an extent grow. This new partition has a dependency on the first extent. Therefore, if the first extent fails, virtual machines lose access to the entire volume. With volume grow, as long as the datastore has only one extent, virtual machine availability is never affected.
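
As a hedged illustration, not part of the course, the pyVmomi sketch below shows the volume grow path through the vSphere API: after the storage administrator enlarges the LUN, the host expands the VMFS datastore into the new space. The host and datastore objects are assumed to be already retrieved from the inventory.

```python
# Hedged sketch: grow a VMFS datastore into newly available LUN space.
from pyVmomi import vim

def grow_datastore(host, datastore):
    dss = host.configManager.datastoreSystem
    # Ask the host which expansion layouts are possible for this datastore.
    options = dss.QueryVmfsDatastoreExpandOptions(datastore)
    if not options:
        raise RuntimeError("LUN has no unused capacity to grow into")
    # Apply the first suggested expansion spec. Growing the existing extent
    # adds no new partition, so no new extent dependency is introduced.
    return dss.ExpandVmfsDatastore(datastore=datastore, spec=options[0].spec)
```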

Page 34: 03 VMware VSphere VStorage [V5.0]

It is important for users to know how much space their virtual machines are using, where their snapshots are located, and how much space the snapshots are consuming. vStorage provides control over the environment's space utilization by enabling vCenter Server administrators to add alarms that send notifications for set conditions. It also provides utilization reports and charts. Based on organizational policy, you can put a virtual machine's snapshots either in a specific datastore or in the virtual machine's home directory.

Page 35: 03 VMware VSphere VStorage [V5.0]

vCenter Server administrators can monitor space utilization by setting up alarms that send a notification when a certain threshold is reached. They can also analyze reports and charts that graphically represent statistical data for various devices and entities and give real-time data on the utilization.

Alarms are notifications that are set on events or conditions for an object. For example, the vCenter Server administrator can configure an alarm on disk usage percentage, to be notified when the amount of disk space used by a datastore reaches a certain level. The administrator can also set alarms that are triggered when a virtual machine is powered off, the amount of configured RAM used by the virtual machine exceeds a set capacity, or a host’s CPU usage reaches a certain percentage.

The vSphere administrator can set alarms on all managed objects in the inventory. When an alarm is set on a parent entity, such as a cluster, all child entities inherit the alarm. Alarms cannot be changed or overridden at the child level.

Alarms have a trigger and an action. A trigger is a set of conditions that must be met for an alarm to register. An action is the operation that occurs in response to the trigger. The triggers for the default alarms are defined, but the actions are not. The vCenter Server administrator must manually configure the alarm actions, for example, sending an email notification.

Triggers and actions answer three questions. First, what is the threshold that your environment can tolerate? Second, when should a notification

Page 36: 03 VMware VSphere VStorage [V5.0]

be sent? And last, what action should be taken in response to the alarm?
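
As a simple illustration of the trigger/action idea, and not the vCenter alarm engine itself, the polling sketch below treats a datastore-usage threshold as the trigger and a printed notification as the action. The threshold values are assumptions, and content is assumed to be the pyVmomi ServiceInstance content from the earlier sketch.

```python
# Illustrative polling sketch: trigger = usage threshold, action = notify.
from pyVmomi import vim

WARN_AT = 0.75   # "yellow" threshold (assumed)
ALERT_AT = 0.90  # "red" threshold (assumed)

def check_datastores(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        used = 1.0 - float(s.freeSpace) / s.capacity
        if used >= ALERT_AT:
            print("ALERT: %s is %.0f%% full" % (s.name, used * 100))   # action
        elif used >= WARN_AT:
            print("WARNING: %s is %.0f%% full" % (s.name, used * 100))
    view.DestroyView()
```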

The Storage Views tab is a part of vCenter Management Web services called the Storage Management Service. This service provides a greater insight into the storage infrastructure, particularly in the areas of storage connectivity and capacity utilization. It assists the vCenter Server administrator in quickly viewing information to answer questions, such as how much space on a datastore is used for snapshots and are there redundant paths to a virtual machine’s storage.

All data used to compute information shown on the tab comes from the vCenter Server database. The Storage Management Service makes direct database calls periodically, computes the information, and stores it in an in-memory cache.

A display in the top-right corner shows the last time when the report was updated. The Update link enables you to manually update the report as required.

The Storage Views tab includes two view pages: Reports and Maps.

Page 37: 03 VMware VSphere VStorage [V5.0]

The Reports page of the Storage Views tab enables you to view the relationship between storage entities and other vSphere entities. For example, you can view the relationship between a datastore and a virtual machine or host. You can also view the relationship between a virtual machine and a SCSI volume, path, adapter, or target. All reports are searchable and include links to drill down to specific entities.

Page 38: 03 VMware VSphere VStorage [V5.0]

Performance charts graphically represent statistical data for various devices and entities managed by vCenter Server. They display data for a variety of metrics including CPU, disk, memory, and network usage.

VMware provides several preconfigured charts for datacenters, hosts, clusters, datastores, resource pools, and virtual machines. Each metric for an inventory object is displayed on a separate chart and is specific to that object. For example, the metrics for a host are different from the metrics for a virtual machine.

In the next section, you will learn how a vCenter Server administrator can provide assurance of necessary and sufficient storage for the VMware virtual datacenter.

Page 39: 03 VMware VSphere VStorage [V5.0]

The key requirements for vStorage administrators are to ensure bandwidth for mission-critical virtual machines, avoid storage I/O bottlenecks, get predictable storage throughput and latency for virtual machines, and ensure that mission-critical virtual machines have storage available at all times.

vStorage provides various features to meet these requirements. It provides Native Multipathing Plug-in to avoid I/O bottlenecks, Pluggable Storage Architecture to enable third-party software developers to design their own load balancing techniques, and Storage I/O Control or SIOC to prioritize I/O for certain virtual machines.

Page 40: 03 VMware VSphere VStorage [V5.0]

To maintain a constant connection between an ESXi host and its storage, ESXi supports multipathing.

Multipathing is the technique of using more than one physical path for transferring data between an ESXi host and an external storage device. In case of a failure of any element in the SAN network, such as HBA, switch, or cable, ESXi can fail over to another physical path.

In addition to path failover, multipathing offers load balancing for redistributing I/O loads between multiple paths, thus reducing or removing potential bottlenecks.

Page 41: 03 VMware VSphere VStorage [V5.0]

To support path switching with Fibre Channel or FC SAN, an ESXi host typically has two or more HBAs available, from which the storage array can be reached through one or more switches. Alternatively, the setup can include one HBA and two storage processors or SPs, so that the HBA can use a different path to reach the disk array.

In the graphic shown, multiple paths connect each ESXi host with the storage device for the FC storage type. In FC multipathing, if HBA1 or the link between HBA1 and the FC switch fails, HBA2 takes over and provides the connection between the server and the switch. The process of one HBA taking over for another is called HBA failover. Similarly, if SP1 fails or the links between SP1 and the switches break, SP2 takes over and provides the connection between the switch and the storage device. This process is called SP failover.

The multipathing capability of ESXi supports both HBA and SP failover.

Page 42: 03 VMware VSphere VStorage [V5.0]

With Internet Small Computer System Interface or iSCSI storage, ESXi takes advantage of the multipathing support built into the IP network. This support enables the network to perform routing, as shown in the graphic.

Through the Dynamic Discovery process, iSCSI initiators obtain a list of target addresses that the initiators can use as multiple paths to iSCSI LUNs for failover purposes. In addition, with software-initiated iSCSI, the vSphere administrator can use Network Interface Card or NIC teaming, so that multipathing is performed through the networking layer in the VMkernel.

Page 43: 03 VMware VSphere VStorage [V5.0]

To manage storage multipathing, ESXi uses a special VMkernel layer called the Pluggable Storage Architecture or PSA. PSA is an open, modular framework that coordinates the simultaneous operation of multiple multipathing plug-ins or MPPs.

The PSA framework supports the installation of third-party plug-ins that can replace or supplement vStorage native components. These plug-ins are developed by software or storage hardware vendors and integrate with the PSA. They improve critical aspects of path management and add support for new path selection policies and new arrays not currently supported by ESXi. Third-party plug-ins are of three types: third-party SATPs, third-party PSPs, and third-party MPPs.

Third-party SATPs are generally developed by third-party hardware manufacturers, who have expert knowledge of their storage devices. These plug-ins are optimized to accommodate specific characteristics of the storage arrays and support new array lines. You need to install a third-party SATP when the behavior of your array does not match the behavior of any existing PSA SATP. When installed, the third-party SATPs are coordinated by the NMP and can be used simultaneously with the VMware SATPs.

The second type of third-party plug-in is the third-party PSP. These provide more complex I/O load-balancing algorithms. Generally, these plug-ins are developed by third-party software companies and help you achieve higher throughput across multiple paths. When installed, the third-party PSPs are coordinated by the NMP and can run alongside and be used simultaneously with the VMware PSPs.

Page 44: 03 VMware VSphere VStorage [V5.0]

The third type, third-party MPPs, can provide entirely new fault-tolerance and performance behavior. They run in parallel with the VMware NMP. For certain specified arrays, they replace the behavior of the NMP by taking control of the path failover and load-balancing operations.

When the host boots up or performs a rescan, the PSA discovers all physical paths to the storage devices available to the host. Based on a set of claim rules defined in the /etc/vmware/esx.conf file, the PSA determines which multipathing module should claim the paths to a particular device and become responsible for managing the device.

For the paths managed by the NMP module, another set of rules is applied to select SATPs and PSPs. Using these rules, the NMP assigns an appropriate SATP to monitor physical paths and associates a default PSP with these paths.

By default, ESXi provides the VMware Native Multipathing Plug-in or NMP. NMP is an extensible module that manages subplug-ins. There are two types of NMP subplug-ins, Storage Array Type Plug-ins or SATPs, and Path Selection Plug-ins or PSPs. SATPs and PSPs can be built-in and are provided by VMware. They can also be provided by a third-party vendor.

When a virtual machine issues an I/O request to a storage device managed by the NMP, the NMP calls the PSP assigned to this storage device. The PSP then selects an appropriate physical path for the I/O to be sent. The NMP reports the success or failure of the operation. If the I/O operation is successful, the NMP reports its completion. However, if the I/O operation reports an error, the NMP calls an appropriate SATP. The SATP interprets the error codes and, when appropriate, activates inactive paths. The PSP is called to select a new path to send the I/O.
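
To make this control flow concrete, here is a schematic Python sketch of the sequence just described: the PSP picks a path, and on an I/O error the SATP interprets it, activates a standby path, and the PSP is asked for a new path. This models the flow only; it is not VMkernel code, and the path names and error string are invented for the example.

```python
# Schematic sketch of the NMP / PSP / SATP dispatch flow.
class Path:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.active = name, healthy, True

    def send(self, io):
        # Returns (ok, error): a healthy path completes the I/O.
        return (True, None) if self.healthy else (False, "SCSI check condition")

def psp_select(paths):
    # PSP: pick the first currently active path (an MRU-like choice).
    return next((p for p in paths if p.active), paths[0])

def satp_handle_error(paths, failed, error):
    # SATP: interpret the error, deactivate the failed path,
    # and activate a standby path.
    failed.active = False
    for p in paths:
        if p is not failed and p.healthy:
            p.active = True

def nmp_issue_io(io, paths, retries=3):
    for _ in range(retries):
        path = psp_select(paths)                 # PSP chooses a physical path
        ok, error = path.send(io)                # issue the I/O on that path
        if ok:
            return "success via " + path.name    # NMP reports completion
        satp_handle_error(paths, path, error)    # SATP handles the error
    return "failure"                             # NMP reports the error

paths = [Path("vmhba1:C0:T0:L1", healthy=False), Path("vmhba2:C0:T0:L1")]
print(nmp_issue_io("read block 42", paths))      # -> success via vmhba2:C0:T0:L1
```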

Page 45: 03 VMware VSphere VStorage [V5.0]

This page enables you to view details of all storage devices. To ensure that storage device names are consistent across reboots, ESXi uses unique LUN identifiers to name the storage devices in the user interface and in output from CLI commands. In most cases, the Network Addressing Authority ID or NAA ID is used.

Runtime Name is created by the host and shows the name of the first path to the device. Unlike Universally Unique Identifiers or UUIDs, runtime names are not reliable identifiers for the device, and they are not persistent. The format for runtime names is vmhba#:C#:T#:L#. The vmhba# portion is the name of the storage adapter; it refers to the physical adapter on the host, not to the SCSI controller used by the virtual machines. C# is the storage channel number. T# is the target number; the host decides target numbering, and the numbering might change if the mappings of targets visible to the host change. Targets shared by different hosts might not have the same target number. L# is the LUN identifier, which shows the position of the LUN within the target. The LUN identifier is provided by the storage system; if a target has only one LUN, the LUN identifier is always zero.

For example, vmhba1:C0:T0:L1 represents LUN 1 on target 0, accessed through the storage adapter vmhba1 and channel 0. The Devices page also includes an Owner column, so you can view the PSA multipathing module managing the device. From the Devices page, you can click the Manage Paths link to view and manage the path details for a selected device.
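
As an illustration, and not a VMware tool, the small parser below decomposes a runtime name in the vmhba#:C#:T#:L# format described above.

```python
# Illustrative parser for runtime names of the form vmhba#:C#:T#:L#.
import re

RUNTIME_NAME = re.compile(r"^(vmhba\d+):C(\d+):T(\d+):L(\d+)$")

def parse_runtime_name(name):
    m = RUNTIME_NAME.match(name)
    if not m:
        raise ValueError("not a runtime name: %r" % name)
    adapter, channel, target, lun = m.groups()
    return {"adapter": adapter, "channel": int(channel),
            "target": int(target), "lun": int(lun)}

print(parse_runtime_name("vmhba1:C0:T0:L1"))
# {'adapter': 'vmhba1', 'channel': 0, 'target': 0, 'lun': 1}
```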

Page 46: 03 VMware VSphere VStorage [V5.0]

Here is an example of the Manage Paths dialog box. It shows the storage array type and the status of each path. The Active status indicates that the path is operational and is the current path used for transferring data. The Standby status indicates that the path is on a working active-passive array but is not currently being used for transferring data. The status may also show as Disabled or Dead, depending on whether the path is disabled or dead, respectively. In the Manage Paths dialog box, you can select the path selection policy based on the multipathing plug-in you are using. This example uses the NMP, so the choices are Most Recently Used, Round Robin, and Fixed. When the Most Recently Used policy is selected, the ESXi host uses the most recent path to the disk until this path becomes unavailable; the ESXi host does not automatically revert to the preferred path. Most Recently Used is the default policy for active-passive storage devices and is required for those devices.

When the Round Robin policy is selected, the ESXi host uses automatic path selection, rotating through all available paths. In addition to path failover, Round Robin supports load balancing across the paths.

When the Fixed policy is selected, the ESXi host always uses the preferred path to the disk when that path is available. If it cannot access the disk through the preferred path, it tries the alternative paths. Fixed is the default policy for active-active storage devices.
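
To summarize the three policies just described, here is a schematic sketch that reduces each one to pure selection logic over a list of available paths. This is an illustration of the policies' behavior, not ESXi code.

```python
# Schematic sketch of the three NMP path selection policies.
import itertools

def fixed(paths, preferred):
    """Fixed: always the preferred path when available, else an alternate."""
    return preferred if preferred in paths else paths[0]

def most_recently_used(paths, last_used):
    """MRU: keep using the last path until it disappears; no fail-back."""
    return last_used if last_used in paths else paths[0]

def round_robin(paths):
    """Round Robin: rotate through all available paths (also load balances)."""
    cycle = itertools.cycle(paths)
    return lambda: next(cycle)

paths = ["vmhba1:C0:T0:L1", "vmhba2:C0:T0:L1"]
pick = round_robin(paths)
print([pick() for _ in range(4)])   # alternates between the two paths
```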

Page 47: 03 VMware VSphere VStorage [V5.0]

Storage I/O Control provides I/O prioritization for virtual machines running on a cluster of ESXi servers that access a shared storage pool. It extends the familiar constructs of shares and limits that have existed for CPU and memory to address storage utilization through dynamic allocation of I/O queue slots across a cluster of ESXi servers. When a certain latency threshold is exceeded for a given block-based storage device, SIOC balances the available queue slots across the collection of ESXi servers, aligning the importance of certain workloads with the distribution of available throughput. SIOC can also reduce the I/O queue slots given to virtual machines with a low number of shares in order to provide more I/O queue slots to virtual machines with a higher number of shares.

SIOC throttles back I/O activity for certain virtual machines in the interest of other virtual machines to get a fairer distribution of I/O throughput and an improved service level. In the graphic, two business-critical virtual machines, online store and MS Exchange, are provided more I/O slots than the less important data mining virtual machine. SIOC was enhanced in vSphere 5.0 to include support for NFS datastores.
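
The toy calculation below illustrates the share-proportional idea using the slide's three virtual machines; the share and queue-slot values are assumptions, and this is not VMware's exact algorithm.

```python
# Toy sketch: divide a fixed pool of device queue slots in proportion
# to each VM's shares, as SIOC does conceptually under high latency.
def divide_queue_slots(total_slots, vm_shares):
    total = sum(vm_shares.values())
    return {vm: max(1, total_slots * s // total) for vm, s in vm_shares.items()}

shares = {"online-store": 2000, "ms-exchange": 2000, "data-mining": 500}
print(divide_queue_slots(64, shares))
# {'online-store': 28, 'ms-exchange': 28, 'data-mining': 7}
```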

Page 48: 03 VMware VSphere VStorage [V5.0]

SIOC is supported on FC, iSCSI, and NFS storage. However, it does not support RDM or datastores with multiple extents. In vSphere 5.0, SIOC is enabled by default on the Storage DRS-enabled datastore clusters.

On the left side of the slide, you will notice two critical virtual machines, the Online Store and Microsoft Exchange. These virtual machines require higher datastore access priority. Without Storage I/O Control enabled, these two virtual machines may not get the access priority that they need, while other virtual machines, such as the Data Mining and Print Server virtual machines, may consume more storage I/O resources than they really need. On the right side of the slide, you will see that with Storage I/O Control enabled, the storage I/O resources can be prioritized to those virtual machines that require more datastore access priority.

Page 49: 03 VMware VSphere VStorage [V5.0]

VAAI is a set of protocol interfaces between ESXi, storage arrays, and new application programming interfaces in the VMkernel. VAAI helps storage vendors provide hardware assistance to speed up VMware I/O operations that are more efficiently accomplished in the storage hardware. VAAI plug-ins improve the performance of data transfer and are transparent to the end user.

VAAI plug-ins are used by ESXi to issue a small set of primitives or fundamental operations to storage arrays. These operations are used to perform storage functions such as cloning and snapshots, which the storage arrays perform more efficiently than the host. In this way, ESXi uses VAAI to improve its storage services.
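
As a hedged illustration, not from the course, the pyVmomi sketch below reports, per SCSI device on a host, whether the array advertises VAAI hardware acceleration support via the vStorageSupport field that ESXi exposes. The host object is assumed to be a vim.HostSystem already retrieved from the inventory.

```python
# Hedged sketch: report VAAI (hardware acceleration) support per device.
from pyVmomi import vim

def vaai_status(host):
    for lun in host.config.storageDevice.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            # Values: vStorageSupported / vStorageUnsupported / vStorageUnknown
            print("%-40s %s" % (lun.canonicalName, lun.vStorageSupport))
```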

Page 50: 03 VMware VSphere VStorage [V5.0]

The three fundamental primitives of VAAI are Atomic Test and Set or ATS, Clone Blocks or Full Copy or XCOPY, and Zero Blocks or Write Same.

In vSphere 5.0, all of the primitives are T10 compliant and integrated into the ESXi stack. Please note that although all three primitives were supported in vSphere 4.1, only the Write Same (Zero Blocks) primitive was T10 compliant. T10 compliance means that T10-compliant arrays can use these primitives immediately with the default VAAI plug-in. Additionally, the ATS primitive has been extended in vSphere 5.0 and VMFS5 to cover more operations, such as acquire heartbeat, clear heartbeat, mark heartbeat, and reclaim heartbeat, which results in better performance.

In previous versions of VAAI, ATS was used for locks only when there was no resource contention; in the presence of contention, SCSI reservations were used. In vSphere 5, however, ATS is also used in situations where there is contention.

Page 51: 03 VMware VSphere VStorage [V5.0]

New VAAI primitives fall into two categories, Hardware Acceleration for Network Attached Storage or NAS and Hardware Acceleration for Thin Provisioning.

Storage hardware acceleration functionality enables your host to offload specific virtual machine and storage management operations to compliant storage hardware. With the

Page 52: 03 VMware VSphere VStorage [V5.0]

assistance of storage hardware, the host performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

To implement the hardware acceleration functionality, the PSA uses a combination of special array integration plug-ins, called VAAI plug-ins, and an array integration filter called the VAAI filter. The PSA automatically attaches the VAAI filter and vendor-specific VAAI plug-ins to the storage devices that support hardware acceleration, such as block storage devices (FC and iSCSI) and NAS devices.

Hardware acceleration is enabled by default on your host. To enable hardware acceleration on the storage side, you must check with your storage vendor. Certain storage arrays require you to explicitly activate the hardware acceleration support on the storage side.

When the hardware acceleration functionality is supported, the host can get hardware assistance and perform various operations faster and more efficiently. These operations include migration of virtual machines with Storage vMotion, deployment of virtual machines from templates, cloning of virtual machines or templates, VMFS clustered locking and metadata operations for virtual machine files, writes to thin provisioned and thick virtual disks, creation of fault-tolerant virtual machines, and creation and cloning of thick disks on NFS datastores.

Hardware Acceleration for NAS is a set of APIs that enables NAS arrays to integrate with vSphere and transparently offload certain storage operations to the array. This integration significantly reduces CPU overhead on the host. VAAI NAS is deployed as a plug-in that is not shipped with ESXi 5.0; it is developed and distributed by the storage vendor and signed through VMware's certification program. VAAI-enabled array or device firmware is required to use the VAAI NAS features.

The new VAAI primitives for NAS provide the Reserve Space, Full File Clone, and Extended File Statistics operations.

The Reserve Space operation enables storage arrays to allocate space for a virtual disk file in thick format. When you create a virtual disk on the NFS datastore, the NAS server determines the allocation policy. The default allocation policy on most NAS servers does not guarantee backing storage to the file. However, the Reserve Space operation can instruct the NAS device to use vendor-specific mechanisms to reserve space for a virtual disk of non-zero logical size.
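
For example, the following pyVmomi sketch creates a thick (preallocated) virtual disk on an NFS datastore; on a VAAI NAS array, the Reserve Space primitive is what allows this allocation policy to be honored. The datastore, datacenter, and path names are placeholder assumptions.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Find the datacenter that owns the target NFS datastore (placeholder names).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datacenter], True)
    dc = next(d for d in view.view if d.name == "Datacenter01")
    view.Destroy()

    # "preallocated" asks for a thick disk; without the Reserve Space primitive,
    # most NAS servers would silently back this file thin.
    spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec(
        capacityKb=10 * 1024 * 1024,          # 10 GB
        diskType="preallocated",
        adapterType="lsiLogic")
    content.virtualDiskManager.CreateVirtualDisk_Task(
        name="[nfs-datastore-01] demo/demo-thick.vmdk", datacenter=dc, spec=spec)
    Disconnect(si)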

The Full File Clone operation enables hardware-assisted cloning of offline virtual disk files. It is similar to VMFS block cloning and enables offline VMDK files to be cloned by the NAS filer. Offline cloning occurs when you clone from a template, or you perform a cold migration between two different datastores. Note that hot migration using Storage vMotion on NAS is not hardware-accelerated.

The Extended File Statistics operation enables storage arrays to accurately report space utilization for virtual machines.

In vSphere 4.1 (VAAI phase 1), if a NAS primitive fails, ESXi reverts to software methods, such as the DataMover, or fails the operation. There is no equivalent ATS primitive because locking is done in a completely different manner on NAS datastores.

A private VMODL API call to create native snapshots will be used by View. At this time, it is not clear whether the native snapshot feature will be supported on VMFS-5.

You will now learn about the second category of new VAAI primitives, VAAI primitives for thin provisioning.

Hardware Acceleration for Thin Provisioning is a set of APIs that assists in monitoring disk space usage on thin-provisioned storage arrays. This monitoring helps prevent out-of-space conditions and also helps when reclaiming disk space. No installation steps are required for the VAAI Thin Provisioning extensions.

VAAI Thin Provisioning works on all new and existing VMFS3 and VMFS5 volumes. However, VAAI-enabled device firmware is required to use the VAAI Thin Provisioning features. ESXi continuously checks for VAAI-compatible firmware. After the firmware is upgraded, ESXi starts using the VAAI Thin Provisioning features. The thin provisioning enhancements make it convenient to use the thin provisioning feature and reduce the complexity of storage management.

Use of thin provisioning creates two problems. The first is that as files are added and removed on a datastore, dead space accumulates over time; because the array is not informed, it still considers the space in use, which negates the benefit of thin provisioning. This problem is common in virtualized environments because Storage vMotion is used to migrate virtual machines to different datastores. The second problem is that storage over-subscription can lead to out-of-space conditions. An out-of-space condition is catastrophic to all virtual machines running on the Logical Unit Number or LUN.

VAAI Thin Provisioning solves the problems of dead space and the out-of-space condition. VAAI Thin Provisioning has a feature called Reclaim Dead Space. This feature informs the array about the datastore space that is freed when files are deleted or removed by Storage vMotion from the datastore. The array can then reclaim these free blocks of space. VAAI Thin Provisioning also has a Monitor Space Usage feature. This feature monitors space usage on thin-provisioned LUNs and helps administrators avoid running out of physical disk space. vSphere 5.0 also includes a new advanced warning for the out-of-space condition on thin-provisioned LUNs. You will now learn about VASA, which is a new feature in vSphere 5.0.
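
Array-side monitoring happens in the device firmware, but the over-subscription itself is also visible from the vSphere API. Here is a minimal pyVmomi sketch (placeholder credentials) that reports provisioned versus physical capacity for each datastore:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    GB = 1024.0 ** 3
    for ds in view.view:
        s = ds.summary
        # uncommitted is the extra space that thin disks could still grow into.
        provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
        print("%-24s capacity %7.1f GB  free %7.1f GB  provisioned %7.1f GB (%3.0f%%)"
              % (s.name, s.capacity / GB, s.freeSpace / GB,
                 provisioned / GB, 100 * provisioned / s.capacity))
    view.Destroy()
    Disconnect(si)

A provisioned figure well above 100 percent of capacity is the over-subscription condition the slide describes.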

vStorage APIs for Storage Awareness or VASA is used by VASA providers to supply information about their storage arrays to vCenter Server. The vCenter Server instance gets this information from storage arrays through plug-ins called VASA providers. The storage array informs the VASA provider of its configuration, capabilities, storage health, and events, and the VASA provider, in turn, informs vCenter Server. This information can then be displayed in the vSphere Client.

When VASA provider components are used, vCenter Server can integrate with external storage, both block storage and NFS. This helps you obtain comprehensive and meaningful information about resources and storage data. This also helps you choose the right storage in terms of space, performance, and Service-Level Agreement or SLA requirements.

Profile-Driven Storage enables you to have greater control over your storage resources. It also enables virtual machine storage provisioning to become independent of the specific storage available in the environment. You can define virtual machine placement rules in terms of storage characteristics and monitor a virtual machine's storage placement based on user-defined rules.

Profile-Driven Storage uses VASA to deliver the storage characterization supplied by the storage vendors. VASA improves visibility into the physical storage infrastructure through vCenter Server. It also enables the vSphere administrator to tag storage based on customer-specific descriptions. This new API allows VASA-enabled arrays to expose storage architecture details to vCenter Server. Instead of only seeing a block or file device with some amount of capacity, VASA allows vCenter Server to know about replication, RAID, compression, deduplication, and other system capabilities provided by storage array vendors like EMC and NetApp. With this new information, VMware administrators can create storage profiles that map to volumes. Virtual machines can then be assigned storage by policy and not just by the availability of space. The storage characterizations are used to create the virtual machine placement rules in the form of storage profiles. Profile-Driven Storage also provides an easy way to check a virtual machine's compliance against the rules.
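
The placement logic itself is simple to picture. The toy Python model below (not the actual Profile-Driven Storage API; all names and capability values are illustrative) shows the idea: VASA-supplied capability tags on each datastore, a storage profile naming the required capabilities, and a compliance check that filters candidate datastores.

    # Capability tags as a VASA provider might report them (illustrative values).
    datastores = {
        "gold-fc-01":    {"raid": "RAID10", "replication": True,  "dedupe": False},
        "silver-nfs-01": {"raid": "RAID5",  "replication": False, "dedupe": True},
    }

    # A storage profile: the capabilities a virtual machine's disks require.
    gold_profile = {"raid": "RAID10", "replication": True}

    def compliant(capabilities, profile):
        # A datastore complies when it satisfies every required capability.
        return all(capabilities.get(key) == want for key, want in profile.items())

    print([name for name, caps in datastores.items()
           if compliant(caps, gold_profile)])   # -> ['gold-fc-01']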

Using Storage vMotion, you can migrate a virtual machine and its files from one datastore to another, while the virtual machine is running. The virtual machine stays on the same host and the virtual machine files are individually moved to a different datastore location. You can choose to place the virtual machine and all its files in a single location or select separate locations for the virtual machine configuration file and each virtual disk.

You can migrate a virtual machine from one physical storage type, like FC, to a different storage type, like iSCSI. Storage vMotion supports FC, iSCSI, and NAS network storage. The Storage vMotion migration process does not disturb the virtual machine. There is no downtime, and the migration is transparent to the guest operating system and the applications running on the virtual machine.
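
In the API, a Storage vMotion is a RelocateVM_Task whose relocate spec names only a target datastore, so the virtual machine stays on its current host while its files move. A minimal pyVmomi sketch with placeholder names:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        # Look up a managed object by name via a container view.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        obj = next(o for o in view.view if o.name == name)
        view.Destroy()
        return obj

    vm = find(vim.VirtualMachine, "OnlineStore")
    target = find(vim.Datastore, "iscsi-datastore-02")

    # Setting only `datastore` (no `host`) makes this a Storage vMotion:
    # the running VM keeps its host, and its disks migrate to the target.
    task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target))
    Disconnect(si)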

Storage vMotion is enhanced in vSphere 5.0 to support migration of virtual machine disks with snapshots.

Storage vMotion has a number of uses in virtual datacenter administration. For example, during an upgrade of a VMFS datastore from one version to the next, the vCenter Server administrator can migrate the virtual machines that are running on a VMFS3 datastore to a VMFS5 datastore and then upgrade the VMFS3 datastore without any impact on the virtual machines. The administrator can then use Storage vMotion to migrate the virtual machines back to the original datastore without any virtual machine downtime.

Additionally, when performing storage maintenance, reconfiguration, or retirement, the vCenter Server administrators can use Storage vMotion to move virtual machines off a storage device to allow maintenance, reconfiguration, or retirement of the storage device without virtual machine downtime.

Another use is for redistributing storage load. The vCenter Server administrator can use Storage vMotion to redistribute virtual machines or virtual disks to different storage volumes in order to balance the capacity and improve performance.

And finally, for meeting SLA requirements, the vCenter Server administrators can migrate virtual machines to tiered storage with different service levels to address the changing business requirements for those virtual machines.

To ensure successful migration with Storage vMotion, a virtual machine and its host must meet resource and configuration requirements for virtual machine disks to be migrated. There are certain requirements and limitations of Storage vMotion.

The virtual machine disks must be in persistent mode or RDMs. For virtual compatibility mode RDMs, you can migrate the mapping file or convert it into thick-provisioned or thin-provisioned disks during migration, as long as the destination is not an NFS datastore. If you convert the mapping file, a new virtual disk is created and the contents of the mapped LUN are copied to this disk. For physical compatibility mode RDMs, you can migrate the mapping file only.

Another limitation is that migration of virtual machines during VMware Tools installation is not supported. Additionally, the host on which the virtual machine is running must have a license that includes Storage vMotion. ESX and ESXi 3.5 hosts must be licensed and configured for vMotion. ESX and ESXi 4.0 and later hosts do not require vMotion to be configured in order to perform migrations with Storage vMotion.

The host on which the virtual machine is running must have access to both the source and target datastores. And finally, the number of simultaneous migrations with Storage vMotion is limited.

In vSphere 5.0, Storage vMotion uses a new mirroring architecture that guarantees migration success even when facing a slower destination. It also guarantees a more predictable and shorter migration time.

Mirror mode works in the following manner. The virtual machine directory is copied from the source datastore to the destination datastore, and the virtual machine on the destination datastore is started using the copied files. The mirror mode driver takes a single pass over the virtual disk files, copying them from the source to the destination and keeping track of which blocks have been copied. If a write occurs to a source disk block that has already been copied to the destination, the mirror mode driver copies the modified block to the destination as well. The destination virtual machine waits for all virtual machine disk files to finish being copied; after the single-pass copy is complete, Storage vMotion transfers control to the virtual machine on the destination datastore. Finally, the virtual machine directory and the virtual machine's disk files are deleted from the source datastore.
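
The block-tracking idea behind mirror mode can be sketched in a few lines of self-contained Python (a toy simulation, not VMware code): one pass copies blocks while a bitmap records progress, and any write landing on an already-copied block is mirrored to the destination.

    class MirrorModeCopy:
        """Toy, single-process simulation of the mirror mode idea."""
        def __init__(self, source_blocks):
            self.src = source_blocks
            self.dst = [None] * len(source_blocks)
            self.copied = [False] * len(source_blocks)   # block-tracking bitmap

        def guest_write(self, block, data):
            self.src[block] = data
            if self.copied[block]:
                # The block was already migrated, so mirror the write too;
                # not-yet-copied blocks are simply picked up later in the pass.
                self.dst[block] = data

        def single_pass(self):
            for i in range(len(self.src)):
                self.dst[i] = self.src[i]
                self.copied[i] = True

    m = MirrorModeCopy(list("ABCD"))
    m.single_pass()
    m.guest_write(2, "X")        # mirrored write keeps the copy consistent
    assert m.dst == m.src        # source and destination converge after one pass

Because modified blocks are mirrored as they happen rather than re-copied later, one pass suffices, which is what makes the migration time predictable even against a slower destination.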

vSphere 5.0 introduces a new storage feature called Storage DRS. This feature helps you to manage multiple datastores as a single resource, called a datastore cluster. A datastore cluster is a collection of datastores that are managed together but function separately. It serves as a container, or folder, in which users can store their datastores.

Storage DRS collects resource usage information for the datastore cluster on which it is enabled. It then makes recommendations about initial virtual machine or VMDK placement and about migrations, to avoid I/O and space utilization bottlenecks on the datastores in the cluster.

Storage DRS can be configured to work in either manual mode or fully automated mode. In manual mode, it provides recommendations for the placement or migration of virtual machines. When you apply Storage DRS recommendations, vCenter Server uses Storage vMotion to migrate the virtual machine disks to other datastores in the datastore cluster to balance the resources. In fully automated mode, Storage DRS automatically handles the initial placement and migrations, based on runtime rules. Storage DRS also includes affinity and anti-affinity rules to govern virtual disk location.
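
Switching a datastore cluster between the two modes is a small reconfiguration call. The pyVmomi sketch below is hedged: the spec type names follow the StorageDrsConfigSpec and StorageDrsPodConfigSpec objects in the vSphere 5.0 API reference and may need adjusting for your client library version, and the cluster name is a placeholder.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # A datastore cluster is a StoragePod in the API.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True)
    pod = next(p for p in view.view if p.name == "DatastoreCluster01")
    view.Destroy()

    pod_spec = vim.storageDrs.PodConfigSpec(
        enabled=True,
        defaultVmBehavior="automated")   # use "manual" for recommendation-only mode
    content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec), modify=True)
    Disconnect(si)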

This concludes the Working with VMware vSphere vStorage module. In summary:
- vStorage provides control over space optimization by enabling vCenter Server administrators to thin provision, grow volumes, add extents, and increase virtual disk size.
- vCenter Server administrators can monitor space utilization by setting up alarms that send notifications when a certain threshold has been reached. They can also analyze reports and charts that graphically represent statistical data for various devices and entities and give real-time data on utilization.
- vStorage provides the NMP to avoid I/O bottlenecks, and the PSA enables third-party software developers to design their own load balancing techniques and failover mechanisms for particular storage array types.
- vSphere 5.0 introduces a new storage feature called Storage DRS that helps to manage multiple datastores as a single resource, called a datastore cluster.

Now that you have completed this module, feel free to review it until you are ready to start the next module. When you are ready to proceed, close this browser window to return to the course contents page.