Avere Reference Architecture for VMware VDI: Quantifying User Experience and Cost

Avere White Paper

Avere Systems, Inc. 5000 McKnight Road, Suite 404 Pittsburgh, PA 15237 USA 888-88-AVERE www.averesystems.com [email protected]

Copyright © Avere Systems, Inc., 2013

Architecting Storage for VDI Deployments

A successful virtual desktop infrastructure (VDI) deployment delivers reliable, consistently responsive user experiences combined with the cost efficiencies of extreme consolidation. But sizing a supporting storage infrastructure to achieve both performance and cost objectives is an ongoing challenge for virtualization administrators. To bring quantitative clarity to the task, Avere storage engineers benchmarked the Avere Systems Edge filer, testing the Avere solution against a well-known VMware View application workload to establish expectations for responsiveness, virtual desktop density, and cost.

This white paper characterizes the storage demands of VDI application environments, the Avere solution approach, and the results of the LoginVSI "Heavy" workload benchmarking. It describes how the Avere storage system reference architecture delivers the extremely consolidated performance required for cost-effective VDI, utilizing existing storage assets in the smallest footprint possible with predictable VDI scaling to tens of thousands of virtual desktops.

The I/O Blender Effect

Desktop virtualization creates an application environment that is particularly demanding of the underlying storage infrastructure. VDI implementations utilizing VMware vSphere hypervisor technology with VMware Horizon View desktop management software enable extreme levels of consolidation. But each active virtual desktop machine requires memory, CPU, network, and storage resources. To ensure the most efficient utilization of these resources and maximize the number of virtual desktops that can be deployed, the hypervisor controls memory, CPU, and network resources. The storage infrastructure, however, is typically shared across multiple hypervisors, allowing administrators to take advantage of hypervisor features like VMware vMotion for non-disruptively moving data and balancing virtual desktop instances across hypervisors. These features enable more efficient utilization of storage capacity, but they also lead to more hypervisors sharing the same storage, which can overwhelm the storage array's I/O capabilities.

Other technologies that contribute to the heavy I/O demand include VMware linked clone functionality that allows groups of VDI users to share a golden image. Each virtual desktop utilizes this base image, plus a delta image that contains VMDK blocks of user-specific desktop data. Golden images are heavily read by all VDI instances, and delta files are read when per-VDI unique VMDK blocks are requested by the hypervisor. Servicing these I/O requests results in a demanding data-access pattern, particularly in large-scale deployments where thousands of VDI instances are simultaneously active.

Taken together, these activities create demanding, random I/O access patterns. Figure 1 illustrates a typical VMware hypervisor workload with an ops distribution of 70% write and 30% read. The outer circle shows the breakdown of I/O sizes for read and write operations. These combined read and write requests coming from thousands of VDI instances lead to the I/O blender effect that can very quickly stress legacy storage arrays.

Growing Pains

While the I/O blender effect can be reasonably remedied in small VDI environments, randomized I/O is dramatically more challenging to accommodate at scale. For example, if each VDI instance requires (at the disk-virtualized I/O level) 20 I/Os per second with response times around 10 milliseconds to deliver a satisfactory user experience, a 5,000-VDI-instance deployment will require 100,000 IOPS to be delivered reliably under 10 milliseconds. Typically 70-80% of those operations will be writes. Roughly half of those writes are destined for linked-clone delta disks, and linked-clone writes typically do not align on a 4K boundary; each misaligned write triggers additional disk reads at the storage layer to pad the partial write, boosting I/O requirements by another 40,000 IOPS. This level of performance is required simply to maintain steady-state operation. Other activities, like boot storms and highly parallel new-desktop-deployment processes, drive IOPS and throughput requirements even higher.
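To make the arithmetic explicit, the following Python sketch walks through the same sizing estimate. The per-instance figures are the assumptions stated above, not measurements.

```python
# Back-of-the-envelope steady-state sizing for the example above.
# All inputs are the assumptions stated in the text, not measured values.

instances = 5000             # VDI instances in the deployment
iops_per_instance = 20       # disk-virtualized IOPS needed per instance
write_fraction = 0.80        # 70-80% of operations are writes; use the high end
delta_write_fraction = 0.50  # half of writes land on linked-clone delta disks

steady_state_iops = instances * iops_per_instance       # 100,000 IOPS
write_iops = steady_state_iops * write_fraction          # 80,000 write IOPS
# Misaligned partial writes force an extra read to pad each 4K block.
padding_read_iops = write_iops * delta_write_fraction    # 40,000 extra IOPS

total_iops = steady_state_iops + padding_read_iops       # 140,000 IOPS
print(f"Steady-state demand: {steady_state_iops:,} IOPS")
print(f"With read-modify-write padding: {total_iops:,.0f} IOPS")
```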

Figure 1


In sizing the storage infrastructure, virtualization administrators strive to balance steady-state IOPS requirements against peak-throughput demands. Traditional NAS sizing exercises for VDI workloads use both requirements as input and derive recommendations for appropriately sized filer heads, disk counts, and RAM/Flash cache capacity to enable reliable VDI scaling. The problem is that dialing in the golden configuration is not trivial, and these pre-sized VDI storage pods come with large price tags.

Few NAS arrays can economically handle 100,000 IOPS when 80,000 of those IOPS are writes. For example, utilizing high-speed serial attached SCSI (SAS) spindles to deliver the roughly 140,000 IOPS computed above would require 466 drives (using 300GB disks, with each drive delivering 300 IOPS) and a whopping 140TB of disk space. Actual utilized capacity in this case, however, would be a mere fraction of the total: deduplicated, 5,000 linked-clone VDI instances can consume as little as 25TB of capacity. So in this example, 115TB of the SAS storage would be wasted capacity, because no performance headroom would remain to service any other active workload from that space.
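A small companion sketch of the spindle math, using the same assumed figures (300 IOPS and 300GB per SAS drive, 25TB of deduplicated data):

```python
# Spindle-count estimate for serving ~140,000 IOPS on high-speed SAS drives.
# Drive IOPS, drive capacity, and the deduplicated footprint are the
# assumptions from the example above.

required_iops = 140_000
iops_per_drive = 300
gb_per_drive = 300

drives = required_iops / iops_per_drive           # ~467 spindles (the text cites 466)
raw_capacity_tb = drives * gb_per_drive / 1000    # ~140 TB of raw disk space

used_capacity_tb = 25                             # 5,000 deduplicated linked clones
stranded_tb = raw_capacity_tb - used_capacity_tb  # ~115 TB of effectively wasted capacity
print(f"{drives:.0f} drives, {raw_capacity_tb:.0f} TB raw, {stranded_tb:.0f} TB unused")
```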

Another option to boost performance would be to use an all-flash array, along with deduplication and compression technologies. In this case, all 5,000 VDI instances might fit onto a single array, but once that array is fully built out and consumed, the only way to add new virtual desktops would be to add another expensive all-flash array. Traditional NAS architectures simply do not provide affordable, incremental VDI scaling.

Avere FXT Series Edge Filer Technology for Extreme Workloads

Avere FXT Series Edge filer technology delivers high-performance I/O for demanding workloads such as those presented by large virtual desktop implementations. Avere optimizes NAS by separating performance scaling from capacity scaling. Placing an Avere Edge filer into a VDI environment enables linear performance scalability to millions of operations per second, while allowing the use of existing Core filers—such as those provided by EMC Isilon, NetApp, or Oracle Sun ZFS systems—for affordable capacity scaling with space-efficiency features such as deduplication and compression.


Figure 2

The Avere Edge filer delivers high performance to hot data, manipulating only the data blocks required to service the workload at hand. The Avere architecture is ideally suited to VDI environments, which typically comprise large numbers of VMDK disk files of which only small sections are actively accessed by VDI instances. Although the Avere Edge filer is a file-level NAS solution, Avere tiering technology can optimize access to small subsections of very large files. The Avere Edge filer can own and transact upon the hot data blocks of thousands of VMDK files without requiring highest-performance capacity for the entire footprint of those files.

The Avere architecture effectively handles the entire VDI workflow, from the heavy reads of the linked-clone master file, to the small-request read I/O and misaligned write I/O directed at thousands of linked-clone delta VMDK disks, thousands of persistent persona and temporary disk files, and tens of thousands of large files. In Figure 2, the Avere Edge filer is positioned to handle the highly concurrent and random (I/O blender effect) workload that large-scale VDI deployments generate.


Avere Edge filer clusters seamlessly scale to 50 FXT nodes, enabling VDI deployments to grow to tens of thousands of virtual desktops. Optimized Avere clustering features also streamline reconfiguration on the VMware Horizon View side: simply add each FXT node as a datastore to the vSphere environment, then add the datastore to a new VMware Horizon View desktop pool configuration. The architectural features of the Avere Edge filer enable incremental growth of the VDI environment, delivering predictable high-performance results without forklift upgrades of existing storage assets.

Avere VDI Reference Architecture

The Avere VDI Reference Architecture serves as a foundation for predictable VDI scaling. As illustrated in Figure 3, the primary components include the virtualization hypervisor infrastructure, a two-node Avere FXT 4500 Edge filer cluster with high-availability and failover capabilities, a Core filer, and 10-gigabit network connectivity. A VMware vSphere datastore is configured for each Edge filer node, and a pool of desktops is deployed on each datastore. The number of VDI instances that a given datastore can support is determined by the I/O requirements of each VDI instance, as sketched below. With sufficient space and IOPS capability on the Core filer, the Avere Edge filer cluster can scale to support the largest of VDI deployments.
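As a rough illustration of that sizing relationship, the sketch below estimates how many VDI instances a single Edge filer node (one vSphere datastore) could host. The per-node IOPS budget used in the example is a hypothetical input, not an Avere specification; the per-instance demand is the figure measured in the benchmark described later.

```python
# Hedged sizing sketch: instances per datastore given an assumed per-node
# IOPS budget and per-instance I/O demand. Both example inputs are illustrative.

def instances_per_datastore(node_iops_budget: float, iops_per_instance: float) -> int:
    """Largest whole number of VDI instances the node's I/O budget can carry."""
    return int(node_iops_budget // iops_per_instance)

# Example: a hypothetical 17,000-IOPS node budget and the 11.2 VDI IOPS per
# instance measured for the "Heavy" workload in the benchmark below.
print(instances_per_datastore(17_000, 11.2))   # -> 1517
```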

Figure 3


Test Results: Avere Delivers Best Value

Avere engineers utilized the LoginVSI (http://www.loginvsi.com/) tool to test the performance of the Avere FXT 4500 Edge filer under the stress of a large-scale VDI workload. The benchmark measures how long it takes each logged-on user to perform common activities such as opening spreadsheets, word-processing documents, and email messages, as well as general web browsing. The benchmark initiates each user login within a specified time window and continuously monitors the performance of each individual session. It identifies long-running task outliers to determine the active-session-count saturation point, and measures the overall experience of all active instances.

In this test, the environment included 1472 VDI instances running the LoginVSI “Heavy” workload (http://www.loginvsi.com/documentation/v3/performing-tests/workloads) against a single FXT 4500 node backed by a NetApp FAS2240 Core filer. (For every 15 IOPS the VDI hypervisors generate to the Avere cluster, the Core filer behind it must be able to handle 1 IOP.) The “Heavy” workload generates approximately 11.2 VDI IOPS per instance. VDI instances were spread across 25 ESXi hosts configured with VMware Horizon View. The Avere FXT 4500 Edge filer was configured as the vSphere NFS datastore, providing an all-flash/SSD tier to handle the hot read/write data for the VDI instances; inactive data was tiered to the Core filer. All infrastructure services, including Windows Active Directory, Microsoft SQL Server, the VMware View Connection Server, and the LoginVSI launchers, were also virtualized on VMware vSphere.
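A short sketch of how those figures relate, using the paper's own numbers (11.2 VDI IOPS per instance and the 15:1 Edge-to-Core IOPS ratio):

```python
# Load breakdown for the benchmark configuration described above.
# The per-instance rate and Edge-to-Core ratio are the figures quoted in the text.

instances = 1472
vdi_iops_per_instance = 11.2
edge_to_core_ratio = 15        # 15 IOPS at the Avere cluster per 1 IOP at the Core filer

edge_iops = instances * vdi_iops_per_instance   # ~16,500 IOPS (reported as ~17,000)
core_iops = edge_iops / edge_to_core_ratio      # ~1,100 IOPS absorbed by the FAS2240
print(f"Edge filer load: {edge_iops:,.0f} IOPS; Core filer load: {core_iops:,.0f} IOPS")
```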

Benchmark results showed that a single Avere FXT 4500 Edge filer node can support 1472 VMware View 5.1 linked-clone VDI instances with persistent user disks, delivering 17,000 IOPS with a response time at the storage layer of 8 milliseconds.

For comparison purposes, rather than differentiating read versus write IOPS, results were normalized to a common measuring unit of VDI IOPS. This facilitates evaluation against other storage vendors’ published results from various benchmark sources. Figure 4 compares the Avere results to the IOPS per VDI (arrived at by dividing the total IOPS the array serviced by the total number of VDI instances running) of five competitive offerings.


Avere enables the highest density at 4,124.4 VDI IOPS per rack unit. Dell EqualLogic, using the LoginVSI Medium benchmark, ranked second at 2,800 VDI IOPS per rack unit. In the cost comparison, based on published prices, the Avere solution again offers the best value, delivering each VDI IOP at a list price of $10.79, the lowest cost among the benchmark comparisons.
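For reference, the normalized metrics in the comparison can be reproduced with the sketch below. The rack-unit count and list price shown in the example are hypothetical placeholders used only to demonstrate the arithmetic, not published Avere figures.

```python
# How the comparison metrics are normalized: IOPS served per rack unit and
# list-price cost per VDI IOP.

def vdi_iops_per_rack_unit(total_vdi_iops: float, rack_units: float) -> float:
    return total_vdi_iops / rack_units

def cost_per_vdi_iop(list_price_usd: float, total_vdi_iops: float) -> float:
    return list_price_usd / total_vdi_iops

# Hypothetical example inputs, chosen only to show the arithmetic:
print(vdi_iops_per_rack_unit(17_000, 4))    # 4250.0 VDI IOPS per rack unit
print(cost_per_vdi_iop(170_000, 17_000))    # 10.0 dollars per VDI IOP
```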

Painless Growth: 15X More VDI Instances

Avere solutions let virtualization administrators predictably design and implement NAS storage for VMware VDI, ensuring a consistently responsive user experience while enabling the cost efficiencies of extreme consolidation.

As demonstrated in the VDI-workload tests, deploying an Avere Edge filer into an existing storage infrastructure can enable support for as many as 15 times more VDI instances than the native Core filer in place. The Avere architecture successfully eliminates the challenges of the I/O blender effect to enable extremely high performance at scale. Avere offers superior performance and scalability at a cost and in a footprint significantly smaller than competitive alternatives. As a result, virtualization administrators can confidently leverage Avere solutions as the foundation for building and growing large-scale VDI environments.

Figure 4


For additional information on the testing results described in this white paper, visit:

http://info.averesystems.com/blog-0/bid/278489/Untangling-the-VDI-Storage-Enigma
http://www.youtube.com/watch?v=kBRs_OZw6Y0
http://info.averesystems.com/blog-0/bid/286811/NAS-in-a-VDI-Workflow
http://info.averesystems.com/blog-0/bid/290388/Measuring-VDI-in-a-NAS-Environment