
Technical Report

Design Guide for Citrix XenDesktop on NetApp Storage Rachel Zhu, NetApp

April 2015 | TR-4138


© 2015 NetApp, Inc. All Rights Reserved.

TABLE OF CONTENTS

1 Citrix XenDesktop with FlexCast Technology
2 Introduction to Clustered Data ONTAP
2.1 Benefits of Clustered Data ONTAP
2.2 NetApp Storage Cluster Components
2.3 Clustered Data ONTAP Networking Concepts
2.4 Cluster Management
3 Introduction to Citrix XenDesktop
3.1 Citrix FlexCast Delivery Technology
3.2 High-Definition User Experience Technology
3.3 Citrix XenDesktop 7 Desktop and Application Services
3.4 Desktop Provisioning Methods
3.5 Provisioning Technology Comparison
3.6 Citrix Provisioning Server RAM Cache
4 Solution Architecture
4.1 High-Level Architecture
4.2 XenDesktop Storage Layout in 2,000-Seat Deployment Reference Architecture
5 Performance Test Results
5.1 Load Generation
5.2 Performance Test Plan
5.3 Performance and Space Testing
5.4 Storage Node Failover Testing
5.5 Citrix PVS Workload Characteristics
5.6 Performance Effect of Flash Solutions
6 Storage Configuration Best Practices
6.1 Protocol Decision
6.2 Clustered Data ONTAP Failover Group
6.3 LIF Creation
6.4 LIF Migration
6.5 10 Gigabit Ethernet
6.6 Jumbo Frames
6.7 Flow Control Overview
6.8 Aggregates and Volumes
6.9 Secure Default Export Policy
6.10 Storage Block Misalignment
7 Storage Operation Best Practices
7.1 Deduplication Decision
7.2 Always-on Deduplication
7.3 Inline Zero Detection and Elimination in Data ONTAP 8.3
7.4 Thin Provisioning and Volume Autogrow Policies
7.5 Space Reclamation
7.6 Antivirus
7.7 Monitoring NetApp and XenDesktop Infrastructure
7.8 NetApp OnCommand Insight
8 Backup and Recovery of Virtual Desktops in a vSphere Environment
8.1 Best Practices for VSC Backup of XenDesktop PvDisk
8.2 Best Practices for VSC Recovery of XenDesktop PvDisk
9 User Data and Profile Management
9.1 User Data
9.2 User Profile Data
9.3 User-Installed Applications
9.4 Separating User Data from the VM
10 Application Virtualization with XenApp
11 Citrix Component Scalability Considerations
11.1 Infrastructure Components Limit
11.2 Hypervisor Configuration Limits and Pod Concept
11.3 VM Resource Recommendation
12 Storage Sizing Guide
12.1 Solution Assessment
12.2 Capacity Considerations
12.3 Performance Considerations
12.4 Desktop Deployment Scenarios
12.5 IOPS and Capacity Considerations
12.6 Storage Scalability Guidelines on Hybrid Storage
12.7 Storage Scalability Guidelines on All-Flash FAS Storage
Test Results Overview
Acknowledgements
References
Citrix References
NetApp References
VMware References

LIST OF TABLES

Table 1) Management tools.
Table 2) Provisioning methods.
Table 3) Aggregate configuration details.
Table 4) FlexVol volume configuration details.
Table 5) Virtual desktop workload test cases.
Table 6) 2,000-user CIFS workload.
Table 7) 500 hosted desktop users, average IOPS during boot, login, steady state, and logoff.
Table 8) 1,450 hosted shared desktop users, average IOPS during boot, login, steady state, and logoff.
Table 9) Average CPU utilization on node 1 and node 2 during boot, login, steady state, and logoff.
Table 10) Storage protocols for XenDesktop.
Table 11) Deduplication recommendations.
Table 12) Disk type and protocol.
Table 13) XenDesktop component configuration and limits.
Table 14) VM resource recommendations.
Table 15) Recommendations for hosted shared desktops.
Table 16) PVS, Login VSI heavy workload sizing guidance.
Table 17) Citrix PVS–based deployment scenarios.
Table 18) IOPS and capacity assumptions.
Table 19) Storage scalability guidelines, sample configurations.
Table 20) Scalability guidelines.

LIST OF FIGURES

Figure 1) Multi-tenancy concept.
Figure 2) Ports and LIFs example.
Figure 3) Overview of unified FlexCast architecture.
Figure 4) XenDesktop 7 supports multiple service modes on a single infrastructure.
Figure 5) vDisk and write cache (graphic provided by Citrix).
Figure 6) MCS file layout (graphic provided by Citrix).
Figure 7) Hypervisor snapshot I/O workflow.
Figure 8) XenDesktop on clustered Data ONTAP architecture.
Figure 9) XenDesktop on clustered Data ONTAP example volume layout.
Figure 10) Clustered Data ONTAP with mix of disk types.
Figure 11) Node 1 space usage.
Figure 12) Failover from node 1 to node 2 results in 2,000-user steady-state, Login VSI medium workload running entirely on node 2.
Figure 13) Failover from node 2 back to node 1.
Figure 14) Citrix PVS reads and writes to storage.
Figure 15) I/O size breakdown of write workload.
Figure 16) IOPS per desktop, Login VSI heavy workload testing.
Figure 17) I/O comparison, PVS with personal vDisk, 500-seat boot.
Figure 18) I/O comparison, PVS with personal vDisk, 500-seat login.
Figure 19) Setting jumbo frames.
Figure 20) Partition write cache file disk using DiskPart.
Figure 21) HVD user profile manager policy.
Figure 22) HSD user profile manager policy.
Figure 23) User data backup types in XenDesktop environment.
Figure 24) VSC cloning in XenApp solution.
Figure 25) Desktop transformation accelerator.


1 Citrix XenDesktop with FlexCast Technology

The current state of desktop virtualization is much more comprehensive than that of traditional virtual desktop infrastructure (VDI) because it addresses the needs of various types of users in a customer environment. Citrix XenDesktop, with its various FlexCast delivery models, can effectively meet the performance, security, and flexibility requirements of different types of users. FlexCast delivery models include:

Hosted shared. Targets task workers by hosting multiple user desktops on one server-based operating system (OS).

Hosted VDI. Targets knowledge workers by providing each user with his or her own individual desktop OS.

Apps on demand. Virtualizes applications, delivering Windows® applications from the data center without providing a virtual desktop.

Streamed VHD. Uses a single desktop image to allow Windows desktops (Windows 7 or Windows 8) to run locally on an end user’s desktop computer, provisioned through Citrix Provisioning Services.

Local VM. Allows Windows desktops (Windows 7 or Windows 8) to run locally within a hypervisor on an end user’s laptop. The virtual desktop image is delivered in its entirety to the hypervisor to allow offline connectivity.

The Citrix XenDesktop 7.5 system, which now incorporates traditional hosted Windows 7 and Windows 8 virtual desktops, hosted applications, and hosted shared Windows Server® 2008 R2 or Windows Server 2012 R2 server desktops (formerly delivered by Citrix XenApp), provides unparalleled scale and management simplicity while extending the Citrix HDX FlexCast models to mobile devices.

The NetApp® clustered Data ONTAP® operating system, with its key capabilities such as nondisruptive operations, unified storage, multiprotocol architecture, secure multi-tenancy, storage efficiency, read and write performance, and cost-efficient data protection, is ideal for cost-effectively designing and deploying end-to-end storage solutions for desktop virtualization based on a single XenDesktop FlexCast model or a mix of various XenDesktop FlexCast models.

This technical report provides key storage design and architecture best practices for deploying a mix of FlexCast technologies on a NetApp clustered Data ONTAP storage array. It also includes performance testing results and solution scalability guidelines that show that NetApp clustered Data ONTAP can cost-effectively scale to thousands of desktops without adding complexity.

2 Introduction to Clustered Data ONTAP

With the release of Data ONTAP 8.3, NetApp brings enterprise-ready, unified scale-out storage to market for the first time.

Note: For more information on clustered Data ONTAP with VMware vSphere®, refer to TR-4068: VMware vSphere 5 on NetApp Clustered Data ONTAP.

2.1 Benefits of Clustered Data ONTAP

All clustering technologies follow a common set of guiding principles. These principles include the following:

Nondisruptive operation. The key to efficiency and the linchpin of clustering is the ability to ensure that the cluster never fails.

Virtualized access as the managed entity. Direct interaction with the nodes that make up the cluster is in and of itself a violation of the term cluster. During the initial configuration of the cluster, direct node access is a necessity; however, steady-state operations are abstracted from the nodes as the user interacts with the cluster as a single entity.

Data mobility and container transparency. The end result of clustering—the nondisruptive collection of independent nodes working together and presented as one holistic solution—is the ability of data to move freely within the boundaries of the cluster.

Delegated management and ubiquitous access. In large, complex clusters, the ability to delegate or segment features and functions into containers that can be acted upon independently of the cluster means that the workload can be isolated. It is important to note that the cluster architecture itself must not place conditions on access to the contents of the cluster. This should not be confused with security concerns around the content being accessed.

Scale-Out

Data centers require agility. Each storage controller within a data center has CPU, memory, and disk shelf limitations. Scale-out means that, as storage requirements grow, additional controllers can be added to the resource pool in a shared storage infrastructure. Host and client connections as well as datastores can be moved seamlessly and nondisruptively anywhere within the resource pool.

Benefits of scale-out:

Nondisruptive operations

Ability to add thousands of users to a virtual desktop environment without downtime

Operational simplicity and flexibility

NetApp clustered Data ONTAP solves the scalability requirements in a storage environment.

Unified Storage

By supporting all common NAS and SAN protocols on a single platform, NetApp unified storage enables the following advantages:

Direct access to storage by each client

Different platforms to share network files without using protocol emulation products such as Samba, NFS Maestro, or PC-NFS

Simple and fast data storage and data access for all of your client systems

Fewer storage systems

Greater efficiency from each system deployed

NetApp clustered Data ONTAP offers the capability to concurrently support several protocols in the same storage system.


Note: Data ONTAP 7G and Data ONTAP operating in 7-Mode versions also include support for multiple protocols.

Unified storage is important to XenDesktop solutions. Based on your workload, you can choose different protocols: for example, CIFS (SMB) for user data, NFS for write cache, and SAN LUNs for Windows applications.

The supported protocols are:

NFS v3, v4, and v4.1, including pNFS

iSCSI

Fibre Channel (FC)

Fibre Channel over Ethernet (FCoE)

CIFS
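As a hedged illustration of this multiprotocol support, several protocol servers can be enabled on a single storage virtual machine from the cluster shell. This is a minimal sketch; the SVM name, CIFS server name, and domain are hypothetical values, not taken from this report:

```
# Enable NFSv3 on an existing SVM (svm1 is a placeholder name)
vserver nfs create -vserver svm1 -v3 enabled

# Create a CIFS (SMB) server on the same SVM (domain is an example)
vserver cifs create -vserver svm1 -cifs-server SVM1CIFS -domain demo.local

# Add an iSCSI target service to the same SVM
vserver iscsi create -vserver svm1
```

In a real deployment, each protocol also requires appropriate licensing, LIFs, and export or share configuration.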

Multi-Tenancy

Isolated servers and data storage can result in low utilization, gross inefficiency, and the inability to respond to changing business needs. Cloud architecture, delivering IT as a service, can overcome these limitations while reducing future IT expenditure.

The storage virtual machine (SVM) is the primary logical cluster component. Each SVM owns its own volumes, logical interfaces, and protocol access. With clustered Data ONTAP, each department’s virtual desktops and data can be separated onto different SVMs. The administrator of each SVM has the rights to provision volumes and perform other SVM-specific operations. This is particularly advantageous for service providers or any multi-tenant environment in which workload separation is desired.
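This per-department separation can be sketched from the cluster shell. The commands below are illustrative only; the SVM name, root volume, and aggregate are hypothetical, and production deployments involve additional security-style and delegation choices:

```
# Create a dedicated SVM for one department (names are examples)
vserver create -vserver svm_finance -rootvolume finance_root
    -aggregate aggr1 -rootvolume-security-style ntfs

# Delegate administration of that SVM to its built-in vsadmin role
security login create -vserver svm_finance -user-or-group-name vsadmin
    -application ssh -authmethod password -role vsadmin
```

The vsadmin account can then provision volumes and LIFs within svm_finance but has no visibility into other tenants’ SVMs.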

Figure 1 shows the multi-tenancy concept in clustered Data ONTAP.

Figure 1) Multi-tenancy concept.

2.2 NetApp Storage Cluster Components

Although definitions of key terms are normally reserved for a glossary, we define some key terms here to establish a knowledge baseline for the remainder of this publication.


Cluster. The information boundary and domain within which information moves. The cluster is where high availability is defined between physical nodes and where storage virtual machines (SVMs) operate.

Node. A physical entity running Data ONTAP. This physical entity can be a traditional NetApp FAS controller, a supported third-party array front-ended by NetApp FlexArray™ technology, or the NetApp Cloud ONTAP™ operating system.

Storage virtual machine. A secure virtualized storage controller that behaves and appears to the end user as a physical entity (similar to a VM). It is connected to one or more nodes through internal networking relationships. It is the highest visible element to an external consumer, abstracting the layer of interaction from the physical nodes. As such, it is the entity used to provision cluster resources, and it can be compartmentalized in a secure fashion to prevent access to other parts of the cluster.

2.3 Clustered Data ONTAP Networking Concepts

The physical interfaces on a node are referred to as ports. IP addresses are assigned to logical interfaces (LIFs). LIFs are logically connected to a port in much the same way that VM virtual network adapters and VMkernel ports connect to physical adapters, except without the constructs of virtual switches and port groups. Physical ports can be grouped into interface groups. VLANs can be created on top of physical ports or interface groups. LIFs can be associated with a port, interface group, or VLAN.

Figure 2 shows the clustered Data ONTAP network concept.

Figure 2) Ports and LIFs example.
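The port, interface group, VLAN, and LIF relationships described above can be sketched with a few clustered Data ONTAP CLI commands. The node, port, SVM, and address values below are hypothetical assumptions for illustration:

```
# Group two physical ports into an LACP interface group (example names)
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
network port ifgrp add-port -node node1 -ifgrp a0a -port e0d

# Create a VLAN on top of the interface group
network port vlan create -node node1 -vlan-name a0a-100

# Home a data LIF on the resulting VLAN port
network interface create -vserver svm1 -lif nfs_lif1 -role data
    -data-protocol nfs -home-node node1 -home-port a0a-100
    -address 192.168.100.10 -netmask 255.255.255.0
```

Because the LIF is a logical object, it can later migrate to a port on another node without the client’s mount point changing.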

2.4 Cluster Management

For complete and consistent management of storage and SAN infrastructure, NetApp recommends using the tools listed in Table 1, unless specified otherwise.

Table 1) Management tools.

Task | Management Tool
Storage virtual machine management | NetApp OnCommand® System Manager
Switch management and zoning | Switch vendor GUI or CLI
Volume and LUN provisioning and management | NetApp Virtual Storage Console


3 Introduction to Citrix XenDesktop

3.1 Citrix FlexCast Delivery Technology

In Citrix XenDesktop 7.5, FlexCast Management Architecture is responsible for delivering and managing hosted-shared RDS apps and complete VDI desktops. XenDesktop integrates with Citrix XenServer, Windows Server 2008/2012 Hyper-V®, and VMware vSphere and works out of the box with thin clients.

By using Citrix Receiver with XenDesktop 7.5, users have a device-native experience on endpoints including Windows, Mac®, Linux®, iOS, Android, ChromeOS, HTML5, and BlackBerry.

Figure 3 provides an overview of a unified FlexCast architecture.

Figure 3) Overview of unified FlexCast architecture.

The unified FlexCast architecture consists of the following underlying components:

Citrix Receiver. Running on user endpoints, Receiver provides users with self-service access to resources published on XenDesktop servers. Receiver combines ease of deployment and use, supplying fast, secure access to hosted applications, desktops, and data. Receiver also provides on-demand access to Windows, web, and software-as-a-service applications.

Citrix StoreFront. StoreFront authenticates users and manages catalogs of desktops and applications. Users can search StoreFront catalogs and then use Citrix Receiver to subscribe to published services.

Citrix Studio. Using the new and improved Studio interface, administrators can easily configure and manage XenDesktop deployments. Studio provides wizards to guide the process of setting up an environment, creating desktops, and assigning desktops to users, automating provisioning and application publishing. It also allows administration tasks to be customized and delegated to match site operational requirements.


Delivery controller. The delivery controller is responsible for distributing applications and desktops, managing user access, and optimizing connections to applications. Each site has one or more delivery controllers.

Server OS machines. These are virtual or physical machines (based on a Windows Server operating system) that deliver RDS applications or hosted shared desktops to users.

Desktop OS machines. These are virtual or physical machines (based on a Windows desktop operating system) that deliver personalized VDI desktops or applications that run on a desktop operating system.

Remote PC. XenDesktop with Remote PC allows IT to centrally deploy secure, remote access to all Windows PCs on the corporate network. It is a comprehensive solution that delivers fast, secure, remote access to all the corporate apps and data on an office PC from any device.

Virtual delivery agent. A virtual delivery agent is installed on each virtual or physical machine (within the server or desktop OS) and manages each user connection for application and desktop services. The agent allows OS machines to register with the delivery controllers and governs the high-definition user experience (HDX) connection between these machines and Citrix Receiver.

Citrix Director. Citrix Director is a powerful administrative tool that helps administrators quickly troubleshoot and resolve issues. It supports real-time assessment, site health and performance metrics, and end user experience monitoring. Citrix EdgeSight reports are available from within the Director console and provide historical trending and correlation data for capacity planning and service-level assurance.

Citrix Provisioning Services 7.1. This new release of Citrix Provisioning Services (PVS) technology is responsible for streaming a shared virtual disk (vDisk) image to the configured server OS or desktop OS machines. This streaming capability allows VMs to be provisioned and reprovisioned in real time from a single image, eliminating the need to patch individual systems and conserving storage. All patching is done in one place and then streamed at boot-up. PVS supports image management for both RDS- and VDI-based machines, including support for image NetApp Snapshot® copies and rollbacks.

3.2 High-Definition User Experience Technology

High-definition user experience (HDX) technology in this release is optimized to improve the user

experience for hosted Windows apps on mobile devices. Specific enhancements include the following.

HDX mobile technology. Designed to cope with the variability and packet loss inherent in today’s mobile networks, HDX technology supports deep compression and redirection, taking advantage of advanced codec acceleration and an industry-leading H.264-based compression algorithm. The technology enables dramatic improvements in frame rates while requiring significantly less bandwidth. HDX technology offers users a rich multimedia experience and optimized performance for voice and video collaboration.

HDX touch technology. This technology enables mobile navigation capabilities similar to those of native apps, without rewrites or porting of existing Windows applications. Optimizations support native menu controls, multitouch gestures, and intelligent sensing of text-entry fields, providing a native application look and feel.

HDX 3D Pro. This technology uses advanced server-side GPU resources for compression and rendering of the latest OpenGL and DirectX professional graphics apps. GPU support includes both dedicated user and shared user workloads.

3.3 Citrix XenDesktop 7 Desktop and Application Services

IT departments strive to deliver application services to a broad range of enterprise users who have

varying performance, personalization, and mobility requirements. Citrix XenDesktop 7 allows IT to

configure and deliver any type of virtual desktop or app, hosted or local, and to optimize delivery to meet

individual user requirements while simplifying operations, securing data, and reducing costs. Figure 4

shows XenDesktop 7’s ability to support multiple service delivery modes on a single infrastructure.


Figure 4) XenDesktop 7 supports multiple service modes on a single infrastructure.

With previous product releases, administrators had to deploy separate XenApp farms and XenDesktop

sites to support both hosted shared RDS desktops and VDI desktops. As shown in Figure 4, the new

XenDesktop 7 release allows administrators to create a single infrastructure that supports multiple modes

of service delivery, including:

Application virtualization and hosting (RDS). Applications are installed on or streamed to Windows Server hosts in the data center and remotely displayed to users’ desktops and devices.

Hosted shared desktops (RDS). Multiple user sessions share a single, locked-down Windows Server environment running in the data center and accessing a core set of apps. This model of service delivery is ideal for task workers using low-intensity applications and enables more desktops per host compared to VDI.

Pooled VDI desktops. This approach leverages a single desktop OS image to create multiple, thinly provisioned or streamed desktops. Optionally, desktops can be configured with a personal vDisk to maintain user application, profile, and data differences that are not part of the base image. This approach replaces the need for dedicated desktops and is generally deployed to address the desktop needs of knowledge workers who run more intensive application workloads.

VM hosted apps (16-bit, 32-bit, or 64-bit Windows apps). Applications are hosted on virtual desktops running Windows 7 or Windows 8 and then remotely displayed to users’ physical or virtual desktops and devices.

3.4 Desktop Provisioning Methods

When you use Citrix XenDesktop on NetApp storage, you can choose from three provisioning methods: Citrix Provisioning Services (PVS), Machine Creation Services (MCS), and NetApp cloning.

Citrix Provisioning Services (PVS)

The PVS infrastructure is based on software-streaming technology. This technology allows computers to

be provisioned and reprovisioned in real time from a single shared disk image. In doing so, administrators

can eliminate the need to manage and patch individual systems. Instead, all image management is done

on the master image vDisk. The local hard disk drive of each system can be used for runtime data


caching or, in some scenarios, removed from the system entirely, thereby reducing power usage, system

failure rates, and security risks.

vDisk attributes:

Read only during streaming

Shared by many VMs, simplifying updates

Recommended storage protocol: CIFS SMB 3 or SAN LUN for vDisk

Write cache attributes:

One VM, one write cache file

Write cache file is empty after VM reboots

Recommended storage protocol: NFS for write cache for vSphere and XenServer, SAN LUN or SMB 3 for Hyper-V

The write cache file size is normally 4GB to 10GB; if PvDisk is used in the deployment, the write cache size is smaller.

Note: The protocol decision rationale for write cache and vDisk is explained in section 6.1.
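The write cache attributes above translate into a simple capacity calculation: because each VM keeps its own write cache file, the volume that hosts the write cache must hold one file per VM plus growth headroom. A minimal sizing sketch follows; the 20% headroom figure is an illustrative assumption, not a NetApp guideline.

```python
def pvs_write_cache_capacity_gb(vm_count, cache_per_vm_gb, headroom=0.2):
    """Estimate capacity for a PVS write cache volume: one write cache
    file per VM (normally 4GB-10GB, smaller when personal vDisks are
    used) plus growth headroom."""
    return vm_count * cache_per_vm_gb * (1 + headroom)

# Example: 550 hosted VDI desktops with a 5GB write cache each
print(pvs_write_cache_capacity_gb(550, 5))  # about 3,300GB (~3.2TB)
```

Adjust the per-VM cache size downward if personal vDisks absorb user-installed applications, as noted above.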

Figure 5 shows that the vDisk and the write cache are the two main components of a PVS-deployed XenDesktop solution. For enterprise deployments, the vDisk and write cache must be on shared storage for data integrity, backup, scalability, and easy management.

Figure 5) vDisk and write cache (graphic provided by Citrix).

Best Practice

Because PVS streams the vDisk (the master image) over the network, at least two 1GbE network

interface cards (NICs) are required on your PVS servers. The recommended storage protocol is CIFS

SMB 3 or SAN LUN for vDisk. Citrix Provisioning Server supports 10GbE connections, and NetApp

recommends using 10GbE for streaming desktops.

Machine Creation Services (MCS)

XenDesktop 5 introduced Machine Creation Services (MCS). MCS simplifies the task of creating, managing, and delivering virtual desktops to users.


When you provision desktops by using MCS, a master image is copied to each storage volume. This master image copy uses a hypervisor snapshot clone. Within minutes of the master image copy process, MCS creates a differencing disk and an identity disk for each VM. The differencing disk is sized the same as the master image in order to host the session data. The identity disk, normally 16MB and hidden by default, holds machine identity information such as the host name and password.
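This layout lends itself to a quick space estimate: each datastore carries one copy of the master image plus, per VM, a differencing disk (up to the master image size) and a 16MB identity disk. The sketch below computes the worst-case logical footprint; thin provisioning and deduplication reduce the physical footprint in practice, and the example figures are illustrative.

```python
def mcs_datastore_logical_gb(master_gb, vm_count, identity_mb=16):
    """Worst-case logical space on one MCS datastore: one master image
    copy for the volume, plus a differencing disk (sized up to the
    master image) and a 16MB identity disk for each VM."""
    return master_gb + vm_count * (master_gb + identity_mb / 1024)

# Example: 24GB Windows 7 master image, 100 desktops on the datastore
print(round(mcs_datastore_logical_gb(24, 100), 1))  # 2425.6GB logical
```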

Figure 6 shows the MCS file layout.

Figure 6) MCS file layout (graphic provided by Citrix).

MCS uses hypervisor snapshots to achieve storage efficiency and rapid provisioning. Hypervisor snapshots, however, incur a metadata penalty because metadata must be read from and written to the differencing disk. As a result, roughly 20% more IOPS are read from and written to a storage system that uses these types of clones.

Figure 7 shows why MCS uses more I/O than PVS or NetApp clone.

Figure 7) Hypervisor snapshot I/O workflow.


NetApp Clone Created by Virtual Storage Console

You can deploy desktops almost instantly, without consuming additional storage, by using NetApp FlexClone® technology to clone individual files and then importing the desktops into XenDesktop.

The VM cloning capability is available with VSC for XenServer and VSC for VMware vSphere. After cloning a VM, Virtual Storage Console (VSC) creates a comma-separated values (CSV) file that includes the VM host names and AD accounts. XenDesktop can then import the CSV file to create a desktop group. VSC also provides storage management and VM backup and recovery capabilities.

Note: VM backup and recovery are available only with VMware vSphere. NetApp Virtual Storage Console supports Citrix XenServer and VMware ESX® and ESXi™.
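The CSV hand-off described above can also be scripted. The sketch below generates a file of that general shape; the column names and AD account format here are illustrative assumptions, not the actual VSC schema, so inspect a real VSC-generated file before building tooling around it.

```python
import csv

# Hypothetical clone inventory: VM host name and AD machine account.
# (Names and account format are assumptions for illustration only.)
clones = [
    ("HVD-001", "DEMO\\HVD-001$"),
    ("HVD-002", "DEMO\\HVD-002$"),
]

with open("vsc_clones.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["VMName", "ADAccount"])  # assumed header names
    writer.writerows(clones)
```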

3.5 Provisioning Technology Comparison

Table 2 compares the three provisioning methods to help you select the right one for your deployment.

Table 2) Provisioning methods.

Provisioning Method | Vendor | Desktop Catalog in XenDesktop | VM Deployment | Supported Hypervisors
Virtual Storage Console (VSC) | NetApp | Dedicated or pooled | Most storage-efficient and fastest VM cloning. Requires a NetApp FlexClone license. | ESX, ESXi, and XenServer; script for Hyper-V
Machine Creation Services (MCS) | Citrix | Dedicated or pooled | VDI only. NFS is the preferred protocol. Suited to small-scale deployments (fewer than 2,500 seats, based on Citrix consulting) that need to be up and running quickly. Needs almost three times more storage than a PVS-based deployment. Quick and easy. | ESX, ESXi, XenServer, and Hyper-V
Provisioning Services (PVS) | Citrix | Pooled or dedicated with PvDisk | Proven scalability and storage efficiency. Easy redeployment. Most common method used in XenDesktop deployments. | ESX, ESXi, XenServer, and Hyper-V

Best Practice

For pooled desktops, NetApp recommends PVS for easy redeployment and storage efficiency. For

dedicated desktops, NetApp recommends the NetApp Virtual Storage Console for fast cloning and

storage efficiency.

3.6 Citrix Provisioning Server RAM Cache

A RAM write cache option (cache in device RAM with overflow on hard disk) is available in Provisioning Services 7.6. The write cache can seamlessly overflow to a differencing disk if the RAM cache becomes full. In the PVS console, open the vDisk properties and set the cache type to "Cache in device RAM with overflow on hard disk." This option uses RAM first and then overflows to hard disk. Choose the RAM size based on the OS type.

RAM sizing considerations:

256MB for Windows 7 32-bit

512MB for Windows 7 64-bit

2GB–4GB for XenApp VM

For more detail on sizing the RAM cache, refer to the Citrix blog post Size Matters: PVS RAM Cache Overflow Sizing, by Amit Ben-Chanoch. According to Ben-Chanoch, storage writes are 4K based and the default RAM buffer is 64MB: "If transitioning an environment from Cache on HDD to Cache in RAM with overflow to disk and the RAM buffer is not increased from the 64MB default setting, allocate twice as much space to the write cache as a rule of thumb."
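These guidelines can be captured in a small sizing helper. The values below restate the figures given above; the dictionary keys and the way the doubling rule is applied are illustrative assumptions, and actual sizing should be validated against your own workload.

```python
# Starting-point RAM cache sizes from the guidelines above (MB).
RAM_CACHE_MB = {
    "windows7_32bit": 256,
    "windows7_64bit": 512,
    "xenapp_vm": 4096,  # guideline range is 2GB-4GB; upper bound shown
}

def overflow_write_cache_gb(previous_hdd_cache_gb, ram_buffer_mb=64):
    """Rule of thumb from the quote above: when moving from cache-on-HDD
    to RAM cache with overflow while keeping the 64MB default RAM
    buffer, allocate twice the previous write cache space on disk."""
    return previous_hdd_cache_gb * (2 if ram_buffer_mb <= 64 else 1)

print(RAM_CACHE_MB["windows7_64bit"])  # 512
print(overflow_write_cache_gb(5))      # 10
```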

Best Practice

Defragment the vDisk before deploying the image and after major changes. Increase the write cache size when you use RAM cache.

4 Solution Architecture

NetApp recommends implementing VDI layering technologies to separate the various components of a desktop (such as the base OS image, user profiles and settings, corporate apps, user-installed apps, and user data) into manageable entities called layers. Layering helps achieve the lowest storage cost per desktop because the storage no longer has to be sized for peak IOPS, and intelligent data management policies (for example, storage efficiency and Snapshot copy-based backup and recovery) can be applied to the different layers of the desktop.

Some of the key benefits of VDI layering are:


Ease of VDI image management. Desktops no longer have to be patched or updated individually. This results in cost savings because the storage array no longer has to be sized for write I/O storms.

Efficient data management. Separating the different desktop components into layers enables the application of intelligent data management policies (such as deduplication, NetApp Snapshot backups, and so on) on different layers as required. For example, you can enable deduplication on storage volumes that host Citrix personal vDisks and user data.

Ease of application rollout and updates. This benefit makes it easy to manage the rollout of new applications and updates to existing applications.

Improved end-user experience. This benefit gives users the freedom to install applications and allows persistence of these applications when updating the desktop OS or applications.

4.1 High-Level Architecture

This section outlines the recommended storage architecture for deploying a mix of XenDesktop FlexCast

delivery models, such as hosted VDI or hosted shared desktops, along with intelligent VDI layering (such

as profile management and user data management) on the same NetApp clustered Data ONTAP storage

array.

Storage Architecture for PVS Deployment

Figure 8 shows the NetApp recommended architecture for deploying desktops using PVS provisioning.

Figure 8) NetApp recommended architecture for desktop deployment using Citrix PVS provisioning.

Base OS image:

PVS vDisk. As described in “Citrix Provisioning Service” (section 3.4, “Desktop Provisioning Methods”), CIFS/SMB 3 is the recommended protocol for hosting the PVS vDisk. Using the CIFS/SMB 3 protocol allows the same vDisk to be shared among multiple PVS servers, unlike the hosting of a vDisk on FC or iSCSI LUN, which requires one LUN per PVS server. The advantage is that only one vDisk has to be patched or updated in order to roll out the updates to thousands of pooled desktops. This capability results in significant operational savings and architectural simplicity.


PVS write cache file. The PVS write cache file is hosted on NFS datastores for simplicity and scalability, as described in the “Citrix Provisioning Service” section.

Personal vDisk. Citrix personal vDisk (PvDisk) allows hosting user-installed applications on a separate virtual disk. NetApp recommends hosting personal vDisks in an NFS datastore for vSphere- and XenServer-based deployments. To achieve storage savings, deduplication should be enabled on the datastore that hosts the VMDK/VHD files associated with the personal vDisk. Backup and recovery using NetApp Snapshot technology can be easily achieved by using the VSC plug-in. Personal vDisk is optional when you deploy the Citrix XenDesktop on NetApp solution described in this document.

Application virtualization. Citrix XenApp can be used to stream baseline applications from a XenApp repository hosted on CIFS shares from NetApp storage so that applications do not have to be patched or updated on each desktop.

Profile management. To make sure that the user profiles and settings are preserved after the desktops are reprovisioned from the updated vDisk (as a result of desktop patch, update, and so on), leverage the profile management software (Liquidware Labs ProfileUnity, Citrix UPM, and so on) to redirect the user profiles to the CIFS home directories. In NetApp lab validation, Liquidware Labs ProfileUnity was used as the profile management software.

User data management. NetApp recommends hosting the user data on CIFS home directories to preserve data upon VM reboot or redeployment.

Monitoring and management. NetApp recommends using OnCommand Balance and Citrix Desktop Director to provide end-to-end monitoring and management of the solution.

4.2 XenDesktop Storage Layout in 2,000-Seat Deployment Reference Architecture

This reference architecture is a 2,000-seat VDI deployment using Citrix XenDesktop 7.1 built on Cisco UCS® B200 M3 blades with NetApp FAS3200 series storage and the VMware vSphere ESXi 5.1 hypervisor platform.

The Citrix XenDesktop 7.1 system—which now incorporates traditional hosted Windows 7 or Windows 8

virtual desktops, hosted applications, and hosted shared Windows Server 2008 R2 or Windows Server

2012 R2 desktops (formerly delivered by Citrix XenApp)—provides unparalleled scale and management

simplicity while extending the Citrix HDX FlexCast models to additional mobile devices.

Physical Architecture

The deployed solution is highly modular. Although each customer’s environment might vary in its exact

configuration, after the reference architecture described in this document is built it can easily be scaled as

requirements and demands change. This includes scaling both up (adding additional server resources)

and out (adding additional NetApp FAS storage arrays).

The 2,000-user XenDesktop 7 solution includes Cisco® networking, Cisco UCS, and NetApp FAS storage

that fits into a single data center rack, including the access layer network switches.

Figure 9 shows the hardware deployed for this solution.

The hardware to support 1,450 users of hosted shared desktops (HSDs) and 550 users of hosted VDI

(HVD) includes:

Two Cisco Nexus® 5548UP layer 2 access switches

Two Cisco UCS 6248UP series fabric interconnects

Two Cisco UCS 5108 blade server chassis with two 2204XP I/O modules per chassis

Four Cisco UCS B200 M3 blade servers with Intel® E5-2680v2 processors, 384GB RAM, and

VIC1240 mezzanine cards for the 550 hosted Windows 7 virtual desktop workloads with N+1 server fault tolerance


Eight Cisco UCS B200 M3 blade servers with Intel E5-2680v2 processors, 256GB RAM, and VIC1240 mezzanine cards for the 1,450 hosted shared Windows Server 2012 server desktop workloads with N+1 server fault tolerance

Two Cisco UCS B200 M3 blade servers with Intel E5-2650 processors, 128GB RAM, and VIC1240 mezzanine cards for the infrastructure virtualized workloads

Two-node NetApp FAS3240 dual-controller storage system running clustered Data ONTAP, 4 disk shelves, converged and 10GbE ports for FCoE, and NFS/CIFS connectivity, respectively

(Not shown) One Cisco UCS 5108 blade server chassis with 3 Cisco UCS B200 M3 blade servers with Intel E5-2650 processors, 128GB RAM, and VIC1240 mezzanine cards for the Login VSI launcher infrastructure

Figure 9) Hardware supporting 2,000-seat XenDesktop 7 on NetApp solution.

Logical Architecture

The logical architecture of the validated solution is designed to support 2,000 users within 2 chassis and

14 blades, which provides physical redundancy for the chassis and blade servers for each workload.

Figure 8 shows all servers in the configuration.

Citrix Provisioning Services streams multiple desktops to the VMware ESXi hypervisor. Citrix XenDesktop manages the desktops and facilitates the connections between endpoint devices and desktops. With XenDesktop 7 and later, XenApp is integrated into XenDesktop, so hosted shared desktops and applications can be delivered through XenDesktop. Application virtualization is used as an application-delivery solution so that any Windows application can be virtualized, centralized, and managed, and hosted shared desktops are delivered by XenApp. Liquidware Labs ProfileUnity or Citrix User Profile Management (UPM) manages user profiles and folder redirection.

Figure 8 depicts the logical XenDesktop architecture with NetApp clustered Data ONTAP.

Figure 8) XenDesktop on clustered Data ONTAP architecture.

The infrastructure VMs in the configuration include:

Two Citrix XenDesktop servers

Three Citrix provisioning servers

Two Citrix StoreFront servers

One Citrix licensing server

Two Windows SQL Server® mirroring servers

You can use the existing Active Directory® (AD) server in the virtual desktop deployments.

Storage Architecture

Two FAS3240 controllers and four DS2246 disk shelves are used in this deployment to support 1,450 users of hosted shared desktops (HSDs) and 550 users of hosted VDI (HVD). Clustered Data ONTAP version 8.2P4 was used.


Aggregate Configuration

An aggregate is created to provide storage to one or more volumes. An aggregate is a physical storage

object and is associated with a specific node in the cluster. To support the differing security, backup,

performance, and data sharing needs of users, physical data storage resources on your storage system

are grouped into one or more aggregates. You can design and configure aggregates to provide the

appropriate level of performance and redundancy for your storage requirements. For information about

best practices for working with aggregates, refer to TR-3437: Storage Subsystem Resiliency Guide.

Note: Since the writing of this guide, Data ONTAP 8.3 has added support for Advanced Drive Partitioning for all-flash configurations, which removes the need for a separate root aggregate and improves overall storage utilization.

Table 3 provides aggregate configuration details.

Table 3) Aggregate configuration details.

Aggregate Name | Owner Node Name | Disk Count (by Type) | Block Type | RAID Type | RAID Group Size | Total Size
aggr0_ccr_cmode_01_01_root | ccr-cmode-01-01 | 3@450GB_SAS_10k | 64_bit | raid_dp | 16 | 367.4GB
aggr0_ccr_cmode_01_02_root | ccr-cmode-01-02 | 3@450GB_SAS_10k | 64_bit | raid_dp | 16 | 367.4GB
aggr01 | ccr-cmode-01-01 | 40@450GB_SAS_10k | 64_bit | raid_dp | 20 | 12.9TB
aggr02 | ccr-cmode-01-02 | 40@450GB_SAS_10k | 64_bit | raid_dp | 20 | 12.9TB
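The aggregate sizes in Table 3 can be roughly sanity-checked from the disk counts: RAID-DP reserves two parity disks per RAID group, and usable space is further reduced by disk right-sizing and the WAFL reserve. The sketch below assumes a right-sized capacity of about 418GB for a "450GB" 10k SAS drive and a 10% reserve; it approximates, but does not exactly match, the 12.9TB in Table 3 because Snapshot reserve and exact right-sizing are omitted.

```python
import math

def raid_dp_data_disks(disk_count, raid_group_size):
    """RAID-DP reserves two parity disks in every RAID group."""
    groups = math.ceil(disk_count / raid_group_size)
    return disk_count - 2 * groups

def aggregate_usable_tb(disk_count, raid_group_size,
                        right_sized_gb=418, wafl_reserve=0.10):
    """Rough usable capacity: data disks x right-sized disk capacity,
    minus the WAFL reserve. Right-sized capacity is an assumption;
    check your disk model's actual value."""
    data_disks = raid_dp_data_disks(disk_count, raid_group_size)
    return data_disks * right_sized_gb * (1 - wafl_reserve) / 1000

# 40 x 450GB SAS disks in two RAID-DP groups of 20, as in Table 3
print(round(aggregate_usable_tb(40, 20), 1))  # ~13.5TB before other reserves
```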

FlexVol Configuration

Volumes are data containers that enable you to partition and manage your data. Volumes are the highest

level logical storage objects. Unlike aggregates, which are composed of physical storage resources,

volumes are completely logical objects. Understanding the types of volumes and their associated

capabilities will enable you to design your storage architecture for maximum storage efficiency and ease

of administration.

A NetApp FlexVol® volume is a data container associated with a storage virtual machine. It gets its storage from a single associated aggregate, which it might share with other FlexVol volumes or Infinite Volumes. It can contain files in a NAS environment or LUNs in a SAN environment.

Table 4 provides FlexVol volume configuration details.

Table 4) FlexVol volume configuration details.

Cluster Name | SVM Name | Volume Name | Containing Aggregate | Snapshot Policy | Efficiency Policy | Protocol | Total Size
ccr-cmode-01 | HVD | HVDWC | aggr02 | None | None | NFS | 3.0TB
ccr-cmode-01 | RDS1 | RDSWC | aggr01 | None | None | NFS | 2.9TB
ccr-cmode-01 | RDS2 | RDS2WC | aggr02 | None | None | NFS | 2.0TB
ccr-cmode-01 | Infra_Vserver | infra_datastore_1 | aggr02 | None | Deduplication | NFS | 1.5TB
ccr-cmode-01 | Infra_Vserver | infra_swap | aggr01 | None | Deduplication | NFS | 100.0GB
ccr-cmode-01 | Infra_Vserver | xdsql1db_vol | aggr02 | None | Deduplication | iSCSI | 103.1GB
ccr-cmode-01 | san_boot | esxi_boot | aggr01 | Default | Deduplication | FCoE | 200.0GB
ccr-cmode-01 | RDSuserdata | userdata | aggr01 | Default | Deduplication | CIFS | 6.0TB
ccr-cmode-01 | HVDuserdata1 | userdata1 | aggr02 | Default | Deduplication | CIFS | 1.0TB

The write cache for 725 RDS users is on node 1; the write cache for the other 725 RDS users and for the 550 HVD users is on node 2. Two CIFS virtual storage servers are created for HSD and HVD users, one on each storage node. The VMware ESXi 5.1 SAN boot volume is on node 1, and the infrastructure virtual storage server is on node 2.

Figure 9 shows the various volumes created for this solution.


Figure 9) XenDesktop on clustered Data ONTAP example volume layout.

Best Practice

Balance the load across the nodes in the cluster. Spread the write cache volumes across both controllers because the majority of the workload comes from the write cache.

Flash Decision

NetApp offers all-flash and hybrid storage options, both of which deliver high performance, advanced data

management, and low cost per desktop.

The NetApp All-Flash FAS solution combines consistent ultralow latency and high IOPS with the industry-

leading clustered Data ONTAP operating system. All-Flash FAS shares the same unified storage

architecture, Data ONTAP OS, management interface, rich data services, and advanced feature sets as

the hybrid FAS solution. All-Flash FAS offers the following additional benefits:

Proven enterprise availability

Reliability and scalability

Storage efficiency proven in thousands of VDI deployments

Unified storage with multiprotocol access

Advanced data services

Operational agility through tight application integrations

The NetApp hybrid FAS solution can be based on NetApp Flash Cache™ or NetApp Flash Pool™ intelligent caching and delivers low latency along with high IOPS. Both Flash Cache and Flash Pool provide read acceleration to speed up boot and login storms. Flash Pool also provides write acceleration to speed up write-intensive, steady-state operations. Both technologies help to reduce the number of HDDs required in your XenDesktop and/or XenApp deployment.

NetApp All-Flash FAS is a good option when the following conditions apply:

20+ IOPS per desktop

Ultralow latency requirement (<1ms)

Thousands of desktops


Hybrid FAS is a good option when the following conditions apply:

Less than 10 IOPS per desktop

Low latency (<5ms)
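The guidance above can be expressed as a simple triage function. The thresholds restate the lists in this section; workloads that fall between them (10-20 IOPS per desktop, 1ms-5ms latency targets) are exactly the cases that warrant a detailed sizing exercise rather than a rule of thumb.

```python
def flash_platform(iops_per_desktop, latency_target_ms):
    """Suggest a storage platform from the per-desktop IOPS and latency
    targets listed above. Borderline cases fall through to a detailed
    sizing exercise with the NetApp sales team."""
    if iops_per_desktop >= 20 or latency_target_ms < 1:
        return "All-Flash FAS"
    if iops_per_desktop < 10 and latency_target_ms < 5:
        return "Hybrid FAS (Flash Cache or Flash Pool)"
    return "Consult NetApp for detailed sizing"

print(flash_platform(25, 0.8))  # All-Flash FAS
print(flash_platform(8, 3))     # Hybrid FAS (Flash Cache or Flash Pool)
print(flash_platform(15, 4))    # Consult NetApp for detailed sizing
```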

For guidance on whether All-Flash FAS, hybrid FAS, or a combination will meet your use case, consult

your NetApp sales team.

Figure 10 shows an example of a solution using All-Flash FAS, hybrid FAS, and HDD FAS.

Figure 10) Clustered Data ONTAP with mix of disk types.

5 Performance Test Results

The testing results focus on the entire virtual desktop process lifecycle for the XenDesktop 7.1 hosted VDI

and RDS hosted shared model, including:

Desktop boot-up, user login, and virtual desktop acquisition (collectively referred to as “ramp-up”)

User workload execution (also referred to as steady state)

User logoff

5.1 Load Generation

Login Virtual Session Indexer version 3.7 (Login VSI) is designed for benchmarking server-based

computing and virtual desktop infrastructure (VDI) environments. Login VSI is platform and protocol

independent and allows customers to simulate user workloads and test their environment.


5.2 Performance Test Plan

Scale testing of up to 2,000 seats for boot storm, login storm, steady state, and logoff storm was performed. For each test, a Login VSI medium workload was used. Table 5 describes the test cases for virtual desktop workloads.

Table 5) Virtual desktop workload test cases.

Workload | Test Cases
Boot | Boot all 550 HVD VMs and 64 RDS VMs at the same time.
Login | One user logs in and begins work every four seconds until the maximum of 2,000 users is reached, at which point steady state is assumed.
Steady state | All users perform various tasks: using Microsoft® Office, browsing the web, printing to PDF, playing Flash videos, and using the freeware mind mapper application.
Logoff | Log off all 2,000 users at the same time.

5.3 Performance and Space Testing

The main findings of the 2,000-seat scale testing are as follows:

NetApp Flash Cache decreases IOPS during the boot and login phase.

Storage can easily handle the 2,000-user virtual desktop workload, averaging less than 3ms read latency and less than 1ms write latency. Based on the performance testing results and the available IOPS and capacity headroom, this configuration is estimated to be able to scale to 2,500 users.

With NetApp clustered Data ONTAP, volumes can easily be moved between nodes without downtime.

The Citrix UPM exclusion rule feature is essential to lowering user login IOPS and login time.

Boot time is 7 minutes, and login time for the 2,000-user configuration is consistently 30 minutes, which is the login window specified in Login VSI.
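Nondisruptive volume moves like those exercised in testing are driven from the cluster shell; the following is a sketch in which the SVM, volume, and aggregate names are hypothetical, and syntax may vary by Data ONTAP release:

```
volume move start -vserver vdi_svm -volume wc_vol01 -destination-aggregate aggr1_node2
volume move show -vserver vdi_svm -volume wc_vol01
```

The move runs in the background, and the volume remains online and serving data throughout.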

Table 6 shows a 2,000-user CIFS workload.

Table 6) Two thousand–user CIFS workload.

Phase         Ops/sec  Latency (ms)  Read Ops/sec  Read Latency (ms)  Write Ops/sec  Write Latency (ms)
Boot          984      0.431         969           0.448              2              0.450
Login         2,460    2.290         1,091         4.234              6              0.334
Steady state  187      0.814         10            0.550              11             0.281
Logoff        788      0.480         76            1.374              80             0.391

Table 7 shows the average IOPS during boot, login, steady state, and logoff for 500 hosted desktop users.


Table 7) Five hundred hosted desktop users, average IOPS during boot, login, steady state, and logoff.

Phase         Ops/sec  Latency (ms)  Read Ops/sec  Read Latency (ms)  Write Ops/sec  Write Latency (ms)
Boot          4,555    0.407         109           1.465              2,548          0.384
Login         4,021    1.139         85            2.511              3,894          1.111
Steady state  4,836    1.003         80            2.326              4,734          0.974
Logoff        2,667    0.895         85            1.993              2,425          0.874

Table 8 shows 1,450 hosted shared desktop users’ average IOPS during boot, login, steady state, and logoff.

Table 8) One thousand four hundred and fifty hosted shared desktop users, average IOPS during boot, login, steady state, and logoff.

Phase         Ops/sec  Latency (ms)  Read Ops/sec  Read Latency (ms)  Write Ops/sec  Write Latency (ms)
Boot          923      0.185         61            0.353              862            0.257
Login         10,445   0.610         1,579         2.243              9,866          0.528
Steady state  8,016    0.541         433           1.585              7,583          0.482
Logoff        5,350    0.374         374           1.239              4,976          0.316
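For sizing purposes, the per-desktop averages implied by Tables 7 and 8 can be derived with simple division (a quick arithmetic sketch; the variable names are illustrative):

```python
# Per-desktop IOPS implied by the measured steady-state totals.
hvd_users = 500        # hosted VDI desktops (Table 7)
rds_users = 1450       # hosted shared desktop users (Table 8)

hvd_steady_ops = 4836  # total ops/sec, Table 7 steady state
rds_steady_ops = 8016  # total ops/sec, Table 8 steady state

hvd_per_desktop = hvd_steady_ops / hvd_users
rds_per_user = rds_steady_ops / rds_users

print(round(hvd_per_desktop, 1))  # 9.7 IOPS per hosted desktop
print(round(rds_per_user, 1))     # 5.5 IOPS per hosted shared user
```

The lower per-user figure for hosted shared desktops reflects the RDS sessions sharing a common server OS image.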

Table 9 shows average CPU utilization on node 1 and node 2 during boot, login, steady state, and log off.

Table 9) Average CPU utilization on node 1 and node 2 during boot, login, steady state, and logoff.

Phase         Node 1  Node 2
Boot          15%     27%
Login         71%     71%
Steady state  28%     48%
Logoff        5%      29%

Space consumed in this deployment:

Node 1 used 6.23TB, which is 48% of total space.

Node 2 used 3.25TB, which is 25% of total space.

Performance tests indicate that there is plenty of space to grow.

Figure 11 shows node 1 space usage.

Figure 11) Node 1 space usage.


5.4 Storage Node Failover Testing

Test scenario: Two thousand users in steady-state phase with Login VSI medium workload.

We first triggered storage node 1 to fail over to node 2. As shown in Figure 12, this action resulted in the 2,000-user workload running on one storage controller.

Figure 12) Failover from node 1 to node 2 results in 2,000-user steady-state, Login VSI medium workload running entirely on node 2.

Figure 13 shows that, after Login VSI logged off the 2,000 users, we failed the workload back to node 1 and verified that all of the LIFs were back on their home ports.

Figure 13) Failover from node 2 back to node 1.

Test results:

Node 1 finished rebooting and was ready for giveback from node 2.

After node 1 failover was triggered, average CPU utilization on node 2 went from 38% to above 90% and lasted 1 to 2 minutes. Then the average CPU stayed between 60% and 80%.

During failover, average read latency stayed at less than 5ms and average write latency was less than 2ms.

The conclusion of the test is that the storage configuration can handle failover without affecting the user experience during steady-state operations.

5.5 Citrix PVS Workload Characteristics

The vast majority of the workload generated by PVS goes to the write cache storage. Comparatively, the read operations constitute very little of the total I/O, except at the beginning of the VM boot process. After the initial boot, reads to the OS vDisk are mostly served from the PVS server cache.

The write portion of the workload includes all the OS and application-level changes that the VMs incur.

Figure 14 summarizes the I/O size breakdown of the read and write workload. The Y axis represents the I/O size range in bytes, and the X axis represents the number of I/O operations in each size range.


Figure 14) Citrix PVS reads and writes to storage.

The entire PVS solution is approximately 90% writes from the storage perspective in all cases, with the size breakdown shown in Figure 14. The total average I/O size to storage is between 8K and 10K.


Nonpersistent Desktops

The write portion of the workload includes all the OS and application-level changes that the VMs incur.

Figure 15 summarizes the I/O size breakdown of the write workload. The Y axis represents the I/O size range in bytes, and the X axis represents the number of I/O operations in each size range.


Figure 15) I/O size breakdown of write workload.

Note that the I/O size breakdown in Table 10 is, for the most part, similar across both persistent and nonpersistent use cases. The main difference is that the location of the I/O changes with each configuration. For example, eight IOPS per desktop move from the write cache to the personal vDisk when a personal vDisk is added to the configuration.

The entire PVS solution is approximately 90% writes from the storage perspective in all cases, with the size breakdown shown in Table 10. The total average I/O size to storage is between 8K and 10K.

Figure 16 shows the IOPS per desktop observed during Login VSI heavy workload testing of all possible use cases for pooled (nonpersistent) desktops, with and without CIFS profile management and user data.

Figure 16) IOPS per desktop, Login VSI heavy workload testing.

In all cases the PVS server handled the majority of reads to the OS disk, averaging five read operations per desktop.

With nonpersistent pooled desktops, the PVS write cache composes the rest of the I/O. The storage that contains the read-only OS vDisk incurred almost no I/O activity after initial boot and averaged zero IOPS per desktop to the storage (due to the PVS server cache). The write cache showed a peak average of 10 IOPS per desktop during the login storm, with the steady state showing 15% to 20% fewer I/Os in all configurations. The write cache workload op size averaged 8K, with 90% of the workload being writes.
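The 15% to 20% steady-state reduction translates to roughly 8.0 to 8.5 write-cache IOPS per desktop, as a quick arithmetic check shows:

```python
# Steady-state write-cache IOPS per desktop, derived from the
# 10 IOPS/desktop login-storm peak and the 15%-20% reduction above.
login_peak = 10.0
steady_low = login_peak * (1 - 0.20)
steady_high = login_peak * (1 - 0.15)
print(steady_low, steady_high)  # 8.0 8.5
```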


The addition of CIFS profile management had an effect of taking some workload off the write cache. Login VSI heavy tests showed three IOPS per desktop were removed from write cache and served from CIFS. The additional four IOPS per desktop seen on the CIFS side were composed of metadata operations (open/close, getattr, lock).

Note: Sizing for CIFS home directories should be done as a separate workload from the virtual desktop workload. Storage resource needs for CIFS home directories will be highly variable and depend on the needs of the users and applications in the environment.

Persistent Desktops

With the persistent (assigned) desktop configuration using Citrix personal vDisk, the effect of I/O to storage can be summarized as follows:

Much of the workload that went to the write cache in a nonpersistent desktop now goes to the personal vDisk. Write cache workload is much lower in a persistent environment.

Per Citrix, the personal vDisk driver on the guest machine contains a small read cache. This results in a 10% to 20% decrease in total operations to the storage over nonpersistent desktops.

The amount of I/O that the personal vDisk incurs is highly variable and depends on the applications that are installed on the individual desktops. I/O is affected by the type of applications installed on the personal vDisks in the environment and where that data is stored. As an example, a desktop environment that uses highly I/O-intensive applications or performs copy or compression of large files will see much higher I/O to the personal vDisk storage than an environment that only runs word processing or spreadsheet applications.

In Login VSI testing, most of the applications were installed on the golden image; therefore, the majority of reads were serviced from the PVS server cache instead of the personal vDisk storage. This naturally gives the best results in terms of storage resource utilization and efficiency. The user data written by the Login VSI applications was contained mostly on the CIFS user data storage. The personal vDisk handled mostly OS-level changes to the file system.

In a persistent configuration, most of the workload that had been serviced by the write cache is now divided among the personal vDisk storage and the CIFS user data if profile management software is used. One major difference that is observed when using CIFS for user data is that additional overhead of CIFS metadata operations is incurred. This is because of the client’s data being contained on file-based storage rather than block-based vDisk or virtual machine disk (VMDK).

Figure 19 shows the breakdown of IOPS per desktop in each of the persistent (assigned) desktop use cases.

Figure 19) Breakdown of IOPS per desktop in each of the persistent desktop use cases.

In tests with only personal vDisk enabled, the total I/O was less than the total I/O in the nonpersistent tests. A peak average of eight total IOPS occurred, with two going to the write cache and six served from the personal vDisk storage. The reason is that personal vDisk adds an additional read cache at the guest driver layer, which slightly decreases the amount of I/O served from storage.
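That peak sits at the top of the 10% to 20% range quoted earlier; comparing the two peaks directly (a quick arithmetic check):

```python
# Total per-desktop IOPS: nonpersistent peak vs. persistent-with-PvD peak.
nonpersistent_peak = 10  # IOPS per desktop (pooled desktops, login storm)
persistent_peak = 8      # IOPS per desktop (personal vDisk tests)

reduction = (nonpersistent_peak - persistent_peak) / nonpersistent_peak
print(f"{reduction:.0%}")  # 20%
```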


Adding CIFS profile management and home directories to the environment takes some I/O from personal vDisk and adds it to CIFS. The additional metadata operations necessary for SMB file access make up the remainder.

After the personal vDisk and CIFS user data are configured in a PVS environment, the write cache is mostly relegated to handling operations to the guest OS page file, and its I/O is greatly reduced. Most of the I/O to the OS and application files is served from the PVS server cache, the personal vDisk storage, and CIFS home directories; sizing and planning should be adjusted accordingly.

Figure 17 and Figure 18 summarize the breakdown of read and write operations to the personal vDisk and write cache storage during boot and login scenarios for persistent desktops. Again, as in all cases, the workload was approximately 90% writes.

Figure 17) I/O comparison, PVS with personal vDisk, 500-seat boot.

Figure 18) I/O comparison, PVS with personal vDisk, 500-seat login.


5.6 Performance Effect of Flash Solutions

Flash Cache was shown to decrease the total boot time of the environment by 14%. As a whole, Flash Cache improved the read latency on the infrastructure storage and improved the latency on the small amount of rereads to the write cache and personal vDisk. Reads to the OS vDisk storage mostly occur only during boot because, again, the OS disk is cached by the PVS server; very few operations were performed to the read-only OS vDisk storage during login and steady-state operations. Flash Cache did not significantly affect the write cache workload because that workload was composed of approximately 90% writes.

6 Storage Configuration Best Practices

NetApp provides a scalable, unified storage and data management solution. The unique benefits of the NetApp solution are:

Storage efficiency. Significant cost savings with multiple levels of storage efficiency on all VMs.

Performance. Enhanced user experience with virtual storage tier (VST) and write I/O optimization that complements NetApp storage efficiency.

Operational agility. Enhanced Citrix XenDesktop solution management with tight partner integration.

Data protection. Enhanced protection of both the virtual desktop OS data and the user data, with very low overhead in terms of cost and operational data components.

6.1 Protocol Decision

NFS, iSCSI, and FC have very similar performance, within a margin of 7%. The choice of protocol is based on the customer’s current infrastructure and best practices. However, NetApp has a few recommendations for protocols in clustered Data ONTAP and 7-Mode environments, based on ease of management and cost efficiency. For more information, refer to TR-3697: Performance Report: Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems.

Table 10 shows the preferred storage protocols for the deployment of XenDesktop with Citrix PVS. The virtual desktop solution includes delivery of the OS, personal applications (instant messaging, Pandora for music), corporate applications (Microsoft Office), and management of user profiles and user data.

Table 10) Storage protocols for XenDesktop.

vDisk
  ESXi, ESX, XenServer: CIFS SMB 3 or SAN LUN
  Hyper-V: CIFS SMB 3
  Reason: The vDisk is read from storage once and cached in PVS RAM. CIFS SMB 3 brings good performance, reliability, and easy management of the vDisk. If SMB 3 is not possible, NetApp recommends SAN LUN.

Write cache
  ESXi, ESX, XenServer: NFS or SAN LUN
  Hyper-V: CIFS SMB 3 or SAN LUN
  Reason: NFS is a space-efficient and easily managed protocol. NFS uses thin provisioning by default, which optimizes utilization of available storage. If using SAN LUN, spread the VMs across different LUNs; NetApp recommends 150 to 600 VMs per LUN. For Hyper-V, you can use CIFS SMB 3 or SAN LUN.

Personal vDisk
  ESXi, ESX, XenServer: NFS or SAN LUN
  Hyper-V: CIFS SMB 3 or SAN LUN
  Reason: As with the write cache, NFS is a space-efficient and easily managed protocol that uses thin provisioning by default. If using SAN LUN, spread the VMs across different LUNs; NetApp recommends 150 to 600 VMs per LUN.

User data/profile
  ESXi, ESX, XenServer: CIFS SMB
  Hyper-V: CIFS SMB
  Reason: A best practice is to separate user data from the VM OS. This end user–centric focus allows administrators to deliver a higher-quality desktop-like experience, with data center benefits such as security, compliance, backup and recovery, and disaster recovery.

6.2 Clustered Data ONTAP Failover Group

LIFs and ports have roles; different ports are used for management, storage, data motion, and fault tolerance. Roles include cluster or node management, cluster (for traffic between nodes), intercluster (for NetApp SnapMirror® replication to a separate cluster), and data. From a solution perspective, data LIFs are further classified by how they are used by servers and applications and by whether they are on private nonroutable networks, corporate internal routable networks, or a DMZ.

The NetApp cluster connects to these various networks by using data ports, and the data LIFs must use specific sets of ports on each node for traffic to be routed properly. Some LIFs, such as cluster management and data LIFs for NFS and CIFS, can fail over between ports within the same node or between nodes, so that if a cable is unplugged or a node fails, traffic continues without interruption. Failover groups control the ports to which a LIF can fail over. If failover groups are not set up, or are set up incorrectly, LIFs can fail over to a port on the wrong network and cause loss of connectivity.

Best Practices

All data ports should be members of an appropriate failover group.

All data LIFs should be associated with an appropriate failover group.

To keep network connectivity as standardized as possible, use the same port on each node for the same purpose.
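These practices map to a few cluster-shell commands; the following is a sketch using Data ONTAP 8.2 syntax (node, port, and SVM names are hypothetical; Data ONTAP 8.3 reorganizes failover groups under broadcast domains):

```
network interface failover-groups create -failover-group fg_data_10g -node node01 -port e0e
network interface failover-groups create -failover-group fg_data_10g -node node02 -port e0e
network interface modify -vserver vdi_svm -lif cifs_lif1 -failover-group fg_data_10g
```

With both nodes' e0e ports in the group, the LIF can fail over only to ports on the correct data network.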

6.3 LIF Creation

NetApp OnCommand System Manager can be used to set up volumes and LIFs. Although LIFs can be created and managed through the command line, this document focuses on configuration using the NetApp OnCommand System Manager GUI.

Note: OnCommand System Manager 2.0R1 or later is required to set up LIFs.

For good housekeeping purposes, NetApp recommends creating a new LIF whenever a new volume is created. A key feature in clustered Data ONTAP is the ability to move volumes in the same SVM from one node to another. When you move a volume, make sure that you also move the associated LIF. This helps keep the virtual cabling neat and prevents the indirect I/O that occurs if the migrated volume does not have an associated LIF to use. It is also a best practice to use the same port on each physical node for the same purpose. Because of the increased functionality in clustered Data ONTAP, more physical cables are necessary, and they can quickly become an administrative problem if care is not taken in labeling and placing them. By using the same port on each node for the same purpose, you will always know what each port does.


Best Practices

Each NFS datastore should have a data LIF for every node in the cluster.

When you create a new storage virtual machine (SVM), add one LIF per protocol per node.
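From the command line, adding one data LIF per protocol per node looks roughly like this (a sketch with hypothetical names and addresses; System Manager exposes the same options):

```
network interface create -vserver vdi_svm -lif nfs_lif_node01 -role data -data-protocol nfs -home-node node01 -home-port e0e -address 192.168.10.11 -netmask 255.255.255.0
network interface create -vserver vdi_svm -lif nfs_lif_node02 -role data -data-protocol nfs -home-node node02 -home-port e0e -address 192.168.10.12 -netmask 255.255.255.0
```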

6.4 LIF Migration

LIFs for NFS and CIFS can be migrated, but iSCSI LIFs cannot. If you add Flash Cache, you might need the I/O Expansion Module (IOXM) to accommodate another 10GbE card. If you use the IOXM, you need to connect the high-availability (HA) pair with a fiber cable.

Best Practice

Use 10GbE for the cluster interconnect and data networks. NetApp recommends 1GbE for management networks. Make sure that you have enough 10GbE cards and ports.

6.5 10 Gigabit Ethernet

Clustered Data ONTAP requires more cables because of its enhanced functionality. A minimum of three networks is required when creating a new clustered Data ONTAP system:

Cluster network: 10GbE

Management network: 1GbE

Data network: 10GbE

The physical space-saving design of the NetApp FAS32XX series and now the FAS8000 controllers can be a potential issue if Flash Cache cards are present in the controllers, because 10GbE network cards are required and 32XX series controllers do not have onboard 10GbE network interface cards (NICs). If the system will be migrated from 7-Mode to clustered Data ONTAP, check with your NetApp sales engineer to make sure that enough open slots are available on your existing controllers.

Best Practice

Although a 10GbE NIC is not required for data, it is a best practice to have one for optimal data traffic.

6.6 Jumbo Frames

Enable jumbo frames for the data network, and enable Link Aggregation Control Protocol (LACP) if the switch supports it.

NetApp recommends using jumbo frames (MTU 9000) for the data network. Enabling jumbo frames can speed up data traffic and consume less CPU because fewer frames are sent over the network. Jumbo frames must be enabled on all physical devices and logical entities from end to end to avoid truncation or fragmentation of packets of the maximum size. The link aggregation type depends largely on the current switching technology used in the environment.

Not all link aggregation implementations are alike or offer the same features. Some offer only failover from one link to another in the event of a link failure. More complete solutions offer true aggregation, allowing traffic to flow on two or more links at the same time. LACP allows devices to negotiate the configuration of ports into bundles. For more information about setting up link aggregation with VMware ESXi and clustered Data ONTAP, refer to TR-4068v2: VMware vSphere 5 on NetApp Clustered Data ONTAP Best Practices.

Figure 19 shows jumbo frames set on each network component.


Figure 19) Setting jumbo frames.

Best Practice

NetApp recommends using jumbo frames or MTU 9000 for the data network.
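Setting MTU 9000 end to end touches the storage ports, the vSphere switch, and the VMkernel interfaces; a sketch follows (node, port, switch, and interface names are hypothetical):

```
# On the NetApp cluster shell
network port modify -node node01 -port e0e -mtu 9000

# On each ESXi host
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
```

The physical switch ports in the path must also be configured for jumbo frames, or packets at the maximum size will be dropped or fragmented.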

6.7 Flow Control Overview

Current network equipment and protocols generally handle port congestion better than in the past. Although NetApp previously recommended the use of flow control “send” on ESX hosts and NetApp storage controllers, the current recommendation, especially with 10GbE equipment, is to disable flow control on ESXi, NetApp FAS, and the switches in between.

With ESXi 5, flow control is not exposed in the vSphere Client GUI. The ethtool command sets flow control on a per-interface basis. There are three options for flow control: autoneg, tx, and rx. The tx option is equivalent to “send” on other devices.
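On an ESXi 5 host, checking and disabling flow control per interface can be sketched as follows (the vmnic number is hypothetical):

```
ethtool -a vmnic2                            # show current pause settings
ethtool -A vmnic2 autoneg off rx off tx off  # disable flow control
```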

6.8 Aggregates and Volumes

An aggregate is the NetApp virtualization layer that abstracts physical disks from logical datasets, referred to as flexible volumes. Aggregates are the means by which the total IOPS available to all of the physical disks are pooled as a resource. This design is well suited to meet the needs of an unpredictable and mixed workload. NetApp recommends that, when possible, a small aggregate be used as the root aggregate. In most cases we use a default of three disks for the root aggregate. This root aggregate stores the files required for running and providing GUI management tools for the storage system. The remaining storage should be placed into a smaller number of larger aggregates. The maximum size of an aggregate depends on the system; for details, refer to NetApp configuration limits as described in TR-3838: Storage Subsystem Configuration Guide.

For practical purposes, NFS does not have a limit on the number of VMs per datastore. Also, the NFS protocol does not limit the addressable size of an NFS datastore, which means that it automatically supports the current clustered Data ONTAP volume size limit. With this in mind, VMs can be grouped according to business requirements, which can be organizational (department, tenant, or application) or service-level based, such as the type of storage, replication requirements, or schedule.

6.9 Secure Default Export Policy

Clustered Data ONTAP changes the architecture of exports by defining export policies scoped within an SVM. An export policy has a set of rules that define which clients get which type of access. Each volume has exactly one export policy applied to it. All volumes that are used by the vSphere environment, or at least by each VMware cluster, can use the same export policy so that all hosts see the set of volumes in the same manner. When a new host is added to the vSphere cluster, a rule for that host is added to the policy. The rule for the new host includes a client-match pattern (or simply an IP address) for the new host, the protocol, and permissions and authentication methods for read and write, read only, superuser, anonymous user, and some other options that are of less significance for vSphere. When a new volume is added and other volumes are already in use as datastores, the same export policy can be used as for the previous volumes. For more information on clustered Data ONTAP export policies and junction paths, refer to TR-4068: VMware vSphere 5 on NetApp Clustered Data ONTAP.
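An export policy with one rule per ESXi host can be sketched as follows (the SVM, policy, volume names, and addresses are hypothetical):

```
vserver export-policy create -vserver vdi_svm -policyname vsphere
vserver export-policy rule create -vserver vdi_svm -policyname vsphere -clientmatch 192.168.10.21 -protocol nfs -rorule sys -rwrule sys -superuser sys
volume modify -vserver vdi_svm -volume wc_vol01 -policy vsphere
```

Adding a host to the cluster then means adding one more rule with that host's address to the same policy.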

6.10 Storage Block Misalignment

There are multiple layers in a storage system. Each layer is organized into blocks or chunks for efficient accessibility of the storage. The size and the starting offset of each block can be different at each layer. Although a different block size across the storage layers doesn’t require any special attention, the starting offset does. For optimal performance, the starting offset of a file system should align with the start of a block in the next lower layer of storage. For example, an NTFS file system that resides on a LUN should have an offset that is divisible by the block size of the storage array presenting the LUN. Misalignment of block boundaries at any one of these storage layers can result in performance degradation, because misalignment causes the storage to read from or write to more blocks than necessary to perform logical I/O. For more information on misalignment, refer to TR-3747: Best Practices for File System Alignment in Virtual Environments.
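The divisibility check above can be sketched in a few lines, assuming WAFL's 4KB block size (the offsets shown are partition starting offsets in bytes):

```python
# Is a partition's starting offset aligned to the storage array's 4KB blocks?
WAFL_BLOCK = 4096  # NetApp WAFL block size in bytes

def is_aligned(start_offset_bytes: int) -> bool:
    return start_offset_bytes % WAFL_BLOCK == 0

print(is_aligned(32 * 1024))  # True:  DiskPart "align=32" (32KB offset)
print(is_aligned(63 * 512))   # False: legacy 63-sector offset (Windows XP/2003)
```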

In a XenDesktop deployment, alignment depends on the vDisk OS type: Windows 7 and Windows Server 2008 do not have an OS alignment issue, whereas Windows XP and Windows Server 2003 do.

Because the VMFS layer is not involved with NFS, only the alignment of the guest VM file system within the VMDK to the NetApp storage array is required. There is no misalignment if the write cache file is on an NFS share. If a SAN LUN will be used for the host write cache, you must perform the following procedure to correct misalignment.

Partition Write Cache File Disk

When you create the PVS write cache file for the VM template, it is a best practice to mount the LUN on an existing Windows Server instance and run DiskPart.

1. Mount the write cache file disk in Windows Server.

2. Click Start > Run and enter diskpart.

3. Enter list disk.

4. Enter select disk <disk number>.

5. Choose the write cache file disk number.

6. Enter create partition primary align=32.

7. Unmount the write cache disk from the Windows Server instance.

8. Attach the disk to the PVS VM template.
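Steps 2 through 6 can also be scripted; the following is a hypothetical DiskPart script (the disk number must be changed to match your write cache disk):

```
rem align.txt -- run as: diskpart /s align.txt
list disk
select disk 1
create partition primary align=32
```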

Figure 20 shows the procedure to create a partition by using DiskPart.


Figure 20) Partition write cache file disk using DiskPart.

7 Storage Operation Best Practices

7.1 Deduplication Decision

NetApp can help you use less storage space. Citrix XenDesktop environments can benefit from the cost savings associated with NetApp deduplication (dedupe), as discussed in section 3.6. Each VM consumes storage as new writes occur, so scheduling and monitoring deduplication operations for the NetApp volumes hosting VMs are very important. Schedule deduplication operations to run during off-peak hours so that the end-user experience is not affected, and understand the number of simultaneous dedupe operations that can be performed on the storage controller. Planning for dedupe operations ultimately depends on your environment. The status and storage savings of dedupe operations can be monitored with NetApp OnCommand System Manager or the Virtual Storage Console Deduplication Management tab. For details on NetApp deduplication, refer to NetApp TR-3505: NetApp Deduplication for FAS, Deployment and Implementation Guide.

Use deduplication to optimize space utilization on user data, personal vDisk, infrastructure, and vDisk volumes.

Table 11 summarizes NetApp recommendations for deduplication in the XenDesktop environment.

Table 11) Deduplication recommendations.

Volume                 Deduplication Recommended  Reason
vDisk                  Yes                        OS data can be deduplicated.
Write cache            No                         Log files and page files are transient, temporary data; enabling deduplication on these volumes consumes storage resources for no benefit.
Personal vDisk         Yes                        The same applications installed across users deduplicate well.
User data and profile  Yes                        User data and profiles deduplicate well.


7.2 Always-on Deduplication

Typical storage sizing for VDI environments includes sizing for headroom to make sure that, in the event of a storage failover, the end-user experience is not impacted. This extra CPU headroom for storage failover typically isn’t used during normal operations. In the case of VDI, this is an excellent advantage for storage vendors with true active-active storage systems. When using an All-Flash FAS8000, it is possible to use deduplication with a very aggressive deduplication schedule to maintain storage efficiency over time. To eliminate potential concerns about always-on deduplication causing additional wear on the SSDs, NetApp provides up to a five-year warranty with all SSDs (three-year standard, with an offer of an additional two-year extended warranty, with no restrictions on the number of drive writes).

Requirements for Always-on Deduplication

The following components are required to use always-on deduplication:

All-Flash FAS8000

Data ONTAP 8.2.2 or later

Best Practices

Size the storage controller properly so that users are not impacted if a storage failover occurs. NetApp recommends testing storage failover during normal operations.

Stagger the timing of patching activities.

Have at least eight volumes per node for maximum deduplication performance.

Set the efficiency policy schedule to one minute.

Set the QoS policy for the storage efficiency policy to background.

Monitor the storage system performance with OnCommand Performance Monitor as well as a desktop monitoring utility such as Liquidware Labs Stratusphere UX to measure client experiences.

Disable deduplication in the event of a storage failover if client latencies increase.

7.3 Inline Zero Detection and Elimination in Data ONTAP 8.3

Inline zero detection and elimination is a storage efficiency technology introduced in Data ONTAP 8.3.

There are a couple of different situations in which zeros are written to a storage controller. The first is

when using VMDKs on VMFS. Each time a thin or lazy zeroed thick VM is written to, blocks have to be

zeroed prior to the data being written. When using NFS, only in the case of lazy zero thick are the blocks

zeroed prior to use. With both VMFS and NFS, eager zeroed thick VMs are zeroed at VMDK creation.

The second case is for normal data zeros. The elimination of any write to media helps to improve

performance and extend the media life span. Table 12 shows VMDK disk types and protocols.

Table 12) Disk type and protocol.

VMDK Type            NFS                   VMFS
Thin                 Not reserved          Zeroed on use
Lazy zeroed thick    Reserved              Reserved and zeroed on use
Eager zeroed thick   Reserved and zeroed   Reserved and zeroed

Inline zero elimination reduces writes to disk: instead of writing zeros and then deduplicating them postprocess, Data ONTAP performs a metadata-only update. Deduplication must be enabled on the volume, at a minimum; a scheduled deduplication run is not required. With deduplication enabled, inline zero elimination provides approximately 20% faster cloning of eager zeroed thick VMDKs and eliminates the need to deduplicate the zeros postprocess, thus increasing disk longevity.


Best Practices

Put the templates in the destination datastore.

Enable deduplication on the volume; a schedule is not required.

When using NFS, thin-provisioned disks are best because they provide end-to-end utilization transparency and have no upfront reservation that drives higher storage utilization.

When using VMFS, eager zeroed thick disks are the best format. This conforms with VMware’s best practice for getting the best performance from your virtual infrastructure. Cloning time is faster with eager zeroed thick provisioning than with thin provisioning on VMFS datastores.

7.4 Thin Provisioning and Volume Autogrow Policies

NetApp recommends the use of thin provisioning for write cache and PvDisk.

NetApp thin provisioning is a great way to overprovision storage on volumes that don’t necessarily need

all of their space immediately. To avoid potential problems if the volume fills up quickly, configure warning

thresholds and autogrow policies. With these safeguards configured, volumes do not reach 100% capacity, which could otherwise bring production to a halt.

7.5 Space Reclamation

When customers deploy a virtual desktop infrastructure using NFS, they can maintain storage efficiency

of thin-provisioned virtual machines by using Virtual Storage Console 2.1.1.

Space reclamation requires that the following conditions be met:

NFS only

NTFS on basic disks only (GPT or MBR partitions)

Data ONTAP 7.3.4 or later

Virtual machine powered off

No virtual machine VMware snapshots

7.6 Antivirus

A successful VDI implementation begins with change. Using traditional desktop methods to install

applications, back up data, and protect systems from viruses only leads to performance problems and

administrative headaches. Antivirus companies such as Trend Micro, McAfee, and Sophos allow you to

protect user desktops and avoid bottlenecks that can cripple the performance of your desktop

environment. For more information, refer to the following resources:

Trend Micro Endpoint Security Solutions

SOPHOS Endpoint Antivirus

McAfee MOVE AntiVirus

7.7 Monitoring NetApp and XenDesktop Infrastructure

NetApp Operations Manager

Operations Manager delivers complete NetApp infrastructure monitoring for optimal storage efficiency

and availability.

As data grows in volume and complexity, you need a complete, current depiction of your storage

resources. The central console of NetApp Operations Manager software provides a comprehensive

monitoring and management dashboard for instant analysis of your storage infrastructure.


Use the detail and visibility provided by NetApp Operations Manager—part of the OnCommand family of

management software—to:

Get intelligent, comprehensive reports of storage utilization and trend information to support capacity planning, space usage, and backup allocation.

See how physical resources are affected by data utilization and deduplication.

Monitor system performance and health to resolve potential problems before they occur.

Deploy, provision, and manage your complete enterprise storage network from a central location.

With its comprehensive array of alerts, reports, and performance and configuration tools, NetApp

Operations Manager helps keep your storage infrastructure aligned with your business requirements to

improve capacity planning and utilization and increase IT efficiency.

For more information on Operations Manager, visit the NetApp Operations Manager page at NetApp.com.

7.8 NetApp OnCommand Insight

Optimize storage resources and investments in your hybrid cloud environment. NetApp OnCommand

Insight storage resource–management tools provide you with a cross-domain view of performance

metrics, including application performance, datastore performance, virtual machine performance, and

storage infrastructure performance. OnCommand Insight analyzes tier assignments and lets you load-

balance your entire application portfolio across the storage fabric.

OnCommand Insight consists of four distinct modules:

Insight Assure. Delivers complete monitoring, risk detection, and compliance auditing for complex environments

Insight Perform. A scalable solution for a broad view of performance data and resource optimization

Insight Plan. Provides global visibility of asset utilization to simplify purchasing decisions

Insight Discover. Offers a single platform that identifies your entire inventory and integrates all the OnCommand Insight modules

For more information on OnCommand Insight, visit the NetApp OnCommand Insight library site.

8 Backup and Recovery of Virtual Desktops in a vSphere

Environment

As virtualized environments scale and data increases, having a standardized backup, recovery, and

disaster recovery solution becomes a necessity. Solutions such as Virtual Storage Console (VSC) for

vSphere provide administrators with the ability to manage their entire data protection environment with

the same procedures and tools independent of the applications to be backed up. As new nodes, storage

virtual machines, VMs, and applications are added, lightweight agents are all that are needed to enable

backup, recovery, and disaster recovery.

The PVS write cache files are transient; they are discarded every time the VM reboots. The VSC backup and recovery capability integrates VMware snapshots with NetApp array-based, block-level Snapshot copies to provide consistent backups for virtual desktops. Backup and recovery are aware of deduplication on NetApp primary storage and also integrate with NetApp SnapMirror replication technology. Because deduplication does not need to be rerun on the destination storage array, storage savings are preserved across the source and destination arrays. The VSC backup and recovery plug-in also provides a user-friendly GUI for configuring data protection schemes.

Snapshot technology is built into the fabric of clustered Data ONTAP, enabling instantaneous backups of

the data in each storage volume. Although this is great technology for volumes such as user data,


personal vDisk, infrastructure, and vDisk, Snapshot reserve must be disabled on write cache volumes.

Because these volumes contain only temporary, transient data, there is no need to reserve space for

backups on these volumes or to back up these volumes. By disabling the Snapshot reserve of write

cache volumes, the maximum amount of space that is assigned to these volumes can be used. The

Snapshot reserve should, in contrast, be enabled for vDisk, PvDisk, and user data volumes.

For more information on XenDesktop disaster and recovery, refer to TR-3931: DR Solution for

XenDesktop on NetApp.

Table 13 lists the volumes of a Citrix XenDesktop PVS implementation and the backup methods recommended for each.

Table 13) XenDesktop on vSphere backup methods.

Volume           Snapshot Reserve   Backup Method
Write cache      Off                None; temporary data does not require backup
vDisk            On                 NetApp Snapshot copy
Personal vDisk   On                 VSC for vSphere
User data        On                 NetApp Snapshot copy

You can find information regarding the installation and basic usage of NetApp VSC in the Virtual Storage

Console 4.1 for VMware vSphere Installation and Administration Guide.

8.1 Best Practices for VSC Backup of XenDesktop PvDisk

The method employed to back up and recover your data depends on how you architected your virtual

machine storage in XenDesktop at the host level. For setup and deployment instructions, refer to

Deployment Guide for Citrix XenDesktop 5, Provisioning Services, XenServer, and VMware vSphere on

NetApp Storage.

Although various methods can be used to back up the VDI environment, the easiest way is with NetApp

VSC backup and recovery. Multiple machines can be selected for immediate or scheduled backups.

Because the base image of a PVS desktop is read only, a backup is not required; however, it is critical to

back up the PvDisk. When the backup policy is created, the “Include Datastores with Independent Disks” option must be selected; otherwise, because PvDisks normally reside on separate datastores, they are not backed up properly.

If SnapMirror has been set up between volumes, VSC can trigger an update.

Backup schedules will vary greatly and can be created per customer requirements.

8.2 Best Practices for VSC Recovery of XenDesktop PvDisk

A restore of a virtual machine can only occur if a backup was created and the NetApp Snapshot copy still

exists. Multiple restore options are available within VSC and should be selected based on your needs. It

is important to note that while restoring, you should validate the individual PvDisk to make sure it was

properly backed up. If the PvDisk was stored on a separate datastore and was not backed up, user

applications and data cannot be restored.

A virtual machine can be restored entirely over the original. Alternatively, a Snapshot copy of the

datastore can be mounted on an ESXi server and the virtual machine’s drives mounted on another

machine for individual file restores.


9 User Data and Profile Management

9.1 User Data

The first type of user data is end-user data, or home directory data. This data is the intellectual property of each company and is generated directly by the end user. In a virtual desktop environment, the home directory data located in a NetApp volume is shared through the CIFS protocol and is mapped as a drive letter in the virtual desktop. This data often requires backup and recovery as well as disaster recovery services. Using a CIFS home directory brings more efficiency to the management and protection of user data. End-user data files should be deduplicated and compressed to achieve storage efficiency and reduce the overall solution cost.

Best Practice

NetApp recommends deduplicating and compressing end-user data files stored in home directories to obtain storage efficiency. NetApp strongly recommends storing user data on the CIFS home directory.

9.2 User Profile Data

The second type of user data is the user profile, otherwise known as personal data. This data allows the

user to have a customized desktop environment while using a nonpersistent virtual desktop. The user

profile is normally stored in C:\Users on a Microsoft Windows physical machine and can be redirected

to a CIFS share for use in a nonpersistent virtual desktop environment. Many profile management

solutions on the market simplify management, improve reliability, and reduce network requirements compared with standard Windows folder redirection. Using a profile management solution helps speed the migration

process from a physical desktop or laptop by first virtualizing the profile and then virtualizing the desktop.

This improves login times over folder redirection alone and centralizes the end user’s profile for better

backup and recovery and disaster recovery of this data.

Best Practice

NetApp recommends using a profile management solution such as Citrix User Profile Management

(UPM) or Liquidware Labs ProfileUnity to allow end users to customize their experience while in a

nonpersistent desktop environment.

Profile Management

Profile management provides an easy, reliable, and high-performance way to manage user personalization settings in virtual or physical Windows OS environments. It requires minimal infrastructure and administration and provides users with fast logons and logoffs. A Windows user profile is a collection of folders, files, registry settings, and configuration settings that defines the environment for a user who logs on with a particular user account. Depending on the administrative configuration, users can customize these settings. Examples of settings that can be customized are:

Desktop settings such as wallpaper and screen saver

Shortcuts and start menu settings

Internet Explorer favorites and homepage

Microsoft Outlook signature

Printers

Some user settings and data can be redirected by means of folder redirection. However, if folder redirection is not used, these settings are stored within the user profile.

The first stage in planning a profile management deployment is to decide on a set of policy settings that together form a suitable configuration for your environment and users. The automatic configuration


feature simplifies some of this process for XenDesktop deployments. Figure 21 and Figure 22 show the Citrix User Profile Management (UPM) GUI used to establish policies for hosted VDI (HVD) users and hosted shared desktop (HSD) users (for testing purposes). Basic profile management policy settings are documented here: Citrix Product Documentation, Basic Policy Settings.

Figure 21) HVD user profile manager policy.

Figure 22) HSD user profile manager policy.

Best Practice

For a faster login, use NetApp Flash Cache and user profile–management software to eliminate

unnecessary file copying during login.


9.3 User-Installed Applications

The third type of user data is user-installed application data. Some companies allow end users to install applications on their physical desktops and laptops. In nonpersistent virtual desktop environments, this is accomplished with Citrix XenDesktop personal vDisks (PvDisks). PvDisks separate the master image from personal data by redirecting all changes made on the user's VM to a separate disk (the personal vDisk) attached to the VM. The content of the PvDisk is blended at runtime with the content of the base VM to provide a unified, personalized experience. In this way, users can still access applications provisioned by their administrator in the base VM.

9.4 Separating User Data from the VM

Because each data type on a virtual desktop has a different purpose, each data type also has different

requirements. Separating the different components of a virtual desktop allows the administrator to create

a stateless or nonpersistent virtual desktop environment. This deemphasizes the importance of the

operating system and corporate installed applications within the virtual machine and allows administrators

to focus on end users: their experience and their data. This end user–centric focus allows administrators

to deliver a higher-quality, desktop-like experience with data center benefits, such as security,

compliance, backup and recovery, and disaster recovery.

Figure 23 shows the data types organized by volume.


Figure 23) User data backup types in XenDesktop environment.

10 Application Virtualization with XenApp

Different types of workers across the enterprise have various performance and personalization

requirements. Some require simplicity and standardization while others require high performance or a

fully personalized desktop. IT administrators can deliver every type of virtual desktop, hosted or local,

physical or virtual—each specifically tailored to meet the performance, security, and flexibility

requirements of each individual user—with NetApp storage and Citrix desktop virtualization products. This

section discusses the application virtualization capabilities of XenApp to deliver applications on demand

to virtual desktops and how to leverage the power of NetApp technology to power and simplify this

enterprise application delivery software.

Application delivery consists of the three following use cases:

Server-side application virtualization. These applications run inside the data center. XenApp presents each application on the user device and relays user actions from the device, such as keystrokes and mouse actions, back to the application.


Client-side application virtualization. XenApp streams applications on demand to the user device from the data center and runs the application on the user device.

VM-hosted application virtualization. Problematic applications, or those requiring specific operating systems, run inside a desktop on the data center. XenApp presents each application interface on the user device and relays user actions from the device, such as keystrokes and mouse actions, back to the application.

Server-side application virtualization was chosen as the preferred method for these tests, but other

methods can be used as well. A single virtual XenApp server was created to run the Citrix AppCenter,

and a virtual golden image XenApp server was created to create clones. In this architecture, the

additional XenApp servers are identical, used to power the user demand for applications, and housed in

an NFS datastore. VSC is used to rapidly clone VMs and utilizes only small amounts of storage, thanks to

FlexClone technology. A customization specification file is used to add the new XenApp servers to Active

Directory and, upon boot, the new servers are added to the XenApp farm.

After selecting an application, the Citrix Streaming Profiler is run, which in turn runs a virtual installation

and captures the application in a profile. NetApp recommends placing this profile file in a NetApp CIFS

share so that it can be shared among the virtual XenApp servers. When patches or changes are applied

to the XenApp servers, the original and golden images are modified. The redeploy function in VSC is then

used to quickly update the XenApp clones to apply the new updates. For more information on XenApp,

refer to the Citrix Product Documentation site.

Figure 24 illustrates VSC cloning in the XenApp solution.

Figure 24) VSC cloning in XenApp solution.


11 Citrix Component Scalability Considerations

11.1 Infrastructure Components Limit

When scaling a large virtual desktop deployment, you must adhere to the configuration best practices and

limitations for each XenDesktop component.

Table 14 describes the limits and recommended sizing configuration for each XenDesktop component.

For more information, download the Citrix XenDesktop 7.6 Blueprint.

Table 14) XenDesktop component configuration and limits.

XenDesktop
Limit: one XenDesktop 7.5 Delivery Controller supports 5,000 virtual desktops. For high availability, deploy N+1 controllers: # of DCs = (# of VDAs / 5,000) + 1.
Configuration: a Windows Server 2012 R2 VM with 4 vCPUs, 4GB RAM, and 40GB storage.

PVS
Limit: one virtual Citrix Provisioning Services server can stream approximately 700 VMs; deploy N+1 servers.
Configuration: 4 vCPUs, 8GB RAM. RAM depends on the number of vDisks: 4GB plus 2GB per vDisk.

Licensing server
Limit: a single license server is generally sufficient; the limit is 170 license checkouts per second, or 600,000 per hour.
Configuration: 2 vCPUs, 4GB RAM.

StoreFront
Limit: 30,000 connections per hour; deploy N+1 servers with NetScaler for load balancing.
Configuration: 2 vCPUs, 4GB RAM.

SQL Server
Limit: 2 vCPUs and 4GB RAM support 2,500 to 10,000 users; 4 vCPUs and 16GB RAM support 10,000 users.
Configuration: 2 vCPUs, 4GB RAM as a starting point. NetApp recommends SQL mirroring with a witness. Database and transaction log sizing is a significant factor: the database is relatively static, but transaction logs grow rapidly for hosted VDI and should be backed up and cleared regularly.
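As a rough illustration of the sizing rules above, the following Python sketch computes component counts for a hypothetical deployment. The ceiling-based reading of the N+1 formulas and the example figures (2,000 desktops, 3 vDisks) are assumptions for illustration, not values from this guide:

```python
import math

def delivery_controllers(vdas: int) -> int:
    # N + 1 controllers, where N = ceil(VDAs / 5,000) (assumed interpretation of the formula)
    return math.ceil(vdas / 5000) + 1

def pvs_servers(vms: int, vms_per_server: int = 700) -> int:
    # One PVS server streams ~700 VMs; add one server for high availability (N+1)
    return math.ceil(vms / vms_per_server) + 1

def pvs_ram_gb(num_vdisks: int) -> int:
    # PVS server RAM: 4GB base plus 2GB per vDisk
    return 4 + 2 * num_vdisks

# Hypothetical example: 2,000 desktops streamed from 3 vDisks
print(delivery_controllers(2000))  # 2
print(pvs_servers(2000))           # 4
print(pvs_ram_gb(3))               # 10
```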

Desktop Accelerator

Citrix Project Accelerator is a free, customizable, step-by-step implementation guidance tool, available at

http://project.citrix.com/. The Project Accelerator tool offers the following benefits:

Guided assessment and design. Leverage a detailed checklist of decisions, options, and tools.

Peer benchmarks. Compare your design to thousands of others.

Best practices. Use Citrix best practices to make informed decisions.

Documentation. Jump-start your project by downloading a project plan and other documents.

Step-by-step installation. Get custom-generated installation instructions and resources.

Figure 25 shows the design section in Project Accelerator.


Figure 25) Desktop transformation accelerator.

11.2 Hypervisor Configuration Limits and Pod Concept

Carefully review the VMware documentation on configuration maximums associated with the various

storage-related parameters critical to the system design. For information, refer to VMware vSphere 5.1 Configuration Maximums and Citrix XenServer 6.2.0 Configuration Limits.

These configuration parameters should help determine the following design parameters:

Proposed number of VMs per hypervisor host

Proposed number of hypervisor hosts per hypervisor cluster

Proposed number of datastores (storage repository) per hypervisor cluster

Proposed number of VMs per hypervisor cluster

Number of hypervisor clusters managed by a vCenter™ or XenCenter instance

Proposed number of VMs per datastore (storage repository)

Total number of datastores required for the project

Provisioning fewer, denser datastores provides key advantages such as the ease of system

administration, solution scalability, ease of managing data protection schemes, and effectiveness of

NetApp deduplication.

Pod Sizing

A VDI pod is analogous to a XenDesktop site. A hosted shared pod is analogous to a XenApp farm.

Larger pods are easier to manage.


Users per pool = (single server density) x (resource pool size)

Pools per pod = target users per pod / users per pool
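The pod-sizing arithmetic above can be sketched in a few lines of Python. The server density, pool size, and pod target in the example are illustrative assumptions only:

```python
import math

def users_per_pool(single_server_density: int, resource_pool_size: int) -> int:
    # Users per pool = single server density x resource pool size
    return single_server_density * resource_pool_size

def pools_per_pod(target_users_per_pod: int, pool_users: int) -> int:
    # Pools per pod = target users per pod / users per pool, rounded up
    return math.ceil(target_users_per_pod / pool_users)

# Hypothetical example: 150 desktops per host, 16 hosts per resource pool, 5,000-user pod
pool = users_per_pool(150, 16)    # 2400
print(pools_per_pod(5000, pool))  # 3
```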

11.3 VM Resource Recommendation

Refer to XenDesktop Planning Guide - Hosted VM-Based Resource Allocation (Doc ID CTX127277) for

general guidelines. The best practice is to use Liquidware Labs or any data collector software to find the

real requirements. Table 15 shows VM resource recommendations.

Table 15) VM resource recommendations.

User Type   vCPU   RAM     Estimated Users per Core
Normal      1      2GB     5–7
Light       1      1.5GB   8–12
Heavy       2      4GB     2–4

For hosted shared desktops, multiple users are simultaneously hosted on a XenApp server. As with the

hosted VDI (HVD) model, the number of users that can be hosted on a single server depends upon the

user load, CPU, and memory. However, with hosted shared desktops, CPU overcommit is not

recommended for XenApp servers. Table 16 provides the recommended values for CPU and memory

based on user load for hosted shared desktops.

If your servers are hyperthreaded, double the vCPU count, but do not double the number of users. Hyperthreaded core counts should not be used in calculations or estimates, because hyperthreading adds only about 15% additional capacity, depending on the workload.

Table 16) Recommendations for hosted shared desktops.

User Type   vCPU        RAM    Estimated Users per VM   Estimated Users per Core
Light       2 or 4 HT   16GB   60                       30
Normal      2 or 4 HT   16GB   30                       15
Heavy       2 or 4 HT   16GB   14                       7
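To make the hyperthreading guidance concrete, the following sketch estimates users per host from physical cores, applying the ~15% hyperthreading bonus described above. The 16-core host and the exact 15% figure are illustrative assumptions:

```python
def users_per_host(physical_cores: int, users_per_core: int, hyperthreaded: bool = False) -> int:
    # Size from physical cores; hyperthreading adds only ~15% capacity, not 2x
    base = physical_cores * users_per_core
    return round(base * 1.15) if hyperthreaded else base

# Hypothetical example: 16 physical cores, normal hosted shared users (15 per core)
print(users_per_host(16, 15))        # 240
print(users_per_host(16, 15, True))  # 276
```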

12 Storage Sizing Guide

Storage sizing involves three tasks:

Gather solution requirements.

Estimate storage capacity and performance.

Obtain recommendations for storage configuration.

12.1 Solution Assessment

Assessment is an important first step. Liquidware Labs Stratusphere FIT and Lakeside SysTrack for VDI

assessment are recommended for collecting network, server, and storage requirements. NetApp has

contracted with Liquidware Labs to provide free licenses to NetApp employees and channel partners. For

information on how to obtain software and licenses, refer to this FAQ. Liquidware Labs also provides a

storage template that fits NetApp System Performance Modeler (SPM). For guidelines on how to use

Stratusphere FIT and the NetApp custom report template, refer to TR-3902: Guidelines for Virtual

Desktop Storage Profiling and Sizing.


Virtual desktop sizing varies depending on the following considerations:

Number of seats

VM workload (applications, VM size, VM OS)

Connection broker (VMware View® or Citrix XenDesktop)

Hypervisor type (vSphere, XenServer, or Hyper-V)

Provisioning method (NetApp clone, linked clone, PVS, MCS)

Future storage growth

Disaster recovery requirements

Number and size of user home directories

There are many factors that affect storage sizing. NetApp developed the SPM sizing tool to simplify the

process of performance sizing for NetApp systems. SPM has a step-by-step wizard that supports varied

workload requirements and provides recommendations to meet customers’ performance needs.

Best Practice

NetApp recommends using the NetApp SPM sizing tool to size the virtual desktop solution. Contact a

NetApp partner or NetApp sales engineer who has access to SPM. When using the NetApp SPM to

size a solution, NetApp recommends that you separately size the VDI workload (including write cache

and personal vDisk, if used) and the CIFS profile/home directory workload. When sizing CIFS, NetApp

recommends sizing with the CIFS heavy user workload.

Note: For the solution described in this document, 80% concurrency was assumed. Also assumed was 10GB per user for home directory space, with 35% deduplication space savings. Each VM used 2GB of RAM. PVS write cache is sized at 5GB per desktop for nonpersistent/pooled and 2GB for persistent desktops with personal vDisk.

12.2 Capacity Considerations

Capacity is one of two main factors to consider in storage sizing. Performance, covered in section 12.3, is

the other. Deploying XenDesktop with PVS requires the following capacity considerations:

vDisk. The size of the vDisk depends on the operating system and the number of applications to be installed on the vDisk. It is a best practice to create vDisks larger than necessary in order to leave room for additional application installations and patches: for example, a 20GB vDisk with a Windows 7 image. Each organization must determine its own space requirements for vDisk images. NetApp deduplication can be used for space saving.

Write cache file. NetApp recommends the size range for each user to be 4–18GB. Write cache size is based on the type of workload and how often the VM is rebooted. 4GB is used in this example for the write-back cache. Because NFS is thin provisioned by default, only the space currently used by the virtual machine is consumed on NetApp storage. If iSCSI or FCP is used, N x 4GB would be consumed as soon as a new virtual machine is created.

PvDisk. The PvDisk is normally sized at 5–10GB, depending on the application and the size of the profile. Use 20% of the master image size as a starting point. NetApp recommends running deduplication on PvDisk volumes.

CIFS home directory. Various factors must be considered for each home directory deployment. The key considerations for architecting and sizing a CIFS home directory solution include:

Number of users

Number of concurrent users

Space requirement for each user

Network load

Run deduplication and obtain space savings.


vSwap. VMware ESXi and Hyper-V both require 1GB per VM.

Infrastructure. The XenDesktop controller, PVS, SQL Server, DNS, and DHCP servers require 500GB of capacity in total.

The space calculation formula for a 2,000-seat deployment is as follows:

[(Number of vDisks x 20GB) + (2,000 x 4GB write cache) + (2,000 x 10GB PvDisk) + (2,000 x 5GB user home directory x 70%) + (2,000 x 1GB vSwap) + 500GB infrastructure]
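The formula above can be sketched as a short calculation. This is illustrative only: the per-item sizes (20GB per vDisk, 4GB write cache, 10GB PvDisk, 5GB home directory kept at 70% after deduplication, 1GB vSwap, 500GB infrastructure) are the example values from this section, and the function name is ours.

```python
# Illustrative sketch of the 2,000-seat space formula above.
# All per-item sizes are the example values from this section;
# adjust them for your own environment.

def total_capacity_gb(seats: int, num_vdisks: int) -> float:
    vdisk = num_vdisks * 20             # 20GB per vDisk image
    write_cache = seats * 4             # 4GB PVS write cache per desktop
    pvdisk = seats * 10                 # 10GB personal vDisk per desktop
    home_dirs = seats * 5 * 0.70        # 5GB home dir, kept at 70% after dedupe
    vswap = seats * 1                   # 1GB vSwap per VM
    infrastructure = 500                # XenDesktop, PVS, SQL Server, DNS, DHCP
    return vdisk + write_cache + pvdisk + home_dirs + vswap + infrastructure

print(round(total_capacity_gb(seats=2000, num_vdisks=2)))  # 37540
```

For two vDisk images, the example works out to roughly 37.5TB of raw capacity before any other storage efficiencies.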

12.3 Performance Considerations

Performance requirement collection is a critical step. After using Liquidware Labs Stratusphere FIT or Lakeside SysTrack to gather I/O requirements, contact your NetApp account team to obtain recommended software and hardware configurations. Based on the IOPS-per-VM and latency requirements, you can choose either a hybrid storage system or an All-Flash FAS system. For customers who require less than 1ms latency or more than 20 IOPS per VM, an All-Flash FAS system is required.
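The rule of thumb above can be expressed as a minimal decision sketch. The 1ms and 20 IOPS/VM thresholds come from the text; the function name is ours, and a real sizing exercise should go through the NetApp SPM sizing tool with your account team.

```python
# Minimal sketch of the platform rule of thumb: All-Flash FAS when the
# requirement is sub-1ms latency or more than 20 IOPS per VM.
# Thresholds are from the text; the function name is illustrative.

def recommend_platform(required_latency_ms: float, iops_per_vm: float) -> str:
    if required_latency_ms < 1.0 or iops_per_vm > 20:
        return "All-Flash FAS"
    return "Hybrid FAS"

print(recommend_platform(required_latency_ms=0.8, iops_per_vm=15))  # All-Flash FAS
print(recommend_platform(required_latency_ms=2.0, iops_per_vm=12))  # Hybrid FAS
```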

I/O is characterized by several factors: size, read/write ratio, and random/sequential mix. We use 90% write and 10% read for a PVS workload. Storage CPU utilization also needs to be considered. Table 17 provides guidance for a PVS workload under the Login VSI heavy workload.

Table 17) PVS, Login VSI heavy workload sizing guidance.

                       Boot IOPS   Login IOPS   Steady IOPS
Write cache (NFS)      8–10        9            7.5
vDisk (CIFS SMB)       0.5         0            0
Infrastructure (NFS)   2           1.5          0
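As a rough worked example, the per-desktop figures in Table 17 can be multiplied out for a given desktop count. This is a sketch under assumptions: it treats the table values as per-desktop averages, takes the upper bound of the 8–10 boot range, and the 2,000-desktop count is illustrative.

```python
# Rough controller-side IOPS estimate for N desktops using the
# per-desktop figures from Table 17 (Login VSI heavy, PVS workload).
# The 8-10 boot range for the write cache is taken at its upper bound.

TABLE_17 = {  # component -> (boot, login, steady) IOPS per desktop
    "write_cache_nfs":    (10.0, 9.0, 7.5),
    "vdisk_cifs_smb":     (0.5,  0.0, 0.0),
    "infrastructure_nfs": (2.0,  1.5, 0.0),
}

def aggregate_iops(desktops: int, phase: str) -> float:
    idx = {"boot": 0, "login": 1, "steady": 2}[phase]
    return desktops * sum(v[idx] for v in TABLE_17.values())

for phase in ("boot", "login", "steady"):
    print(phase, aggregate_iops(2000, phase))
```

For 2,000 desktops this yields 25,000 boot, 21,000 login, and 15,000 steady-state IOPS, which is why boot and login storms, not steady state, usually drive the controller sizing.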

12.4 Desktop Deployment Scenarios

To provide storage scalability guidelines, Citrix and NetApp divided Citrix PVS–based deployments into six scenarios based on the use cases we see in customer environments. The first four deployment scenarios are more common; the last two are relatively uncommon.

Table 18) Citrix PVS–based deployment scenarios.

Common deployment scenarios:

1. Pooled streamed desktop
   Use cases: Task workers: training labs, call centers, admin workstations, and so on. Typically deployed when hosted shared desktops cannot be used because of application compatibility issues.
   User state: Nonpersistent. Profile: Roaming. Folders: Redirected. User-installed apps: Nonpersistent.

2. Hosted shared desktops (streamed XenApp server)
   Use cases: Task workers: kiosks, training labs, call centers, and so on. No application compatibility issues with multiuser sessions on the same Windows machine. Users who have minimal customization needs beyond a roaming profile or folder redirection.
   User state: Nonpersistent. Profile: Roaming. Folders: Redirected. User-installed apps: Nonpersistent.

3. Personal streamed desktop (roaming between desktops)
   Use cases: Knowledge workers: developers, contractors, and so on. Users who tend to need many supporting apps that IT does not want to manage.
   User state: Persistent. Profile: Roaming*. Folders: Redirected. User-installed apps: Persistent.

4. Personal streamed desktop (no roaming between desktops)
   Use cases: Variation of scenario 3 in which the user does not roam between desktops.
   User state: Persistent. Profile: Local. Folders: Local. User-installed apps: Persistent.

Less common deployment scenarios:

5. Hosted shared desktop (streamed XenApp servers)
   Use cases: Task workers with a very controlled and limited desktop environment.
   User state: Nonpersistent. Profile: Mandatory*. Folders: Redirected. User-installed apps: Nonpersistent.

6. Pooled streamed desktop
   Use cases: Variation of scenario 4 in which users sometimes log in from other desktops and require access to their data, but don't care about application settings.
   User state: Nonpersistent. Profile: Local. Folders: Redirected. User-installed apps: Nonpersistent.

*Using profile management software: for example, Citrix UPM.

12.5 IOPS and Capacity Considerations

Table 19 shows the IOPS and capacity assumptions (per desktop) used to generate the sample storage array scalability guidelines in section 12.6.

The IOPS requirements were determined through performance testing using the Login VSI heavy user workload in conjunction with profile management tools such as Citrix UPM and Liquidware Labs ProfileUnity, redirecting user profiles and folders to CIFS shares.

The capacity requirements are assumptions based on what we typically observe in customer deployments and during lab testing.

Note: These IOPS and capacity numbers might vary for your deployment, based on the work profile and applications in your environment.

Table 19) IOPS and capacity assumptions (per desktop).

Scenario  Metric    PVS vDisk*         PVS Write Cache  CIFS Share             PvDisk     IOPS Served by  IOPS Served by  Capacity on
                                       File                                               Shared Storage  Host Cache      Shared Storage
1         Capacity  25GB               5GB              5GB                    N/A        -               -               10GB
          IOPS      5 (100% R; local)  7 (95% W)        7 (2:W, 1:R, 4:Other)  N/A        14              5               -
2         Capacity  25GB               1GB              5GB                    N/A        -               -               6GB
          IOPS      5 (100% R; local)  4 (66% R)        7 (2:W, 1:R, 4:Other)  N/A        11              5               -
3         Capacity  25GB               2GB              5GB                    10GB       -               -               17GB
          IOPS      5 (100% R; local)  2 (95% W)        7 (2:W, 1:R, 4:Other)  5 (95% W)  14              5               -
4         Capacity  25GB               2GB              N/A                    10GB       -               -               12GB
          IOPS      5 (100% R; local)  2 (95% W)        N/A                    6 (90% W)  8               5               -
5         Capacity  -                  1GB              5GB                    N/A        -               -               6GB
          IOPS      5 (100% R; local)  4 (66% R)        7 (2:W, 1:R, 4:Other)  N/A        11              5               -
6         Capacity  -                  5GB              5GB                    N/A        -               -               10GB
          IOPS      5 (100% R; local)  7 (95% W)        7 (2:W, 1:R, 4:Other)  N/A        14              5               -

*The PVS vDisk is shared by hundreds of users.

12.6 Storage Scalability Guidelines on Hybrid Storage

Based on the IOPS and capacity requirements per desktop listed in Table 19, Table 20 provides the NetApp unified storage scalability guidelines for each desktop deployment scenario on FAS2240HA and FAS3250HA systems. Similar scalability guidelines can be obtained for other FAS models by using the NetApp SPM sizing tool.

Table 20) Storage scalability guidelines, sample configurations.

Scenario  FAS2240HA Configuration  Max. # of Desktops  FAS3250HA with Flash Cache        Max. # of Desktops
1         54 disks (10RU)          1,400               84 disks (22RU)                   3,000
2         112 disks (18RU)         3,200               67 disks (18RU)                   4,500
3         60 disks (10RU)          1,300               109 disks (26RU)                  2,900
4         50 disks (10RU)          1,600               100 disks (26RU), no Flash Cache  3,500
5         112 disks (18RU)         3,200               67 disks (18RU)                   4,500
6         54 disks (10RU)          1,400               84 disks (22RU)                   3,000

The following key assumptions were made while sizing these storage configurations:

• All VMs in each configuration boot in less than 10 minutes.
• Login VSI application latency is within acceptable limits; that is, the VSImax number is less than 1,500.
• NetApp storage controller utilization is no more than 60% during steady-state operations.
• No performance degradation occurs when one NetApp controller fails.
• PvDisk dedupe savings: 35% assumed (savings might be higher or lower for your environment, depending on the type and amount of data in the personal vDisk).
• 2GB of storage per VM is configured as VM memory vSwap space on storage.
• CIFS is sized as follows:
  - Workload type: heavy
  - User concurrency: 80%
  - Dedupe savings: 35% assumed (savings might be higher or lower for your environment, depending on the type and quantity of data in the CIFS share)
• NetApp Flash Cache: Flash Cache provided significant disk savings on the FAS3250 platform for all the desktop deployment scenarios that require CIFS shares (except scenario 4) and for the hosted shared desktops use case, in which we saw 66% read traffic to the PVS write cache file.
• The disk type used in this sizing is 600GB SAS (15K RPM).
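The CIFS sizing inputs above can be combined with the 10GB-per-user home directory assumption stated earlier in this document. In this sketch, total user count and deduplication savings drive capacity, while the 80% concurrency figure drives performance sizing (how many users hit the share at once); all inputs are assumptions to replace with your own measurements.

```python
# Illustrative sketch of the CIFS home directory sizing inputs:
# 10GB per user and 35% dedupe savings (capacity), 80% concurrency
# (performance). All values are this report's assumptions.

def cifs_capacity_gb(users: int, gb_per_user: float = 10.0,
                     dedupe_savings: float = 0.35) -> float:
    return users * gb_per_user * (1.0 - dedupe_savings)

def concurrent_users(users: int, concurrency: float = 0.80) -> int:
    return int(users * concurrency)

print(round(cifs_capacity_gb(2000)))  # 13000 GB on disk after dedupe
print(concurrent_users(2000))         # 1600 concurrent users
```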

Table 21 shows the scalability guidelines on a highly dense 2-rack-unit FAS2240-2 system with 24 internal drives (600GB, 10K RPM SAS), shown in Figure 29. The preceding sizing assumptions were used to generate these guidelines.

Figure 29) FAS2240-2 system (2U) with 24 internal SAS drives (600GB, 10K RPM).

Table 21) Scalability guidelines.

Desktop Deployment Scenario   Maximum # of Desktops
1                             600
2                             550
3                             500
4                             800
5                             550
6                             600

Note: The 2-rack-unit FAS2240-2 system with 24 internal drives (600GB, 10K RPM SAS) can host more than 1,000 desktops for a very basic VDI deployment with the following parameters:

• Citrix PVS with 5GB write cache file and 2GB RAM per VM

• 10–12 IOPS per desktop to PVS write cache file (95% writes)

• No profile management or folder redirection


12.7 Storage Scalability Guidelines on All-Flash FAS Storage

The purpose of this testing was to validate that a 2,000-desktop mixed workload of Citrix XenDesktop 7.5 VDI and Citrix XenDesktop 7.5 HSD models (XenApp users) could be hosted on a NetApp All-Flash FAS8060 storage system running clustered Data ONTAP 8.2.2. The extensive 2,000-desktop performance testing described in this section proved that the system had plenty of headroom and showed that this All-Flash FAS8060 storage configuration can potentially scale to 4,000 desktops.

To cover a variety of customer use cases, we tested both roaming and mandatory profile types. Roaming profiles require more I/O than mandatory profiles. Our tests demonstrated that the All-Flash FAS8060 storage system could easily handle both profile scenarios without issues.

The information in this section provides data points that customers can reference in designing their own implementations. These validation results are an example of what is possible under the specific environment conditions outlined here; they do not represent a full characterization of XenDesktop with VMware vSphere.

Pooled Desktops

The machine type defines the type of hosting infrastructure used for desktops and the level of control that users have over their desktop environment, which determines the usage scenarios for which the desktops are best suited. When deciding which machine type to use, consider the tasks that users will perform with their desktops and the devices to which the desktops will be delivered. The type and amount of infrastructure available to host each desktop are also important considerations.

Pooled machines provide desktops that are allocated to users on a per-session, first-come, first-served basis. Pooled-random machines are assigned to users arbitrarily at each login and are returned to the pool when the users log out; machines returned to the pool are available for other users to connect to. Alternatively, with pooled-static machines, users are assigned a specific machine from the pool when they first log in to XenDesktop and are connected to the same machine for all subsequent sessions. This allows the users of pooled-static machines to be associated with specific VMs, which is a licensing requirement for some applications. Pooled desktops are freshly created from the master VM when users log on, although profile management can be used to apply users' personal settings to their desktops and applications. Any changes that users make to their desktops are stored for the duration of the session but are discarded when users log out. Maintaining a single master VM in the data center dramatically reduces the time and effort required to update and upgrade users' desktops.

Pooled desktops are the right selection if your users:

• Are task workers who require standardized desktops, such as call-center operators and retail workers
• Use shared workstations; for example, students and faculty in educational institutions
• Do not need to, or are not permitted to, install applications on their desktops

Pooled desktops are also appropriate if you want to:

• Optimize hardware usage by providing only the number of desktops that are required at any one time rather than assigning each user a specific desktop
• Maintain control over desktops and increase security by preventing users from making permanent changes
• Minimize desktop management costs by providing a locked-down standardized environment for your users

Profile Management

A user-profile solution integrated with XenDesktop overcomes many of the challenges associated with Windows local, mandatory, and roaming profiles through proper policy configuration. Citrix Profile Management improves the user experience by capturing user-based changes to the environment and by improving user login and logout speeds.

A Windows user profile is a collection of folders, files, registry settings, and configuration settings that define the environment for a user who logs on with a particular user account. These settings might be customizable by the user, depending on the administrative configuration. Settings that can be customized include the following examples:

• Desktop settings, such as wallpaper and screen saver
• Shortcuts and Start menu settings
• Internet Explorer favorites and home page
• Microsoft Outlook® signature
• Printers

Some user settings and data can be redirected by means of folder redirection. If folder redirection is not used, however, these settings are stored in the user profile.

The first stage in planning a profile management deployment is to decide on a set of policy settings that together form a suitable configuration for your environment and users. The automatic configuration feature simplifies some of this decision making for XenDesktop deployments. The user profile management interfaces establish policies for this reference architecture's HSD and VDI users (for testing purposes). Basic profile management policy settings are documented in Profile Management policy settings, located on the Citrix Product Documentation website.

Mandatory profiles are read only, so a single mandatory profile can be used for large groups of users. A mandatory profile is copied from a CIFS share to the VM during login. During logout, any changes to the profile are not saved; instead, the local copy of the mandatory profile is reset to its initial state at the next login. Therefore, storage requirements are minimal: a single mandatory profile is kept on the file servers instead of thousands of roaming profiles. Mandatory profiles can be used only on kiosk-like systems.

In this reference architecture, we tested both a mandatory profile case and a roaming profile case. Figure 30 shows the decision points.

Figure 30) Decision points of nonpersistent pooled desktops and profile types.


Test Results Overview

Table 22 lists the high-level results that were achieved during the reference architecture testing of 2,000 users on a single All-Flash FAS8060 controller.

Table 22) Test results overview.

Boot storm test (VMware vCenter power-on operations, VMs registered in XenDesktop):
  Time to complete: 8 min, 13 sec. Peak/average IOPS: 9,516/7,463. Peak/average throughput: 116.00/73.00 MB/sec. Peak/average storage latency: 0.426/0.406 ms. Peak/average CPU: 10%/8%.

Boot storm test (XenDesktop power-on operations):
  Time to complete: 25 min, 8 sec. Peak/average IOPS: 7,929/3,614. Peak/average throughput: 88.50/54.60 MB/sec. Peak/average storage latency: 0.508/0.448 ms. Peak/average CPU: 8.9%/4.8%.

Initial login test with mandatory profile:
  Time to complete: 30 min (configured in Login VSI). Peak/average IOPS: 27,280/18,516. Peak/average throughput: 422/267 MB/sec. Peak/average storage latency: 0.678/0.548 ms. Peak/average CPU: 79%/51%.

Initial login test with roaming profiles:
  Time to complete: 30 min (configured in Login VSI). Peak/average IOPS: 48,181/27,855. Peak/average throughput: 524/303 MB/sec. Peak/average storage latency: 1.479/0.813 ms. Peak/average CPU: 88%/68%.

Steady-state test with mandatory profile:
  Time to complete: 30 min. Peak/average IOPS: 19,509/14,859. Peak/average throughput: 344/265 MB/sec. Peak/average storage latency: 0.618/0.544 ms. Peak/average CPU: 55%/48%.

Steady-state test with roaming profiles:
  Time to complete: 30 min. Peak/average IOPS: 29,266/25,035. Peak/average throughput: 752/674 MB/sec. Peak/average storage latency: 0.365/0.315 ms. Peak/average CPU: 67%/60%.

Acknowledgements

Chris Gebhardt, NetApp Senior TME, End User Computing System

Brian Casper, NetApp Senior TME, End User Computing System

Abhinav Joshi, NetApp Senior Product Manager, Desktop Virtualization

References

Citrix References

How to Optimize XenDesktop Machines

XenDesktop Modular Reference Architecture

XenDesktop Design Handbook

Other articles of note:

Hosted VM Based Resource Allocation


Memory and Storage for Provisioning Server

XenDesktop Scalability

XenDesktop DB Sizing Best Practices

NetApp References

TR-3795: Deployment Guide for XenDesktop 3.0 and VMware ESX Server on NetApp

TR-3505: NetApp Deduplication for FAS, Deployment and Implementation Guide

TR-3705: NetApp and VMware View Solution Guide

TR-3732: Citrix XenServer and NetApp Storage Best Practices

TR-4068: VMware vSphere 5 on NetApp Data ONTAP 8.1 Operating in Cluster-Mode

TR-4342: NetApp All-Flash FAS Solution for Persistent and Nonpersistent Desktops with Citrix XenDesktop and XenApp

VMware References

Sysprep File Locations and Versions

vSphere Virtual Machine Administration Guide


Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.

Trademark Information

NetApp, the NetApp logo, Go Further, Faster, AltaVault, ASUP, AutoSupport, Campaign Express, Cloud ONTAP, Clustered Data ONTAP, Customer Fitness, Data ONTAP, DataMotion, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexArray, FlexCache, FlexClone, FlexPod, FlexScale, FlexShare, FlexVol, FPolicy, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore, NetApp Insight, OnCommand, ONTAP, ONTAPI, RAID DP, RAID-TEC, SANtricity, SecureShare, Simplicity, Simulate ONTAP, SnapCenter, Snap Creator, SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore, Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, Unbound Cloud, WAFL and other names are trademarks or registered trademarks of NetApp Inc., in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. A current list of NetApp trademarks is available on the Web at http://www.netapp.com/us/legal/netapptmlist.aspx. TR-4138

Cisco and the Cisco logo are trademarks of Cisco in the U.S. and other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. TR-4138-0415

Copyright Information

Copyright © 1994–2015 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).