TRANSCRIPT
Eric Burgener, VP, Product Management
Alternative Approaches to Meeting VDI Storage Performance Requirements
July 2012
Agenda
• The storage challenge in VDI environments
• Profiling VDI workloads
• Focus on SSD
• An alternative approach: the storage hypervisor
• Customer case studies
Hidden Storage Costs of Virtualization: The VM I/O Blender

Many virtual machines sharing one host blend their I/O streams, causing:

❶ Poor performance
• Very random, write-intensive workload
• Spinning disks generate fewer IOPS
• Storage provisioning trade-offs

❷ Poor capacity utilization
• Over-provisioning to ensure performance
• Performance trade-offs

❸ Complex management
• Requires storage expertise
• Imposes SLA limitations
• Limits granularity of storage operations
The VM I/O blender can decrease storage performance by 30% - 50%.
VDI Environments Are Even Worse

• Windows desktops generate a lot of small block writes
• IOPS vs throughput needs
• Even more write-intensive due to many more VMs/host
• Much wider variability between peak and average IOPS
• Boot, login, application, and logout storms
• Additional storage provisioning and capacity consumption issues
As If It’s Not Already Hard Enough…

• Thick VMDKs / Fixed VHDs (fully provisioned): HIGH PERFORMANCE, but slow provisioning and poor space utilization
• Thin VMDKs / Dynamic VHDs (thin provisioned): SPACE-EFFICIENT and RAPID PROVISIONING, but poor performance
• Linked Clones / Differencing VHDs (writable clones): RAPID PROVISIONING and SPACE-EFFICIENT, but poor performance

Hypervisor storage options force suboptimal choices
Thick, Thin, and Snapshot Performance
Sizing VDI Storage Configurations: The Basics

1. PERFORMANCE (latency, IOPS)
• Steady state I/O
• Peak I/O
• Read/write ratios
• Sequential vs random I/O

2. AVAILABILITY (RAID)
• RAID reduces usable capacity
• RAID increases “actual” IOPS
• Appropriate RAID levels

3. CAPACITY
• Logical virtual disk capacities
• Snapshot/clone creation/usage
• Secondary storage considerations
• Capacity optimization technology
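These three factors interact: the read/write ratio and the RAID level inflate the IOPS the back end must actually deliver. A minimal sizing sketch, with illustrative numbers (the write penalty, per-drive IOPS, and workload figures are assumptions for the example, not vendor data):

```python
import math

# Rough VDI storage sizing sketch -- illustrative numbers only.

def backend_iops(front_end_iops: float, write_fraction: float,
                 raid_write_penalty: int) -> float:
    """Front-end IOPS -> back-end ("actual") IOPS after the RAID write
    penalty (e.g. RAID 5 turns each host write into 4 back-end I/Os)."""
    reads = front_end_iops * (1.0 - write_fraction)
    writes = front_end_iops * write_fraction
    return reads + writes * raid_write_penalty

def drives_needed(front_end_iops: float, write_fraction: float,
                  raid_write_penalty: int, iops_per_drive: float) -> int:
    """Spindle count needed to satisfy the back-end IOPS load."""
    return math.ceil(backend_iops(front_end_iops, write_fraction,
                                  raid_write_penalty) / iops_per_drive)

# 1000 desktops bursting at 30 IOPS each, 90% writes, RAID 5,
# 130 IOPS per 10K SAS spindle:
print(drives_needed(1000 * 30, 0.90, 4, 130))   # 854 drives
```

The write penalty dominates: the same 30,000 front-end IOPS at 90% writes becomes 111,000 back-end IOPS on RAID 5, which is why write-heavy VDI workloads need so many spindles.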
Single Image Management

How can we take advantage of many common images?

vSphere: parent VMs, snapshots, replicas, linked clones
Parent VM → Snapshot → Replica → Linked Clones
• Reads from replica
• Changes to delta disks
• Space efficient
• Poor performance

Compose, re-compose, and refresh workflows
Reads vs Writes in VDI Environments

Understand read/write ratios for steady state and burst scenarios

VDI is VERY write intensive (80%+ writes)
• Read caches do not help here
• Logs or write-back caches help

VMware recommendations, May 2012:
• Steady state XP desktops: 7 - 15 IOPS
• Steady state Windows 7 desktops: 15 - 30 IOPS
• Burst IOPS: 30 - 300 IOPS

READ INTENSIVE: boot, login, and application storms, AV scans
WRITE INTENSIVE: steady state VDI IOPS, logout storms, backups*
GOLDEN MASTERS: read only, a great place to use “fast” storage

* Depending on how backups are done
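Using the per-desktop figures above, aggregate pool requirements are simple arithmetic; the desktop count and write fraction in this sketch are assumptions chosen for illustration:

```python
# Back-of-the-envelope pool IOPS using the per-desktop ranges quoted
# above (VMware recommendations, May 2012). The 500-desktop pool and
# 80% write fraction are assumed values for the example.

def pool_iops(desktops: int, per_desktop_iops: float,
              write_fraction: float = 0.8) -> dict:
    """Split a pool's aggregate IOPS into reads and writes."""
    total = desktops * per_desktop_iops
    return {"total": total,
            "reads": total * (1.0 - write_fraction),
            "writes": total * write_fraction}

# 500 Windows 7 desktops, steady state 20 IOPS each, 80% writes:
print(pool_iops(500, 20))      # 10,000 IOPS total, 8,000 of them writes

# The same pool in a 100 IOPS/desktop burst scenario:
print(pool_iops(500, 100)["total"])   # 50,000 IOPS
```

The burst-to-steady-state ratio (here 5x) is why sizing only for steady state leads to painful boot and login storms.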
11
What Are My Options?
Adding spindles adds
IOPS
Tends to waste storage
capacity
Drives up energy, backup
costs
July 2012
BUY MORE STORAGE
Add to host or SAN
Promises tremendously
lower I/O latencies
Easy to add
Focus on $/IOPS
SOLID STATE DISK
Add higher performance
drives (if available)
Upgrade to a higher
performance array
Increased storage
complexity
BUY FASTER STORAGE
Focus On SSD

• Extremely high read performance with very low power consumption
• Generally deployed as a cache, where you’ll need 5% - 10% of total back-end capacity
• Deploy in host or in SAN; deployment option may limit HA support
• 3 classes of SSD: SLC, enterprise MLC, MLC
• SSD is expensive ($60-$65/GB), so you’ll want to deploy it efficiently
Understanding SSD Performance

• 100% read max IOPS: 115,000
• 100% write max IOPS: 70,000
• 100% random read max IOPS: 50,000
• 100% random write max IOPS: 32,000
What They Don’t Tell You About SSD

• The VDI storage performance problem is mostly about writes
• SSD is mostly about read performance, but there are VDI issues where read performance helps
• Write performance is not predictable; it can be MUCH slower than HDDs for certain I/Os
• Amdahl’s Law problem: SSD won’t deliver its promised performance; it just removes storage as the bottleneck
• Using SSD efficiently is mostly about the software it’s packaged with
• Sizing is performance, availability AND capacity
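The Amdahl’s Law point can be made concrete: the fraction of response time that storage does NOT account for caps the end-to-end speed-up, no matter how fast the SSD. The fractions in this sketch are illustrative, not measured:

```python
# Amdahl's Law applied to the SSD claim above: even if SSD makes the
# storage portion of a workload much faster, the end-to-end speed-up
# is capped by time spent outside storage (CPU, network, hypervisor).

def overall_speedup(storage_fraction: float, storage_speedup: float) -> float:
    """Amdahl's Law: speedup = 1 / ((1 - f) + f / s)."""
    return 1.0 / ((1.0 - storage_fraction) + storage_fraction / storage_speedup)

# If 60% of response time is storage and SSD speeds storage up 20x,
# the user only sees about a 2.3x improvement:
print(round(overall_speedup(0.60, 20.0), 2))   # 2.33
```

This is why “removes storage as the bottleneck” is the honest framing: past that point, more storage speed buys almost nothing.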
Good Places To Use SSD

• Cache
  - In host: lowest latencies, but doesn’t support failover
  - In SAN: still good performance, and CAN support failover
• Golden masters: where high read performance is needed for various “storms”
• Tier 0: to primarily boost read performance, if you don’t use SSD as a cache

Keep write performance trade-offs in mind when deploying SSD
The Storage Hypervisor Concept

SERVER HYPERVISOR: virtualizes server resources, increasing server resource utilization and improving flexibility

STORAGE HYPERVISOR: virtualizes storage resources, increasing storage utilization and improving flexibility

Performance, capacity, and management implications
Introduction of a Dedicated Write Log Per Host

In the hypervisor, each host gets a dedicated write log: writes are optimized and acknowledged from the log, reads are optimized, and data is de-staged asynchronously to tiered storage (Tier 1 … Tier n).

• The log turns random writes into a sequential stream; storage devices can perform up to 10x faster
• De-staging allows data to be laid out for optimum read performance; minimizes fragmentation issues
• Requires no additional hardware to achieve large performance gains; the more write-intensive the workload, the better the speed-up
• Excellent recovery model for shared storage environments
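The write-log idea above can be sketched as a toy model (the class and method names here are hypothetical illustrations, not Virsto’s implementation):

```python
# Toy sketch of a per-host write log with asynchronous de-staging.
# Random writes are appended sequentially to a log and acknowledged
# immediately; a background pass later lays the data out in address
# order ("de-staging") for good read locality. All names hypothetical.

class WriteLog:
    def __init__(self):
        self.log = []        # sequential append-only log: (address, data)
        self.backing = {}    # de-staged, address-ordered backing store

    def write(self, address: int, data: bytes) -> None:
        # Append sequentially regardless of the logical address --
        # this is what turns a random write stream into a sequential one.
        self.log.append((address, data))
        # <-- the VM would be acknowledged here, before de-staging

    def destage(self) -> None:
        # Asynchronously apply logged writes to the backing store,
        # laid out in address order to minimize fragmentation.
        for address, data in sorted(self.log):
            self.backing[address] = data
        self.log.clear()

    def read(self, address: int) -> bytes:
        # Recent writes may still live only in the log; check it first.
        for addr, data in reversed(self.log):
            if addr == address:
                return data
        return self.backing.get(address, b"")

log = WriteLog()
for addr in (907, 12, 455):       # a "random" write pattern
    log.write(addr, b"x")
log.destage()
print(sorted(log.backing))        # [12, 455, 907]
```

The key property is that the device only ever sees sequential appends on the write path, which is where spinning disks (and, as later slides note, SSDs) perform best.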
Log De-Couples High Latency Storage Operations

Each host’s writes land in its write log on shared storage, separate from the storage pool. As a result, thin provisioning, zero-impact snapshots, high performance clones, and instant provisioning no longer impact VM performance.
The Virsto Storage Hypervisor

• Fundamentally changes the way hypervisors handle storage I/O
• Improves performance of existing storage by up to 10x
• Thin provisions ALL storage with NO performance degradation
• Reduces storage capacity consumption by up to 90%
• Enables almost instant provisioning of high performance storage
• Reduces storage provisioning times by up to 99%
• Allows VM-centric storage management on top of block-based storage
• Enables safe provisioning and de-provisioning of VMs by anyone
Virsto Architecture

• Integrates log architecture transparently into the hypervisor
• Speeds ALL writes ALL the time
• Read performance speed-ups: storage tiering, optimized layouts
• Instant provisioning of space-efficient, high performance storage
• Scalable snapshots open up significant new use cases
• Software-only solution that requires NO new hardware

On each server host, the Virsto VSA sends sequential I/O through the Virsto vLog, with optimized de-staging into Virsto vSpace on primary block storage (RAID), replacing the slow, random I/O the hypervisor would otherwise send.
Multi Node Architecture For Scalability

Hosts 1 through N each run a Virsto VSA writing sequential I/O to a per-host Virsto vLog; all hosts share a Virsto vSpace spanning block storage capacity (RAID) on multiple different arrays.
Integrated Virsto Management

• Install and configure through the Virsto Console
• Provision Virsto ONCE up front
• Uses standard native workflows (vSphere, Hyper-V), transparently using Virsto storage
• Higher performance, faster provisioning, lower capacity consumption, cluster-aware
• Works with native tools, so minimal training
Virsto And SSD

• Virsto achieves 10x performance speed-ups WITHOUT SSD, with what you already own
• But Virsto logs and storage tier 0 are great places to use SSD
• Easily uses 50% less SSD than caching approaches to get comparable speed-ups:
  - Logs are only 10GB in size per host
  - We make random writes perform 2x+ faster on most SSD
  - Very small tier 0 to get read performance (for golden masters, certain VMs)
• If you want to use SSD, you spend a lot less money to implement it
Proof Point: Higher Education

Virsto for vSphere, December 2011 results:
• Baseline environment (native VMDKs): 341 IOPS
• Performance with Virsto (Virsto vDisks): 3,318 IOPS
• 10x more IOPS, 24% lower latency, 9x CPU cycle reduction
Proof Point: State Government

• Native VMDKs: 165 IOPS
• Virsto vDisks: 2,926 IOPS
• 18x more IOPS, 1758% better throughput, 94% lower response time
Proof Point: Desktop Density

[Chart: weighted response time (VSIIndex_avg) vs VDI session count, Virsto vDisks vs native differencing VHDs]

With Virsto, each host supports over 2x the number of VDI sessions, assuming the same storage configuration.
Case Study 1: Manufacturing

• 1200 Windows 7 desktops
• Common profile:
  - Steady state: 12 IOPS
  - Read/write ratio: 10/90
  - Peak load: 30 IOPS
  - 25GB allocated/desktop
• Need vMotion support now, HA as a possible future
• Windows updates 4/year
• Already own an EMC VNX: 40U enclosure w/4 trays, 10K rpm 900GB SAS, 100 drives = 90TB

REQUIREMENTS
• Would like to maximize desktop density to minimize host count; target is 125-150 desktops/host
• Will be using vSphere 5.1
• Spindle minimization could accommodate other projects
• Open to using SSD in VNX (400GB EFDs)
• Asked about phasing to minimize peak load requirements
• Asked about VFCache usage
Comparing Options Without SSD

IOPS
• w/o Virsto: needed 36,000 IOPS; delivered 36,010 IOPS; 277 drives (130 IOPS/drive)
• w/ Virsto: needed 36,000 IOPS; delivered 39,000 IOPS; 30 drives (1,300 IOPS/drive)
• Virsto savings: 247 drives

Capacity
• w/o Virsto: 249TB raw, 206.7TB w/RAID 5 (34TB needed)
• w/ Virsto: 27TB raw, 22.4TB w/RAID 5; thin provisioned, easily looks like 80TB+
• Virsto savings: 222TB raw, 184.3TB w/RAID 5

Provisioning time
• w/o Virsto: 83 hrs per compose, 1 min VM creation (100MB/sec network)
• w/ Virsto: 17 hrs per compose, 1 min VM creation
• Virsto savings: 66 hrs per compose operation; 4 x 66 = 264 hrs/yr

Incremental cost (list)
• w/o Virsto: $309,750 (177 add’l drives) + VNX 5700 upgrade
• w/ Virsto: $0; 70 drives freed up
• Virsto savings: $309,750 + savings on other projects
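The drive counts in the table can be checked from the per-drive figures it quotes. Note that the 30-drive figure with Virsto appears to be capacity-driven (30 x 900GB = 27TB raw) rather than IOPS-driven; that interpretation is ours, not stated on the slide:

```python
import math

# Reproduce the case-study drive-count arithmetic from the table above.
needed_iops = 1200 * 30                          # 36,000 peak IOPS

without_virsto = math.ceil(needed_iops / 130)    # 130 IOPS/drive -> 277 drives

iops_drives = math.ceil(needed_iops / 1300)      # 1,300 IOPS/drive behind the log
capacity_drives = math.ceil(27_000 / 900)        # 27TB raw on 900GB drives -> 30
with_virsto = max(iops_drives, capacity_drives)  # capacity governs here

print(without_virsto, with_virsto, without_virsto - with_virsto)  # 277 30 247
```

This is the earlier sizing point in action: with Virsto, the configuration stops being IOPS-bound and capacity becomes the binding constraint.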
Virsto Single Image Management

vSnap of golden master: 25GB logical, 12GB (Windows)
vClones 0 through 999: each stabilizes at ~2GB*
Actual space consumed with Virsto: 12GB + 2TB ≈ 2TB

* Based on 8 different LoginVSI runs with 1000-2000 desktops

• With EZT VMDKs, native consumption was 25GB x 1000 = 25TB
• With thin VMDKs, native consumption would be 25GB + (14GB x 1000) ≈ 14TB, and would require 5x as many spindles for IOPS; not workable (too many drives/arrays, etc.)
• With View Composer linked clones, space consumption would be the same as Virsto, but you’d need 5x the spindle count

Virsto provides better-than-EZT-VMDK performance with the space savings of linked clones
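The space figures on this slide reduce to simple arithmetic; the ~2GB per-clone delta and 14GB thin-VMDK footprint are the slide’s own numbers:

```python
# Space-consumption arithmetic for 1000 desktops at 25GB allocated
# each, a 12GB golden master, and ~2GB stable delta per clone
# (figures taken from the slide above).

clones, allocated_gb, master_gb, delta_gb = 1000, 25, 12, 2

ezt_tb = clones * allocated_gb / 1000                 # thick (EZT) VMDKs
thin_tb = (allocated_gb + 14 * clones) / 1000         # thin VMDKs, ~14GB used each
virsto_tb = (master_gb + delta_gb * clones) / 1000    # one master + per-clone deltas

print(ezt_tb, round(thin_tb), round(virsto_tb))       # 25.0 14 2

savings_vs_thick = 1 - virsto_tb / ezt_tb             # ~92% less than thick
savings_vs_thin = 1 - virsto_tb / thin_tb             # ~86% less than thin
```

The single shared master plus small per-clone deltas is what makes the consumption scale with changed data rather than with allocated capacity.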
Relative to the native options above, Virsto’s space consumption is 92% better than thick VMDKs, and 86% better even than thin VMDKs.
Assumptions

• EMC VNX SSD performance: 12K read IOPS, 3K write IOPS per SSD; with 2 SPs, can max out a tray w/o limiting performance
• VM creation time depends on busy-ness of vCenter Server; observed 30 sec - 1 min baseline across both Virsto and non-Virsto configs; 5 min VM+storage creation time w/o Virsto, 1 min w/Virsto
• Customer had chosen thick VMDKs for performance/spindle minimization; provisioning comparisons were EZT VMDKs against Virsto vDisks (which outperform EZTs handily)
• Customer RAID 5 overhead was 17% (5+1 RAID)
• Pricing for EMC VNX 5300: 200GB EFD $12,950; 900GB SAS 10K RPM $1,750
Case Study 1: Other Observations

• Virsto vClones do not have the 8-host limit; View Composer linked clones in VMFS datastores are limited to 8 hosts
• Performance + capacity considerations limit applicability of SSD to this environment; using RAID 5, minimum capacity required is 33.9TB
• Customer could not have met the storage requirement with a VNX 5300; would have to upgrade to a VNX 5700 or buy extra cabinets
• Thin provisioned Virsto vDisks provide a significant capacity cushion
• Virsto vClones expected to save 66 hours provisioning time for high performance storage on each re-compose; that’s up to 264 hours per year clock time for provisioning (4 Windows updates)
Case Study 2: Financial Services

• 1000 Windows 7 desktops
• Common profile:
  - Steady state: 20 IOPS
  - Read/write ratio: 10/90
  - Peak load: 60 IOPS
  - 30GB allocated/desktop
• Need vMotion support now, HA as a possible future
• Windows updates 6/year
• Would be buying new SAN storage

REQUIREMENTS
• Would like to maximize desktop density to minimize host count; target is 125-150 desktops/host
• Will be using vSphere 5.1
• Spindle minimization could accommodate other projects
• Wants to use a SAN and open to using SSDs
• Asked about phasing to minimize peak load requirements
Comparing Options

IOPS
• w/o Virsto: needed 60,000 IOPS; 18 x 200GB EFD (54K IOPS) + 62 x 600GB 10K SAS (8,060 IOPS); delivered 62,060 IOPS
• w/ Virsto: needed 60,000 IOPS; 8 x 200GB EFD (48K IOPS) + 16 x 600GB 10K SAS (20,800 IOPS); delivered 68,800 IOPS
• Virsto savings: 10 x 200GB EFD, 46 x 600GB 10K SAS

Capacity
• w/o Virsto: 37.2TB raw, 30.9TB w/RAID 5 (30TB needed)
• w/ Virsto: 10.4TB raw, 8.6TB w/RAID 5; easily looks like 35TB+
• Virsto savings: 26.8TB raw, 22.3TB w/RAID 5

Provisioning time
• w/o Virsto: 100 hrs/compose, 1 min VM creation (100MB/sec network)
• w/ Virsto: 17 hrs/compose, 1 min VM creation
• Virsto savings: 83 hrs per compose operation; 6 x 83 = 498 hrs/yr

Cost (list)
• w/o Virsto: $233,100 for EFDs + $133,000 for array/SAS = $366,100
• w/ Virsto: $103,600 for EFDs + $64,000 for array/SAS = $167,600
• Virsto savings: $198,500
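The cost row follows from the list prices on the assumptions slide (EFD $12,950; 600GB SAS $1,500; DS5000 frame $40K); a quick arithmetic check:

```python
# List-price arithmetic behind the cost row above, using the unit
# prices from the assumptions slide.

EFD, SAS, FRAME = 12_950, 1_500, 40_000

without = 18 * EFD + 62 * SAS + FRAME      # $233,100 EFDs + $133,000 array/SAS
with_virsto = 8 * EFD + 16 * SAS + FRAME   # $103,600 EFDs + $64,000 array/SAS

print(without, with_virsto, without - with_virsto)   # 366100 167600 198500
```

Most of the savings come from the EFD count: 10 fewer EFDs at $12,950 each is $129,500 of the $198,500 total.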
Assumptions

• IBM DS5000 SSD performance: 12K read IOPS, 3K write IOPS per SSD; with 2 SPs, can max out a tray w/o limiting performance
• VM creation time depends on busy-ness of vCenter Server; observed 30 sec - 1 min baseline across both Virsto and non-Virsto configs; 6 min VM+storage creation time w/o Virsto, 1 min w/Virsto
• Customer had chosen thick VMDKs for performance/spindle minimization; provisioning comparisons were EZT VMDKs against Virsto vDisks (which outperform EZTs handily)
• Customer RAID 5 overhead was 17% (5+1 RAID)
• Pricing for IBM DS5000: 200GB EFD $12,950; 600GB SAS 10K RPM $1,500; DS5000 frame $40K
Case Study 2: Other Observations

• Virsto makes SSD perform twice as fast: it makes all writes sequential, so you need 50% less SSD
• 10GB log in RAID 1 across 8 hosts = 160GB for logs, leaving 1.4TB available for Fast Cache/tier 0 use
• Virsto cuts required raw storage capacity by 78%, and can accommodate an additional 300+ desktops w/o more storage hardware purchases
• Space savings conservative at only 70%; generally we see 80% - 90% space savings over the long term
• Virsto vClones expected to save 83 hours provisioning time for high performance storage on each re-compose; that’s 498 hours per year across 6 Windows updates
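The log-capacity figure above is easy to verify; the 8 x 200GB EFD pool size is our assumption based on the configuration table, not stated on this slide:

```python
# Log capacity arithmetic from the observation above: a 10GB log per
# host, mirrored (RAID 1 = 2 copies), across 8 hosts.

hosts, log_gb, raid1_copies = 8, 10, 2
log_capacity_gb = hosts * log_gb * raid1_copies
print(log_capacity_gb)                    # 160 GB for logs

# On an assumed 1.6TB pool of 8 x 200GB EFDs, that leaves ~1.4TB
# of SSD available for Fast Cache / tier 0 use:
efd_pool_gb = 8 * 200
print(efd_pool_gb - log_capacity_gb)      # 1440 GB
```

The logs themselves consume only a sliver of the SSD, which is why the bulk of the flash remains free for read-oriented tier 0 duty.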
Demonstrated Customer Value