OSS 2013 - Murat Karslioglu - Delivering SDS Simplicity and Extreme Performance
DESCRIPTION
A real-world SDS implementation for getting the most out of limited hardware, presented at Open Storage Summit 2013 by Murat Karslioglu
TRANSCRIPT
Delivering SDS simplicity and extreme performance
A real-world SDS implementation: getting the most out of limited hardware
Murat Karslioglu, Director of Storage Systems – Nexenta Systems
Santa Clara, CA, USA – October 2013
Agenda
• Key Takeaways
• Introduction
• Performance Results
• Conclusion
• Q & A
Key Takeaways
• VDI as a case study of SDS delivering multi-tenancy and on-demand provisioning
• Remove storage from the VDI admin's plate
• Get higher VDI density and better performance out of limited hardware resources
Consolidate. Simplify. Virtualize. Monitor.
• We picked an affordable branch office server:
• Limited resources, NOT a great fit for VDI
• Intel® Xeon® E5-2400 series 6-core processor
• 48 GB of RAM
• Three 2.5" HDDs (no SSDs)
Challenges
HIGH STORAGE COST
VDI (storage) PERFORMANCE IS BAD
LIMITED RESOURCES
TOO COMPLEX
BAD END USER EXPERIENCE
FAILED POCs
The storage guessing game
Hypervisor
Management Server
Connection Broker
Connection Agent
Connection Agent
Connection Agent
Connection Agent
Connection Agent
Physical Servers Shared Storage
How does NV4V remove the storage guessing game?
• In-depth integration between NexentaVSA and VMware Horizon View, vSphere, and vCenter
• New features to optimize storage
• A user-friendly application to simplify and automate
• NAS VAAI integration
• A real-world, concrete SDS implementation
Integrate VDI and Storage
NV4V as software-defined-storage
Deploy → Measure → Configure
Step-function increments to meet performance requirements (bandwidth, latency, and IOPS)
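The Deploy → Measure → Configure cycle can be sketched as a calibration loop that grows the desktop pool in step-function increments until a performance requirement is violated. This is an illustrative sketch only; the function names, thresholds, and measurement model below are assumptions, not NV4V's actual logic or API.

```python
# Illustrative Deploy -> Measure -> Configure calibration loop.
# All names, thresholds, and the measurement model are assumptions
# for illustration; NV4V's real logic is not described in this deck.

def calibrate(measure, step=5, max_desktops=100,
              max_latency_ms=20.0, min_iops_per_desktop=15.0):
    """Grow the pool in step increments until a requirement
    (latency or IOPS per desktop) is violated; return the last
    pool size that passed."""
    best = 0
    n = step
    while n <= max_desktops:
        iops, latency_ms = measure(n)          # deploy n desktops, benchmark
        if latency_ms > max_latency_ms or iops / n < min_iops_per_desktop:
            break                               # requirement violated: stop
        best = n                                # this pool size passed
        n += step                               # next step increment
    return best

# Toy measurement model: total IOPS saturates, latency grows with load.
def fake_measure(n):
    iops = min(2400.0, n * 60.0)
    latency_ms = 2.0 + n * 0.32
    return iops, latency_ms

print(calibrate(fake_measure))   # -> 55
```

With this toy model the loop settles at 55 desktops; the point is the control flow, not the numbers.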
NV4V High-level Architecture
NV4V VDI Deployment Overview
NexentaStor VSA
2. NV4V->vCenter: Provision VSA. (N/A with external NexentaStor)
ESXi Cluster
ESXi Host (VMKernel)
ESXi Host (VMKernel)
NFS network (ESXi port group)
1. NV4V->vCenter: Provision NFS network (N/A with external NexentaStor)
View Connection Server
Nexenta Agent
NV4V Management Server
VMware SDK
vCenter Server
(NV4V communicates with desktop Nexenta Agents for benchmark and calibration)
VMDKs on ESXi host disks
3. NV4V->vCenter: Create and attach VMDK datastores, Power on VSA. (N/A with external NexentaStor)
NFS Shares
zpools
4. NV4V->VSA: Create zpools and NFS shares. (Opt. with external NexentaStor)
VDI DesktopsVDI DesktopsVDI Desktops
NxAgent
NxAgent
NxAgent
VMDKs for VDI VMs
5. NV4V->View: Deploy desktop pool.
NV4V VDI Deployment Overview
Process Point of View
1. Create VMDK(s) for VSA syspool (resilver if mirrored) [Cluster T&E]
2. Create resource pool for VSA [Cluster T&E]
3. Clone NexentaStor VSA template [Cluster T&E]
4. Reconfigure VSA: set resources, reservations, and limits [Event]
5. Power on VSA [Cluster T&E]
6. Confirm VMware Tools on VSA [Activity, Event]
7. Assign DHCP address to VSA network interface [VSA]
8. Verify DHCP and reverse mapping [Event]
Process Point of View
9. Create VMDK datastores for data [Cluster T&E]
10. Attach datastores to the VSA, one by one [Cluster T&E]
11. Create a port group, or use an existing one, for the NFS network [Cluster T&E]
12. Configure the port group: set MTU to 9000 for the NFS network [Cluster T&E]
13. Configure ZFS tunables and reboot [Zpool History, Event]
14. Linked clones only: create and configure zpools; by default two zpools, Replica and Desktop; only one for an all-SSD desktop pool [Zpool History, Event]
15. Share NFS filesystems [Zpool History, Event]
16. Mount NFS datastores on all hypervisors [Cluster T&E]
Process Point of View
17. Start deployment through VMware View [Activity, Event]
18. Linked clones only: copy Replica image from snapshot to the NFS Replica filesystem [Cluster T&E]
19. Linked clones only: create VMs, storing linked clones in the NFS Desktop filesystem [Cluster T&E]
20. Full clones only: clone desktop image from template to the NFS Desktop filesystem, once per desktop [Activity, Cluster T&E]
21. Customize desktops [Activity, Event, VDI]
22. Finish when the target desktop pool size is reached [Cluster T&E]
23. Entitlements [Activity, Event]
NV4V VDI Deployment Overview
VSA VMDKs and NFS Shares
Improving performance with NV4V
Nexenta NV4V + Server + VMware View = Perfect Branch Office Solution

11x better end-user experience
• Tested with IOmeter (75% write)
3x higher density
• Tested with Login VSI

                     With NV4V      Local Disk
Medium Workload      55 Desktops*   18 Desktops
Heavy Workload       37 Desktops    12 Desktops

• Simplified deployment
• On-demand storage
• Monitoring
• Backup/Restore
• NAS VAAI
• Software RAID
• Inline compression
• Caching in memory (ARC)
• Other ZFS benefits

                        With NV4V     With NV4V     Local Disk
                        55 Desktops   18 Desktops   18 Desktops**
IOmeter Total IOPS      2343          2160          198
IOmeter IOPS/Desktop    42.6          120           11

* VSImax not reached; 55 desktops is due to a memory limitation on the Cisco UCS E-Series platform
** VSImax 18 with local disk
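The per-desktop figures in the IOmeter results follow directly from the totals; a quick check of the arithmetic:

```python
# Verify the IOPS/desktop figures from the IOmeter results above.
results = {
    "NV4V, 55 desktops": (2343, 55),
    "NV4V, 18 desktops": (2160, 18),
    "Local disk, 18 desktops": (198, 18),
}
for name, (total_iops, desktops) in results.items():
    print(f"{name}: {total_iops / desktops:.1f} IOPS/desktop")
```

At equal density (18 desktops), 2160 vs 198 total IOPS is roughly the 11x end-user-experience improvement the deck cites.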
Improving performance with NV4V
Speed up full-clone deployment: a world first, NV4V uses NAS VAAI so that ZFS can CoW-clone files, deploying persistent VM images much faster while also saving storage capacity.
5.4x faster to deploy full clones
• Comparison is for 24 full-clone desktops

                        w/ NAS VAAI     w/o NAS VAAI
Pure cloning time       2 min 36 sec    13 min 36 sec
Clone & customization   4 min 38 sec    18 min 10 sec
Total deployment        2 h 38 min      7 h 28 min
8.5x saving on storage capacity
• NAS VAAI utilizes enhanced deduplication*

                w/ NAS VAAI     w/o NAS VAAI
Used storage    48 GB           408 GB
Dedup ratio     22.82x          1x
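The capacity figures imply the stated 8.5x saving:

```python
# Check the capacity-saving arithmetic from the table above.
used_with_vaai_gb = 48
used_without_vaai_gb = 408
saving = used_without_vaai_gb / used_with_vaai_gb
print(f"{saving:.1f}x saving on storage capacity")   # -> 8.5x
```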
Conclusion
• Desktop pools running on a local HDD take longer to log in and start apps, causing increased CPU utilization.
• A single drive cannot handle more than 15 desktops efficiently; high random disk I/O causes CPU spikes, resulting in dropped or frozen sessions.
• Storage is the most important component of virtualization, and better storage can also reduce CPU utilization.
NV4V Benefit #1: SDS removes the storage guessing game from the admin's plate
NV4V Benefit #2: Inline compression reduces writes by up to 4x
NV4V Benefit #3: Striping two drives doubles disk performance
NV4V Benefit #4: NAS VAAI reduces full-clone deployment time and saves disk capacity
NV4V Benefit #5: Reduced disk I/O and increased storage performance reduce CPU utilization
NV4V Benefit #6: NV4V provides faster storage than the world's fastest SSDs
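The write reduction behind inline compression (Benefit #2) comes from how redundant desktop images are. A toy illustration using Python's zlib; ZFS itself uses lzjb/lz4, and the real ratio depends entirely on the data, so this shows only the effect, not NV4V's numbers:

```python
import zlib

# Toy stand-in for inline compression: VDI images contain highly
# redundant data (the same OS files across desktops), so compressing
# before writing cuts the bytes that reach disk. ZFS uses lzjb/lz4,
# not zlib; this only illustrates the principle.
block = b"Windows system file contents " * 200   # redundant data
compressed = zlib.compress(block)
ratio = len(block) / len(compressed)
print(f"compression ratio: {ratio:.1f}x")
```

Deliberately repetitive input compresses far better than 4x; real desktop workloads land lower, which is why the slide hedges with "up to".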
Login VSI Medium Workload
55 linked-clone Desktops starting medium workload on local disk
Before NV4V
CPU Utilization:
10 Desktops – 66%
15 Desktops – 81%
18 Desktops – 88%
20 Desktops – 90%
25 Desktops – 100%
>25 – Sessions dropped and desktops became unresponsive
Login VSI Medium Workload
55 linked-clone Desktops starting medium workload on NV4V
With NV4V
CPU Utilization:
10 Desktops – 45%
25 Desktops – 75%
30 Desktops – 78%
35 Desktops – 80%
40 Desktops – 81%
45 Desktops – 82%
50 Desktops – 84%
55 Desktops – 88%
Login VSI Medium Workload
55 linked-clone Desktops running medium workload on NV4V
With NV4V
Recommended VSImax: VSImax not reached*
Baseline = 2209
55 Desktops max
88% CPU utilization
Desktops are highly responsive
Login VSI Heavy Workload
50 linked-clone Desktops starting heavy workload on NV4V
With NV4V
CPU Utilization:
10 Desktops – 47%
25 Desktops – 76%
30 Desktops – 79%
37 Desktops – 85%
40 Desktops – 90%
45 Desktops – 92%
50 Desktops – 94%
Login VSI Heavy Workload
CPU utilization during a 1-hour IOmeter test (w/ NAS VAAI, full clones)
Threshold: < 90%
Average utilization running IOmeter is ~84%
DISK BENCHMARK
The SDS solution turned slow HDDs into storage with the speed of the fastest SSDs