VMware Education Services
VMware, Inc.
www.vmware.com/education
VMware vSphere: Optimize and Scale
Student Manual Volume 2
ESXi 5.0 and vCenter Server 5.0
VS5OS_LectGuideVol2.book Page 1 Monday, June 25, 2012 10:27 PM
Copyright/Trademark
Copyright 2012 VMware, Inc. All rights reserved. This manual and its accompanying materials are protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
The training material is provided as is, and all express or implied conditions, representations, and warranties, including any implied warranty of merchantability, fitness for a particular purpose or noninfringement, are disclaimed, even if VMware, Inc., has been advised of the possibility of such claims. This training material is designed to support an instructor-led training course and is intended to be used for reference purposes in conjunction with the instructor-led training course. The training material is not a standalone training tool. Use of the training material for self-study without class attendance is not recommended.
These materials and the computer programs to which they relate are the property of, and embody trade secrets and confidential information proprietary to, VMware, Inc., and may not be reproduced, copied, disclosed, transferred, adapted, or modified without the express written approval of VMware, Inc.
Course development: Carla Guerwitz, Mike Sutton, John Tuffin
Technical review: Jonathan Loux, Brian Watrous, Linus Bourque, Undeleeb Din
Technical editing: Jeffrey Gardiner
Production and publishing: Ruth Christian, Regina Aboud
VMware vSphere: Optimize and Scale
ESXi 5.0 and vCenter Server 5.0
Part Number EDU-EN-VSOS5-LECT2-STU
Student Manual Volume 2
Revision A
VMware vSphere: Optimize and Scale i
TABLE OF CONTENTS
MODULE 7  Storage Optimization  341
You Are Here  342
Importance  343
Module Lessons  344
Lesson 1: Storage Virtualization Concepts  345
Learner Objectives  346
Storage Performance Overview  347
Storage Protocol Performance  348
SAN Configuration  349
Storage Queues  351
Performance Impact of Queuing on the Storage Array  352
LUN Queue Depth  353
Network Storage: iSCSI and NFS  354
What Affects VMFS Performance?  355
SCSI Reservations  357
VMFS Versus RDMs  358
Virtual Disk Types  359
Review of Learner Objectives  360
Lesson 2: Monitoring Storage Activity  361
Learner Objectives  362
Applying Space Utilization Data to Manage Storage Resources  363
Disk Capacity Metrics  364
Monitoring Disk Throughput with vSphere Client  365
Monitoring Disk Throughput with resxtop  366
Disk Throughput Example  367
Monitoring Disk Latency with vSphere Client  369
Monitoring Disk Latency with resxtop  371
Monitoring Commands and Command Queuing  372
Disk Latency and Queuing Example  373
Monitoring Severely Overloaded Storage  374
Configuring Datastore Alarms  375
Analyzing Datastore Alarms  377
Lab 10 Introduction  378
Lab 10  379
Lab 10 Review  380
Review of Learner Objectives  381
Lesson 3: Command-Line Storage Management  382
Learner Objectives  383
Managing Storage with vMA  384
Examining LUNs  385
Managing Storage Paths  386
Managing NAS Storage  387
Managing iSCSI Storage  388
Masking LUNs  389
Managing PSA Plug-Ins  390
Migrating Virtual Machine Files to a Different Datastore  392
vmkfstools Overview  393
vmkfstools Commands  394
vmkfstools General Options  395
vmkfstools Command Syntax  396
vmkfstools File System Options  397
vmkfstools Virtual Disk Options  398
vmkfstools Virtual Disk Example  400
vscsiStats  401
Why Use vscsiStats?  402
Running vscsiStats  403
Lab 11  405
Review of Learner Objectives  406
Lesson 4: Troubleshooting Storage Performance Problems  407
Learner Objectives  408
Review: Basic Troubleshooting Flow for ESXi Hosts  409
Overloaded Storage  410
Causes of Overloaded Storage  411
Slow Storage  412
Factors Affecting Storage Response Time  413
Random Increase in I/O Latency on Shared Storage  414
Example 1: Bad Disk Throughput  416
Example 2: Virtual Machine Power On Is Slow  417
Monitoring Disk Latency with the vSphere Client  418
Monitoring Disk Latency With resxtop  419
Solving the Problem of Slow Virtual Machine Power On  420
Example 3: Logging In to Virtual Machines Is Slow  421
Monitoring Host CPU Usage  422
Monitoring Host Disk Usage  423
Monitoring Disk Throughput  424
Solving the Problem of Slow Virtual Machine Login  425
Resolving Storage Performance Problems  426
Checking Storage Hardware  427
Reducing the Need for Storage  428
Balancing the Load  429
Understanding Load Placed on Storage Devices  430
Storage Performance Best Practices  431
Review of Learner Objectives  432
Key Points  433
MODULE 8  CPU Optimization  435
You Are Here  436
Importance  437
Module Lessons  438
Lesson 1: CPU Virtualization Concepts  439
Learner Objectives  440
CPU Scheduler Overview  441
What Is a World?  442
CPU Scheduler Features  443
CPU Scheduler Features: Support for SMP VMs  444
CPU Scheduler Feature: Relaxed Co-Scheduling  446
CPU Scheduler Feature: Processor Topology/Cache Aware  447
CPU Scheduler Feature: NUMA-Aware  448
Wide-VM NUMA Support  450
Performance Impact with Wide-VM NUMA Support  451
What Affects CPU Performance?  452
Warning Sign: Ready Time  453
Review of Learner Objectives  454
Lesson 2: Monitoring CPU Activity  455
Learner Objectives  456
CPU Metrics to Monitor  457
Viewing CPU Metrics in the vSphere Client  458
CPU Performance Analysis Using resxtop  459
Using resxtop to View CPU Metrics per Virtual Machine  461
Using resxtop to View Single CPU Statistics  462
What Is Most Important to Monitor?  463
Lab 12 Introduction (1)  464
Lab 12 Introduction (2)  465
Lab 12  466
Lab 12 Review  467
Review of Learner Objectives  468
Lesson 3: Troubleshooting CPU Performance Problems  469
Learner Objectives  470
Review: Basic Troubleshooting Flow for ESXi Hosts  471
Resource Pool CPU Saturation  472
Host CPU Saturation  473
Causes of Host CPU Saturation  474
Resolving Host CPU Saturation  475
Reducing the Number of Virtual Machines on the Host  476
Increasing CPU Resources with DRS Clusters  477
Increasing Efficiency of a Virtual Machine's CPU Usage  478
Controlling Resources Using Resource Settings  480
When Ready Time Might Not Indicate a Problem  481
Example: Spotting CPU Overcommitment  482
Guest CPU Saturation  483
Using One vCPU in SMP Virtual Machine  484
Low Guest CPU Utilization  486
CPU Performance Best Practices  487
Lab 13  488
Review of Learner Objectives  489
Key Points  490
MODULE 9  Memory Optimization  491
You Are Here  492
Importance  493
Module Lessons  494
Lesson 1: Memory Virtualization Concepts  495
Learner Objectives  496
Virtual Memory Overview  497
Application and Guest OS Memory Management  498
Virtual Machine Memory Management  499
Memory Reclamation  500
Virtual Machine Memory-Reclamation Techniques  501
Guest Operating System Memory Terminology  502
Reclaiming Memory with Ballooning  503
Memory Compression  504
Host Cache  506
Reclaiming Memory with Host Swapping  507
Why Does the Hypervisor Reclaim Memory?  508
When to Reclaim Host Memory  509
Sliding Scale Mem.MinFreePct  511
Memory Reclamation Review  513
Memory Space Overhead  515
Review of Learner Objectives  516
Lesson 2: Monitoring Memory Activity  517
Learner Objectives  518
Monitoring Virtual Machine Memory Usage  519
Memory Usage Metrics Inside the Guest Operating System  520
Consumed Host Memory and Active Guest Memory  521
Monitoring Memory Usage Using resxtop/esxtop  522
Monitoring Host Swapping in the vSphere Client  524
Host Swapping Activity in resxtop/esxtop: Memory Screen  525
Host Swapping Activity in resxtop/esxtop: CPU Screen  526
Host Ballooning Activity in the vSphere Client  527
Host Ballooning Activity in resxtop  528
Lab 14 Introduction  529
Lab 14  530
Lab 14 Review  531
Review of Learner Objectives  532
Lesson 3: Troubleshooting Memory Performance Problems  533
Learner Objectives  534
Review: Basic Troubleshooting Flow for ESXi Hosts  535
Active Host-Level Swapping (1)  536
Active Host-Level Swapping (2)  537
Causes of Active Host-Level Swapping  538
Resolving Host-Level Swapping  539
Reducing Memory Overcommitment  540
Enabling Balloon Driver in Virtual Machines  542
Memory Hot Add  543
Memory Hot Add Procedure  544
Reducing a Virtual Machine's Memory Reservations  545
Dedicating Memory to Critical Virtual Machines  546
Guest Operating System Paging  547
Example: Ballooning Versus Swapping  549
High Guest Memory Demand  550
When Swapping Occurs Before Ballooning  551
Memory Performance Best Practices  552
Lab 15  553
Review of Learner Objectives  554
Key Points  555
MODULE 10  Virtual Machine and Cluster Optimization  557
You Are Here  558
Importance  559
Module Lessons  560
Lesson 1: Virtual Machine Optimization  561
Learner Objectives  562
Virtual Machine Performance Overview  563
Selecting the Right Guest Operating System  564
Timekeeping in the Guest Operating System  565
VMware Tools  567
Virtual Hardware Version 8  569
CPU Considerations  570
Using vNUMA  571
Memory Considerations  573
Storage Considerations  575
Network Considerations  577
Virtual Machine Power-On Requirements  578
Power-On Requirements: CPU and Memory Reservations  579
Power-On Requirements: Swap File Space  580
Power-On Requirements: Virtual SCSI HBA Selection  581
Virtual Machine Performance Best Practices  582
Review of Learner Objectives  583
Lesson 2: vSphere Cluster Optimization  584
Learner Objectives  585
Guidelines for Resource Allocation Settings  586
Resource Pool and vApp Guidelines  588
DRS Cluster: Configuration Guidelines  589
DRS Cluster: vMotion Guidelines  591
DRS Cluster: Cluster Setting Guidelines  593
vSphere HA Cluster: Admission Control Guidelines  594
Example of Calculating Slot Size  596
Applying Slot Size  597
Reserving a Percentage of Cluster Resources  598
Calculating Current Failover Capacity  599
Virtual Machine Unable to Power On  600
Advanced Options to Control Slot Size  602
Setting vSphere HA Advanced Parameters  603
Fewer Available Slots Shown Than Expected  604
Lab 16  605
Review of Learner Objectives  606
Key Points  607
MODULE 11  Host and Management Scalability  609
You Are Here  610
Importance  611
Module Lessons  612
Lesson 1: vCenter Linked Mode  613
Learner Objectives  614
vCenter Linked Mode  615
vCenter Linked Mode Architecture  617
Searching Across vCenter Server Instances  619
Basic Requirements for vCenter Linked Mode  621
Joining a Linked Mode Group  623
vCenter Service Monitoring: Linked Mode Groups  624
Resolving Role Conflicts  626
Isolating a vCenter Server Instance  627
Review of Learner Objectives  628
Lesson 2: vSphere DPM  629
Learner Objectives  630
vSphere DPM  631
How Does vSphere DPM Work?  633
vSphere DPM Operation  634
Power Management Comparison: vSphere DPM  636
Enabling vSphere DPM  637
vSphere DPM Host Options  639
Review of Learner Objectives  640
Lesson 3: Host Profiles  641
Learner Objectives  642
Host Configuration Overview  643
Host Profiles  644
Basic Workflow to Implement Host Profiles  645
Monitoring for Compliance  646
Applying Host Profiles  648
Customizing Hosts with Answer Files  649
Standardization Across Multiple vCenter Server Instances  650
Lab 17  651
Review of Learner Objectives  652
Lesson 4: vSphere PowerCLI  653
Learner Objectives  654
vSphere PowerCLI  655
vSphere PowerCLI Cmdlets  656
Windows PowerShell and vSphere PowerCLI  657
Advantages of Using vSphere PowerCLI  658
Common Tasks Performed with vSphere PowerCLI  659
Other Tasks Performed with vSphere PowerCLI  660
vSphere PowerCLI Objects  661
Returning Object Properties  662
Connecting and Disconnecting to an ESXi Host  663
Certificate Warnings  664
Types of vSphere PowerCLI Cmdlets  665
Using Basic vSphere PowerCLI Cmdlets  666
vSphere PowerCLI Snap-Ins  667
Lab 18  669
Review of Learner Objectives  670
Lesson 5: Image Builder  671
Learner Objectives  672
What Is an ESXi Image?  673
VMware Infrastructure Bundles  674
ESXi Image Deployment  675
What Is Image Builder?  676
Image Builder Architecture  677
Building an ESXi Image: Step 1  678
Building an ESXi Image: Step 2  680
Building an ESXi Image: Step 3  681
Using Image Builder to Build an Image: Step 4  682
Lab 19  683
Review of Learner Objectives  684
Lesson 6: Auto Deploy  685
Learner Objectives  686
What Is Auto Deploy?  687
Where Are the Configuration and State Information Stored?  688
Auto Deploy Architecture  689
Rules Engine Basics  690
Software Configuration  691
PXE Boot Infrastructure Setup  693
Initial Boot of an Autodeployed ESXi Host: Step 1  694
Initial Boot of an Autodeployed ESXi Host: Step 2  695
Initial Boot of an Autodeployed ESXi Host: Step 3  696
Initial Boot of an Autodeployed ESXi Host: Step 4  697
Initial Boot of an Autodeployed ESXi Host: Step 5  698
Subsequent Boot of an Autodeployed ESXi Host: Step 1  699
Subsequent Boot of an Autodeployed ESXi Host: Step 2  700
Subsequent Boot of an Autodeployed ESXi Host: Step 3  701
Subsequent Boot of an Autodeployed ESXi Host: Step 4  702
Managing Your Auto Deploy Environment  703
Using Auto Deploy with Update Manager to Upgrade Hosts  704
Lab 20  705
Review of Learner Objectives  706
Key Points  707
VS5OS_LectGuideVol2.book Page viii Monday, June 25, 2012 10:27 PM
VMware vSphere: Optimize and Scale 341
MODULE 7
Storage Optimization
Slide 7-1
You Are Here
Slide 7-2
Course Introduction
VMware Management Resources
Performance in a Virtualized Environment
Network Scalability
Network Optimization
Storage Scalability
Storage Optimization
CPU Optimization
Memory Performance
VM and Cluster Optimization
Host and Management Scalability
Importance
Slide 7-3
Storage can limit the performance of enterprise workloads. You should know how to monitor a host's storage throughput and troubleshoot problems that result in overloaded storage and slow storage performance.
Module Lessons
Slide 7-4
Lesson 1: Storage Virtualization Concepts
Lesson 2: Monitoring Storage Activity
Lesson 3: Command-Line Storage Management
Lesson 4: Troubleshooting Storage Performance Problems
Lesson 1: Storage Virtualization Concepts
Slide 7-5
Learner Objectives
Slide 7-6
After this lesson, you should be able to do the following: Describe factors that affect storage performance.
Storage Performance Overview
Slide 7-7
VMware vSphere ESXi enables multiple hosts to share the same physical storage reliably through its optimized storage stack and VMware vSphere VMFS. Centralized storage of virtual machines can be accomplished by using VMFS as well as NFS. Centralized storage enables virtualization capabilities such as VMware vSphere vMotion, VMware vSphere Distributed Resource Scheduler (DRS), and VMware vSphere High Availability (vSphere HA). To gain the greatest advantage from shared storage, you must understand the storage performance limits of a given physical environment to ensure that you do not overcommit resources.
Several factors have an effect on storage performance:
Storage protocols
Proper configuration of your storage devices
Load balancing across available storage
Storage queues
Proper use and configuration of your VMFS volumes
Each of these factors is discussed in this lesson.
What affects storage performance?
- Storage protocols: Fibre Channel, Fibre Channel over Ethernet, hardware iSCSI, software iSCSI, NFS
- Proper storage configuration
- Load balancing
- Queuing and LUN queue depth
- VMware vSphere VMFS (VMFS) configuration: choosing between VMFS and RDMs, SCSI reservations, virtual disk types
Storage Protocol Performance
Slide 7-8
For Fibre Channel and hardware iSCSI, a major part of the protocol processing is off-loaded to the HBA. Consequently, the cost of each I/O is very low.
For software iSCSI and NFS, host CPUs are used for protocol processing, which increases cost. Furthermore, the cost of NFS and software iSCSI is higher with larger block sizes, such as 64KB. This cost is due to the additional CPU cycles needed for each block for tasks like checksumming and blocking. Software iSCSI and NFS are more efficient at smaller blocks, and both are capable of delivering high throughput performance when the CPU resource is not a bottleneck.
ESXi hosts provide support for high-performance hardware features. For greater throughput, ESXi hosts support 8Gb Fibre Channel adapters and 10Gb Ethernet adapters for hardware iSCSI and NFS storage. ESXi hosts also support the use of jumbo frames for software iSCSI and NFS. This support is available on both Gigabit and 10Gb Ethernet NICs.
VMware vSphere ESXi supports Fibre Channel, hardware iSCSI, software iSCSI, and NFS.
- All storage protocols are capable of delivering high throughput performance.
- When CPU is not a bottleneck, software iSCSI and NFS can be part of a high-performance solution.
- Hardware performance features: 8Gb Fibre Channel
- Software iSCSI and NFS support for jumbo frames, using Gigabit or 10Gb Ethernet NICs
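The per-block CPU cost described above can be sketched with a toy model: each I/O pays a fixed protocol-processing overhead plus work proportional to the block size (for tasks like checksumming), so a larger block costs more CPU per I/O. The cycle counts below are hypothetical, purely to illustrate the shape of the cost, not vendor figures:

```python
def cpu_cost_per_io(block_kb: float, per_request_cycles: float = 10.0,
                    per_kb_cycles: float = 1.5) -> float:
    """Hypothetical CPU cost of one software iSCSI/NFS I/O:
    fixed protocol overhead plus per-kilobyte work (e.g., checksumming)."""
    return per_request_cycles + per_kb_cycles * block_kb

# A 64KB block costs far more CPU per I/O than a 4KB block, which is
# why these protocols are described as more efficient at smaller blocks.
print(cpu_cost_per_io(4), cpu_cost_per_io(64))   # -> 16.0 106.0
```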
SAN Configuration
Slide 7-9
Storage performance is a vast topic that depends on workload, hardware, vendor, RAID level, cache size, stripe size, and so on. Consult the appropriate VMware documentation as well as the storage vendor documentation on how to configure your storage devices appropriately.
Because each application running in your VMware vSphere environment has different requirements, you can achieve high throughput and minimal latency by choosing the appropriate RAID level for applications running in the virtual machines.
By default, active-passive storage arrays use the Most Recently Used path policy. To avoid LUN thrashing, do not use the Fixed path policy for active-passive storage arrays.
By default, active-active storage arrays use the Fixed path policy. When using this policy, you can maximize the use of your bandwidth to the storage array by designating preferred paths to each LUN through different storage controllers.
Round Robin uses an automatic path selection rotating through all available paths, enabling the distribution of load across the configured paths.
For active-passive storage arrays, only the paths to the active controller are used by the Round Robin policy.
For active-active storage arrays, all paths are used by the Round Robin policy.
Proper SAN configuration can help to eliminate performance issues.
- Each LUN should have the right RAID level and storage characteristics for the applications in virtual machines that will use it.
- Use the right path selection policy: Most Recently Used (MRU), Fixed (Fixed), Round Robin (RR).
For detailed information on SAN configuration, see vSphere Storage Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
Storage Queues
Slide 7-10
There are several storage queues that you should be aware of: device driver queue, kernel queue, and storage array queue.
The device driver queue is used for low-level interaction with the storage device. This queue controls how many active commands can be on a LUN at the same time. This number is effectively the concurrency of the storage stack. Set the device queue to 1 and each storage command becomes sequential: each command must complete before the next starts.
The kernel queue can be thought of as an overflow queue for the device driver queues. A kernel queue includes features that optimize storage. These features include multipathing for failover and load balancing, prioritization of storage activities based on virtual machine and cluster shares, and optimizations to improve efficiency for long sequential operations.
SCSI device drivers have a configurable parameter called the LUN queue depth that determines how many commands can be active at one time to a given LUN. The default value is 32. If the total number of outstanding commands from all virtual machines exceeds the LUN queue depth, the excess commands are queued in the ESXi kernel, which increases latency.
In addition to queuing at the ESXi host, command queuing can also occur at the storage array.
Queuing at the host:
- The device queue controls the number of active commands on a LUN at any time. Depth of queue is 32 (default).
- The VMkernel queue is an overflow queue for the device driver queue.
Queuing at the storage array:
- Queuing occurs when the number of active commands to a LUN is too high for the storage array to handle.
Latency increases with excessive queuing at the host or storage array.
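The relationship between throughput, latency, and commands in flight in these queues follows Little's law: average commands outstanding = IOPS x average latency. This is a useful back-of-the-envelope check when reasoning about queue depths. A small sketch with made-up workload numbers:

```python
def outstanding_commands(iops: float, latency_ms: float) -> float:
    """Little's law: average commands in flight = arrival rate * service time."""
    return iops * (latency_ms / 1000.0)

# A LUN doing 3,200 IOPS at 10 ms average latency keeps 32 commands
# in flight -- exactly the default LUN queue depth -- so any extra
# load spills into the VMkernel overflow queue and latency climbs.
print(outstanding_commands(3200, 10))   # -> 32.0
```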
Performance Impact of Queuing on the Storage Array
Slide 7-11
To understand the effect of queuing on the storage array, consider a case in which the virtual machines on an ESXi host can generate a constant number of SCSI commands equal to the LUN queue depth. If multiple ESXi hosts share the same LUN, SCSI commands to that LUN from all hosts are processed by the storage processor on the storage array. Strictly speaking, an array storage processor running in target mode might not have a per-LUN queue depth, so it might issue the commands directly to the disks. But if the number of active commands to the shared LUN is too high, multiple commands begin queuing up at the disks, resulting in high latencies.
VMware tested the effects of running an I/O intensive workload on 64 ESXi hosts sharing a VMFS volume on a single LUN. As shown in the table above, except for sequential reads, there is no drop in aggregate throughput as the number of hosts increases. The reason sequential read throughput drops is that the sequential streams coming in from the different ESXi hosts are intermixed at the storage array, thus losing their sequential nature. Writes generally show better performance than reads because they are absorbed by the write cache and flushed to disks in the background.
For more details on this test, see Scalable Storage Performance at http://www.vmware.com/resources/techresources/1059.
Chart (Slide 7-11): Aggregate throughput (MBps) versus number of hosts (1, 2, 4, 8, 16, 32, 64) for seq_rd, seq_wr, rnd_rd, and rnd_wr workloads. Sequential workloads generate random access at the storage array.
LUN Queue Depth
Slide 7-12
SCSI device drivers have a configurable parameter called the LUN queue depth that determines how many commands can be active at one time to a given LUN. QLogic Fibre Channel HBAs support up to 255 outstanding commands per LUN, and Emulex HBAs support up to 128. However, the default value for both drivers is set to 32. If an ESXi host generates more commands to a LUN than the LUN queue depth, the excess commands are queued in the VMkernel, which increases latency.
When virtual machines share a LUN, the total number of outstanding commands permitted from all virtual machines to that LUN is governed by the Disk.SchedNumReqOutstanding configuration parameter, which can be set in VMware vCenter Server. If the total number of outstanding commands from all virtual machines exceeds this parameter, the excess commands are queued in the VMkernel.
To reduce latency, ensure that the sum of active commands from all virtual machines does not consistently exceed the LUN queue depth. Either increase the queue depth (the maximum recommended queue depth is 64) or move the virtual disks of some virtual machines to a different VMFS volume.
For details on how to increase the queue depth of your storage adapter, see vSphere Storage Guide at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
LUN queue depth determines how many commands to a given LUN can be active at one time.
Set the LUN queue depth size properly to decrease disk latency:
- Depth of queue is 32 (default). Maximum recommended queue depth is 64.
- Set Disk.SchedNumReqOutstanding to the same value as the queue depth.
Set LUN queue depth to its maximum: 64.
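The spillover described above is simple arithmetic: whatever exceeds the LUN queue depth waits in the VMkernel. A sketch with hypothetical per-VM command counts:

```python
def kernel_queued(active_per_vm: int, num_vms: int, lun_queue_depth: int = 32) -> int:
    """Commands queued in the VMkernel when total VM demand exceeds the LUN queue depth."""
    total = active_per_vm * num_vms
    return max(0, total - lun_queue_depth)

# Eight VMs each keeping 6 commands active against one LUN:
# 48 total, 32 fit in the device queue, 16 wait in the VMkernel.
print(kernel_queued(6, 8))          # -> 16
print(kernel_queued(6, 8, 64))      # -> 0 after raising the depth to 64
```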
Network Storage: iSCSI and NFS
Slide 7-13
Applications or systems that write large amounts of data to storage, such as data acquisition or transaction logging systems, should not share Ethernet links to a storage device. These types of applications perform best with multiple connections to storage devices.
For iSCSI and NFS, make sure that your network topology does not contain Ethernet bottlenecks. Bottlenecks where multiple links are routed through fewer links can result in oversubscription and dropped network packets. Any time a number of links transmitting near capacity are switched to a smaller number of links, such oversubscription becomes possible. Recovering from dropped network packets results in large performance degradation.
Using VLANs or VPNs does not provide a suitable solution to the problem of link oversubscription in shared configurations. However, creating separate VLANs for NFS and iSCSI is beneficial. This separation minimizes network interference from other packet sources.
Finally, with software-initiated iSCSI and NFS, the network protocol processing takes place on the host system and thus can require more CPU resources than other storage options.
- Avoid oversubscribing your links. Using VLANs does not solve the problem of oversubscription.
- Isolate iSCSI traffic and NFS traffic.
- Applications that write a lot of data to storage should not share Ethernet links to a storage device.
- For software iSCSI and NFS, protocol processing uses CPU resources on the host.
What Affects VMFS Performance?
Slide 7-14
VMFS Partition Alignment
The alignment of your VMFS partitions can affect performance. Like other disk-based file systems, VMFS suffers a penalty when the partition is unaligned. Using the VMware vSphere Client to create VMFS datastores avoids this problem because it automatically aligns the datastores along the 1MB boundary.
To manually align your VMFS partitions, check your storage vendor's recommendations for the partition starting block. If your storage vendor makes no specific recommendation, use a starting block that is a multiple of 8KB.
Before performing an alignment, carefully evaluate the performance effect of the unaligned VMFS partition on your particular workload. The degree of improvement from alignment is highly dependent on workloads and array types. You might want to refer to the alignment recommendations from your array vendor for further information.
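The alignment check amounts to testing whether the partition's starting offset is an exact multiple of the boundary. A minimal sketch (the offsets are illustrative):

```python
def is_aligned(start_offset_bytes: int, boundary_bytes: int = 1024 * 1024) -> bool:
    """True if the partition start falls exactly on the given boundary (1MB default)."""
    return start_offset_bytes % boundary_bytes == 0

print(is_aligned(1024 * 1024))            # 1MB start: aligned -> True
print(is_aligned(64 * 1024))              # 64KB start: not on the 1MB boundary -> False
print(is_aligned(64 * 1024, 64 * 1024))   # but aligned to the older 64KB boundary -> True
```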
NOTE
If a VMFS3 partition was created using an earlier version of ESXi that aligned along the 64KB boundary, and that file system is then upgraded to VMFS5, it retains its 64KB alignment. 1MB alignment can be obtained by deleting the partition and recreating it using the vSphere Client and an ESXi host.
VMFS partition alignment:
- The VMware vSphere Client properly aligns a VMFS partition along the 1MB boundary.
- Performance improvement is dependent on workloads and array types.
Spanning VMFS volumes:
- This feature is effective for increasing VMFS size dynamically.
- Predicting performance is not straightforward.
Spanned VMFS Volumes
A spanned VMFS volume includes multiple extents (each a part of a LUN or an entire LUN). Spanning is a good feature to use if you need to add more storage to a VMFS volume while it is in use. Predicting performance with spanned volumes is not straightforward, because the user does not have control over how the data from various virtual machines is laid out on the different LUNs that form the spanned VMFS volume.
For example, consider a spanned VMFS volume with two 100GB LUNs. Two virtual machines are on this spanned VMFS, and the sum total of their sizes is 150GB. The user cannot determine the contents of each LUN in the spanned volume directly. Hence, determining the performance properties of this configuration is not straightforward.
Mixing storage devices of different performance characteristics on the same spanned volume could cause an imbalance in virtual machine performance. This imbalance might occur if a virtual machine's blocks were allocated across device boundaries, and each device might have a different queue depth.
For steps on how to align a VMFS partition manually, see Recommendations for Aligning VMFS Partitions at http://www.vmware.com/resources/techresources/608. Although this paper is otherwise obsolete, you can still use it for the manual alignment procedure.
SCSI Reservations
Slide 7-15
VMFS is a clustered file system and uses SCSI reservations as part of its distributed locking algorithms. Administrative operations, such as creating or deleting a virtual disk, extending a VMFS volume, or creating or deleting snapshots, result in metadata updates to the file system using locks and thus result in SCSI reservations. A reservation causes the LUN to be available exclusively to a single ESXi host for a brief period of time. Although performing a limited number of administrative tasks during peak hours is acceptable, it is better to postpone major maintenance to off-peak hours to minimize the effect on virtual machine performance.
The impact of SCSI reservations depends on the number and nature of storage or VMFS administrative tasks being performed:
The longer an administrative task runs (for example, creating a virtual machine with a larger disk or cloning from a template that resides on a slow NFS share), the longer the virtual machines are affected. Also, the time to reserve and release a LUN is highly hardware-dependent and vendor-dependent.
Running administrative tasks from a particular ESXi host does not have much effect on the I/O-intensive virtual machines running on the same ESXi host.
You usually do not have to worry about SCSI reservations if no VMFS administrative tasks are being performed or if VMFS volumes are not being shared by multiple hosts.
A SCSI reservation:
- Causes a LUN to be used exclusively by a single host for a brief period
- Is used by a VMFS instance to lock the file system while the VMFS metadata is updated
Operations that result in metadata updates:
- Creating or deleting a virtual disk
- Increasing the size of a VMFS volume
- Creating or deleting snapshots
- Increasing the size of a VMDK file
To minimize the impact on virtual machine performance:
- Postpone major maintenance/configuration until off-peak hours.
VMFS Versus RDMs
Slide 7-16
There are three choices on ESXi hosts for managing disk access in a virtual machine: virtual disk in a VMFS, virtual raw device mapping (RDM), and physical raw device mapping. Choosing the right disk-access method can be a key factor in achieving high performance for enterprise applications.
VMFS is the preferred option for most virtual disk storage and most enterprise applications, including databases, ERP, CRM, VMware Consolidated Backup, Web servers, and file servers. Common uses of RDM are in cluster data and quorum disks for virtual-to-virtual clustering, or physical-to-virtual clustering; or for running SAN snapshot applications in a virtual machine.
For random workloads, VMFS and RDM produce similar I/O throughput. For sequential workloads with small I/O block sizes, RDM provides a small increase in throughput compared to VMFS. However, the performance gap decreases as the I/O block size increases. For all workloads, RDM has slightly better CPU cost.
For a detailed study comparing VMFS and RDM performance, see Performance Characterization of VMFS and RDM Using a SAN at http://www.vmware.com/resources/techresources/1040.
This technical paper is a follow-on to an earlier performance study that compares the performance of VMFS and RDM in ESX 3.0.1 (Recommendations for Aligning VMFS Partitions). At the time of writing, there is no equivalent technical paper for vSphere.
VMFS is the preferred option for most enterprise applications. Examples: databases, ERP, CRM, Web servers, and file servers.
RDM is preferred when raw disk access is necessary.

I/O characteristic: Which yields better performance?
- Random reads/writes: VMFS and RDM yield similar I/O operations/second
- Sequential reads/writes at small I/O block sizes: VMFS and RDM yield similar performance
- Sequential reads/writes at larger I/O block sizes: VMFS
Virtual Disk Types
Slide 7-17
The type of virtual disk used by your virtual machine can have an effect on I/O performance. ESXi supports the following virtual disk types:
- Eager-zeroed thick: Disk space is allocated and zeroed out at the time of creation. Although this extends the time it takes to create the disk, using this disk type results in the best performance, even on the first write to each block. The primary use of this disk type is for quorum drives in an MSCS cluster. You can create eager-zeroed thick disks at the command prompt with vmkfstools.
- Lazy-zeroed thick: Disk space is allocated at the time of creation, but each block is zeroed only on the first write. Using this disk type results in a shorter creation time but reduced performance the first time a block is written to. Subsequent writes, however, have the same performance as an eager-zeroed thick disk. This disk type is the default type used to create virtual disks with the vSphere Client and is good for most cases.
- Thin: Disk space is allocated and zeroed upon demand, instead of upon creation. Using this disk type results in a shorter creation time but reduced performance the first time a block is written to. Subsequent writes have the same performance as an eager-zeroed thick disk. Use this disk type when space use is the main concern for all types of applications. You can create thin disks (also known as thin-provisioned disks) through the vSphere Client.
Disk type: Eager-zeroed thick
- Description: Space allocated and zeroed out at time of creation
- How to create: vSphere Client or vmkfstools
- Performance impact: Extended creation time, but best performance from first write on
- Use case: Quorum drive in an MSCS cluster

Disk type: Lazy-zeroed thick
- Description: Space allocated at time of creation, but zeroed on first write
- How to create: vSphere Client (default disk type) or vmkfstools
- Performance impact: Shorter creation time, but reduced performance on first write to block
- Use case: Good for most cases

Disk type: Thin
- Description: Space allocated and zeroed upon demand
- How to create: vSphere Client or vmkfstools
- Performance impact: Shorter creation time, but reduced performance on first write to block
- Use case: Disk space utilization is the main concern
Review of Learner Objectives
Slide 7-18
You should be able to do the following: Describe factors that affect storage performance.
Lesson 2: Monitoring Storage Activity
Slide 7-19
Learner Objectives
Slide 7-20
After this lesson, you should be able to do the following: Determine which disk metrics to monitor. Identify metrics in VMware vCenter Server and resxtop. Demonstrate how to monitor disk throughput.
Applying Space Utilization Data to Manage Storage Resources
Slide 7-21
The overview charts on the Performance tab for a datastore display information on how the space on a datastore is utilized. To view the charts:
1. From the Datastores and Datastore Clusters view, select the datastore.
2. Select the Performance tab.
3. On the View pull-down menu, ensure Space is selected.
Two pie charts provide information on space use for the datastore.
By File Type - The chart displays the portion of space utilized by swap files, other virtual machine files, other non-virtual machine files, and free space.
By Virtual Machines (Top 5) - Shows a breakdown of virtual machine space use of the top five virtual machines residing on the datastore.
A summary graph reveals how space is utilized over time:
From the Time Range pull-down menu, select one day, one week, one month, one year, or custom. When custom is selected, you have the option of selecting a specific time to view by entering a date and time to view the information.
The graph displays used space, storage capacity, and allocated storage over the time selected.
The overview charts on the Performance tab display usage details of the datastore.
By default, the displayed charts include:
- Space Utilization
- By Virtual Machines (Top 5)
- 1 Day Summary
Disk Capacity Metrics
Slide 7-22
To identify disk-related performance problems, start with determining the available bandwidth on your host and compare it with your expectations. Are you getting the expected IOPS? Are you getting the expected bandwidth (read/write)? The same thing applies to disk latency. Check disk latency and compare it with your expectations. Are the latencies higher than you expected? Compare performance with the hardware specifications of the particular storage subsystem.
Disk bandwidth and latency help determine whether storage is overloaded or slow. In your vSphere environment, the most significant metrics to monitor for disk performance are the following:
Disk throughput
Disk latency
Number of commands queued
Number of active disk commands
Number of aborted disk commands
Identify disk problems: determine available bandwidth and compare with expectations.
What do I do? Check key metrics. In a VMware vSphere environment, the most significant statistics are:
- Disk throughput
- Latency (device, kernel)
- Number of aborted disk commands
- Number of active disk commands
- Number of active commands queued
Monitoring Disk Throughput with vSphere Client
Slide 7-23
The vSphere Client allows you to monitor disk throughput using the advanced disk performance charts. Metrics of interest:
- Disk Read Rate: Number of disk reads (in KBps) over the sample period
- Disk Write Rate: Number of disk writes (in KBps) over the sample period
- Disk Usage: Number of combined disk reads and disk writes (in KBps) over the sample period
These metrics give you a good indication of how well your storage is being utilized. You can also use these metrics to monitor whether or not your storage workload is balanced across LUNs.
Disk Read Rate and Disk Write Rate can be monitored per LUN. Disk Read Rate, Disk Write Rate, and Disk Usage can be monitored per host.
The vSphere Client counters: Disk Read Rate, Disk Write Rate, Disk Usage
Monitoring Disk Throughput with resxtop
Slide 7-24
Disk throughput can also be monitored using resxtop:
- READs/s: Number of disk reads per second
- WRITES/s: Number of disk writes per second
The sum of reads/second and writes/second equals I/O operations/second (IOPS). IOPS is a common benchmark for storage subsystems and can be measured with tools like Iometer.
If you prefer, you can also monitor throughput using the following metrics instead:
- MBREAD/s: Number of megabytes read per second
- MBWRTN/s: Number of megabytes written per second
All of these metrics can be monitored per HBA (vmhba#).
Measure disk throughput with the following:
- READs/s and WRITEs/s (READs/s + WRITEs/s = IOPS)
Or you can use:
- MBREAD/s and MBWRTN/s
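The counter arithmetic above can be sketched as follows; the sample counter values and average I/O size are invented for illustration:

```python
def iops(reads_per_s: float, writes_per_s: float) -> float:
    """IOPS as resxtop reports it: reads/s plus writes/s."""
    return reads_per_s + writes_per_s

def mb_per_s(iops_value: float, avg_io_kb: float) -> float:
    """Approximate MBREAD/s + MBWRTN/s from IOPS and average I/O size."""
    return iops_value * avg_io_kb / 1024.0

total = iops(1200, 400)            # -> 1600 IOPS
print(total, mb_per_s(total, 8))   # at an 8KB average I/O size -> 12.5 MB/s
```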
Disk Throughput Example
Slide 7-25
One of the challenges when monitoring these values is to determine whether or not the values indicate a potential disk bottleneck. The screenshots above were taken while a database was being restored. A database restore can generate a lot of sequential I/O and is very disk-write intensive.
The first resxtop screen shows disk activity per disk adapter (type d in the window.) All I/O is going through the same adapter, vmhba2. To display the set of fields, type f. Ensure that the following fields are selected: A (adapter name), B (LUN ID), G (I/O statistics), and H (overall latency statistics). If necessary, select fields to display by typing a, b, g, or h.
The second resxtop screen shows disk activity per device (type u in the window.) Most of the I/O is going to the same LUN. To display the set of fields, type f. Ensure that the following fields are selected: A (device name), G (I/O stats), and H (overall latency stats). If necessary, select fields to display by typing a, g, or h.
The third resxtop screen shows disk activity per virtual machine (type v in the window). The virtual machine is generating about 161MB per second (MBWRTN/s). To display the set of fields, type f. Ensure that the following fields are selected: C (group/world name), I (I/O stats), and J (overall latency stats). If necessary, select fields to display by typing c, i, or j.
The data shown potentially indicate a problem. In this example, vmhba2 is a 2Gbps Fibre Channel adapter. The statistics indicate that the adapter has reached its limit (2Gbps is roughly equal to 1.5Gbps, plus some Fibre Channel protocol overhead). If this becomes a problem, consider using a 4Gbps or 8Gbps Fibre Channel adapter.
Adapter view: Type d.
Device view: Type u.
Virtual machine view: Type v.
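The sanity check in this example, 161 MBWRTN/s against a 2Gbps adapter, can be sketched numerically. The usable-bandwidth fraction below is an illustrative assumption standing in for protocol overhead, not a measured figure:

```python
def link_ceiling_mbps(link_gbps: float, usable_fraction: float = 0.75) -> float:
    """Rough usable MB/s of a storage link after protocol overhead (assumed fraction)."""
    return link_gbps * 1000 / 8 * usable_fraction

ceiling = link_ceiling_mbps(2.0)       # ~187.5 MB/s usable on a 2Gb FC link
print(ceiling, 161 / ceiling > 0.8)    # the observed 161 MB/s is near the limit
```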
Monitoring Disk Latency with vSphere Client
Slide 7-26
Disk latency is the time taken to complete an I/O request and is most commonly measured in milliseconds. Because multiple layers are involved in a storage system, each layer through which the I/O request passes might add its own delay. In the vSphere environment, the I/O request passes from the VMkernel to the physical storage device. Latency can be caused when excessive commands are being queued either at the VMkernel or the storage subsystem. When the latency numbers are high, the storage subsystem might be overworked by too many hosts.
In the vSphere Client, there are a number of latency counters that you can monitor. However, a performance problem can usually be identified and corrected by monitoring the following latency metrics:
- Physical device command latency: The average amount of time for the physical device to complete a SCSI command. Depending on your hardware, a number greater than 15 milliseconds indicates that the storage array might be slow or overworked.
- Kernel command latency: The average amount of time that the VMkernel spends processing each SCSI command. For best performance, the value should be 0 to 1 milliseconds. If the value is greater than 4 milliseconds, the virtual machines on the ESXi host are trying to send more throughput to the storage system than the configuration supports.
The vSphere Client counters:
Physical device latency counters and kernel latency counters
It might help to know whether a performance problem is occurring specifically during reads or writes. The following metrics provide this information:
Physical device read latency and Physical device write latency: These metrics break down physical device command latency into read latency and write latency. They define the time it takes the physical device, from the HBA to the platter, to service a read or write request, respectively.
Kernel read latency and Kernel write latency: These metrics are a breakdown of the kernel command latency metric. They define the time it takes the VMkernel to service a read or write request, respectively. This is the time between the guest operating system and the virtual device.
Monitoring Disk Latency with resxtop
Slide 7-27
In addition to disk throughput, the disk adapter screen (type d in the window) lets you monitor disk latency as well:
ADAPTR: The name of the host bus adapter (vmhba#), which includes SCSI, iSCSI, RAID, and Fibre Channel adapters.
DAVG/cmd: The average amount of time it takes a device (which includes the HBA, the storage array, and everything in between) to service a single I/O request (read or write). If the value is less than 10, the system is healthy. If the value is 11-20 (inclusive), be aware of the situation and monitor the value more frequently. If the value is greater than 20, it most likely indicates a problem.
KAVG/cmd: The average amount of time it takes the VMkernel to service a disk operation. This number represents time spent by the CPU to manage I/O. Because processors are much faster than disks, this value should be close to zero. A value of 1 or 2 is considered high for this metric.
GAVG/cmd: The total latency seen from the virtual machine when performing an I/O request. GAVG is the sum of DAVG and KAVG.
Host bus adapters (HBAs) include SCSI, iSCSI, RAID, and FC-HBA adapters.
The screen reports latency statistics from the device, the kernel, and the guest:
DAVG/cmd: Average latency (ms) of the device (LUN)
KAVG/cmd: Average latency (ms) in the VMkernel, also known as queuing time
GAVG/cmd: Average latency (ms) in the guest. GAVG = DAVG + KAVG.
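These relationships can be checked mechanically. A minimal sketch (our own helpers, not part of resxtop) that buckets a DAVG/cmd reading per the guidance above and derives GAVG from DAVG and KAVG:

```shell
#!/bin/sh
# Illustrative helpers for the resxtop latency guidance above.

# Bucket a DAVG/cmd value (whole ms): < 10 healthy, 11-20 watch, > 20 problem.
davg_status() {
  if [ "$1" -lt 10 ]; then echo "healthy"
  elif [ "$1" -le 20 ]; then echo "watch"
  else echo "problem"; fi
}

# GAVG is simply the sum of device latency and kernel latency.
gavg() {
  echo $(( $1 + $2 ))   # $1 = DAVG, $2 = KAVG
}

davg_status 25   # problem
gavg 8 1         # 9
```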
Monitoring Commands and Command Queuing
Slide 7-28
There are metrics for monitoring the number of active disk commands and the number of disk commands that are queued. These metrics provide information about your disk performance. They are often used to further interpret the latency values that you might be observing.
Number of active commands: This metric represents the number of I/O operations that are currently active, that is, operations that the host is currently processing. This metric can serve as a quick view of storage activity. If the value of this metric is close to or at zero, the storage subsystem is not being used. If the value is a nonzero number, sustained over time, then constant interaction with the storage subsystem is occurring.
Number of commands queued: This metric represents the number of I/O operations that require processing but have not yet been addressed. Commands are queued and awaiting management by the kernel when the driver's active command buffer is full. Occasionally, a queue will form and result in a small, nonzero value for QUED. However, any significant (double-digit) average of queued commands means that the storage hardware is unable to keep up with the host's needs.
Performance metric           Name in vSphere Client    Name in resxtop/esxtop
Number of active commands    Commands issued           ACTV
Number of commands queued    Queue command latency     QUED
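As an illustration of reading these counters from a script, the following sketch scans a fabricated resxtop-style sample and flags any adapter whose QUED value has reached double digits. The adapter names and values are made up for the example:

```shell
#!/bin/sh
# Illustrative: flag adapters with a double-digit queued-command count.
# The sample data is fabricated; real values come from resxtop's disk
# adapter screen.
cat > /tmp/disk_sample.txt <<'EOF'
ADAPTR ACTV QUED
vmhba0 32 2
vmhba1 32 14
EOF

# Column 3 is QUED; a sustained value >= 10 means the storage hardware
# cannot keep up with the host's needs.
awk 'NR > 1 && $3 >= 10 { print $1 " queued: " $3 }' /tmp/disk_sample.txt
```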
Disk Latency and Queuing Example
Slide 7-29
Here is an example of monitoring the kernel latency value, KAVG/cmd. This value is being monitored for the device vmhba0. In the first resxtop screen (type d in the window), the kernel latency value is 0.01 milliseconds. This is a good value because it is nearly zero.
In the second resxtop screen (type u in the window), there are 32 active I/Os (ACTV) and 2 I/Os being queued (QUED). This means that there is some queuing happening at the VMkernel level.
Queuing happens if there is excessive I/O to the device and the LUN queue depth setting is not sufficient. The default LUN queue depth is 32. However, if there are too many I/Os (more than 32) to handle simultaneously, the device will get bottlenecked to only 32 outstanding I/Os at a time. To resolve this, you would change the queue depth of the device driver.
For details on changing the queue depth of the device driver, see Fibre Channel SAN Configuration Guide at http://www.vmware.com/support/pubs.
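The arithmetic behind this bottleneck is simple. A small sketch (our own illustration, not a VMware utility) of how many I/Os spill into the VMkernel queue for a given queue depth:

```shell
#!/bin/sh
# Illustrative: with a LUN queue depth of N (default 32), any
# outstanding I/Os beyond N wait in the VMkernel queue.
queued_ios() {
  outstanding=$1
  depth=${2:-32}
  if [ "$outstanding" -gt "$depth" ]; then
    echo $(( outstanding - depth ))
  else
    echo 0
  fi
}

queued_ios 34      # 2  (matches the ACTV=32, QUED=2 example above)
queued_ios 34 64   # 0  (a deeper queue absorbs the same load)
```

This is why raising the device driver's queue depth can eliminate queuing for the same workload.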
normal VMkernel latency
queuing at the device
Monitoring Severely Overloaded Storage
Slide 7-30
When storage is severely overloaded, commands are aborted because the storage subsystem is taking far too long to respond to them. The storage subsystem has not responded within an acceptable amount of time, as defined by the guest operating system or application. Aborted commands are a sign that the storage hardware is overloaded and unable to handle requests in line with the host's expectations.
The number of aborted commands can be monitored by using either the vSphere Client or resxtop:
From the vSphere Client, monitor Disk Command Aborts.
From resxtop, monitor ABRTS/s.
The vSphere Client counter: Disk Command Aborts
resxtop counter: ABRTS/s
Configuring Datastore Alarms
Slide 7-31
Alarms can be configured to detect when specific conditions or events occur against a selected datastore.
Monitoring for conditions is limited to only three options:
Datastore Disk Provisioned (%)
Datastore Disk Usage (%)
Datastore State to All Hosts
Monitoring for events offers more options. The events that can be monitored include the following:
Datastore capacity increased
Datastore discovered
File or directory copied to datastore
File or directory deleted
File or directory moved to datastore
Local datastore created
To configure datastore alarms, right-click the datastore and select Add Alarm. Enter the condition or event that you want to monitor and the action to take as a result.
NAS datastore created
SIOC: pre-4.1 host {host} connected to SIOC-enabled datastore {objectName}
Storage provider system capability requirements met
Storage provider system capability requirements not met
Storage provider system default capability event
Storage provider thin provisioning capacity threshold crossed
Storage provider thin provisioning capacity threshold reached
Storage provider thin provisioning default capacity event
Unmanaged workload detected on SIOC-enabled datastore
VMFS datastore created
VMFS datastore expanded
VMFS datastore extended
Analyzing Datastore Alarms
Slide 7-32
By default, alarms are not configured to perform any actions. An alarm appears on the Alarms tab when the Triggered Alarms button is selected, and a red or yellow indicator appears on the datastore, but the only way to know that an alarm has triggered is to actively search for it.
Actions that can be configured include the following:
Send a notification email
Send a notification trap
Run a command
Configuring actions enables the vCenter Server to notify you if an alarm is triggered.
Once a triggered alarm is discovered, you must gather as much information as possible concerning the reason for the alarm. The Tasks & Events tab can provide more detail as to why the alarm occurred. Searching the system logs might also provide some clues.
By selecting the Triggered Alarms button, the Alarms tab displays all triggered alarms for the highlighted datastores.
Lab 10 Introduction
Slide 7-33
In this lab, you generate various types of disk I/O and compare performance. Your virtual machine is currently configured with one virtual disk (the system disk). For this lab, you add two additional virtual disks (a local data disk and a remote data disk) to your test virtual machine: one on your local VMFS file system and the other on your assigned VMFS file system on shared storage. Your local VMFS file system is located on a single physical drive and your assigned VMFS file system on shared storage is located on a LUN consisting of two physical drives.
After adding the two virtual disks to your virtual machine, you run the following scripts:
fileserver1.sh: This script generates random reads to the local data disk.
fileserver2.sh: This script generates random reads to the remote data disk.
datawrite.sh: This script generates random writes to the remote data disk.
logwrite.sh: This script generates sequential writes to the remote data disk.
Each script starts a program called aio-stress. aio-stress is a simple command-line program that measures the performance of a disk subsystem. For more information on the aio-stress command, see the file readme.txt on the test virtual machine, in the same directory as the scripts.
(Slide diagram: TestVM01 has a system disk, TestVM01.vmdk, and a local data disk, TestVM01_1.vmdk, on the local VMFS datastore, plus a remote data disk, TestVM01.vmdk, on the VMFS datastore on your assigned LUN on shared storage. The scripts fileserver1.sh, fileserver2.sh, datawrite.sh, and logwrite.sh drive I/O to these disks.)
Lab 10
Slide 7-34
In this lab, you will use performance charts to monitor disk performance:
1. Identify the vmhbas used for local storage and shared storage.
2. Add disks to your test virtual machine.
3. Display the real-time disk performance charts of the vSphere Client.
4. Perform test case 1: Measure sequential write activity to your remote virtual disk.
5. Perform test case 2: Measure random write activity to your remote virtual disk.
6. Perform test case 3: Measure random read activity to your remote virtual disk.
7. Perform test case 4: Measure random read activity to your local virtual disk.
8. Summarize your findings.
9. Clean up for the next lab.
Lab 10 Review
Slide 7-35
vSphere Client Counters    Case 1: Sequential writes to remote disk    Case 2: Random writes to remote disk    Case 3: Random reads to remote disk    Case 4: Random reads to local disk
Read Rate
Write Rate
Review of Learner Objectives
Slide 7-36
You should be able to do the following:
Determine which disk metrics to monitor.
Identify metrics in vCenter Server and resxtop.
Demonstrate how to monitor disk throughput.
Lesson 3: Command-Line Storage Management
Slide 7-37
Learner Objectives
Slide 7-38
After this lesson, you should be able to do the following:
Use VMware vSphere Management Assistant (vMA) to manage vSphere virtual storage.
Use vmkfstools for VMFS operations.
Use the vscsiStats command.
Managing Storage with vMA
Slide 7-39
The VMware vSphere Command-Line Interface (vCLI) storage commands enable you to manage ESXi storage. ESXi storage is storage space on various physical storage systems (local or networked) that a host uses to store virtual machine disks.
These tasks and commands are discussed on the next several slides.
Storage management task                               vMA command
Examine LUNs.                                         esxcli
Manage storage paths.                                 esxcli
Manage NAS storage.                                   esxcli
Manage iSCSI storage.                                 esxcli
Mask LUNs.                                            esxcli
Manage PSA plug-ins.                                  esxcli
Migrate virtual machines to a different datastore.    svmotion
Examining LUNs
Slide 7-40
The esxcli storage core command is used to display information about available logical unit numbers (LUNs) on the ESXi hosts.
The esxcli storage filesystem command is used for listing, mounting, and unmounting VMFS volumes.
Use the esxcli command with the storage namespace: esxcli storage core|filesystem
Examples:
To list all logical devices known to a host:
esxcli --server esxi02 storage core device list
To list a specific device:
esxcli --server esxi02 storage core device list -d mpx.vmhba32:C0:T0:L0
To display host bus adapter (HBA) information:
esxcli --server esxi02 storage core adapter list
To print mappings of datastores to their mount points and UUIDs:
esxcli --server esxi02 storage filesystem list
Managing Storage Paths
Slide 7-41
You can display information about paths by running the esxcli storage core path command.
To list information about a specific device path, use the --path option:
esxcli --server esxi02 storage core path list --path vmhba33:C0:T2:L0
where vmhba33:C0:T2:L0 is the runtime name of the device.
To rescan a specific storage adapter, use the --adapter option:
esxcli --server esxi02 storage core adapter rescan --adapter vmhba33
Use the esxcli command with the storage core path|adapter namespace: esxcli storage core path|adapter
Examples:
To display mappings between HBAs and devices:
esxcli --server esxi02 storage core path list
To list the statistics for a specific device path:
esxcli --server esxi02 storage core path stats get --path vmhba33:C0:T2:L0
To rescan all adapters:
esxcli --server esxi02 storage core adapter rescan
Managing NAS Storage
Slide 7-42
At the command prompt, you can add, delete, and list attributes concerning NAS datastores attached to the ESXi host.
When adding an NFS file system to an ESXi host, use the following options:
--host: Specifies the NAS/NFS server.
--share: Specifies the folder being shared.
--volume-name: Specifies the NFS datastore name.
Another useful command is the showmount command:
showmount -e displays the file systems that are exported by the server on which the command is run.
showmount -e <host> displays the file systems that are exported by the specified host.
Use the esxcli command with the storage nfs namespace: esxcli storage nfs
Examples: To list NFS file systems:
esxcli --server esxi02 storage nfs list
To add an NFS file system to an ESXi host:
esxcli --server esxi02 storage nfs add --host=nfs.vclass.local --share=/lun2 --volume-name=MyNFS
To delete an NFS file system:
esxcli --server esxi02 storage nfs remove --volume-name=MyNFS
Managing iSCSI Storage
Slide 7-43
The esxcli iscsi command can be used to configure both hardware and software iSCSI storage for the ESXi hosts.
When you add an iSCSI software adapter to an ESXi host, you are essentially binding a VMkernel interface to an iSCSI software adapter. The -n option enables you to specify the VMkernel interface that the iSCSI software adapter binds to. The -A option enables you to specify the vmhba name for the iSCSI software adapter.
To find the VMkernel interface name (vmk#), use the command:
esxcli --server esxi02 network ip interface list
To find the vmhba, use the command:
esxcli --server esxi02 storage core adapter list
Use the esxcli command with the iscsi namespace: esxcli iscsi
Examples:
To enable software iSCSI:
esxcli --server esxi02 iscsi software set --enabled=true
To add an iSCSI software adapter to an ESXi host:
esxcli --server esxi02 iscsi networkportal add -n vmk2 -A vmhba33
To check the software iSCSI adapter status:
esxcli --server esxi02 iscsi software get
If the command returns the value true, the adapter is enabled.
Masking LUNs
Slide 7-44
Storage path masking has a variety of purposes. In troubleshooting, you can force a system to stop using a path that you either know or suspect is a bad path. If the path was bad, then errors should be reduced or eliminated in the log files as soon as the path is masked.
You can also use path masking to protect a particular storage array, LUN, or path. Path masking is useful for test and engineering environments.
You can prevent an ESXi host from accessing the following:
A storage array
A LUN
An individual path to a LUN
Use the esxcli command to mask access to storage:
esxcli storage core claimrule add -r <rule_ID> -t <type> -P MASK_PATH
Managing PSA Plug-Ins
Slide 7-45
The NMP is an extensible multipathing module that ESXi supports by default. You can use esxcli storage nmp to manage devices associated with NMP and to set path policies.
The NMP supports all storage arrays listed on the VMware storage Hardware Compatibility List (HCL) and provides a path selection algorithm based on the array type. The NMP associates a set of physical paths with a storage device. A Storage Array Type Plug-in (SATP) determines how path failover is handled for a specific storage array. A Path Selection Plug-in (PSP) determines which physical path is used to issue an I/O request to a storage device. SATPs and PSPs are sub plug-ins within the NMP.
The list command lists the devices controlled by VMware NMP and shows the SATP and PSP information associated with each device. To show the paths claimed by NMP, run esxcli storage nmp path list to list information for all devices, or for just one device with the --device option.
The set command sets the PSP for a device to one of the policies loaded on the system.
Use the esxcli command to manage the NMP.
List the devices controlled by the NMP:
esxcli storage nmp device list
Set the PSP for a device:
esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_FIXED
View paths claimed by NMP:
esxcli storage nmp path list
Retrieve PSP information:
esxcli storage nmp psp generic deviceconfig get --device=<device>
esxcli storage nmp psp fixed deviceconfig get --device=<device>
esxcli storage nmp psp roundrobin deviceconfig get --device=<device>
Use the path option to list paths claimed by NMP. By default, the command displays information about all paths on all devices. You can filter the command by doing the following:
Only show paths to a single device (esxcli storage nmp path list --device=<device>)
Only show information for a single path (esxcli storage nmp path list --path=<path>).
The esxcli storage nmp psp generic deviceconfig get and esxcli storage nmp psp generic pathconfig get commands retrieve PSP configuration parameters. The type of PSP determines which command to use:
Use nmp psp generic deviceconfig get for PSPs that are set to VMW_PSP_RR, VMW_PSP_FIXED or VMW_PSP_MRU.
Use nmp psp generic pathconfig get for PSPs that are set to VMW_PSP_FIXED or VMW_PSP_MRU. No path configuration information is available for VMW_PSP_RR.
Migrating Virtual Machine Files to a Different Datastore
Slide 7-46
The ability to perform VMware vSphere Storage vMotion migrations at the command prompt can be useful when the user wants to run a script to clear datastores. Sometimes, a script is the preferred alternative over migrating with the vSphere Client. Using a script is especially desirable in cases where a lot of virtual machines must be migrated.
The svmotion command can be run in interactive or noninteractive mode.
In noninteractive mode, all the options must be listed as part of the command. If an option is missing, the command fails.
Interactive mode is invoked with the --interactive switch. In interactive mode, the user is prompted for the information required to complete the migration, including the following:
The datacenter
The virtual machine to migrate
The new datastore on which to place the virtual machine
Whether the virtual disks will reside on the same datastore or on different datastores
When the command is run, the --server option must refer to a vCenter Server system.
Use the svmotion command to perform VMware