Hyper-V and Storage: Maximizing Performance and Deployment Best Practices in Windows Server 2008 R2
Robert Larson, Delivery Architect, Microsoft Corporation
SESSION CODE: WSV316
David Lef, Principal Systems Architect, Microsoft Corporation
Session Objectives and Takeaways
Session Objectives:
• Quick review of Windows Server 2008 R2 storage features for scaling performance
• Learn current Windows Server 2008 R2 benchmark numbers to help dispel common myths around iSCSI and Hyper-V performance
• Understand Hyper-V storage configuration options and the benefits of different infrastructure models
• Explore a real-life deployment of Hyper-V on an enterprise infrastructure in Microsoft IT’s server environment
Session Takeaways:
• Understand the key factors to maximizing virtual machine density and hardware utilization
• Apply Microsoft’s lab and IT production learning to your own and your customers’ virtualization deployments
Agenda
• Windows Server 2008 R2 Scalable Platform Enhancements
• iSCSI Breakthrough Performance Results
• Storage: Hyper-V Options and Best Practices
• A Real World Deployment: Microsoft IT’s Server Environment
  – MSIT’s Hyper-V Deployment
  – MSIT’s “Scale Unit” Virtualization Infrastructure
• Questions and Answers
Compute: 256 processor core support • Core Parking • Advanced Power Management
Storage: 256 processor core IO scaling • Dynamic Memory allocation • iSCSI Multi-Core scaling (NUMA IO)
Virtualization: Hot Add Storage • Intel EPT memory management support • Live Migration
Networking: 256 processor core support • NUMA awareness • VMQ and Virtualization performance
Windows Server 2008 R2 Scalable Platform: Efficient scaling across multi-core CPUs
[Diagram: Intel® Xeon® processors, Hyper-V™, and Intel® Ethernet adapters delivering reliability, scalability, and performance across physical and virtual environments]
Extending the iSCSI Platform: iSCSI and Storage Enhancements in R2
Performance and scale:
• iSCSI Multi-Core and NUMA IO
• DPC redirection
• Dynamic load balancing
• Storage IO monitoring
• CRC digest offload
• Support for 32 paths at boot time
Management:
• iSCSI Quick Connect
• Configuration reporting
• Automated deployment
• iSCSI Server Core UI
Agenda
• Windows Server 2008 R2 Scalable Platform Enhancements
• iSCSI Breakthrough Performance Results
• Storage: Hyper-V Options and Best Practices
• A Real World Deployment: Microsoft IT’s Server Environment
  – MSIT’s Hyper-V Deployment
  – MSIT’s “Scale Unit” Virtualization Infrastructure
• Questions and Answers
iSCSI Performance Architectures
[Diagram: three initiator architectures, showing which layers are provided by Microsoft (OS) and which by the IHV (firmware/driver/hardware):
• Native (software initiator, used in the performance tests): file system → volume manager → class/disk → MSISCSI.SYS → port driver (Storport) → TCPIP.SYS → NDIS miniport, with RSS/LRO/LSO in the NIC hardware
• Stateful offload (TCP Chimney): the native stack with TCP processing offloaded below MSISCSI.SYS to a TCP Chimney-capable NDIS miniport
• iSCSI HBA: native stack plus transport offload to an iSCSI HBA miniport, with LRO/LSO in the HBA]
Note: TCP Chimney should be disabled for iSCSI traffic for best performance and interoperability
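A minimal sketch of checking and applying this on Windows Server 2008 R2, using the inbox netsh tool from an elevated PowerShell prompt:

    # Show global TCP settings, including the current Chimney offload state
    netsh int tcp show global

    # Disable TCP Chimney offload, per the interoperability note above
    netsh int tcp set global chimney=disabled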
iSCSI Test Configuration – 2008 R2
[Diagram: Iometer management system on a 1 Gbps link; server with a single 10GbE port connected through a Cisco* Nexus* 5020 10 GbE switch to ten iSCSI soft targets (LUN 1–10), 10 Gbps per target]
Server:
• Windows Server 2008 R2 with the Microsoft iSCSI Software Initiator
• Intel® Xeon® Processor 5580, quad core, dual socket, 3.2 GHz, 24 GB DDR3, MTU 1500, outstanding I/Os = 20
Adapter:
• Intel® Ethernet Server Adapter X520, based on the Intel® 82599 10GbE controller
Performance factors:
• iSCSI initiator perf optimizations
• Network stack optimizations
• Receive Side Scaling (RSS)
• Intel Xeon 5500 QPI and integrated memory controller
• Intel® 82599: HW acceleration, multi-core scaling with RSS, MSI-X
* Other names and brands may be claimed as the property of others.
Intel® Xeon® Processor 5580 Platform, Windows Server 2008 R2 and Intel® 82599 10GbE Adapter
Breakthrough Performance at 10GbE
1,030,000 IOPS
• Single port, 10GbE line rate
• 10k IOPS per CPU point
• Performance for real-world apps
• Future ready: performance scales
552k IOPS at 4k represents:
• 3,100 hard disk drives
• 400x a demanding database workload
• 1.7M Exchange mailboxes
• 9x the transactions of large eTailers
• Jumbo frames: >30% CPU decrease is common for larger IO sizes (jumbo frames were not used here)
Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Microsoft and Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing.
[Charts: Read/Write IOPS and CPU test; Read/Write IOPS and throughput test]
2008 R2 Hyper-V iSCSI Test Configuration
[Diagram: Iometer management system on a 1 Gbps link; Hyper-V host with a single 10 Gbps port connected through a Cisco* Nexus* 5020 10 GbE switch to ten iSCSI soft lab targets (LUN 1–10), 10 Gbps per target; both physical (parent) and virtual (guest) connections shown]
iSCSI Direct connection:
• iSCSI initiator runs in the VM
• Microsoft VMQ and Intel VMDq
Performance factors:
• iSCSI initiator perf optimizations
• Microsoft network stack optimizations
• Hyper-V scaling
• Receive Side Scaling on the host
• Microsoft VMQ
• Intel VMDq
* Other names and brands may be claimed as the property of others.
iSCSI Performance with Intel® 82599 10G NIC with VMDq, Intel® Xeon 5580 Platform, Windows Server 2008 R2 and R2 Hyper-V
Breakthrough Performance – Hyper-V
• 715k IOPS at 10GbE line rate
• Intel VMDq and Microsoft VMQ accelerate iSCSI to the guest
• Hyper-V achieves native throughput at 8k IO sizes and above
• Future ready: scales with new platforms, OS, and Ethernet adapters
Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Microsoft and Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing.
[Chart: Read/Write IOPS and throughput test – near-native iSCSI performance]
iSCSI Performance Test Conclusions
• iSCSI protocol performance is limited only by the speed of the underlying bus and the vendor implementation
• iSCSI is ready for mission-critical performance workloads
• Use Receive Side Scaling to optimize iSCSI performance in the host (see the netsh sketch below)
• Use VMQ/VMDq to optimize iSCSI performance within the VM for best IO scaling
• VLANs offer logical separation of LAN/SAN traffic and additional performance isolation
• Most applications use moderate IO and throughput
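As a sketch of the host-side RSS check referenced above, using the inbox netsh tool (RSS must also be enabled in the NIC driver’s advanced properties):

    # Enable and verify RSS globally on the host
    netsh int tcp set global rss=enabled
    netsh int tcp show global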
Agenda
• Windows Server 2008 R2 Scalable Platform Enhancements
• iSCSI Breakthrough Performance Results
• Storage: Hyper-V Options and Best Practices
• A Real World Deployment: Microsoft IT’s Server Environment
  – MSIT’s Hyper-V Deployment
  – MSIT’s “Scale Unit” Virtualization Infrastructure
• Questions and Answers
Hyper-V Storage Connect Options
Boot volumes = VHDs or CSV
• Can be located on Fibre Channel or iSCSI LUNs connected from the parent, or on DAS disks local to the parent
Data volumes = VHD or CSV, pass-through, or iSCSI Direct
• Pass-through: most applicable to Fibre Channel, but technically works for DAS and iSCSI
• iSCSI Direct: only applicable when running the iSCSI initiator from the guest
iSCSI Direct Usage
• Microsoft iSCSI Software Initiator runs transparently from within the VM
• VM operates with full control of the LUN
• LUN is not visible to the parent
• iSCSI initiator communicates with the storage array over the TCP stack
• Supports advanced application requirements:
  – Application-specific replication and array-side replication utilities run transparently
  – LUNs can be hot added and hot removed without requiring a reboot of the VM (2003, 2008, and 2008 R2)
  – VSS hardware providers run transparently within the VM
  – Backup/recovery runs in the context of the VM
  – Enables the guest clustering scenario
• Inherits Hyper-V networking performance enhancements
  – Works transparently with VMQ
  – Performance boost with jumbo frames
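As a sketch of iSCSI Direct from inside the guest using the inbox iscsicli tool; the portal address and target IQN here are hypothetical placeholders:

    # Register the array's target portal and list the targets it exposes
    iscsicli QAddTargetPortal 10.0.0.50
    iscsicli ListTargets

    # Quick (non-persistent) login to the chosen target
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage-target1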
iSCSI Perf Best Practices with Hyper-V
Standard networking and iSCSI best practices apply:
• Use jumbo frames with higher IO request sizes (jumbo frames are supported with the Hyper-V switch and virtual NIC in Windows Server 2008 R2); see the MTU sketch below
  – Benefits appear at 8K and above; the larger the IO size, the greater the benefit, with 512K request sizes seeing the best results
• Use dedicated NIC ports or VLANs for:
  – iSCSI traffic (server to SAN) – multiple to scale
  – Client/server traffic (LAN) – multiple to scale
  – Cluster heartbeat (if using a cluster)
  – Hyper-V management
• Unbind unneeded services (file sharing, DNS) from NICs carrying iSCSI traffic
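The jumbo frame size itself is set in the NIC driver’s advanced properties (vendor-specific). A minimal sketch for verifying the MTU and the end-to-end jumbo path, with a hypothetical target address:

    # Show the configured MTU for each interface
    netsh interface ipv4 show subinterfaces

    # Probe the jumbo path: 8972-byte payload + 28 bytes of ICMP/IP headers
    # = 9000-byte frame; -f sets Don't Fragment so undersized hops fail loudly
    ping -f -l 8972 10.0.0.50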
Cluster Shared Volume Deployment Guidance – CSV Requirements
• All cluster nodes must use the same drive letter for %SystemDrive% (example: C:\)
• NT LAN Manager (NTLM) must be enabled on cluster nodes
• SMB must be enabled for each network on each node that may be involved in CSV cluster communications
  – Client for Microsoft Networks
  – File and Printer Sharing for Microsoft Networks
• The Hyper-V role must be installed on every cluster node
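Once these requirements are met, CSV can be enabled and populated with the inbox FailoverClusters PowerShell module; a sketch assuming an available cluster disk resource named "Cluster Disk 1" (a placeholder):

    Import-Module FailoverClusters

    # Enable Cluster Shared Volumes on the cluster (one-time operation)
    (Get-Cluster).EnableSharedVolumes = "Enabled"

    # Convert an available cluster disk into a CSV, then list the CSVs
    # (they appear as mount points under C:\ClusterStorage)
    Add-ClusterSharedVolume -Name "Cluster Disk 1"
    Get-ClusterSharedVolume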
Cluster Shared Volume Deployment Guidance – Storage Deployment Factors
For server workloads, the application/role will dictate CSV configuration:
• The virtual configuration should closely resemble the physical configuration
• If separate LUNs are required for OS, data, and logs on a physical server, then this data should reside on separate CSVs in a VM
• The CSVs used for OS, data, or logs can store this data for multiple VMs that share the requirement
Generally, avoid disk contention by using separate CSVs for OS and application data
• This can be mitigated by having a large number of spindles backing the CSV LUN, something typically seen in high-end SAN storage
Cluster Shared Volume Deployment Guidance – General Guidance
[Diagram: VM 1, VM 2, and VM 3, each with Boot/OS, Data, and Log VHDs placed on three shared CSVs]
Each VM requires 3 separate CSVs; however, because the CSVs are shared across the VMs, in this case 3 VMs x 3 volumes = 3 CSVs
Cluster Shared Volume Deployment Guidance – With 16 cluster nodes sending I/O to a single LUN…
How many IOPS can your storage array handle?
Cluster Shared Volume Deployment Guidance – Work with your storage provider
How many CSVs per cluster node?
• As many as you need – there is no limit
How many VMs should I deploy on a CSV?
• Dependent on the rate/volume of storage I/O
What backup applications support CSV?
• Microsoft System Center Data Protection Manager 2010
• Symantec Backup Exec 2010
• NetApp SnapManager for Hyper-V
• Recently released or releasing soon: HP Data Protector, EMC NetWorker
CSV and Hyper-V Backup – Protect from the parent or within the guest VM?
Answer – both!
Backup from the Hyper-V parent partition:
• Protects the virtual machine and associated VHDs
• Includes non-Windows servers
Backup from the guest VM:
• Protects application data (SQL databases, Exchange, SharePoint, files)
• Same as protecting a physical server
iSCSI and CSV Demo
DEMO
Redundant Fibre Channel Infrastructure with Microsoft MPIO
[Diagram: clients connecting over the network to Windows Server hosts, which reach storage through redundant Fibre Channel switch fabrics]
• Increases uptime of Windows Server by providing multiple paths to storage
• Increased server bandwidth over multiple ports
• Automatic failure detection and failover
• Dynamic load balancing
• Works with the Microsoft DSM provided inbox or with 3rd-party DSMs (PowerPath, OnTAP DSM, etc.)
MPIO and MCS with Hyper-V
• Microsoft MPIO and MCS (Multiple Connections per Session) are natively included in Windows Server and work transparently with Hyper-V
• Especially important in virtualized environments to reduce single points of failure
• Load balancing and failover using redundant HBAs, NICs, switches, and fabric infrastructure
• Aggregates bandwidth for maximum performance
• MPIO is supported with Fibre Channel, iSCSI, and shared SAS
• Two options for multipathing with iSCSI: Multiple Connections per Session (MCS) and Microsoft MPIO (Multipath I/O)
• Protects against loss of a data path during firmware upgrades on the storage controller
• When using iSCSI Direct, MPIO and MCS work transparently with VMQ
SAN Performance Considerations
• Always use active/active multipathing load-balance policies (round robin, least queue depth) rather than failover-only mode (see the mpclaim sketch below)
• Follow vendor guidelines for timer settings
  – QueueDepth, PDORemovePeriod, etc., as these settings have a direct impact on the ability to deliver maximum IO to VMs and on failover times
  – Many array vendors include host utilities that automatically adjust settings to optimize for their array (example: Dell “HIT Kit” host integration kit)
• Pay attention to spindle count for the workload
  – SSDs (Solid State Drives) change the game on this
• For NICs used for iSCSI in HBA mode with offload (iSOE), turn off TCP Chimney
  – http://support.microsoft.com/kb/951037
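A sketch of the first bullet using the inbox mpclaim utility: claim iSCSI-attached devices for the Microsoft DSM and set an active/active default policy:

    # Claim all iSCSI-attached devices for Microsoft MPIO
    # (-n suppresses the automatic reboot; restart when convenient)
    mpclaim -n -i -d "MSFT2005iSCSIBusType_0x9"

    # Set the default load-balance policy: 2 = Round Robin, 4 = Least Queue Depth
    mpclaim -l -m 2

    # Show MPIO disks and their effective policies
    mpclaim -s -d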
Agenda
• Windows Server 2008 R2 Scalable Platform Enhancements
• iSCSI Breakthrough Performance Results
• Storage: Hyper-V Options and Best Practices
• A Real World Deployment: Microsoft IT’s Server Environment
  – MSIT’s Hyper-V Deployment
  – MSIT’s “Scale Unit” Virtualization Infrastructure
• Questions and Answers
MSIT Enterprise Virtualization – History
• Determined that 30% of MSIT servers could be hosted as VMs on Virtual Server 2005
  – Proof of concept started in 2004, with success leading to the “Virtual Server Utility” service in 2005
  – VMs offered as an alternative to smaller physical servers
• Hyper-V R1 adopted as a commodity at RC, targeting up to 80% of new MSIT workloads
  – Achieved 60% VM adoption since RTM
• Hyper-V R2 deployment began at M3, with nearly 50% of new capacity deployed on virtual machines within the six months after RTM
  – Host failover clustering with CSV used throughout the R2 beta
  – At least 80% VM adoption is possible with R2 and the new virtualization hardware platform
• “Physical by exception” and “one in, one out” policies
• Native consolidation is encouraged as a first step where possible, but we have successfully virtualized a very wide cross-section of workloads
  – This includes SQL, Exchange, SharePoint, and core Windows infrastructure
MSIT Enterprise Virtualization – Current
• VMs are approximately 40% of our total server population
  – About 1,500 physical servers host more than 8,000 VMs
  – Highest VM concentrations are in the enterprise datacenters
• Previous consolidation efforts have limited regional sprawl
  – Where field services and applications are needed, we deploy a “Virtual Branch Office Server” (VBOS) or a scaled-down virtualization infrastructure
• Goal of the VM hosting service is to equal or better the service level and value of traditional physical servers
  – Availability averaged 99.95%, even prior to HA VM configurations
  – VM hosting is 50% of the cost of a comparable physical server
• Targeting 50% virtualized in 2010 and greater than 80% in the 2011–2012 timeframe
  – The on-boarding process steers appropriate candidates into VMs for all new growth and hardware refresh/migration requirements
MSIT’s Big Problem: “Discrete Unit” Proliferation
“Discrete Unit” definition: capacity that is purchased, provisioned, and lifecycle-managed independently in the data centers
• Compute – Isolated rack-mount servers, usually deployed for a single application or customer
• Storage – DAS and small-to-midrange SAN, per-server or per-cluster and determined by short-term forecasting
• Network – Resources averaged to the number of expected systems in a given location
MSIT’s Challenges
• Over-provisioning, under-utilization, and stranded capacity were the norm
• Data center space and power constraints were increasing
• Time and effort required to deploy was becoming unacceptable
• Cost of basic server support services grew over time
• Hardware lifecycle management was burdensome
MSIT’s Solution – The “Scale Unit” Virtualization Platform
Definition: compute, storage, and network resources deployed in bundles to allow both extensibility and reuse/reallocation – a dynamic infrastructure
• Compute Scale Unit – One rack of blade servers, enclosure switching elements, and cabling
• Storage Scale Unit – Enterprise-class storage array with thousands of disks
• Network Scale Unit – Redundant Ethernet and FC fabrics at the aggregation and distribution layers
• Additional infrastructure resources – VLAN framework, IP address ranges, etc.
Key tenets:
• Utilize the same compute, storage, and networking elements across MSIT data center environments and customer sets
• Centralized procurement and provisioning of new capacity supports customer deployment requirements, with a minimum of stranded and unused resources
• Suitable for both net-new deployment and platform refresh
Scale Unit – Basic Design Elements
• Comprehensive infrastructure, procured and deployed in fairly large capacity chunks
• Compute – 64 blade servers per rack
• Network – Highly aggregated and resilient
  – Per-port costs reduced by a factor of 10
• Storage – Enterprise-class SAN arrays
  – Thin-provisioned storage for efficiency
• High availability and fault tolerance are part of the design
  – A level of redundancy at the blades and throughout the network and storage fabrics, coupled with logical resiliency, for inherent high availability at a nominal additional cost
• Large network and storage domains cover multiple compute racks – highly aggregated, but allowing mobility and enhanced flexibility
• “Green IT” – Scale Unit V1 compared to an equivalent number of 2U discrete servers:
  – 33% of the space, 55% of the power/cooling, 90% fewer cables
  – VM consolidation potential greater than 50x
MSIT Scale Unit – Technology Enablers
Blade servers and enclosures:
• Replaceable/interchangeable elements
• Multiple-system and cross-enclosure power management
OS efficiency and virtualization:
• Boot from SAN – near-stateless hosts
• Windows Server 2008 R2 Server Core (see the sketch after this list)
• Hyper-V host clustering and highly available virtual machines
Enterprise storage:
• Thin provisioning
• Virtual pooling of storage resources
Converged networking:
• 10Gb Ethernet – high-speed iSCSI and FCoE (Fibre Channel over Ethernet)
• Virtualized networks – trunks with multiple VLANs (802.1q), link aggregation with current (802.3ad, vPC) and future (TRILL, DCB, L2 overlay) technologies
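As a sketch of the Server Core enablement mentioned above, using the inbox Dism servicing tool (feature names as on 2008 R2 Server Core; a restart is required):

    # Enable the Hyper-V role on a Server Core installation
    Dism /online /Enable-Feature /FeatureName:Microsoft-Hyper-V

    # Enable failover clustering for the host CSV clusters
    Dism /online /Enable-Feature /FeatureName:FailoverCluster-Core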
Compute Scale Unit – Deployment Metrics Example

| Comparison Point             | Discrete Units*            | Scale Unit v1             | Scale Unit v2            |
|------------------------------|----------------------------|---------------------------|--------------------------|
| Rack space (rack units)      | 1408 U                     | 440 U                     | 200 U                    |
| Rack space (number of racks) | 32 racks                   | 11 racks                  | 5 racks                  |
| Power (kilowatts)            | 320 KW                     | 185 KW                    | 90 KW                    |
| Max heat (BTU/hr)            | 1.1 million BTU/hr         | 0.6 million BTU/hr        | 0.3 million BTU/hr       |
| Power cords/ports            | 1408 (C13)                 | 264 (C19)                 | 120 (C19)                |
| Network cables/ports         | 1408                       | 44                        | 40                       |
| Storage cables/ports         | 1408                       | 176                       | 80                       |
| Avg CPU utilization          | ~15-20%                    | ~30-40%                   | ~70-80%                  |
| RAM available                | 22.5 TB (32 GB per server) | 22.5 TB (32 GB per blade) | 46 TB (144 GB per blade) |
| Avg VMs per host**           | 6                          | 6                         | 32                       |

* Equivalent number of traditional 2U rack-mounted servers
** Based upon an average of 4 GB per VM
Compute Scale Unit - Physical Layout
Compute Scale Unit - Logical Layout Overlay
16 Node Hyper-V CSV Cluster
MSIT R2 Production Clusters
• Server virtualization: 16-node clusters
  – 100 to 400 VMs per cluster (~8 to 32 per host)
• Client virtualization (VDI): 8-node clusters
  – 250 to 300 VMs per cluster (~30-40 per host)
• Standard LUNs are configured as CSVs, hosting multiple VMs per volume
  – 500 GB in V1 and up to 13 TB in V2 designs
  – Limited single-VM CSVs or pass-through disks
• Dedicated cluster VLAN on all hosts (see the sketch below)
  – Live Migration, CSV, and cluster network traffic
  – IPsec exempt, jumbo frame capable
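A sketch for reviewing the dedicated cluster VLAN from PowerShell; in R2, CSV and cluster traffic prefer the cluster network with the lowest metric:

    Import-Module FailoverClusters

    # List cluster networks with their roles and metrics; the lowest-metric
    # network is preferred for CSV and cluster communication
    Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric -AutoSize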
Single Scale Unit View
Multiple Scale Unit View
Storage Scale Unit – History and New Design Principles
History:
• Shift from mid-range to high-end enterprise storage
  – 250 mid-range arrays reduced to 20 enterprise-class arrays
  – Lower administrative overhead and more flexibility
• Operational issues: “islands of capacity” led to migration challenges; volume growth issues
• Performance issues: shared write cache constraints; small disk pools
New design principles:
• High-end, enterprise storage
• Larger pools of array resources (disk, front-end ports, etc.)
• Enabling of more dynamic storage
• Enabling of data replication in the platform (local and remote)
• Strong reporting infrastructure to facilitate high resource usage
Storage Scale Unit – Deployment Details
Flexible capacity allotments:
• 400 disks per pool, with approximately four pools per array
• Port groups of 16 x 8 Gb FC ports
• Groups of ports are mapped to pools of disks as needed
Thin provisioning:
• R2 Hyper-V data LUNs presented as CSVs (Cluster Shared Volumes) as needed
• Common volume sizes for dedicated storage (Hyper-V pass-through disks or dedicated blades): 100 GB, 500 GB, 1000 GB, 2000 GB
  – Reduces “LUN grow” operations by 90%
• Host-side partitioning dictates actual storage consumption (see the diskpart sketch below)
  – Partition defined at the size requested; volumes take up zero space until the user writes data
  – Example: application requirement of 150 GB; storage delivered: 500 GB (potential)
  – OS boot partition is always limited to 50 GB
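A sketch of the host-side partitioning step for the 150 GB example above, as a diskpart script (run with: diskpart /s thinlun.txt); the disk number and drive letter are placeholders:

    rem Create only the 150 GB the application requested on the larger thin LUN
    select disk 2
    create partition primary size=153600
    format fs=ntfs quick
    assign letter=E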
Storage Scale Unit – Physical Layout
[Diagram: 9 x 42U racks – redundant FC switch fabrics A/B and iSCSI LAN switches A/B in a 13U section, plus an EMC VMAX system with eight bays of 240 disks each; 48 x 8 Gbps storage connections to each FC switch and 8 x 1 GbE storage connections to each IP switch]
EMC VMAX:
• 1,920 x 400 GB 10k rpm FC disks
• 96 front-end FC ports, 16 front-end iSCSI ports
• 680 TB raw capacity
• Power consumption: 48 kVA
• Heat dissipation: 156,900 BTU/hr
• Annualized energy cost: $120,676
• Sound pressure: 72 dB; weight: 20,741 lbs
Storage Scale Unit – Key Metrics
Total storage managed:
• 10 PB
• TB per admin: 950 TB
• Storage utilization: 71%
Total servers managed:
• Number of apps: ~2,500
• Total server count: ~16,000
All numbers reported monthly to management
Tiers of Capacity – Determining resource assignment and service levels
Endless options for server/storage – where do servers land? Define tiers and stick with them!

| Tier | Description     | I/O Threshold             | Price                                                     |
|------|-----------------|---------------------------|-----------------------------------------------------------|
| 1    | Dedicated space | > 5000 IOPS, > 80 MB/s    | Straight pass-through + consulting cost (per engagement)  |
| 2    | Shared R1       | 500-5000 IOPS, 25-80 MB/s | 3x shared SU pricing                                      |
| 3    | Shared R6       | < 500 IOPS, < 25 MB/s     | Standard SU pricing                                       |
Capacity Management
Multiple dimensions of capacity management:
• Pools of shared and highly aggregated resources
  – Shared front-end ports and disk groups
  – Shared arrays and fabric switching infrastructures
• When do we stop adding servers, and what are the safe thresholds?

| Element Name                | Provisioning Lock (no new presentations) | Grow Lock (no volume growth) | Mitigation (start migrating servers and applications) |
|-----------------------------|------------------------------------------|------------------------------|-------------------------------------------------------|
| Pool oversubscription       | 150%                                     | 200%                         | N/A                                                   |
| Pool written capacity       | 70%                                      | 80%                          | 90%                                                   |
| Avg. performance thresholds | 60%                                      | 70%                          | 80%                                                   |
Scale Unit – Future Storage Opportunities
Storage impacts from Windows and application changes:
• SQL compression – 30-80% in initial test cases (YMMV)
• OS and SQL encryption – data security controlled at the OS and application layers
Scale Unit v3+:
• Network – fully converged networking (FCoE and iSCSI); 40Gb/100Gb Ethernet
• Compute – dynamic server identity provisioning (boot from SAN or network); further uncoupling of the OS instance from the hardware
• Storage – Small Form Factor (SFF) drives and enterprise-class SSDs; end-to-end encryption; de-duplication
Agenda
• Windows Server 2008 R2 Scalable Platform Enhancements
• iSCSI Breakthrough Performance Results
• Storage: Hyper-V Options and Best Practices
• A Real World Deployment: Microsoft IT’s Server Environment
  – MSIT’s Hyper-V Deployment
  – MSIT’s “Scale Unit” Virtualization Infrastructure
• Questions and Answers
Related Content
Breakout sessions:
• VIR207 – Advanced Storage Infrastructure Best Practices to Enable Ultimate Hyper-V Scalability
• VIR303 – Disaster Recovery by Stretching Hyper-V Clusters Across Sites
• VIR310 – Networking and Windows Server 2008 R2: Deployment Considerations
• VIR312 – Realizing a Dynamic Datacenter Environment with Windows Server 2008 R2 Hyper-V and Partner Solutions
• VIR319 – Dynamic Infrastructure Toolkit for Microsoft System Center Deployment: Architecture and Scenario Walkthrough
• WSV313 – Failover Clustering Deployment Success
• WSV315 – Guest vs. Host Clustering: What, When, and Why
Hands-on labs:
• VIR06-HOL – Implementing High Availability and Live Migration with Windows Server 2008 R2 Hyper-V
Product demo stations:
• TLC-56 – Windows Server 2008 R2 Failover Clustering (TLC Red)
Resources
www.microsoft.com/teched
Sessions On-Demand & Community Microsoft Certification & Training Resources
Resources for IT Professionals Resources for Developers
www.microsoft.com/learning
http://microsoft.com/technet http://microsoft.com/msdn
Learning
Complete an evaluation on CommNet and enter to win!
Sign up for Tech·Ed 2011 and save $500 starting June 8 – June 31st
http://northamerica.msteched.com/registration
You can also register at the North America 2011 kiosk located at registration
Join us in Atlanta next year!
© 2010 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to
be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
JUNE 7-10, 2010 | NEW ORLEANS, LA