WHITE PAPER

QLogic NetSlice™ Technology Addresses I/O Virtualization Needs

Flexible, Multi-Protocol Support Over 10GbE

Maximizes Cost-Efficiency and Utilization

Introduction

Server virtualization based on virtual machine (VM) software is becoming a popular solution for consolidation of data center servers. With VM software, a single physical machine can support a number of guest operating systems (OSes), each of which runs on its own complete virtual instance of the underlying physical machine, as shown in Figure 1. The guest OSes can be instances of a single version of one OS, different releases of the same OS, or completely different OSes (for example, Linux®, Windows®, Mac OS X®, or Solaris®).

Figure 1. Simplified View of Virtual Machine Technology

A thin software layer called a virtual machine monitor (VMM) or hypervisor creates and controls the virtual machines and other virtual subsystems. The VMM also takes complete control of the physical machine and provides resource guarantees for CPU, memory, storage space, and I/O bandwidth for each guest OS. The VMM can also provide a management interface that allows server resources to be dynamically allocated to match temporal changes in user demand for different applications. VMM software is available from independent software vendors (ISVs) such as VMware®, Citrix™ (XenSource), and Virtual Iron® (now part of Oracle®). Microsoft®, a late entrant to this market, has been gaining acceptance for its Hyper-V VMM because it offers the convenience of being integrated into its server OSes, starting with Windows Server® 2008.

Virtualization Benefits

There are numerous benefits that can be derived from server virtualization:

• Server Consolidation. A single physical server can easily support 4 to 16 VMs, allowing numerous applications that normally require dedicated servers to share a single physical server. This configuration allows the number of servers in the data center to be reduced while increasing average utilization of physical servers from as low as 5–15 percent to as high as 50–60 percent.

• Flexible Server/Application Provisioning. Virtual machines can be brought up on a physical machine almost instantly because there is no context or personality tying a particular application to specific physical resources. This allows a VM to react quickly to changing workloads by requesting and being allocated additional resources (CPU, memory, etc.) to dynamically respond to peak workloads.

• Reliability. The lack of ties to particular physical machines also allows a running VM (together with its application) to be migrated over the network to a different physical server connected to the same storage area network (SAN) without service interruption. This enables workload management and optimization across the virtual infrastructure as well as zero-downtime maintenance. VMs also help streamline provisioning of new applications and backup/restore operations.

• Lower Total Cost of Ownership (TCO). Server virtualization allows significant savings in both CAPEX (costs of server hardware, SAN host bus adapters, switch ports, and Ethernet adapters) and OPEX (server management labor expense, plus facility costs such as power, cooling, and floor space).

These benefits have led to efforts to further optimize the performance and effectiveness of server virtualization by adding hardware support for virtualization in both the CPU and I/O subsystems. The remainder of this white paper focuses on the QLogic architectural approach to hardware-based I/O virtualization and the role this can play in a virtualized, agile data center.

A Case for Intelligent Ethernet Adapters

Figure 2 shows how networking I/O is typically virtualized by VMM software. The VMs within a virtualized server share a conventional physical Ethernet adapter to connect to a data center LAN. The VMM provides each VM with a virtual NIC (vNIC) instance complete with MAC and IP addresses and creates a virtual switched network to provide the connectivity between the vNICs and the physical Ethernet adapter. The virtual switched network performs the I/O transfers using shared memory and asynchronous buffer descriptors, similar to a shared memory Ethernet switch. With this software-based I/O virtualization, the VMM is directly in the data path between each VM and the shared NIC.

Figure 2. Current Software-based Virtual NIC
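The shared-memory, buffer-descriptor mechanism described above can be sketched in a few lines of C. The ring layout, names (vnic_ring, vnic_tx, vswitch_poll), and sizes below are illustrative assumptions, not an actual VMM or QLogic interface; a production implementation would add memory barriers, interrupt signaling, and receive-side rings.

```c
/* Minimal sketch (illustrative names, not a real VMM API) of the shared-memory
 * descriptor ring a software vNIC might use to hand frames to the hypervisor's
 * virtual switch. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 8          /* descriptors per vNIC transmit ring */
#define FRAME_MAX 1514       /* Ethernet payload + header */

struct desc {                /* one buffer descriptor in shared memory */
    uint32_t len;            /* valid bytes in buf */
    uint8_t  buf[FRAME_MAX];
};

struct vnic_ring {           /* shared between guest vNIC driver and VMM */
    volatile uint32_t prod;  /* written by the guest (producer) */
    volatile uint32_t cons;  /* written by the VMM's virtual switch (consumer) */
    struct desc ring[RING_SIZE];
};

/* Guest side: post a frame; returns 0 on success, -1 if the ring is full. */
static int vnic_tx(struct vnic_ring *r, const void *frame, uint32_t len)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;                          /* ring full, back-pressure */
    struct desc *d = &r->ring[r->prod % RING_SIZE];
    d->len = len;
    memcpy(d->buf, frame, len);
    r->prod++;                              /* real code would add a memory barrier */
    return 0;
}

/* VMM side: the virtual switch drains the ring and forwards each frame. */
static void vswitch_poll(struct vnic_ring *r)
{
    while (r->cons != r->prod) {
        struct desc *d = &r->ring[r->cons % RING_SIZE];
        printf("vswitch: forwarding %u-byte frame\n", (unsigned)d->len);
        r->cons++;
    }
}

int main(void)
{
    static struct vnic_ring ring;           /* stands in for shared memory */
    uint8_t frame[64] = {0};
    vnic_tx(&ring, frame, sizeof(frame));
    vswitch_poll(&ring);
    return 0;
}
```

Every descriptor handled this way costs host CPU cycles in the VMM, which is the overhead the following challenges describe.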

Several challenges arise from software-based I/O virtualization:

• Server CPU utilization increases in proportion to the number of VMs and their aggregate bandwidth. This is because the CPU has to support the VMM-based virtual switched networking for multiple VMs, the network I/O through the NIC, the computational demands of VM applications, and the general overhead of virtualization. With a software-based approach, the overhead of virtualization is increased because the VMM must emulate a NIC for each VM so that each VM appears to have its own dedicated NIC, with a fair share of the network bandwidth provided by the NIC. There may also be extra copies of the data at the VM-VMM interface.

• With traditional NICs (where the host CPU performs TCP/IP processing and control of data transfers), the aggregate network I/O of multiple VMs can easily overwhelm the server CPU.

• Traditional NICs do not have any hardware support for I/O virtualization. This forces the virtualization infrastructure to implement software-based virtualization, adding to the overhead.

As a result, software-based I/O virtualization, especially with traditional NICs, poses a serious constraint on the number of VMs per server and thus hinders consolidation strategies based on virtualization of high-performance servers, such as modern multi-socket, multi-core server platforms.

The solution to these challenges is a hardware-based Intelligent Ethernet adapter that can offload I/O virtualization processing and upper layer protocol processing from the host CPU or CPUs. The Intelligent Ethernet adapter minimizes VMM-related host CPU utilization by removing the hypervisor from the I/O data path, and greatly reduces TCP/IP-related host CPU utilization by offloading TCP/IP and upper layer protocol processing. Intelligent Ethernet adapters with the following characteristics will play a key role in driving the performance and scalability of virtualized servers to meet the demands of next-generation data centers:

• 10GbE Support. A single 10GbE physical connection is more flexible and cost-effective than multiple 1GbE connections and has the aggregate bandwidth needed for multiple VMs. By matching I/O bandwidth to aggregate VM network requirements, rather than to the number of servers, I/O cost-efficiency can be improved due to fewer Ethernet adapters, cables, and switch ports. Scalability, in terms of the number of VMs, is also improved.

• TCP/IP Offload. TCP/IP offload delivers the full potential bandwidth and server scalability afforded by 10GbE because it enables end-to-end wire-speed throughput with extremely low CPU utilization. The CPU savings allow the server to scale in terms of both performance per VM and number of VMs per server, further driving the economic momentum for server consolidation.

• Multi-Protocol Offload Support. The combination of Intelligent Ethernet adapters and 10GbE will drive Ethernet as the "unified fabric" in the data center. For example, 10GbE Intelligent Ethernet adapters allow the data center's backbone to serve as the switch fabric for both the general purpose LAN and high-performance storage area network (SAN) via iSCSI or Fibre Channel over Ethernet (FCoE). As the industry moves to a 10GbE unified fabric, applications will require offload for multiple protocols simultaneously. These include transport protocols such as TCP/IP, RDMA/iWARP, iSCSI, and iSCSI Extensions for RDMA (iSER).

• Hardware Support for I/O Virtualization. Efficient virtualized NIC solutions will:

– Support removal of VMM software from the I/O data path, while preserving VMM control functions

– Support hardware virtualization features being added to Intel® and AMD® CPUs

– Leverage emerging virtualization features of PCI Express® as ratified by the PCI-SIG® I/O Virtualization Workgroup.

Networking Virtualized Servers with Hardware-Based I/O Virtualization

One way to circumvent some of the limitations of software-based I/O virtualization is through direct assignment of physical NICs to VMs. This can be accomplished by installing a NIC driver in the VM partition and allowing the VM to pass data directly to the NIC without involving the VMM in the data path. Bypassing the VMM also requires a DMA "remapping" function in hardware. DMA remapping intercepts the I/O device's attempts to access system memory and uses I/O page tables to (re)map the access to the portion of system memory that belongs to the target VM. The VMM retains control of the data flow by ensuring that the DMA requests of the VMs are isolated from one another.
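The remapping step just described can be illustrated with a minimal C sketch, assuming a flat per-VM I/O page table (real IOMMUs walk multi-level tables in hardware); the names io_pagetable and dma_remap are hypothetical and not part of any chipset or adapter API.

```c
/* Minimal sketch of DMA remapping: the device presents a bus (I/O virtual)
 * address, and per-VM I/O page tables translate it to host-physical memory,
 * rejecting anything outside the VM's allocation. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define IO_PAGES   16                 /* pages mapped per VM in this toy example */

struct io_pagetable {
    uint64_t host_page[IO_PAGES];     /* host-physical page frame per I/O page */
    int      present[IO_PAGES];
};

/* Translate a device DMA address issued on behalf of one VM.
 * Returns 0 and writes *host_addr on success, -1 if the access is not mapped. */
static int dma_remap(const struct io_pagetable *pt, uint64_t bus_addr,
                     uint64_t *host_addr)
{
    uint64_t page = bus_addr >> PAGE_SHIFT;
    if (page >= IO_PAGES || !pt->present[page])
        return -1;                    /* fault: VM isolation preserved */
    *host_addr = (pt->host_page[page] << PAGE_SHIFT) |
                 (bus_addr & ((1u << PAGE_SHIFT) - 1));
    return 0;
}

int main(void)
{
    struct io_pagetable vm1 = {0};
    vm1.host_page[3] = 0x12345;       /* I/O page 3 maps to host frame 0x12345 */
    vm1.present[3] = 1;

    uint64_t host;
    if (dma_remap(&vm1, (3u << PAGE_SHIFT) + 0x40, &host) == 0)
        printf("remapped to host address 0x%llx\n", (unsigned long long)host);
    if (dma_remap(&vm1, (5u << PAGE_SHIFT), &host) != 0)
        printf("unmapped access blocked\n");
    return 0;
}
```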

While direct assignment would reduce the overhead of virtual networking in the VMM software layer, the requirement for a dedicated physical Ethernet adapter for each VM is undesirable because it would detract from the flexibility of server virtualization (Figure 3). For example, the ability to move a VM from one physical machine to another would be adversely affected.

Figure 3. Direct Assignment of NICs to VMs

The shortcomings of direct assignment are overcome with a new category of Intelligent Ethernet adapters that support multiple virtual NICs over a shared physical architecture, as shown in Figure 4.

Figure 4. Near Future — Virtual NIC with Hardware-Based DMA Remapping

This new type of adapter provides hardware support for I/O virtualization, allowing a dedicated virtual NIC to be assigned to each VM. One prerequisite for this is a DMA remapping function. Some platforms have started shipping with DMA remapping (IOMMU) support in the chipset. However, this addition implies additional latency in the data path. To reduce or eliminate the latency, the Intelligent Ethernet adapter implements a cache of the IOMMU translations and is able to bypass the IOMMU remapping in the chipset for most of the frequently used mappings. Another key feature of this adapter is the ability to natively support generic interfaces as defined by the VMM. This feature ensures that there are no hardware-specific components in the VM. With this implementation, advanced virtualization features such as VM migration are preserved.
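A minimal sketch of the translation-caching idea follows, assuming a small direct-mapped cache in front of a slower chipset IOMMU walk; the organization, sizes, and names (iotlb, translate) are illustrative, not the adapter's actual IOTLB design.

```c
/* Minimal sketch of an adapter-side IOTLB: a small cache of recent IOMMU
 * translations so most DMAs avoid a chipset page-table walk. Illustrative. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT    12
#define IOTLB_ENTRIES 64

struct iotlb_entry { uint64_t io_page, host_page; int valid; };
static struct iotlb_entry iotlb[IOTLB_ENTRIES];

/* Stand-in for the (slower) chipset IOMMU page-table walk. */
static uint64_t chipset_iommu_walk(uint64_t io_page)
{
    return io_page + 0x100000;       /* dummy translation for the example */
}

static uint64_t translate(uint64_t bus_addr)
{
    uint64_t io_page = bus_addr >> PAGE_SHIFT;
    struct iotlb_entry *e = &iotlb[io_page % IOTLB_ENTRIES];

    if (!e->valid || e->io_page != io_page) {      /* miss: walk and fill */
        e->io_page = io_page;
        e->host_page = chipset_iommu_walk(io_page);
        e->valid = 1;
        printf("IOTLB miss for page %llu\n", (unsigned long long)io_page);
    }
    return (e->host_page << PAGE_SHIFT) | (bus_addr & ((1u << PAGE_SHIFT) - 1));
}

int main(void)
{
    translate(0x4000);   /* miss: fills the cache */
    translate(0x4080);   /* hit: same page, no chipset walk needed */
    return 0;
}
```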

The virtual NIC must also perform the virtual networking functions previously performed by the VMM in software-based I/O virtualization (Figure 2). These functions include:

• Ensuring that the virtual NICs are fully independent, each with its own parameter settings, such as MTU size, TCP segmentation offload (TSO) on/off, and interrupt parameters.

• Implementing policies for sharing bandwidth and other NIC resources across the VMs.

• Virtual switching and traffic steering based on Layer 2/Layer 3 header information (a minimal steering sketch follows this list). Virtual switching supports IP-based VM-to-VM communication, which is required for a virtual DMZ where firewall, IDS, and web servers run on separate VMs within a single physical server.

• Providing flexibility so that the VMM can control and configure the virtual NIC interfaces.

• Providing each virtual NIC with its own virtual function to interface to the host.
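Two of the functions listed above, per-vNIC parameter isolation and Layer 2 steering by destination MAC, can be sketched as follows. The structures and the linear lookup are illustrative assumptions; a hardware implementation would use lookup tables or CAMs and would also consider VLAN and Layer 3 fields.

```c
/* Minimal sketch: independent per-vNIC parameters plus Layer 2 steering of
 * received frames by destination MAC. Names are illustrative, not NetSlice. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_VNICS 4

struct vnic {
    uint8_t  mac[6];       /* MAC address presented to the VM */
    uint32_t mtu;          /* independent per-vNIC MTU */
    int      tso_enabled;  /* independent TSO on/off setting */
};

static struct vnic vnics[MAX_VNICS] = {
    { {0x02,0,0,0,0,0x01}, 1500, 1 },
    { {0x02,0,0,0,0,0x02}, 9000, 0 },   /* jumbo frames, TSO off */
};

/* Steer a received frame to the vNIC owning the destination MAC.
 * Returns the vNIC index, or -1 for broadcast/unknown (flood or drop). */
static int steer_l2(const uint8_t *frame)
{
    for (int i = 0; i < MAX_VNICS; i++)
        if (memcmp(frame, vnics[i].mac, 6) == 0)   /* dest MAC is bytes 0-5 */
            return i;
    return -1;
}

int main(void)
{
    uint8_t frame[64] = {0x02,0,0,0,0,0x02};        /* addressed to vNIC 1 */
    int idx = steer_l2(frame);
    if (idx >= 0)
        printf("deliver to vNIC %d (MTU %u)\n", idx, vnics[idx].mtu);
    return 0;
}
```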

I/O Virtualization — A Look Ahead

As the market acceptance of server virtualization gains momentum, vendors of CPUs, operating systems, VMMs, and intelligent I/O devices are working to improve hardware support for virtualization. This means that server virtualization technology is expected to evolve rapidly over the next several years, with a number of developments contributing to improved computational performance, better I/O performance, and enhanced flexibility in system configuration.

Processor Support for I/O Virtualization

Both of the major vendors of x86 architecture CPUs (Intel and AMD) introduced processors in the latter half of 2006 that offer hardware assistance for CPU and memory virtualization. These enhancements (Intel's Virtualization Technology and AMD's Pacifica virtualization) are similar and share the following major features:

• New privileged instructions to speed context switches between the VMM and VMs, supporting multiple logical CPUs

• VM interrupt handling assistance

• Improved support for I/O virtualization via an integrated I/O memory management unit (IOMMU), including DMA remapping

These enhancements improve VMM robustness and security, guest operating system performance, and I/O performance.

With these enhancements (and corresponding enhancements in the VMM software), the virtualization architecture of both VMMs and I/O devices changes. Based on these improvements, the architecture of virtual I/O in the next generation of virtualized servers will evolve to resemble the one shown in Figure 5. At this stage in the evolution, the VMM will be able to manage the assignment of virtual I/O devices to the VMs. In addition, processor hardware will support one or more logical CPUs dedicated to each VM. The processor will use the IOMMU to ensure a discrete I/O memory space for each VM's I/O.

Figure 5. Next Generation Hardware-based Virtual NIC (with CPU and Chipset Support for Virtualization)

With the migration of DMA remapping to the CPU chipset, the virtual NIC will still maintain a DMA I/O translation look-aside buffer (IOTLB) to serve as a cache for recent address translations and to allow pre-fetching of translated addresses. With these enhancements, the virtual NIC will be able to support hardware-based network transfers directly to application memory locations, bypassing both the VMM and guest OS and thereby further reducing host CPU overhead.

PCI Support for I/O Virtualization

The PCI-SIG I/O Virtualization (IOV) Workgroup has released specifications that allow virtualized systems based on PCI Express (PCIe®) to leverage shared IOV devices. The IOV Workgroup focused on the following three areas to support interoperable solutions:

• DMA remapping

• Single Root (SR) I/O virtualization and sharing

• Multi-Root (MR) I/O virtualization and sharing


DMA Remapping

The ATS 1.0 specification enables PCIe I/O endpoints to interoperate with platform-specific DMA remapping functions, which are termed Address Translation Services (ATS) by the PCI-SIG. The specification is compatible with DMA remapping performed either within the I/O device (Figure 4) or the CPU (Figure 5).

Single Root (SR) I/O Virtualization and Sharing

The SR-IOV 1.0 specification focuses on allowing multiple VMs (or "system images") in a single root complex (host CPU chipset, including memory or shared memory) to share a PCIe IOV endpoint without sacrificing performance. The specification covers configuration, resource allocation, error/event/interrupt handling, and related issues. The basic SR topology is shown in Figure 6.

Figure 6. Single Root PCIe IOV Topology

Multi-Root (MR) I/O Virtualization and Sharing

The MR-IOV specification focuses on allowing multiple root complexes (such as multiple processor blades in a blade server) to share PCIe IOV endpoints. The basic MR topology is shown in Figure 7. To support the multi-root topology, the PCIe switches and IOV devices need to be MR-aware (support MR capabilities). MR awareness within the adapter supports enhanced PCI Express routing and separate register sets for storing a number of independent PCI hierarchies associated with the various root complexes.

Figure 7. Multi-Root PCIe IOV Topology

QLogic NetSlice™ Architecture for I/O Virtualization

As described in this white paper, hardware-based I/O virtualization requires a new type of Intelligent Ethernet adapter, which is itself a virtualized device optimized for sharing I/O among multiple VMs. The QLogic NetSlice architecture has been engineered to satisfy the following goals:

• Maximum Cost-Effectiveness. High-performance virtualized servers will require 10GbE physical interfaces for multiple data center interconnects, including general purpose LAN connectivity, cluster interconnects, and storage interconnects. The QLogic NetSlice architecture, as implemented in its Intelligent Ethernet adapter, supports multiple 10GbE interfaces on a single PCIe card.

• Maximum Flexibility. End-user requirements for protocol support by Ethernet adapters can vary significantly. In addition, upper level protocols (iWARP/RDMA, iSCSI, and FCoE) are continuing to evolve. Furthermore, the system architecture for virtualized servers is also evolving, as described in the previous sections. The only way to protect investments in data center I/O virtualization is with an Intelligent Ethernet adapter based on a programmable architecture rather than "hard-wired" ASIC implementations. QLogic's programmable NetSlice architecture supports the evolution of hardware-based I/O virtualization from its inception (Figure 4) through several stages of evolution, spanning the development of I/O virtualization support in CPUs and PCI Express (Figures 5 through 7), to its final, fully offloaded form (Figure 9).

The NetSlice architecture creates a platform that exports a number of independent virtual interfaces, with each virtual interface utilizing a share of the existing (hardware) resources. Each virtual interface is serviced in a fair manner. Optionally, each interface can be assigned a priority that determines how large a share of the underlying resources is allocated to that interface. Strong isolation is built around each interface to ensure that one interface does not adversely affect another. A minimal weighted-sharing sketch follows Figure 8.

Figure 8. Virtualized Multi-channel NIC
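The priority-weighted sharing just described is modeled below as a simple weighted round-robin scheduler; the scheme and field names are illustrative assumptions rather than the actual NetSlice arbitration logic.

```c
/* Minimal sketch of priority-weighted sharing across virtual interfaces:
 * each interface receives service units in proportion to its weight. */
#include <stdio.h>

#define NUM_VIFS 3

struct vif {
    int weight;    /* share of the underlying resource */
    int credit;    /* remaining service units this round */
    int backlog;   /* pending work (e.g., queued frames) */
};

static struct vif vifs[NUM_VIFS] = {
    { .weight = 4, .backlog = 10 },   /* high-priority interface */
    { .weight = 2, .backlog = 10 },
    { .weight = 1, .backlog = 10 },
};

static void run_round(void)
{
    for (int i = 0; i < NUM_VIFS; i++)
        vifs[i].credit = vifs[i].weight;            /* refill per weight */

    for (int i = 0; i < NUM_VIFS; i++)
        while (vifs[i].credit > 0 && vifs[i].backlog > 0) {
            vifs[i].credit--;
            vifs[i].backlog--;                      /* service one unit */
            printf("serviced virtual interface %d\n", i);
        }
}

int main(void)
{
    run_round();   /* interface 0 gets 4 units, interface 1 gets 2, interface 2 gets 1 */
    return 0;
}
```

An interface with no backlog simply forfeits its credits for that round, so idle interfaces cannot starve busy ones, which is the isolation property described above.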


A major benefit of the virtual channel abstraction is its synergy with the programmability of the underlying physical resources, maximizing the architectural flexibility to accommodate technology evolution.

Key features of the QLogic NetSlice architecture and Intelligent Ethernet adapter implementation include:

• The Intelligent Ethernet adapter currently operates in PCIe base mode, which means that the device appears as a standard PCIe device to a single root complex system. As the platform evolves, new firmware loads will enable appropriate functionality to keep pace with the new features.

• Virtual NICs are managed on a per-VM basis for up to 1,024 VMs. Each VM can be assigned a priority for access to I/O resources. For example, traffic metering and rate control per VM ensures fairness of access across VMs and implementation of service level agreements (a token-bucket sketch of such metering follows this feature list).

• Multiple DMA engines ensure optimum use of the PCIe bus bandwidth, irrespective of the type of traffic generated or received by the individual VMs.

• PCIe interrupts issued by the extended message signaled interrupt (MSI-X) scheme are DMA remapped to be received only by the target VM. QLogic currently supports up to 1,024 VMs with MSI-X.

• The QLogic Intelligent Ethernet adapter can be reprogrammed to perform the caching of DMA requests via its IOTLB, which reduces latency in the data path. Support for an IOTLB will become necessary as DMA remapping becomes widely supported by server CPUs and chipsets.

• Interrupt moderation allows each virtual NIC to have its interrupt behavior adjusted to minimize disruption of traffic flows. With support for independent interrupt moderation, the Intelligent Ethernet adapter can implement a different moderation scheme for each virtual channel, optimized for the type of traffic flow (sensitivity to latency, jitter, or throughput).

• The Intelligent Ethernet adapter supports multiple unicast and multicast MAC addresses that can be mapped to the virtual NICs of a given VM, furthering the appearance of a given VM having a dedicated NIC.

• Virtual NICs can be configured with multiple transmit and receive queues dedicated to each VM. The logical separation of I/O queues minimizes contention among VMs for I/O bandwidth, while multiple transmit and receive queues support advanced functions such as traffic steering, traffic classification, and priority processing of different traffic classes (in accordance with IEEE 802.1p or DiffServ markings). The separation of queues on a per-VM basis also avoids data commingling, increasing the security and robustness of the virtualized I/O path.

• The Intelligent Ethernet adapter hardware supports virtual switching and traffic steering based on Layer 2 and Layer 3 packet header information, including MAC address, 802.1p priority, VLAN ID, and IP address. This support allows flexible, QoS-aware switching within and among VLANs for VM-to-VM IP communications on the same physical platform. Consequently, a VM can communicate with another VM on the same physical machine over a low-latency path because local traffic never leaves the physical system.
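The per-VM traffic metering and rate control mentioned in the feature list above can be sketched with a token bucket per virtual NIC. The parameters and names are illustrative assumptions, not the adapter's actual metering implementation.

```c
/* Minimal sketch of per-VM traffic metering: each virtual NIC draws tokens
 * from its own bucket, which refills at the VM's committed rate. */
#include <stdint.h>
#include <stdio.h>

struct meter {
    uint64_t rate_bps;     /* committed rate for this VM */
    uint64_t burst_bytes;  /* bucket depth */
    uint64_t tokens;       /* current bucket fill, in bytes */
    uint64_t last_us;      /* last refill timestamp, in microseconds */
};

/* Returns 1 if the frame may be sent now, 0 if it exceeds the VM's allocation. */
static int meter_allow(struct meter *m, uint64_t now_us, uint32_t frame_bytes)
{
    uint64_t elapsed = now_us - m->last_us;
    m->tokens += (m->rate_bps / 8) * elapsed / 1000000;   /* refill */
    if (m->tokens > m->burst_bytes)
        m->tokens = m->burst_bytes;
    m->last_us = now_us;

    if (m->tokens < frame_bytes)
        return 0;                                         /* over the SLA, defer */
    m->tokens -= frame_bytes;
    return 1;
}

int main(void)
{
    struct meter vm0 = { .rate_bps = 1000000000ULL,       /* 1 Gb/s slice of a 10GbE port */
                         .burst_bytes = 64 * 1024,
                         .tokens = 64 * 1024 };
    printf("frame %s\n", meter_allow(&vm0, 1000, 1514) ? "sent" : "deferred");
    return 0;
}
```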

As a result of these features, the CPU utilization on the host is significantly reduced. This enables more VMs to run on the same physical machine, thus magnifying the virtualization benefits to the end user.

The End Game

The ultimate expression of a scalable, intelligent virtual NIC is one in which:

• Virtualization overhead is offloaded.

• Multiple (up to 1,024) virtual NICs can be presented and managed independently.

• Network protocol processing overhead is completely offloaded.

• Multiple network protocols are supported simultaneously for each VM.

QLogic’s Intelligent Ethernet adapter is capable of doing all of these things today using a unique programmable solution that supports full offload of transport protocols, such as TCP/IP and iWarP (rDMa), in standalone as well as virtualized environments. The ultimate solution is shown in Figure 9.

Figure 9. End Game — Protocol and Virtualization Offload


Figure 9 shows how the layers of the networking stack within each guest OS interface to vNICs within the Intelligent Ethernet adapter. The I/O path for socket connections bypasses the guest OS protocol stack via a direct connection to the offload engine in the vNIC. Virtualization overhead in the VMM is also bypassed to maximize performance and scalability. At the same time, the traditional I/O path through the guest OS stack, for connections that cannot be offloaded, is still present. The vSwitch switching function, also shown as part of the vNIC protocol stack, is accessed by the protocol offload engine or directly by the VM guest OS stack.
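The per-connection path selection just described can be sketched as a simple dispatch decision. The offload criteria and names below are illustrative assumptions, since the actual offload policy is determined by the adapter firmware and the VMM.

```c
/* Minimal sketch of choosing the offloaded path versus the traditional
 * guest OS stack for each connection. Illustrative assumptions only. */
#include <stdio.h>

enum path { PATH_OFFLOAD, PATH_HOST_STACK };

struct conn {
    int is_tcp;           /* only TCP connections are offloadable in this toy example */
    int needs_ip_options; /* example of a feature the offload engine may not handle */
};

static int offload_slots = 2;   /* remaining offload engine capacity (example value) */

static enum path select_path(const struct conn *c)
{
    if (c->is_tcp && !c->needs_ip_options && offload_slots > 0) {
        offload_slots--;
        return PATH_OFFLOAD;           /* bypasses the guest stack and the VMM */
    }
    return PATH_HOST_STACK;            /* traditional guest OS path */
}

int main(void)
{
    struct conn a = { .is_tcp = 1 }, b = { .is_tcp = 0 };
    printf("conn a: %s\n", select_path(&a) == PATH_OFFLOAD ? "offloaded" : "host stack");
    printf("conn b: %s\n", select_path(&b) == PATH_OFFLOAD ? "offloaded" : "host stack");
    return 0;
}
```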

As described in this white paper, offloading protocol processing to the Intelligent Ethernet adapter increases the number of VMs that can be instantiated on the physical server because of the substantial reduction in the host CPU utilization required to process network traffic. QLogic's architecture is already capable of achieving this next level of virtualized I/O optimization.

Summary and Conclusion

Server virtualization is one of a number of virtualization technologies that promises to transform the data center by improving its ability to adapt to changing user requirements while, at the same time, dramatically reducing TCO.

However, virtualization technology continues to evolve at a fairly rapid pace to accommodate demands for higher I/O performance, better integration of VMMs with single-core and multi-core CPUs, and better integration with peripheral interconnect buses.

In this dynamic environment, data center managers need flexible solutions that can evolve along with the technology to protect current investments in virtualization and server consolidation.

The QLogic NetSlice architecture addresses these requirements by incorporating a unique degree of programmable support for virtualization and protocol offload. Programmability allows the QLogic Intelligent Ethernet adapter to be deployed today to meet current requirements for server virtualization without running the risk of obsolescence. The QLogic Intelligent Ethernet adapters deployed today can also meet tomorrow’s requirements by adapting to evolving virtualization technology and/or changing user requirements through simple firmware and driver updates.


Corporate Headquarters QLogic Corporation 26650 Aliso Viejo Parkway Aliso Viejo, CA 92656 949.389.6000 www.qlogic.com

Europe Headquarters QLogic (UK) LTD. Quatro House Lyon Way, Frimley Camberley Surrey, GU16 7ER UK +44 (0) 1276 804 670

© 2007–2009 QLogic Corporation. Specifications are subject to change without notice. All rights reserved worldwide. QLogic, the QLogic logo, and NetSlice are trademarks or registered trademarks of QLogic Corporation. Microsoft, Windows, and Windows Server are registered trademarks of Microsoft Corporation. IBM is a registered trademark of International Business Machines Corporation. Oracle is a registered trademark of Oracle Corporation. Linux is a registered trademark of Linus Torvalds. Mac OS X is a registered trademark of Apple, Inc. Solaris is a registered trademark of Sun Microsystems, Inc. VMware is a registered trademark of VMware, Inc. Citrix is a trademark of Citrix Inc. Virtual Iron is a registered trademark of Virtual Iron, Inc. PCIe, PCI Express, and PCI-SIG are registered trademarks of PCI-SIG Corporation. AMD is a registered trademark of Advanced Micro Devices, Inc. Intel is a registered trademark of Intel Corporation. All other brand and product names are trademarks or registered trademarks of their respective owners. Information supplied by QLogic Corporation is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this brochure. QLogic Corporation reserves the right, without notice, to make changes in product design or specifications.


Disclaimer

Reasonable efforts have been made to ensure the validity and accuracy of these performance tests. QLogic Corporation is not liable for any error in this published white paper or the results thereof. Variation in results may be a result of changes in configuration or in the environment. QLogic specifically disclaims any warranty, expressed or implied, relating to the test results and their accuracy, analysis, completeness, or quality.