11.) data center chassis (high density & low latency)


Where Does Networking Fit In? To gain the full benefits of cloud computing and virtualization and achieve a business-agile IT infrastructure, organizations need a reliable, high-performance data center networking infrastructure with built-in investment protection. Several technology inflection points are coming together that are fundamentally changing the way networks are architected, deployed and operated, in both the public and the private cloud. From performance and scale to virtualization support, automation and simplified orchestration, the requirements are rapidly changing and driving new approaches to building data center networks.

With Extreme Networks, IT can manage more with less. Automated intelligence and analytics for compliance, forensics, and traffic patterns translate into fewer help desk calls. Businesses can predict costs and return on investment, and increase employee productivity by securely onboarding BYOD, increasing both customer and employee satisfaction. A constant risk to the network, and ultimately to the hospital, is the unapproved applications and rogue devices that may appear on the network and either permit unauthorized access or interfere with other devices. A means to monitor all devices and applications that operate across the network is vital. Just as important are the audit and reporting capabilities necessary to report on who, what, where, when, and how patient data is accessed.

What is SDN? What software-defined networking really means has evolved dramatically and now includes automation and virtualization. Hardware is still a critical component in data center networking equipment, but the influence of switch software shouldn't be overlooked. When everyone began to get excited about SDN a few years ago, we thought of it as only one thing: the separation of network control from network data packet handling. Traditional networks had already started down this path, with the addition of controller cards to manage line cards in scalable chassis-based switches, and with various data center fabric technologies. SDN took the idea to its logical end, removing the need for the controller and the packet handlers to be on the same backplane or even from the same vendor.

Cost. Reducing costs in the data center and contributing to corporate profitability is an increasingly important trend in today's economic climate. For example, energy costs for the data center are increasing at 12% a year. Moreover, increased application requirements such as 100% availability necessitate additional hardware and services to manage storage and performance, thus raising total cost of ownership.

High Density Speed of the BDX8


Technical Requirements - Before preparing your RFP response, please read all sections of the RFP carefully. Please respond with Comply or Does Not Comply and provide a supporting narrative response if necessary. If more than one product is being utilized to provide similar functions, in each case address the requirements below for each product quoted. Any vendor not responding to each requirement for all products quoted will be considered non-responsive.

Data Center Aggregation/Core Switch - The proposed solution must provide a high-density chassis-based switch solution that meets the requirements provided below. Your response should describe how your offering would meet these requirements. Vendors must provide clear and concise responses; illustrations can be provided where appropriate. Any additional feature descriptions for your offering can be provided, if applicable.
- Must offer a chassis-based switch solution that provides eight I/O module slots, two management module slots and four fabric module slots. Must support a variety of I/O modules providing support for 1GbE, 10GbE, 40GbE and 100GbE interfaces. Please describe the recommended switching solution and the available I/O modules.
- Switch must offer switching capacity up to 20.48 Tbps. Please describe the performance levels for the recommended switching solution.
- Switch system must support high availability for the hardware, preventing single points of failure. Please describe the high availability features.
- It is preferred that the 10 Gigabit Ethernet modules also be able to accept standard Gigabit SFP transceivers. Please describe the capability of your switch.
- Must support N+1 redundant power supplies.
- Must support N+1 redundant fan trays.
- Must support a modular operating system that is common across the entire switching profile. Please describe the OS and advantages.


Low-Latency 100G Performance

People

Performance & Capacity / Speeds with Features / Latency / Availability / Extreme Performance

XoS

Why?

How?


One of the key themes of Extreme Networks' solutions has always been performance, and for Extreme, performance means many things. We have a long history of building the highest-performance Ethernet switches in the industry. The eternal network question is: how do you use this technology in building your network? There was a simple principle to follow: "Switch where you can, route where you have to." With a Layer 3 switch you don't compromise performance when routing is required. If you have a small network and all of your devices will be on the same IP subnet, then use a switch. The routing can be accomplished by using a Layer 3 switch, which has routing in the fast path.
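
The snippet below is a toy illustration of that principle (not vendor code): traffic between hosts on the same subnet stays at Layer 2, and only traffic between subnets is handed to the Layer 3 (routed) fast path. The addresses and subnet are invented for the example.

    import ipaddress

    def forwarding_decision(src_ip: str, dst_ip: str, subnet: str) -> str:
        # "Switch where you can, route where you have to."
        net = ipaddress.ip_network(subnet)
        src = ipaddress.ip_address(src_ip)
        dst = ipaddress.ip_address(dst_ip)
        if src in net and dst in net:
            return "L2: switch within the subnet"
        return "L3: route in the switch fast path"

    print(forwarding_decision("10.0.1.10", "10.0.1.20", "10.0.1.0/24"))  # L2
    print(forwarding_decision("10.0.1.10", "10.0.2.20", "10.0.1.0/24"))  # L3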

What will we possibly do with all the new bandwidth? That sounds like a legitimate question. If immediate needs were technology's guiding lights, we would all still be in the Stone Age. Sometimes necessity is indeed the mother of invention, but inventions can create newer necessities and propel needs to newer heights. Metcalfe's dream for Ethernet was driven by the belief that if you build it, they will come. And that includes the needs themselves. Metcalfe said last year he wants to "help shape a road map" to Terabit Ethernet, because "we're going to get there anyway." As we move further into the paradigm of cloud computing, where computing power and storage will gradually move to the center of the network, it becomes necessary that the network be fast enough to meet the needs of this new, highly distributed model. The aspiration of "the network being the computer" can only become real if the network is as fast as the computer, if not faster. The possibilities are endless for applications that Terabit Ethernet would influence, with one obvious example being high-definition Web/video conferencing tools from the desktop.

The Next Generation Data Center

The Blue Waters supercomputer started being built at the National Center for Supercomputing Applications (NCSA) back in January. When completed, it will be the most powerful supercomputer in the world, able to achieve sustained performance above 1 petaflop. Getting to that point requires that a lot of hardware be installed, though. The good news for those wanting to take advantage of all that performance is that Blue Waters doesn't need to be fully installed before it goes online. In fact, Cray has managed to bring the supercomputer online at just 15 percent of its computational power. That first 15 percent is made up of 48 Cray XE6 cabinets and is called the Blue Waters Early Science System. Even at this early stage, the 48 cabinets offer up the most computational power the National Science Foundation has to offer. The ability to use Blue Waters comes down to whether your project has been awarded Petascale Computing Resource Allocations (PRAC). So far, 24+ projects have received PRACs, but only 6 of those are allowed to start using Blue Waters in its current state. Those 6 projects use Blue Waters for the following purposes:

- Modeling high-temperature plasmas to help understand what impact solar flares and solar wind have on our atmosphere.
- Simulating the formation of very early galaxies that appeared soon after the Big Bang and are classed as the Milky Way's ancestors. The volume of the simulation is greater than 200 megaparsecs, whereas previous projects only managed 1 megaparsec.
- Simulating the protein capsid of an HIV-1 genome using 12.5 million atoms to better understand how infection occurs.
- A lattice quantum chromodynamics study into exchange particles called gluons and the charmonium spectrum.
- High-resolution adaptive mesh refinement simulation of a Type Ia supernova.
- High-resolution time-slice simulation of the end of the 20th and 21st centuries to better understand extreme environmental events such as tropical cyclones.

As you can see, even at just 15% of its potential, the Blue Waters system is offering the opportunity to run simulations at a scale and resolution not possible before. The two big drivers for 10GbE adoption are virtualization and big data. If enterprises are to leverage the capabilities of more powerful servers, connectivity must also be improved to avoid bottlenecks. Server virtualization is driving 10GbE adoption as the standard network I/O interface in enterprises, and research shows the uptick in 10GbE-capable hardware sales will continue as network infrastructures are refreshed.


Building a better Switch


Extreme's Summit series switches can not only stack, they can stack across models. When a new model is released, customers can add it to their current/older stack to leverage new features while still retaining older investments. The Summit X460-G2 also comes with an optional 40 Gigabit Ethernet uplink module, so the option for 40 GbE uplinks is available in the future if not needed immediately. Cisco's Catalyst stacks require that all models match within a stack. This means that customers who grow with Cisco must replace and restart their fixed switch stacks from scratch each time a new model comes out, forcing the customer to ultimately buy more and not protecting current or older investments. Also, none of the Catalyst 3000 series switches have the option for 40 GbE uplinks.

OneFabric Control Center (NetSight with NAC) provides unified management, visibility and control. Even beyond the network, a broad range of third-party IT tools (MDM, firewalls, UC, web filtering, etc.) are integrated to provide an intelligent combined overview as well as more intelligent dynamic policies based on the context of the entire IT spectrum. Cisco has confused customers and drawn stark criticism, even from Gartner, for its constantly changing and confusing series of Prime products and updates, combined with the wireless tools (NCS/WCS) required, plus the ever more complex ISE required for any kind of intelligent BYOD. Moreover, any BYOD management value requires ISE Advanced, which is both complex and extremely expensive (an annual recurring subscription). Even the Meraki solution, which promised everything in the cloud, is now integrated with and requires Cisco ISE for solving BYOD problems. Extreme is the most committed to approaching mobility from an end-to-end perspective that focuses on simplifying the entire deployment and management of network policies and enables future growth, not just in performance but also in power and future-proofing needs.

Regarding PoE: while Extreme Networks is one of the few suppliers whose 802.11ac Wave 1 access points do not require the newer PoE+ to operate at full performance, other vendors' do, and this will increasingly be the case with the next generation of access points deploying 802.11ac Wave 2, so the Summit X460 is loaded with up to 1668 Watts of PoE+ power. The Summit X460 has the performance, power budget, and unmatched flexibility for future growth (see the note above about platform-independent stacking) to ensure networks support not only the mobility needs of today but also those of tomorrow.


BlackDiamond X8 Awards


Extreme Fabric-in-a-Box - BlackDiamond X8 as End/Middle-of-Row (EoR/MoR) solution - Companies are demanding dynamic networks that respond to application requirements, rather than networks that force applications to fit the network's parameters. Responsiveness requires integration of applications and management systems so that the network can quickly adjust to conditions based on business rules rather than fixed, preassigned allocations. OpenFlow can help do this. Today, each switch is an island that makes forwarding decisions based on a limited view of the world. Multipath Layer 2 protocols such as Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB) offer larger views, but in both cases, they're driven by nodes and paths without regard to other factors, such as the performance characteristics of applications traversing the network.

The switch, together with the ExtremeXOS operating system, is at the heart of Extreme's Open Fabric architecture, which enables interoperable data centre fabrics and supports Software Defined Networking (SDN), helping to reduce operational complexity. With an open fabric, you can take our BlackDiamond X8 fabric switch and use anybody's top-of-rack switch, because it uses an open technology. That's very important because when you think about it from a migration perspective, typically people have a lot of technology already deployed in their data centre, and with a proprietary solution you have to rip everything out.

Let's say you have a low-bandwidth transactional application that requires a one-way delay of 1 millisecond or less. You also have a bulk file transfer application that can tolerate a one-way delay of 50 milliseconds. You want both applications to perform well, but your transactional traffic shouldn't suffer due to bulk traffic. With conventional QoS, transactional traffic is prioritized by queuing the lower-priority traffic in the network devices, but this can lead to time-outs, dropped packets, and retransmissions.

Wouldn't it be better to have the network determine when transactional traffic might exceed its 1-millisecond requirement and then automatically shift other traffic to a different path? The applications still meet their service requirements without stomping on each other. That's just one example of software-defined networking. However, to achieve this vision we need intelligent controllers that can integrate with applications, firewalls, load balancers, and anything else that can affect traffic service requirements. When and if intelligent controllers using OpenFlow reach the market, I expect to see configurations that solve real problems.
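
The sketch below is a toy model of that decision (not an actual controller implementation): given measured one-way delay per path and each application's latency budget, it parks bulk traffic on the slower path so the sub-millisecond path stays free for transactional flows. The path names, delays and budgets are invented for the example.

    PATHS_MS = {"path_a": 0.8, "path_b": 12.0}          # measured one-way delay per path
    BUDGETS_MS = {"transactional": 1.0, "bulk": 50.0}   # per-application latency budgets

    def choose_path(app: str) -> str:
        # Use the slowest path that still meets the budget, keeping the
        # fastest path clear for the tightest-latency traffic.
        candidates = [p for p, d in PATHS_MS.items() if d <= BUDGETS_MS[app]]
        return max(candidates, key=PATHS_MS.get)

    print(choose_path("transactional"))  # -> path_a (the only path under 1 ms)
    print(choose_path("bulk"))           # -> path_b (leaves path_a uncongested)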

BlackDiamond X8 Awards - Hot Product of Interop 2012

Network World picked the Extreme Networks BlackDiamond X8 10GBASE-T card as one of the few Hot Products at Interop 2012.

Nick Lippis, leading industry analyst

3X-10X faster than anything we tested. Breaks all previous records for performance, latency, density and power consumption.


The concept of "fabrics" has emerged in recent years as a means to converge separate storage, networking and server technology in the data center. This association of the workload to a business function will change traditional IT from compute-centric toward a user, department, consumer and service orientation. Enterprises are increasingly seeing their performance and success in relation to user empowerment, innovation and productivity, with IT as the integral element of such success. Fabric lines up well with this shift, because it creates more opportunities for agility and adaptability, such that users assume a more active role in the tools and intelligence at their disposal. The new world of virtualization has forced greater attention to speeding the processes of deployment and using resources more efficiently. Speed and latency of the fabric are critical to the performance of the virtual servers. As a result, the boundaries between server, storage and networking technology become blurred.

Like many technologies, FCoE is not without its challenges, but for different reasons than some have projected. Although FCoE technically works as advertised, many organizations are challenged with the political ramifications of who manages the converged platform. Along with the political challenges, the total cost of ownership of deploying the technology end-to-end cannot always be justified. There is also the belief that FCoE may just be a transition phase for storage networking as iSCSI, leveraging the same lossless Ethernet infrastructure required for FCoE, becomes a more attractive solution due to its performance, and flexibility in a virtualized environment for large scale SANs. There is also the growing popularity of IP SAN and NAS solutions due to the increasing diversity of application requirements. Finally, there is the historical challenge of interoperability among SAN vendors that makes a multi-vendor approach problematic.

Data Center Convergence

Extreme Networks and Logic are working together to bring an alternative approach to converging data center fabrics, an approach that helps solve the challenges faced by FCoE. This new architecture maximizes flexibility by allowing any port to toggle its identity between 16 Gb Fibre Channel and 10 Gb Ethernet. Because of this flexibility, customers can natively support Fibre Channel, FCoE, NAS and iSCSI SANs to create a pragmatic way to converge data center networks.

The most significant benefit of this architecture is the elimination of the need to re-architect the data center network to converge LAN and SAN environments. Because this platform can be architected to be near or at the SAN edge and because data center protocols run natively through it, customers can now add or subtract protocols for storage networks without impacting the overall LAN environment. More importantly, it gives another alternative to the vision of a converged data center network. It is a conservative yet pragmatic approach for those who want to keep their options and their network open while reducing effort and cost. With this solution, customers can evolve their data center networks at their own pace.


Why Extreme?

CIO survey of Cisco customers called Extreme a Top 4 Alternative Network Vendor (September 2011 report)

Rated Data Network Specialist in Data Center Report (November 2011 report)

Independent tests called our core switch (BDX8) 3X-10X faster than the competition (November 2011 report)

Rated #1 in modular 40G and #2 overall in 40G (November 2011 report)

Rated a Data Center Champion and Exemplary (November 2011 report)

ZDNet China Awards: BDX Best 40G Switch (December 2011 report)

Digital Times Korea Hit Switch Award for the X670 (November 2011 report)


Extreme Networks DC Architecture - Extreme Networks has chosen to build its data center strategy around an open, interoperable architecture, avoiding a "black box" proprietary approach.

Extreme Networks Open Fabric:
- High-speed, low-latency interconnectivity
- Non-blocking/non-oversubscribed infrastructure
- Layer 2 connectivity (Layer 3 connectivity is also available)
- Multiple active paths with fast failover
- Mesh connectivity rather than a tree-type topology
- Simple management, configuration and provisioning

Broad Portfolio with Common Architecture - From a $995 10/100 switch to the industry's first stackable high-density 10 GbE switch, providing freedom of choice. Common technologies allow customers simple network deployments.

Single Operating System - All switches share the same binary image, reducing the management burden.

Network Automation and Optimization - With Universal Port, CLEAR-Flow and the XML interface, the network becomes more dynamic to manage complex environments.

Cross-Family Stacking with High Availability - 1 GbE, 10 GbE and 40 GbE* switches can be stacked together to create a virtual chassis.

Quick failover and high-performance distributed forwarding.
State-of-the-Art Technology Availability - High-density 48-port 10GBASE-X SFP+ switch, as well as a 4-port 40GbE uplink module; the industry's first high-density 10GBASE-T switch significantly reduces CAPEX and OPEX.


[Slide diagram: 1980s Mainframe Transition. Vertically integrated (specialized hardware, operating system and applications; closed, proprietary; slow innovation; small industry) versus horizontal (open interfaces; rapid innovation; huge industry). The networking analogue: apps over Linux/MacOS/Windows over an open interface, and control planes over an open interface to commodity switching silicon.]

The mainframe industry in the 1980s was vertical and closed: it consisted of specialized hardware, operating systems and applications, all from the same company. A revolution happened when open interfaces started to appear. The industry became horizontal. Innovation exploded. Storage and network convergence is becoming a reality today using 10 GbE or higher-speed Ethernet in conjunction with iSCSI-based storage, which works natively over an Ethernet-based TCP/IP infrastructure. FCoE-based storage convergence is a little further out in terms of its adoption and interoperability. However, in both cases, the availability of standards-based Data Center Bridging (DCB) technology is a key facilitator to enabling this convergence.

OpenFlow is a relatively new industry-backed technology that centralizes the intelligence in the network while keeping the data path distributed. By centralizing the intelligence, OpenFlow provides a platform upon which a diverse set of applications can be built and used to program, provision and manage the network in a myriad of different ways. Within the context of a data center fabric, OpenFlow holds the promise of taking complex functions such as traffic provisioning in converged data center networks, logical network partitioning in public and hybrid cloud environments, as well as user and VM provisioning in highly virtualized data centers, and providing a centralized and simplified approach to addressing all of these at scale. OpenFlow is in its early stages in terms of applications and adoption. It will take time for the technology to mature. But it holds a lot of promise as a platform upon which smart applications can be built for the next-generation data center fabric. In effect, OpenFlow provides an open source platform upon which users can customize, automate, and innovate to provide new ways to address some of the challenges in the data center. It is important to note that the benefits of OpenFlow are not limited to data center and cloud scale networks. Indeed, applications are being built on the enterprise and campus side of the network as well using OpenFlow technology. But within the context of the data center and the cloud infrastructure, OpenFlow holds particular promise for both simplifying and automating complex provisioning tasks and easing the burden on network administrators.
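
As a concrete, hedged illustration of the "centralized intelligence, distributed data path" split, the sketch below uses the open-source Ryu OpenFlow controller framework (one of several OpenFlow controllers; the source does not name a specific one) to install a table-miss rule that punts unmatched traffic to the controller, where a central application would then program forwarding decisions.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser
            # Match everything; send unmatched packets to the controller so
            # the central application can decide how to program flow tables.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                                match=match, instructions=inst))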

Networking equipment typically has three planes of operation: management, control, and forwarding. The management plane handles functions such as device management, firmware updates, SNMP, and external configuration via the command line. The forwarding plane (sometimes called the data plane) governs packet and frame forwarding of data payloads through the network device. The control plane consists of routing and switching protocols. In a typical operation, the control plane uses routing protocols to build the forwarding table used by the forwarding plane. This forwarding table is delivered to the forwarding plane by the management plane as part of the device operating system. When an Ethernet frame arrives on the switch interface, the forwarding plane sends it to an output port.
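
The toy model below (illustrative only, not how any particular switch OS is written) mirrors that division of labor: a control-plane function turns routes into a forwarding table, and a forwarding-plane function does a per-packet longest-prefix-match lookup to pick the output port. The prefixes and port names are invented.

    import ipaddress

    ROUTES = {"10.0.1.0/24": "port1", "10.0.2.0/24": "port2"}  # control-plane input

    def control_plane_build_fib(routes):
        # In a real device these entries would come from routing protocols.
        return {ipaddress.ip_network(p): port for p, port in routes.items()}

    def forwarding_plane_lookup(fib, dst_ip):
        # Longest-prefix match over the table the control plane built.
        dst = ipaddress.ip_address(dst_ip)
        matches = [(net, port) for net, port in fib.items() if dst in net]
        if not matches:
            return "drop"
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    fib = control_plane_build_fib(ROUTES)
    print(forwarding_plane_lookup(fib, "10.0.2.7"))   # -> port2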


Extreme vs. competitors:
- Single Architecture vs. Multiple Architectures: How many switching product lines does your vendor have? Can they work together?
- Standards-Based/Open/Interoperable vs. Proprietary: Can you extend your fabric to include multi-vendor equipment? Use anyone's optics and receive support?
- Investment Protection vs. No Compatibility: Can you phase your fabric into a legacy network? Run fabric and legacy architectures simultaneously? Use existing 1Gb products?
- Open Ecosystem vs. Closed Ecosystem: Can you lower your virtualized application costs from $1,000 per app, per month?

June 15, 2015*: RJ-45 twisted-pair connectivity over standard Cat 5, 6 and 7 copper cabling. Non-blocking performance in a fully populated BlackDiamond X8 chassis for up to 384 ports of 1 GbE. Supported with both 10 Tbps and 20 Tbps fabric modules with N+1 data plane protection. Maintains N+1 power support in a fully populated BlackDiamond X8 chassis.


48-port wire-speed 100/1000Mb (RJ45) and 48-port 1GbE (SFP). Choice of 1G-SR, LR, ER, ZR optics. Supported with either 10 Tbps or 20 Tbps fabric module types. Wire-speed with a single fabric module. N+1 data and power plane support with a fully populated chassis. GA June 2015 with EXOS 16.1.

X8 48x1G

New


SCALABLE DATA CENTER CORE/SPINE - The BlackDiamond X8 is designed for tier 1 through 4 enterprise, multi-tenant and cloud data center core and border deployments. In these applications, the BlackDiamond X8 can be used with XL and non-XL series modules to provide a mix of high- and modest-scale connectivity in a spine-leaf architecture. The non-XL series modules provide 10GbE and 40GbE downlink connectivity for leaf switches such as the Extreme Networks Summit X670 and X770, while the XL series modules provide a high-scale 10GbE, 40GbE and 100GbE solution for inter-DC, disaster recovery, or cloud connectivity. The CFP2-based 100GbE module supports connectivity over short (...) 2X density per rack.

This results in more rack rental expense, and more power and cooling expense. This also results in lost revenue that the additional bandwidth and density would otherwise have generated (lost opportunity cost). It still does not allow any other ToR with 40GbE uplinks to be connected. Conclusion: BlackDiamond X8 is a clear winner in terms of density, consolidation and openness.

Virtual Chassis

1. SummitStack within racks  2. SummitStack across racks  3. SummitStack across rows
Scalable Performance: Up to 512G stack fabric; shortest-path algorithm to maximize throughput; distributed Layer 2 and Layer 3 forwarding; sub-50 ms link failover; powered by ExtremeXOS.


Extreme Networks OpenFabric - The Extreme Networks OpenFabric architecture is an approach to building next-generation data centers. It combines industry-leading ToR and EoR switching platforms, as well as the automation and intelligence layer of the ExtremeXOS network operating system. The ability to build blades that are suitable to a service provider environment will come with the arrival of the Broadcom FE-1600 switch fabric chip, the Arad packet processor, and the Triumph 3 (T3) packet processor. The BDX8 chassis has been designed to accommodate an architecture that uses all three of these chips, with the Arad used to provide deep packet buffering and conversion between the HiGig and SAND switch fabrics, the FE-1600 used as the main switch fabric, and the T3 used as the packet processor with its highly scalable table space. This architecture can also be used to provide clear-channel 100Gb Ethernet.

OpenFabric derives its name from its ability to offer customers a truly open architecture, where they can deploy best-in-class compute, storage, virtualization, and orchestration solutions. A key factor of the OpenFabric architecture is that it is deployable today, and there are many customers who have adopted its principles. The majority of data centers today are built on a 'classic' architecture, and OpenFabric can fully support this. There are many discussions and messages around the concept of an Ethernet Fabric, but today these rely upon proprietary mechanisms and do not offer the features that are inherent in OpenFabric. This is because OpenFabric is based on proven industry standards and features.

Due to the open and interoperable nature of Extreme Networks OpenFabric, it has the ability to adapt to the changing needs of the data center. One of the key directions of OpenFabric is toward Software Defined Networking. With this in mind, Extreme Networks is working with the ONF to deploy OpenFlow as an interoperable part of the OpenFabric architecture. At this time, OpenFlow is in its early stages, but there is significant development going on and deployments are expected in 2012. In summary, the Extreme Networks OpenFabric architecture supports today's deployments, and is fully capable of adapting to tomorrow's fabric deployments where required, based on the customer's business model.

The Extreme Networks architecture for this solution is designed to cater to large enterprise customers who require a dedicated virtual architecture. Functions such as overlapping address space and multiple VLANs per customer, as well as a range of security features, can be included. In this architecture, it is anticipated that each customer's requirements may differ to a certain degree, and it includes elements that are customer-specific at both the physical and virtual levels.


Data Center/Cloud

Extreme Open Fabric

Summit X480, X650 or X670 as Top-of-Rack (ToR) access and BlackDiamond X8 as core:
- 1/10 GbE copper/fiber server/storage connectivity
- 10/40 GbE uplink connectivity
- Multi-path through M-LAG
- Only 3.5 uSec end-to-end latency including access and core
- Open to plug any 3rd-party ToR into the core through standard 10/40 GbE
- Up to 192 40GbE (384 in redundant) downlinks
- 20 Tbps (40 Tbps in redundant) total core switching capacity
- Single OS across access and core
- Advanced features for server virtualization and storage convergence (XNV(TM), VEPA, DCB)


Chassis Clustering (CC) is the ability to manage multiple chassis as if they are combined into a single chassis. A CC may be clustering for management purposes only, or may also be clustering for data plane purposes. In the latter case, front-panel ports are converted into switch fabric links, and the fast path data planes of the chassis in the cluster are unified. Since the slow path hardware can only deliver slow path packets to the local CPU, the slow path must instead run in-band over the fast path.

EX8200 Chassis Attack Points - Hardware:
- EX8208 has a reasonable size but offers too little wire-speed density
- Despite taking up much more room, the EX8216's density stays poor
- Oversubscribed ports give better density but are useless for data center applications
- Combined fabric and Routing Engine makes it more suitable for campus
- Both chassis are poorly designed for the data center, with side-to-side airflow that requires special baffles, consumes additional space and wastes more utilities
- No 40GbE line cards available
- Significantly high latency makes it unsuitable for many EoR applications; even worse when combined with ToR switch latency
- High power consumption and heat load
- Requiring 2 XREs on top of the in-chassis REs makes no sense for only 2 chassis in a VC (or even 4 in the future); M-LAG is a much better solution

Software:
- The VC on the EX8200 is totally incompatible with the VC on fixed switches in the same family (i.e., EX4500 or EX4200); the two cannot be mixed in the same VC
- The VC on the EX8200 only supports 2 chassis currently
- Poor Data Center Bridging feature support
- No virtualization feature like Extreme XNV for true VM tracking


Leading 10G/40G Performance: lowest latency, flexibility, scale

East-West Traffic: automation, inventory, history, provisioning

Storage, Compute, Virtualization, Security, Orchestration

OpenFlow, OpenStack, VEPA, DCB, MLAG, TRILL: Open Fabric Architecture

Best-in-class switching, VM mobility intelligence, open standards based


Open, Standards-Based, High-Speed Interconnectivity Fabric - From a fabric interconnectivity perspective, standards-based 10 GbE and 40 GbE interconnectivity fabrics are fast becoming the mainstay of the data center network. With the server edge moving to 10GbE, the aggregation layer is moving to 40 GbE. This requires high-density 10 GbE as well as high-density, high-performance 40 GbE connectivity solutions. Along with density and capacity, low latency and a low carbon footprint are becoming key requirements. Extreme Networks' BlackDiamond X* series chassis will offer up to 768 wire-speed 10 GbE ports or up to 192 wire-speed 40 GbE ports in just a 1/3-rack form factor. This level of non-blocking performance and density is industry leading and the basis for building cloud-scale connectivity fabrics. Using this model, servers can directly attach to a high-density 10 GbE End-of-Row (EoR) solution (such as the BlackDiamond X* series), or may connect to a tier of Top-of-Rack (ToR) switches (such as the Extreme Networks Summit series).

For example, the Summit X670* ToR switch offers latency of around 800-900 nanoseconds, while the BlackDiamond X chassis will offer port-to-port latency of well below 3 microseconds.

This combination allows building single tier or two-tier network fabrics that offer very low end-to-end latency.

While M-LAG itself is proprietary, the tier that dual homes into the M-LAG switches simply uses standard link aggregation. For example, servers can use link aggregation (or NIC teaming as it is commonly called) to dual-home into two TOR switches which present themselves as a single switch to servers via M-LAG. (Reference: New Data Center Network Architectures with Multi-Switch Link Aggregation (M-LAG)). If a true multi-homed architecture is to be used, for example where 4 uplinks may connect to 4 different switches, a standards track protocol such as TRILL (Transparent Interconnection of Lots of Links) may be used to provide Layer 2-based multipath capability. However, with data centers typically dual-homing connections at each layer, an M-LAG-type approach should suffice. In addition to high-speed, low-latency, and high-density fabric connectivity, many of the interoperable standards based solutions on the market today are also low carbon footprint switching infrastructures. This is particularly important as the cloud network transitions from a 1GbE edge to a 10 GbE edge and 40 GbE core, since the power footprint of the network increases significantly with this transition. Where earlier 10GbE ports would consume 10W-30W per port, today that number is dropping rapidly to around 3W-10W per port. For example, the BlackDiamond X* chassis will consume around 5W per 10 GbE port.
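
The sketch below is a toy model (not vendor code) of the dual-homing mechanics described above: a server's link aggregation (NIC teaming) hashes each flow's 5-tuple onto one of its two uplinks into the M-LAG pair, so packets of a flow stay in order on one link while different flows spread across both. The link names and tuples are invented for the example.

    import hashlib

    MLAG_MEMBERS = ["uplink_to_tor_A", "uplink_to_tor_B"]

    def pick_member(src_ip, dst_ip, proto, src_port, dst_port):
        # Hash the flow 5-tuple and map it onto one member link.
        key = f"{src_ip}-{dst_ip}-{proto}-{src_port}-{dst_port}".encode()
        digest = int(hashlib.sha256(key).hexdigest(), 16)
        return MLAG_MEMBERS[digest % len(MLAG_MEMBERS)]

    print(pick_member("10.0.1.10", "10.0.2.20", "tcp", 49152, 443))
    print(pick_member("10.0.1.10", "10.0.2.21", "tcp", 49153, 443))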

To address resiliency, where dual uplinks are used from the TOR to the aggregation tier, solutions such as M-LAG may be used for active-active redundancy.

Similarly if servers need to be dual homed to the ToR or EoR tier, NIC teaming can be used in combination with an M-LAG-type approach for active-active redundancy.


BlackDiamond X8 Competitive - The Next Generation Data Center


These switches represent the state of the art of computer network hardware and software engineering, and are central to private/public data center cloud computing infrastructure. If not for this category of Ethernet switching, cloud computing would not exist. The Lippis/Ixia public test was the first evaluation for every Core switch tested. Each supplier's Core switch was evaluated for its fundamental performance and power consumption features. The Lippis/Ixia test results demonstrate that these new Core switches provide state-of-the-art performance at efficient power consumption levels not seen before. The port density tested for these Core switches ranged from 128 10GbE ports to a high of 256 10GbE ports and, for the first time, 24 40GbE ports for the Extreme Networks BlackDiamond X8. There were four Core/Spine switches evaluated for performance and power consumption in the Lippis/Ixia test.

These participating vendors were:

Alcatel-Lucent OmniSwitch 10K; Arista 7504 Series Data Center Switch; Extreme Networks BlackDiamond X8; Juniper Networks EX Series EX8200 Ethernet Switch

IT business leaders are responding favorably to Core switches equipped with a value proposition of high performance, high port density, competitive acquisition cost, virtualization-aware services, high reliability and low power consumption. These Core switches are currently in high demand, with quarterly revenues for mid-size firms in the $20 to $40M-plus range. The combined market run rate for both ToR and Core 10GbE switching is measured in the multi-billion-dollar range. Further, Core switch price points on a 10GbE per-port basis range from a low of $1,200 to a high of $6,093. 40GbE core ports are priced between 3 and 4 times that of 10GbE core ports. List prices vary from $230K to $780K, with an average order usually being in the million-plus dollar range. While there is a large difference in list price as well as price per port between vendors, the reason is found in the number of network services supported by the various suppliers and 10GbE/40GbE port density.
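
The arithmetic behind those per-port figures is straightforward; the sketch below just divides list price by tested 10GbE port count. The specific port counts used (192 and 128) are assumptions chosen only to show how the quoted $1,200 and $6,093 per-port endpoints can arise from the quoted $230K-$780K list-price range; they are not figures from the report itself.

    def per_port_price(list_price_usd: float, ports_10gbe: int) -> float:
        # Per-port price is simply list price spread over the 10GbE ports tested.
        return list_price_usd / ports_10gbe

    print(round(per_port_price(230_000, 192)))   # ~1,198 USD/port (low end)
    print(round(per_port_price(780_000, 128)))   # ~6,094 USD/port (high end)
    # 40GbE core ports at 3x-4x the 10GbE figures: roughly $3,600 to $24,400 per port.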

We compare each of the above firms in terms of their ability to forward packets: quickly (i.e., latency), without loss (i.e., throughput at full line rate), when ports are oversubscribed with network traffic by 150%, in IP multicast mode, and in cloud simulation. We also measure their power consumption as described in the Lippis Test Methodology section. Designed for large, virtualized data centers and clouds, the Extreme Networks BlackDiamond X8 switch provides high-density 10GbE and 40GbE. Furthermore, the BlackDiamond X8 was built with green operations in mind, resulting in low per-port power consumption, which, when combined with high density, can result in a lower Total Cost of Ownership.


Speed and Horse Power:
BlackDiamond X: 2.3 uSec, 30.5 BPPS
Juniper QFabric: 5.0 uSec, ? BPPS
Arista 7500: 4.5 uSec, 5.76 BPPS
Juniper EX8200: 10 uSec, 960 MPPS
Cisco N7000: 5.0 uSec, 5.76 BPPS

A comparison of the Arista 7508, Brocade MLXe-8, Cisco Nexus 7010, Dell Force10 E600i, Extreme BlackDiamond 8810, Extreme BlackDiamond X8, HP 12508, Juniper EX8208 and Juniper QFX3008 switches. I put competitors into two categories. The device proliferation trend will continue, supported by a matching demand in user mobility. In addition, the focus on cloud services, whether the cloud is private, public, or hybrid, as well as the focus on virtualization, will create new business models that IT organizations must consider as they estimate five-year TCO for network infrastructure and operations. A next-generation network can serve multiple purposes, including machine-to-machine connectivity or data center backup applications. Emerging players are newer vendors that are starting to gain a foothold in the marketplace. They balance product and vendor attributes, though they score lower relative to market Champions. Old Guard are established players with very strong vendor credentials, but with more average product scores.

Gartner Chooses Extreme as "Data Center Specialist" - It's hard to ignore quality. After four years of intense focus on virtualized data centers, dense 10/40GbE and key tools that enable cloud scalability, Extreme Networks was named a "Data Center Specialist" and top vendor for cloud by Gartner in the most recent Competitive Landscape: Data Center and Ethernet Switches report. The analysts looked at all equipment vendors across the entire space and considered a comprehensive scope of criteria, from technology and products to customer base, channel strategy and the sets of alliance partners that are focused on data centers. Extreme, with a large portfolio of high-density products ready for 40 gigabit and 100 gigabit support, is a good option for specific user requirements. Extreme also shows good understanding of emerging data center requirements, with support for virtualization, automation and customization. Direct Attach is Extreme Networks' implementation of VM switching conducted in the network. Extreme supports network virtualization integration, plus automation and customization through scripting and an XML-based interface. Its 10 Gigabit product portfolio is now FCoE-ready and also supports OpenFlow.

Superior Performance - The new X8 chassis is 2-3 times faster than any other competitor:
- Biggest backplanes / highest port densities
- Non-blocking line-rate switches mean no oversubscription; ingress and egress queuing
- sFlow, ACLs, IPv4/IPv6 in hardware
- Large table sizes to accommodate virtualization environments or service provider needs


Density Per Chassis

[Slide table: density per chassis, wire-speed versus oversubscribed 10G/40G port counts, as an apples-to-apples comparison of 8-slot, 10/40GE-capable chassis: Arista 7508 (384 wire-speed 10G), Extreme Networks BlackDiamond 8800, HP 12508, Juniper EX8208, Brocade MLXe8, Dell/Force10 E600i, Cisco NX7010 and Juniper QFX3008, against the Extreme Networks BlackDiamond X8 with 768 wire-speed 10G and 192 wire-speed 40G ports. BlackDiamond X8 is a leader, not a follower!]

This is the first time in the history of networking that we finally begin to fulfill the vision of Ethernet ubiquity. Data center industry analysts and pundits are calling for scalability and performance to solve the Ethernet ubiquity challenge. We delivered, with unprecedented density and latency to make this a reality.

In other words, you can pack 2,304 ports of 10 GbE into a single rack containing three BDX8 chassis. Other vendors might offer somewhat comparable density, but only in an oversubscribed configuration. To get this amount of wire-speed 10 GbE ports with Cisco Nexus 7010 switches, you would need to fill six racks with 12 chassis. That's a lot of capital expense, a lot of switches to manage and a lot of real estate in a data center. Also, note that the Nexus 7010 is a half-rack chassis. The 18-slot Nexus 7018 has higher density (768 wire-speed 10 GbE ports) but you're only going to squeeze one of those into a single rack. Prior to the BDX8 release, Arista Networks' 7508 chassis had the most impressive wire-speed port density (albeit only with 10 GbE). Arista's 7508 packs 384 wire-speed 10 GbE ports into a chassis that is only 11 rack units with 8 I/O slots.
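
The per-rack arithmetic above is just multiplication; the sketch below reproduces it. The 192-port figure for the Nexus 7010 is derived from the text itself (12 half-rack chassis to reach the same 2,304 wire-speed ports), not from a Cisco data sheet.

    def ports_per_rack(ports_per_chassis: int, chassis_per_rack: int) -> int:
        return ports_per_chassis * chassis_per_rack

    print(ports_per_rack(768, 3))    # BDX8: three chassis per rack -> 2,304 wire-speed 10GbE
    print(ports_per_rack(192, 12))   # Nexus 7010: 12 half-rack chassis (6 racks) -> 2,304
    print(ports_per_rack(384, 1))    # Arista 7508: one 11RU chassis -> 384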

Solutions that finally solve the financial challenges and business model of driving costs out of the network to better capitalize on the benefits of one's virtualization investments. Because of this Ethernet ubiquity, legacy proprietary technologies in the data center can finally be challenged with standards-based technology that has proven to drive down costs and total cost of ownership. Some examples of this ubiquity are Virtualized Cloud-based Networks or Fabrics, the Virtualized Multi-Tenant Data Center, Ethernet (block- or file-based) Storage Networking and, finally, High Performance Computing. Today I will present one of these focus areas, Virtualized Cloud-based Fabrics, and explain the benefit of our vision and solution.

Fabric computing refers to networking technology in which a series of switches are controlled by network intelligence to be operated as one virtual switch, creating multiple paths for data to take through those switches. Fabric enables east-west traffic between and among switches and servers on the same network layer, in addition to the traditional north-south path among the core, aggregation and access layers.

Then you have some of the newer data center fabrics, like Juniper's QFabric, which you can't really compare to the BDX8 from a pure speeds-and-feeds perspective, since Juniper positions an entire QFabric deployment as a single, logical switch chassis that has been exploded into scores of individual devices. Not a lot of people need port density like this yet, let alone this kind of 40 GbE port density. While Extreme has a flashy flagship switch to show off, a lot of enterprises will be looking at Extreme's overall network architecture, rather than just the impressive density.


Competitive Density Per (44RU) Rack

[Slide chart: wire-speed and oversubscribed 10G/40G port density per 44RU rack for the BDX8, QFX3008, Arista 7508, NX7010, BD8800, HP 12508, EX8208, Brocade MLXe and Dell/Force10 E600i.]

This is a more practical view of the port densities and form factors in a real data center environment. It shows how many chassis from each leading networking vendor can fit into a standard data center rack, and the total port density per rack. The BlackDiamond X8 squarely wins, with 2,304 10GbE and 576 40GbE wire-speed ports per rack. Consider if you are in the cloud business and each of those ports means money to you. All the DC fabric offerings from all the vendors (including us) are not really fabrics in the true sense of the word. Don't be misled: QFabric, FabricPath, OpenFabric, FlexFabric and Brocade One are all attempts to emulate a fabric architecture, but none of them actually achieves it. Why is this? First we need to take a look at what a true switch fabric really is.

Vendor Comparison by 10 GE Rack - Taking the per-rack capacities discussed above and applying them to data center consolidation, this is another way of looking at it. Below I'm showing how many racks from each vendor would be needed to match the 10GbE port density the BlackDiamond X8 supports in a single rack.

Three points:
First, this saves a lot in terms of rack space and floor space in a multi-tenant or cloud environment, providing as much as 18X better consolidation.
Second, it saves a lot in terms of power and cooling expense that would otherwise be needed to power up those additional racks, plus the savings for that single rack as shown by the power charts at top right.
And third, for cloud providers, each bit of additional rack space can be used to provision more server, storage or networking gear to host more applications or to provision more services, which means more revenue. Zero lost opportunity cost.

Extreme response: Cisco's highest port densities are on a 25 RU (rack unit) Cisco Nexus 7018 chassis utilizing 16 I/O modules. Fully packed, that only adds up to 96 x 40G ports in the giant chassis. In comparison, the compact 14.5 RU BD-X8 chassis provides 192 non-blocking 40G ports. In other words, with the Extreme BD-X8 you get double the density, plus save more than 10RU. As for 100G, with 1.2 terabits per slot the BD-X8 is capable of supporting 12 non-blocking 100G ports per slot, which equates to 96 ports of 100G per chassis, when available. There is no doubt the BD-X8 is a clear winner in the density battle. One new module for the Cisco Catalyst 6500, a four-port 40G module, supports up to 32 x 40G ports per chassis.


DC Strategy by vendor: Extreme Networks - Open Fabric; Cisco - UCS; HP - FlexFabric; Juniper - QFabric; Brocade - Brocade ONE; Arista - Cloud Networking; Force10 - Virtual Framework.

Cisco (UCS): End-to-end proprietary, non-scalable, 5-tier, expensive architecture; VMware environments only.
Extreme Networks (Open Fabric): Phased migration, open standards-based approach; does not strand assets; applies to public and private cloud DCs of any scale.
HP (FlexFabric): Repurposed Virtual Connect.
Juniper (QFabric): Purely a network solution; requires new infrastructure; strands existing assets; requires a new management paradigm.
Brocade (Brocade ONE): No synergy between VCS, Brocade ONE and the Foundry roadmap; strategy mainly focused on Fibre Channel and Ethernet convergence only; little to no virtualization support.
Arista (Cloud Networking): Publicly, minimal support for cloud technologies.
Force10 (Virtual Framework): Framework consists of Virtual Control, Virtual Scale and Virtual View, none of which has much to do with virtualization or convergence.

Fabric approaches - cost of delivering a VM to an enterprise customer**: a true Open Fabric can lower TCO by 66% over 5 years.


Various Fabric Architectures - Fabric is essential to handling the massive increases in data workloads created by the explosion of virtualization and cloud computing. Fabric generally is based on one of two underlying technologies: shortest path bridging, a standard approved by the Institute of Electrical and Electronics Engineers (IEEE), and Transparent Interconnect of Lots of Links (TRILL), a standard approved by the Internet Engineering Task Force (IETF).

Extreme Networks Open Fabric: Extreme Networks has chosen to build its data center strategy around an open, interoperable architecture, avoiding a "black box" proprietary approach.
- EXOS end-to-end feature software to improve the virtualization space, such as Extreme's XNV(TM) and Direct Attach(TM)
- High-speed, low-latency interconnectivity
- Non-blocking/non-oversubscribed infrastructure
- Layer 2 connectivity (Layer 3 connectivity is also available)
- Multiple active paths with fast failover
- Mesh connectivity rather than a tree-type topology
- Simple management, configuration and provisioning

QFabric: Juniper entered the switching market in April 2008. QFabric is Juniper's data center fabric architecture, which separates a switch into three physical components. Juniper's QFabric has been criticized as an expensive vendor lock-in play, still more a marketing plan than an actual product, though Juniper disagrees. Since that introduction more than a year ago, Juniper has faced criticism from competitors and some analysts that with QFabric "their marketing was well ahead of product."
Nodes - The nodes are at the edge of the network; they process packets and make switching decisions. With a software upgrade, the QFX3500 can become the QFabric Node edge device of a QFabric.
Interconnect - The interconnect is the fabric and transports the packets to the destination node.
Director - The director is the control plane and runs on an x86 appliance.
QFabric is a closed and proprietary solution that today only supports 10GbE connectivity. It supports up to 128 QF/Nodes and 4 interconnects, up to 6,000 10GbE ports and 10K hosts.

Arista Data Center Architecture: Consists of the 7000 Series switches for ToR and core/aggregation, coupled with EOS. They brand this as a Leaf, Spine and Core architecture. Positioning: they leverage low east-west latency (inter-rack/server) communications. This has led them to pursue HPC and high-transaction environments such as the financial sector, as well as cloud-level switching.


Virtual/Cloud Network (Fabric)


Extreme Networks Reduces Tiers in the Data Center - The ability to support this new forwarding mode in the network infrastructure provides an open, standards-track approach to simplifying VM switching. For example, data center networking products from Extreme Networks support the ability to switch VMs in hardware at wire speed. Extreme uses an open approach to Virtual Machine switching. The broad adoption of server virtualization has been instrumental in enabling the cloud model to gain acceptance. However, along with the benefits of virtualization come a set of challenges. For example, addressing VM switching through virtual switches gives rise to several challenges. From the complexity of dealing with multiple hypervisor technologies, to providing security between VMs within the hypervisor, to software (CPU)-based switching between VMs that could lead to unpredictable performance, the list of potential issues runs large. The IEEE 802.1Qbg working group is looking at this problem and is defining new forwarding modes that allow switching VM traffic directly in the network switch.

Arista Architecture - Consists of the 7000 Series switches for ToR and core/aggregation coupled with their EOS operating system. They state that, due to the creation of such high-density switches, most of the traffic is able to stay at the spine layer, and hence a two-tier model; however, they still architecturally show a network core. While they talk about flattening of layers, their architecture is still essentially edge, aggregation and core, which they have renamed to move people's mindsets over to low-latency type deployments. It's a marketecture! With four Arista 7508 switches at Layer 3, more than 18,000 servers can be connected with a non-blocking, low-latency, two-stage network, although there is little mention of how they really accomplish this number!
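
As a back-of-the-envelope check of that 18,000-server claim, the sketch below works through one way a two-stage leaf-spine design could get there. The leaf size (48 x 1GbE server ports) and the one-uplink-per-spine assumption are ours, chosen only to make the arithmetic concrete; they are not Arista's published design.

    SPINES = 4
    SPINE_PORTS = 384            # 10GbE ports per spine (the 7508 figure above)
    UPLINKS_PER_LEAF = SPINES    # one 10GbE uplink from each leaf to each spine
    SERVER_PORTS_PER_LEAF = 48   # assumed 48 x 1GbE server ports per leaf

    max_leaves = (SPINES * SPINE_PORTS) // UPLINKS_PER_LEAF   # 384 leaf switches
    servers = max_leaves * SERVER_PORTS_PER_LEAF              # 18,432 servers
    oversubscription = (SERVER_PORTS_PER_LEAF * 1) / (UPLINKS_PER_LEAF * 10)

    print(max_leaves, servers, round(oversubscription, 2))    # 384 18432 1.2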

Cisco DC Architecture - Consists of complete end-to-end solutions with the Unified Network, Unified Computing and Unified Services. The Unified Network includes Nexus Series switches for data center Top-of-Rack, aggregation and core applications; a multi-tier network solution. Cisco aims to maintain its networking lead in the fabric era with its Cisco Unified Fabric vision, says Shashi Kiran, senior director of Cisco Data Center Solutions. The Unified Fabric strategy combines Cisco Nexus switches and MDS SAN switches with its NX-OS operating system and FabricPath, its variation on the TRILL standard also supported by Brocade. Cisco leverages its vast fixed and modular product options to offer network designs to fit a variety of data center needs. Flexible support for Ethernet and Fibre Channel on the same hardware gives them an added advantage. FabricPath provides converged Fibre Channel and Ethernet architectures, providing seamless and transparent Ethernet and storage area network (SAN) convergence and migration.


Ethernet Ubiquity: Consolidation, Convergence, Performance

Physical Evolution: Virtualization, Interoperability, Scalability

Virtual Evolution

ExtremeXOS delivers Open Fabric as opposed to Closed Fabric. Extreme Networks XNV allows auto-configuration of Virtual Port Profiles.

Emerging players


OpenFlow is one of the more exciting networking protocols to be developed in recent memory. This standards effort is being shepherded by the Open Networking Foundation and boasts a who's who of networking vendors and service providers. It has the potential to shape the future of data center automation and management through software-defined networking. The BlackDiamond X8, powered by the ExtremeXOS modular operating system, helps provide the foundation for virtualized applications.

Arista - Limited to the Data Center:
- Difficulty in coding to 3 hardware platforms and having to do software clean-up due to hardware difficulties
- While CloudVision works well in a confined, small-sized network, the idea of executing a command across thousands of switches can quickly become very daunting
- Arista has limited their virtualization environment to VMware alone
- While vEOS addresses the control and configuration of the vSwitch, Extreme is proposing the complete removal of the vSwitch with Direct Attach
- LANZ would require integration into a middleware application, which in itself can be costly; without that, the system could generate more information about network status than any human could reasonably resolve/analyze
- No 40GbE modules today in the core/aggregation

Software & Features Comparison - The 7500 does not support IP Multicast, as tested by Lippis (2011). These issues have been known about Arista (some might have been rectified):
- Single supervisor failure causes the whole switch to reload
- Losing one fan module or fabric shuts down the whole switch
- Proper I/O module cooling requires all fabric modules
- Has the classic HOLB (head-of-line blocking) issue
- Single 10GbE I/O module; no choice, no scale beyond that
- Limited power budget for future 40GbE/100GbE and 10GBASE-T
- No user-configurable QoS CLI
- Supports limited OSPF/BGP features; non-deployable in a production network
- No support for virtualization awareness
- No support for fabric convergence or storage services; no FCoE, FC or DCB
- No support for MPLS
- Lacks a robust L2/L3 feature set: no IP Multicast, IPv6 or VRRP, and small table sizes
- Very limited technical support and field service
- It's a fast but dumb switch!

Ethernet Ubiquity: Consolidation, Convergence, Performance

Physical Evolution: Virtualization, Interoperability, Scalability

Virtual Evolution. ExtremeXOS delivers Open Fabric as opposed to Closed Fabric. Extreme Networks XNV allows auto-configuration of Virtual Port Profiles.

Old Guard


Competitive Weaknesses - The new 40G and 100G blades are based upon the older M2 fabric rather than the more recently announced F2 fabric, and therefore run at the slower rate of 240G/slot versus the F2, which runs at 480G/slot. This is due in part to the routing and tables; however, given the increased performance and capacity demands some of the largest environments are experiencing, a four- or even eight-port (2:1) 100G F2 module would be highly desirable.

Latency Comparison

BlackDiamond X8 has a fraction of the latency of the MLX. The MLX is a total misfit for data center or HPC applications.

Conclusion: BlackDiamond X8 is a clear Winner for Low-Latency applications (HPCC, Financial, Storage, IXP)

MLX is a Metro box trying to fit into the data center due to a lack of the right products. MLX has no serious data center centric features such as DCB, VEPA or TRILL, and cannot provide storage convergence or server virtualization support. BlackDiamond X8 with EXOS has all the required features to be deployed in the data center edge or core and is ideally suited for those applications. ExtremeXOS is the same for BlackDiamond and Summit products.

Conclusion: BlackDiamond X8, running ExtremeXOS, has a more stable and robust OS, is simple to manage and has many DC features to offer.

The switch fabric latency depends mostly on the packet size and the number of store-and-forward hops, as it does in other Broadcom devices. Packets coming in and going out on the same device incur the latency of one store-and-forward hop, while packets ingressing and egressing on different devices incur three store-and-forward hops. In addition, there is a relatively small amount of latency added by the chip for internal operations. Notice that there is little difference in average delay variation across all vendors, demonstrating consistent latency under heavy load at zero packet loss for L2 and L3 forwarding. Average delay variation is in the 5 to 10 ns range. Differences do exist in average latency between suppliers, owing to different approaches to buffer management and port densities.
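A minimal sketch of that hop-count model follows (Python). The per-chip overhead and the example frame size are assumptions for illustration, not vendor measurements; the point is simply that a chip-to-chip path pays three serialization-plus-processing delays where a same-chip path pays one.

# Minimal latency model sketch for a multi-chip switch fabric, assuming the
# store-and-forward hop behavior described above (1 hop same-chip, 3 hops
# chip-to-chip) plus a small fixed per-chip processing delay.

def store_and_forward_latency_ns(frame_bytes: int, link_gbps: float,
                                 hops: int, chip_overhead_ns: float = 300.0) -> float:
    serialization_ns = frame_bytes * 8 / link_gbps  # ns per hop at `link_gbps` Gbit/s
    return hops * (serialization_ns + chip_overhead_ns)

# Same-chip path (1 hop) vs. chip-to-chip path (3 hops) for a 1,500-byte frame at 10GbE:
print(store_and_forward_latency_ns(1500, 10, hops=1))  # ~1,500 ns
print(store_and_forward_latency_ns(1500, 10, hops=3))  # ~4,500 ns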

2 to 9 times faster

The X8 does not need deeper buffers because it is cut-through rather than store-and-forward only. The X8 has a fraction of the fabric latency of the EX8208/EX8216 and of the 7508.

Emerging players

.

We show average latency and average delay variation across all packet sizes for layer 2 and layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance between modules and across their backplanes. The clear standout here is the Extreme Networks BlackDiamond X8, whose latency measurements show it is 2 to 9 times faster at forwarding packets than all others. Another standout is the Arista 7504's large latency at 64-byte packets. According to Arista, the Arista 7500 series performance and design are fine-tuned for the applications its customers use; these applications use mixed packet sizes, rather than an artificial stream of wire-speed 64-byte packets. Ingress port hashing is used for all blades that provide only 10Gb links. When a blade provides 40Gb links (or a mixture of 40Gb and 10Gb), the packet field hash is used. Thus 10Gb blades (and 40Gb blades configured to provide only 10Gb ports) are fully non-blocking. 40Gb blades operating with 40Gb ports have sufficient switch fabric bandwidth to be non-blocking, but the packet field hash can cause packet loss before full bandwidth utilization of the trunk is reached.
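To illustrate the difference between the two hashing approaches mentioned above, here is a small Python sketch. The hash function, field names and member count are illustrative assumptions, not the actual ASIC algorithm; the takeaway is that a flow-based field hash can map many flows onto the same fabric member and congest it before the trunk as a whole is full.

# Sketch of per-ingress-port hashing vs. packet-field hashing for picking a
# fabric/trunk member. Illustrative only.

import zlib

FABRIC_MEMBERS = 4  # hypothetical number of fabric links in the trunk

def member_by_ingress_port(ingress_port: int) -> int:
    return ingress_port % FABRIC_MEMBERS          # deterministic per port

def member_by_packet_fields(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return zlib.crc32(key) % FABRIC_MEMBERS       # flow-based; collisions can overload one member

print(member_by_ingress_port(17))
print(member_by_packet_fields("10.0.0.1", "10.0.0.2", 40000, 443))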

Latency variation (jitter) is between 5 and 10 ns for all frame sizes. Under some circumstances, RFC 3918 multicast forwarding latency is 2.3 µs for 64-byte frames and 3.8 µs for 9,216-byte frames in cut-through mode, chip-to-chip. The Lippis Enterprises Cloud test reports 3.5 µs overall average latency in cut-through mode. The Cloud test includes many north-south and east-west traffic flows, with a frame size mix designed to approximate data center traffic. RFC 2889 congestion test results show no frame loss, no head-of-line blocking, and appropriate back-pressure with either 802.3x flow control or PFC (802.1Qbb) enabled.

Standard architecture: while they talk about flattening of layers, their architecture is still essentially edge, aggregation, core, which they have re-named to move people's mindset over to low-latency type deployments. It's a marketecture! They state that because such high-density switches exist, most of the traffic is able to stay at the spine layer, hence a two-tier model; however, they still architecturally show a network core.

Against Arista - No clear silicon strategy:
- Started with Fulcrum for 10G.
- Unclear path to 40G support - will it require new hardware?
- Moved to Dune for 1G and chassis.
- Moving to Broadcom XGS for next-generation 10G.


2 to 9 times faster. The X8 does not need deeper buffers because it is cut-through rather than store-and-forward only. The X8 has a fraction of the fabric latency of the EX8208/EX8216 and of the 7508.

Old Guard

.

Competitive Strengths - Cisco has announced both six-port 40G and two-port 100G options in an M2 fabric blade for the Nexus 7000 platform. With a maximum chassis port density of 96 x 40G or 32 x 100G, this will suffice for all but the largest environments. Intended both as a data center interconnect option and as backbone/data center spine connectivity, these new options reduce the need to trunk four to ten ports of 10G to accommodate high bandwidth needs. Given that the M2 fabric supports a full L2/L3 feature set, customers may also employ these as core switches connecting campuses as well.

The new 6500 four-port 40G blade will offer high-speed interconnectivity to other sites, interconnectivity to other 6500s, and of course connect this traditional campus backbone platform with the new 40G modules on the data center Nexus 7000 to improve performance. Additionally, Cisco stated the new module is TRILL, OTV and LISP ready (though not supported today).

A new member of the Nexus 3000 family, the Nexus 3064-X, further lowers latency, reduces power consumption and offers greater 10G density than previously available. Popular in demanding environments that require the lowest achievable latency, the Nexus 3000 has acquired a sizable market in a short period of time, and this newest member promises to further improve that position based on customer demands (and the features that are sought after in this class of product).

A new form factor and member of the Catalyst 4500 family, the 4500-X breaks from the chassis-based form factor the 4500 has always had, in a one-RU package. Boasting 40 ports of 10GbE and supporting the full software feature set of the 4500, this new product offers a compelling aggregation story for customers who are looking for a dense, high-performance and feature-rich switch to aggregate ever more powerful access devices with 10G links.

Perhaps the most notable software enhancement of this announcement is a new feature dubbed Easy Virtual Networking (EVN). Cisco is providing a capability that radically reduces the number of configuration touch points required to implement and configure a VRF-Lite or VRF-derivative network. With on the order of a 30-to-1 reduction in configuration points (4 vs. 120), customers can reduce both the configuration time to deploy and the probability of introducing human errors, thereby eliminating embarrassing or costly network outages related to configuration of this technology.


Buffers Mean Stale Data

BDX8 was the only core switch to utilize back pressure to slow down the rate.
FR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure

Emerging players

.

Buffers Mean Stale Data - The Extreme Networks BDX8 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. We tested congestion in both 10GbE and 40GbE configurations, a first for these tests. The Extreme Networks BDX8 sent flow-control frames to the Ixia test gear signaling it to slow down the rate of incoming traffic flow. We anticipate that this will become a feature that most core switch vendors adopt, as it provides reliable network forwarding when the network is the data center fabric supporting both storage and datagram flows.

All core switches performed extremely well under congestion conditions, with nearly no HOL blocking and between 77% and 100% aggregated forwarding as a percentage of line rate. It's expected that at 150% offered load, a core switch's port would show 33% loss if it is receiving at 150% of line rate and not performing back pressure or flow control. Extreme's BDX8 was the only core switch to utilize back pressure to slow down the rate of traffic flow so as to maintain 100% throughput.
FR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure
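The 33% figure above is simple arithmetic: a port can drain only line rate, so anything offered beyond that is dropped unless flow control pushes back. A minimal sketch of that calculation (Python, illustrative; not the test methodology itself):

def expected_loss_pct(offered_pct_of_line_rate: float) -> float:
    """Expected loss when a port is offered more than line rate and no back pressure is used."""
    if offered_pct_of_line_rate <= 100:
        return 0.0
    return (offered_pct_of_line_rate - 100) / offered_pct_of_line_rate * 100

print(expected_loss_pct(150))  # ~33.3% loss without flow control / back pressure
print(expected_loss_pct(100))  # 0% loss at or below line rate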

Performance & Capacity Comparison
- BlackDiamond X8 has 32X the performance, 6X the switching capacity and 8X the bandwidth per slot for future expansion.
- BlackDiamond X8 has dedicated fabric modules for data center class high availability, while the EX8208 has its primary fabric combined with the SRE.
Conclusion: BlackDiamond X8 is a clear Winner in terms of Performance, Capacity and Investment Protection.

The Extreme Networks BDX8 was tested across all 256 ports of 10GbE and 24 ports of 40GbE. Its average cut-through latency ranged from a low of 2,326 ns (about 2.3 µs) to a high of 19,083 ns (about 19 µs) at the jumbo 9,216-byte frame size for layer 2 traffic. Its average delay variation ranged between 5 and 9 ns, providing consistent latency across all packet sizes at full line rate; a welcome measurement for converged I/O implementations.

For layer 3 traffic, the Extreme Networks BDX8's measured average cut-through latency ranged from a low of 2,328 ns at 64 bytes to a high of 18,578 ns (about 18.6 µs) at the jumbo 9,216-byte frame size. Its average delay variation for layer 3 traffic ranged between 4 and 9 ns, providing consistent latency across all packet sizes at full line rate; again a welcome measurement for converged I/O implementations.
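As a rough reference point for why measured latency grows with frame size even on a very fast switch, the sketch below computes serialization delay alone, i.e. the time just to clock a frame onto a 10GbE link. This is not the vendor's latency formula, only a sanity check on the order of magnitude.

def serialization_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time to transmit one frame onto a link, in nanoseconds."""
    return frame_bytes * 8 / link_gbps

print(serialization_ns(64, 10))      # ~51 ns at 10GbE
print(serialization_ns(9216, 10))    # ~7,373 ns at 10GbE for a jumbo frame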

Buffers Mean Stale Data. The BDX8's breakthrough architecture made it the only core switch to utilize back pressure to slow down the rate.
FR% = Agg Forwarding Rate (% Line Rate); HOL = Head of Line Blocking; BP = Back Pressure

Old Guard

.

BlackDiamond X8 - The BlackDiamond X8 is built with cloud data centers in mind. With its densely packed design, the BlackDiamond X8 takes only one-third of a rack to provision up to 768 wire-speed 10GbE ports and up to 192 wire-speed 40GbE ports.

And the BlackDiamond X8 has ample capacity to switch all those ports at wire speed, with more than 20 Tbps of switching throughput. With such performance and capacity, 128,000 virtual machines can be supported on a single BlackDiamond X8 and can be migrated and tracked through the ExtremeXOS Network Virtualization (XNV) and Virtual Port Profiles features. In addition, the BlackDiamond X8's open-standards-based Data Center Bridging is ideal for IP-based storage services.
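A quick consistency check of those capacity claims (Python, illustrative): the full-duplex demand of the quoted port counts fits comfortably within the >20 Tbps fabric figure above.

def full_duplex_demand_tbps(ports: int, port_gbps: int) -> float:
    """Total full-duplex bandwidth demand of a port configuration, in Tbps."""
    return ports * port_gbps * 2 / 1000.0  # x2 for full duplex, Gbps -> Tbps

print(full_duplex_demand_tbps(768, 10))   # 15.36 Tbps for 768 x 10GbE
print(full_duplex_demand_tbps(192, 40))   # 15.36 Tbps for 192 x 40GbE
# Both fit within the >20 Tbps switching capacity quoted for the chassis.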

Being the fastest switch in the industry, as proven by Lippis testing, the BlackDiamond X8 switches packets with the lowest latency of only 2.3 µs port-to-port. The BlackDiamond X8 is built with data center class availability and efficiency in mind. The switch supports 1+1 control plane, N+1 data plane and N+N power redundancy, making it extremely resilient against failures. At the same time, with its front-to-back cooling, innovative midplane-less design and intelligent controls, it consumes only 5.62 W per 10GbE port, making it the most power-efficient switch in its class.

Two-layer ToR and core: X670 and BDX, with EXOS end-to-end. Extreme Networks has chosen to build its data center strategy around an open, interoperable architecture, avoiding a "black box" proprietary approach.

Extreme Networks Open Fabric:
- High-speed, low-latency interconnectivity
- Non-blocking/non-oversubscribed infrastructure
- Layer 2 connectivity (Layer 3 connectivity is also available)
- Multiple active paths with fast failover
- Mesh connectivity rather than a tree-type topology
- Simple management, configuration and provisioning


Performance & Capacity

BDX8 requires fewer fabric modules, with or without N+1. BDX8 has 2X the bandwidth per slot for future expansion.

Emerging players

.

A few standouts arose in this test. First, Arista's 7504 delivered nearly 84% of aggregated forwarding rate as a percentage of line rate during congestion conditions for both L2 and L3 traffic flows, the highest for all suppliers that did not utilize flow control. This is due to its generous 2.3 GB of buffer memory as well as its VOQ architecture. Note also that Arista was the only core switch, between Juniper and Alcatel-Lucent, that showed HOL blocking for 9,216-byte packets at L3. Note the 7504 was in beta testing at the time of the Lippis/Ixia test and there was a corner case at 9,216 bytes at L3 that needed further tuning. Arista states that its production code provides wire-speed L2/L3 performance at all packet sizes without any head-of-line blocking.
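To put the 2.3 GB buffer figure in perspective, here is a rough sketch of how long a shared buffer could absorb a sustained overload. It is purely illustrative: real VOQ behavior depends on per-queue carving, drop thresholds and flow control, none of which are modeled here.

def absorb_time_ms(buffer_gb: float, overload_gbps: float) -> float:
    """Time until a buffer fills when input exceeds output by `overload_gbps`."""
    buffer_bits = buffer_gb * 8e9
    return buffer_bits / (overload_gbps * 1e9) * 1000

# A single 10GbE port offered 150% of line rate has 5 Gbps of excess traffic:
print(absorb_time_ms(2.3, 5))   # ~3,680 ms if the whole buffer were available to that one port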

QFabric Performance, Capacity, Reliability Comparison

- BlackDiamond X8 has 2X the switching capacity of QFabric.
- BlackDiamond X8 has 2X the bandwidth per slot for future expansion.
- BlackDiamond X8 has N+1 data plane (fabric) protection vs. QFabric with no data plane protection, which could result in severe data loss.

BlackDiamond X8 is a clear Winner in terms of Performance, Capacity, Investment Protection and Reliability.
Second, Alcatel-Lucent and Juniper delivered consistent and perfect performance with no HOL blocking or back pressure for all packet sizes during both L2 and L3 forwarding.

Against Arista:
- VM Tracer and vEOS work with VMware only; no support for Citrix. Cloud providers use Citrix/KVM.
- No centralized provisioning, management or analytics, i.e. no management platform - important in public cloud infrastructure.
- No cloud-based network analytics and monitoring solution.
- No support for IPv6; limited L3.
- No support for virtual machine switching - not really a cloud OS!
- No stacking!


Performance & Capacity

BDX8 requires fewer fabric modules, with or without N+1. BDX8 has 2X the bandwidth per slot for future expansion.

Old Guard

.

Cisco DC Architecture - Consists of complete end-to-end solutions with the Unified Network, Unified Computing and Unified Services.

The Unified Network includes Nexus Series switches for data center top-of-rack, aggregation and core applications in a multi-tier network design. Cisco leverages its vast fixed and modular product options to offer network designs that fit a variety of data center needs. Flexible support for Ethernet and Fibre Channel on the same hardware gives them an added advantage. FabricPath provides converged Fibre Channel and Ethernet architectures, enabling seamless and transparent Ethernet and storage area network (SAN) convergence and migration. Despite conformance claims to open standards, Cisco has implemented many proprietary protocols that lock in the user.

Brocade Performance & Capacity Comparison

- BlackDiamond X8 has 6-8X the performance of the MLX32 in a much smaller size.
- BlackDiamond X8 has 1.3-2.5X the switching capacity of the MLX32.
- BlackDiamond X8 has 2.5-5X the bandwidth per slot for future expansion.
- BlackDiamond X8 requires fewer fabric modules, with or without N+1, for wire-speed performance, resulting in lower total fabric cost.

Conclusion: BlackDiamond X8 is a clear Winner in terms of Performance, Capacity and Investment Protection.
7508 Performance, Capacity, Reliability Comparison

- BlackDiamond X8 has 2X the performance of the 7508.
- BlackDiamond X8 has 2X the switching capacity of the 7508.
- BlackDiamond X8 has 2X the bandwidth per slot for future expansion.
- BlackDiamond X8 requires fewer fabric modules, with or without N+1, for wire-speed performance, resulting in lower total fabric cost.
Conclusion: BlackDiamond X8 is a clear Winner in terms of Performance, Capacity and Investment Protection.


Zero packet loss

The BDX8 delivered 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed as its latency stayed under 5.07 µs, measured in cut-through mode.

Emerging players

.

Lowly Oversubscribed Three-Stage Open Fabric Clos-Oriented Data Centre - The Extreme Networks BDX8 seeks to offer high performance, low latency, high port density of 10 and 40GbE and low power consumption for private and public cloud networks. This architecture proved its value as it delivered the lowest latency measurements to date for core switches while populated with the equivalent of 352 10GbE ports running traffic at line rate. Not a single packet was dropped, offering 100% throughput at line rate for the equivalent of 352 10GbE ports.

Extreme Networks BDX8 was measured in store-and-forward plus cut-through mode. For cut-through cloud simulation latency results, see the X8 write-up. The standout here is the Extreme Networks BDX8, which forwards a range of cloud protocols 4 to 10 times faster than the others tested.

If we remove or bypass these inefficient switching layers, we can get close to a four-or-fewer-stage Clos-styled architecture. I say four stages because we have the X670 and then the BDX8, which (like the 8800) has local line-card switching and a set of fabrics and can therefore be seen as a maximum of three stages.
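A sketch of that stage counting follows (Python). The topology and hop rules are assumptions made for illustration: an X670 at each ToR, a BDX8 spine whose line cards can switch locally, and a fabric stage that is only crossed when traffic moves between spine line cards.

def stages(src_leaf: str, dst_leaf: str, same_spine_linecard: bool) -> int:
    """Count switching stages for a flow between two ToR (leaf) switches."""
    hops = 1                      # ingress ToR (X670)
    if src_leaf != dst_leaf:
        hops += 1                 # spine line card (BDX8, local switching)
        if not same_spine_linecard:
            hops += 1             # spine fabric stage between line cards
        hops += 1                 # egress ToR (X670)
    return hops

print(stages("leaf1", "leaf1", True))    # 1: traffic stays on the same ToR
print(stages("leaf1", "leaf2", True))    # 3: ToR -> spine line card -> ToR
print(stages("leaf1", "leaf2", False))   # 4: ToR -> line card -> fabric -> ToR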

Density & Consolidation Comparison - So next time you're talking with a customer about data centre fabrics, make sure they are evaluating like for like. Ensure they know what the real oversubscription is in our solution and the competitors'. If they have been sold on 'any to any' by our competitors, tell them the truth about it. Get them to ask the competitors about Clos: how many stages do they have? What's the oversubscription between stages? And if Juniper is in the mix, ask them how they can get to traffic in the middle stage - we can break traffic out of the core; Juniper can't.
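The "what's the real oversubscription?" question above reduces to simple arithmetic at each stage: downlink bandwidth divided by uplink bandwidth. The port counts below are hypothetical examples, not a specific product configuration.

def oversubscription(down_ports: int, down_gbps: int, up_ports: int, up_gbps: int) -> float:
    """Oversubscription ratio of one stage: total downlink bandwidth / total uplink bandwidth."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

print(oversubscription(48, 10, 4, 40))   # 48x10G down, 4x40G up  -> 3.0 (3:1 oversubscribed)
print(oversubscription(48, 10, 12, 40))  # 48x10G down, 12x40G up -> 1.0 (non-blocking)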

- The 7508 consumes 1.5X the space of the BlackDiamond X8. This results in more rack rental expense and more power and cooling expense, and also in lost revenue that the additional rack space would otherwise have generated (lost opportunity cost).
- It still does not provide a mix-n-match of 10 and 40GbE in a single box.
- Limited presence in the DC market (mostly the financial sector).
- Lack of software maturity and feature completeness. From the data sheet: no VRRP, CIR, ETS, DCBX, rate limiting, PFC, L2/L3/L4 policy conditions, DSCP marking/policing, IPv6 support, and limited L3 support.
- EOS not proven to be portable across low-end and high-end platforms.
- Lack of CEE support (UNH results below).
- Poor L2 address learning (internal test results below).
Conclusion: BlackDiamond X8 is a clear Winner in terms of Density and Consolidation.


Zero packet loss

The BDX8 delivered 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed as its latency stayed under 5.07 µs, measured in cut-through mode.

Old Guard

.

The Nexus series is expensive, and fitting every data center comes at a high cost. Cisco Systems' data center strategy consists of complete end-to-end solutions with the Unified Network, Unified Computing and Unified Services. The Unified Network includes Nexus Series switches for top-of-rack (ToR), aggregation and core applications as well as the Catalyst 6500 and 4900 Series switches for ToR and aggregation applications.

Cisco UCS aims to streamline data center resources to reduce total cost of ownership, while providing the flexibility to grow as needed and reducing the total number of computing and networking devices within a deployment. FabricPath provides converged Fibre Channel and Ethernet architectures, providing seamless and transparent Ethernet and storage area network (SAN) convergence and migration.
- Catalyst 6K is power hungry and based on an archaic architecture, which does not support DCB protocols, lossless Ethernet, FCoE, iSCSI, etc.
- FabricPath is not an open networking solution; extending the fabric to the ToR is therefore a proprietary system.
- No definitive solutions aimed at improving the virtualization space, such as Extreme's XNV and Direct Attach.
- The Nexus series and the Unified Networking infrastructure will fit every data center.
- Catalyst 6K switches have been extended to support the unified network solutions.
- Nexus 5500 series switches support lossless Ethernet, FCoE and Fibre Channel on the same hardware.
- Nexus 2K series provides fabric path services to extend the Nexus 7K/5K to the ToR.
- Nexus 3K series is based on merchant silicon, a deviation from all other Nexus Series products.

Brocade Density, Diversity & Consolidation Comparison

- The MLX32 has no 40GbE for the data center use case and is very tall/heavy (33RU).
- BlackDiamond X8 provides 3X the 10GbE density per chassis, 9X per rack.
- BlackDiamond X8 provides 1-2X the 100GbE density per chassis, 3-6X per rack.
- This results in more rack rental expense and more power and cooling expense required by the MLX32 to match the density BlackDiamond X8 can support.
- This also results in lost revenue that the additional rack space would otherwise have generated (lost opportunity cost).

Conclusion: BlackDiamond X8 is a clear Winner in terms of Density, Diversity and Consolidation


Performance and Power


BlackDiamond X8 generates less than half the heat of the EX8208.

All of the above results in lower power and cooling expenses.

Emerging players

.

The BlackDiamond X8 delivers the lowest latency (2.3 µs), highest port density (768 10GbE ports in a third of a rack) and lowest power (5.6 watts per port). The Extreme X8 possesses the performance and power efficiency required to be deployed in a two-tier network architecture, reducing application latency as well as capital and operational costs.

Power and cooling features include: intelligent power management for unused ports, and front-to-back cooling with variable fan speed, allowing greater power efficiency. Designed for high availability: 1+1 management, N+1 fabric, N+1 fan, N+N power grid.

Lower Operating Costs - The BlackDiamond X8 is designed to be efficient, with power consumption as low as 5.6 Watts per 10GbE port and 22.5 Watts per 40GbE port. Front-to-back cooling helps data center network operators optimize the cooling environment. Fan speed is dynamically controlled and is set no higher than required. This allows efficient use of the cool air from the cold aisle and produces lower heat in the hot aisle.
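Back-of-the-envelope totals from the per-port numbers quoted above (5.6 W per 10GbE port, 22.5 W per 40GbE port), with heat converted using the standard 1 W ≈ 3.412 BTU/hr. These are illustrative calculations, not data-sheet values.

def system_power_w(ports: int, watts_per_port: float) -> float:
    """Total port power draw for a fully loaded configuration."""
    return ports * watts_per_port

def heat_btu_per_hr(watts: float) -> float:
    """Convert electrical power to heat output in BTU/hr."""
    return watts * 3.412

p10 = system_power_w(768, 5.6)     # ~4,300 W for a fully loaded 10GbE system
p40 = system_power_w(192, 22.5)    # ~4,320 W for a fully loaded 40GbE system
print(p10, heat_btu_per_hr(p10))
print(p40, heat_btu_per_hr(p40))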

The QFX3008 is just a pass-through interconnect, more like the SFM in the BDX8 chassis, whereas BDX8 power includes the I/O modules that can actually connect any 40GbE device (open). BDX8 power therefore cannot be compared apples-to-apples. Conclusion: power can be compared on a component-by-component basis but not at the system level, due to the different architectures. Summit X670 consumes less power than the QFX3500 and generates less heat than the QFX3500. All of the above results in lower power and cooling expenses. Conclusion: Summit X670 is a clear Winner for lowest Total Cost of Ownership or Opex.

The EX8208 requires a special baffle to redirect airflow, which uses more space. BlackDiamond X8 consumes less than half the power of the EX8208 and generates less than half the heat of the EX8208. All of the above results in lower power and cooling expenses. Conclusion: BlackDiamond X8 is a clear Winner for lowest Total Cost of Ownership or Opex.


Performance and Power


Nexus 7018 reasons for high power consumption:
- A lot of ASICs per I/O card - 12 ASICs for just 48 ports of 10Gbit/s (4 ports per ASIC).
- There are 28 fans in total, weighing 22 kg, with power consumption for the fans alone of up to 1,273 W!

Old Guard

.

Energy & TCO Comparison - Taking the latency aspect, for example, this shows the latency and power consumption competitive tests done by Lippis that are publicly available through the Lippis Report. With only 2.3 µs port-to-port latency, the BlackDiamond X8 became the fastest core switch in the world. And taking the power aspect, with only 5.62 Watts per 10GbE port, the BlackDiamond X8 becomes the most energy-efficient switch in its class.

- BlackDiamond X8 shrinks space usage to roughly 1/6 that of the MLX16.
- BlackDiamond X8 generates 80% less heat than the MLX16.
- BlackDiamond X8 is 80% more power efficient on a per-rack basis than the MLX16.
- BlackDiamond X8 is 87% more power efficient on a per-port basis than the MLX16.
- All of the above results in lower power and cooling expenses.

Conclusion: BlackDiamond X8 is a clear Winner for lowest Total Cost of Ownership or Opex.
Counter Points:
- No 40Gbit interfaces; no 40GE interfaces for inter-switch links (ISL).
- Limited to the data center.
- No L3 features in the Ethernet fabric, including L3 ACLs.
- Extreme is proposing the complete removal of the vSwitch with Direct Attach for 100% visibility into data traffic flows.
- No overall network management system for simplifying network configuration of key features.

Nexus Counter Points - Cisco claims conformance to open standards and seamless interoperability with other vendors' products. In reality, Cisco has introduced many proprietary protocols and architectures in its data center solutions, such as FabricPath and Overlay Transport Virtualization (OTV), limiting interoperability and multi-vendor integration.

- The Nexus series is expensive, and fitting every data center comes at a high cost.
- Catalyst 6K is power hungry and based on an archaic architecture, which does not support DCB protocols, lossless Ethernet, FCoE, iSCSI, etc.
- Software feature packs supporting FCoE and FC are not generally available at this time.
- FabricPath is not an open networking solution; extending the fabric to the ToR is therefore a proprietary system.
- No definitive solutions aimed at improving the virtualization space, such as Extreme's XNV and Direct Attach.


IP Multicast - Comparison

BDX8's IP Multicast and congestion test performance were the best measured to date in terms of latency and throughput. BDX8 is the fastest at forwarding IP Multicast thanks to its replication chip.

Emerging players

.

Arista does not support IP Multicast at this time and thus is excluded from this test. The Extreme Networks BDX8 offers the lowest IP Multicast latency at the highest port density tested to date in these industry tests. The Extreme Networks BDX8 is the fastest at forwarding IP Multicast thanks to its replication chip design, delivering forwarding speeds that are 3-10 times faster than all others. Alcatel-Lucent's OmniSwitch 10K demonstrated 100% throughput at line rate and the shortest aggregated average latency. Juniper's EX8216 demonstrated 100% throughput at line rate and an IP multicast aggregated average latency twice that of Alcatel-Lucent's OmniSwitch.

Counter Points to Arista:
- No 40GE interfaces for inter-switch links (ISL).
- Limited to the data center.
- No L3 features in the Ethernet fabric, including L3 ACLs.
- Extreme is proposing the complete removal of the vSwitch with Direct Attach for 100% visibility into data traffic flows.
- No overall network management system for simplifying network configuration of key features.

Extreme Networks BDX8 performed very well under cloud simulation conditions, delivering 100% aggregated throughput while processing a large combination of east-west and north-south traffic flows. Zero packet loss was observed as its latency stayed under 4.6 µs and 4.8 µs measured in cut-through and store-and-forward modes, respectively.

- 4,094 VLANs (Port, Protocol, IEEE 802.1Q)
- 9,216-byte maximum packet size (Jumbo Frame)
- 384 load-sharing trunk groups, up to 16 members per trunk
- 2,048 ingress and 1,024 egress ACL rules per group of 24 10GbE or 6 40GbE ports
- 1,024 ingress ACL meters and 512 egress ACL meters per group of 24 10GbE or 6 40GbE ports
- Ingress and egress bandwidth policing/rate limiting per flow/ACL

IP Multicast - Comparison

BDX8's IP Multicast and congestion test performance were the best measured to date in terms of latency and throughput. BDX8 is the fastest at forwarding IP Multicast thanks to its replication chip.

Old Guard

.

A mish-mash of software images, capabilities and features, even across NX-OS - Cisco leverages its vast fixed and modular product options to offer network designs that fit a variety of data center needs. Flexible support for Ethernet and Fibre Channel on the same hardware gives them an added advantage over much of the competition with the Nexus 5K/2K series products. Proposed network architectures include multi-tier and server cluster designs, integrating the Nexus series and high-end Catalyst series switches, Cisco Unified Computing System (UCS), FabricPath technology and the recently introduced Jawbreaker products.

Nexus 5K:
- Low scalability (e.g., the Nexus 5000 supports only 16K MAC addresses).
- No hardware support for L3 forwarding.
- No stacking.
- No 10GBaseT.
Nexus 2K:
- No local switching (adds latency): everything has to go to the Nexus 5K.
- Adds oversubscription (up to 4:1 for the 10G version).
- 4 QoS queues (CEE allows for 8).
- Keyed optics; copper only.
Nexus 1000v:
- Only works with VMware; no support for Citrix/KVM/Hyper-V.
FabricPath and OTV are proprietary technologies, available on limited Nexus products only.

Brocade Software & Features Comparison
MLX is a Metro box, trying to fit into the data center due to a lack of the right products. MLX has no serious data center centric features like DCB, VEPA or TRILL, and cannot provide storage convergence or server virtualization support.

Conclusion: BlackDiamond X8, running ExtremeXOS, has a more stable and robust OS, is simple to manage and has many DC features to offer.


Clear Differentiation

*Future Availability
BlackDiamond X8 vs. Arista 7508 - BlackDiamond X8 differentiators:
- 2X 10GbE Density
- 10/40GbE Mix-n-Match
- 5X Performance
- 2X Switching Capacity
- 2X Bandwidth per Slot
- ~1/2 Latency
- ~1/2 Power & Heat
- Mature EXOS Software
- DCB for Storage Services
- XNV with all 3 Hypervisors
- MPLS/VPLS for Inter-DC
- IP Multicast (lowest latency)

.

Designed from the ground up with the density, performance and capacity requirements of cloud scale data centers in mind, the BlackDiamond X8 provides 20.48Tbps total switching capacity, supporting up to 2,304 10GbE ports or up to 576 40GbE ports at wire-speed in a single seven-foot rack helping increase utilization of limited and expensive rack space. The BlackDiamond X8 fabric design uses an orthogonal direct mating system between interface and switch fabric modules, eliminating the performance bottleneck of backplane or midplane design. The BlackDiamond X8 can