Special Note Regarding Forward Looking Statements
This presentation contains forward-looking statements that involve substantial risks and uncertainties, including, but not limited to, statements relating to goals, plans, objectives and future events. All statements, other than statements of historical facts, included in this presentation regarding our strategy, future operations, future financial position, future revenues, projected costs, prospects and plans and objectives of management are forward-looking statements. The words “anticipates,” “believes,” “estimates,” “expects,” “intends,” “may,” “plans,” “projects,” “will,” “would” and similar expressions are intended to identify forward-looking statements, although not all forward-looking statements contain these identifying words. Examples of such statements include statements relating to products and product features on our roadmap, the timing and commercial availability of such products and features, the performance of such products and product features, statements concerning expectations for our products and product features, and projections of revenue or other financial terms. These statements are based on the current estimates and assumptions of management of Force10 as of the date hereof and are subject to risks, uncertainties, changes in circumstances, assumptions and other factors that may cause the actual results to be materially different from those reflected in our forward-looking statements. We may not actually achieve the plans, intentions or expectations disclosed in our forward-looking statements and you should not place undue reliance on our forward-looking statements. In addition, our forward-looking statements do not reflect the potential impact of any future acquisitions, mergers, dispositions, joint ventures or investments we may make. We do not assume any obligation to update any forward-looking statements. Any information contained in our product roadmap is intended to outline our general product direction and it should not be relied on in making purchasing decisions. The information on the roadmap is (i) for information purposes only, (ii) may not be incorporated into any contract and (iii) does not constitute a commitment, promise or legal obligation to deliver any material, code, or functionality. The development, release and timing of any features or functionality described for our products remains at our sole discretion.
Agenda
About Force10 (just 3 slides)
100 GbE Technology Requirements
– Requirements and Feasible Technology
– How is Force10 Going to Get There?
Ethernet Alliance
100 GbE Standards Update
– OIF
– IEEE
We Make Switch/Routers and Switches
E1200: 1.68 Tbps, up to 672/1260 GbE, 56/224 x 10 GbE (1/2 rack)
E600: 900 Gbps, up to 336/630 GbE, 28/98 x 10 GbE (1/3 rack)
E300: 400 Gbps, up to 132/288 GbE, 12/48 x 10 GbE (1/6 rack)
S50: 192 Gbps, from 48 to 384 GbE, 2 x 10 GbE (1 RU)
S2410: 480 Gbps, 24 x 10 GbE (1 RU)
This is Our Design Approach
Fundamental advances and dramatic improvements bring revolutionary network technology to multiple markets: Government/Research, Data Center, and Service Provider
Lots of Your Packets Go Through Our Gear
Portals/Content
Service Providers
Internet eXchanges
Agenda
About Force10 (just 3 slides)
100 GbE Technology Requirements
– Requirements and Feasible Technology
– How is Force10 Going to Get There?
Ethernet Alliance
100 GbE Standards Update
– OIF
– IEEE
What is Driving the Need for 100 GbE?
Research network trends
– 10 GbE WANs
– Cluster and grid computing
Data center trends
– Lots of GbE and 10 GbE servers
– Cluster computing
– High-end servers pushing almost 8 Gbps
– Storage flows > 1 Gbps
ISP / IX trends
– High bandwidth applications
– 10 GbE peering
Something has to aggregate all those 10 GbE links
– LAG is an interim solution
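A minimal sketch of why link aggregation (LAG) only goes so far: a typical LAG hash pins each flow to one member link, so no single flow can exceed a member's 10 Gbps, which is one reason a true 100 GbE interface is ultimately needed. The hash fields and member count below are illustrative assumptions, not Force10 behavior.

```python
# Illustrative LAG hashing sketch (assumed 5-tuple-style hash, 10 x 10 GbE members).
import hashlib

LAG_MEMBERS = 10  # 10 x 10 GbE links standing in for one 100 GbE pipe

def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Pick the member link for a flow; every packet of the flow maps the same way."""
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % LAG_MEMBERS

# One elephant flow always lands on the same member, so it is capped at 10 Gbps:
print(lag_member("10.0.0.1", "10.0.0.2", 4321, 80))
```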
Higher Speeds Drive Density – Everyone Benefits!
Even if you don’t need 100 GbE, you still benefit
100 GbE technology will drive 10 GbE port density up and cost down
– Just as 10 GbE did for GbE
Assuming routers have the switching capacity, we can support these line-rate combinations on a single line card:
– 1 x 100 GbE port
– 10 x 10 GbE ports
– 100 x 1 GbE ports
– And even more oversubscribed port density…
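To make the arithmetic behind these combinations explicit, here is a minimal sketch; the 100 Gbps of usable per-slot line-rate capacity is an assumption for illustration, not a Force10 specification.

```python
# How many ports of each speed fit at line rate in an assumed 100 Gbps of
# usable slot capacity (illustrative only).
def line_rate_ports(usable_slot_gbps: int, port_gbps: int) -> int:
    return usable_slot_gbps // port_gbps

USABLE_SLOT_GBPS = 100  # assumed usable (non-oversubscribed) capacity per slot
for speed in (100, 10, 1):
    print(f"{line_rate_ports(USABLE_SLOT_GBPS, speed)} x {speed} GbE ports")
# -> 1 x 100 GbE, 10 x 10 GbE, 100 x 1 GbE
```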
Higher Speeds Drive Switch/Router Requirements
New capacity and density drive architectural requirements needed to support 100 GbE
Massive hardware and software scalability
– >200 Gbps/slot fabric capacity for any reasonable 100 GbE port density (local switching capacity is useless here!)
– Support for several thousand interfaces
– Multi-processor, distributed architectures
Really fast packet processing at line rate
– 100 GbE is ~149 Mpps, or 1 packet every 6.7 ns (10 GbE is only ~14.9 Mpps, or 1 packet every 67 ns)
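The ~149 Mpps figure follows from minimum-size Ethernet frames; a quick check, assuming standard Ethernet framing (64 B frame + 8 B preamble/SFD + 12 B inter-frame gap):

```python
# Worst-case packet rate at line rate with minimum-size frames.
MIN_FRAME_BYTES = 64 + 8 + 12  # frame + preamble/SFD + inter-frame gap = 84 B

def packets_per_second(line_rate_gbps: float) -> float:
    return line_rate_gbps * 1e9 / (MIN_FRAME_BYTES * 8)

for gbps in (100, 10):
    pps = packets_per_second(gbps)
    print(f"{gbps} GbE: {pps / 1e6:.1f} Mpps, one packet every {1e9 / pps:.1f} ns")
# 100 GbE: ~148.8 Mpps (~6.7 ns); 10 GbE: ~14.9 Mpps (~67 ns)
```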
Higher Speeds Drive Switch/Router Requirements
Complete system resiliency
– Hitless forwarding at Tbps speeds
– Redundant hardware
– HA software, hitless upgrades
– DoS protection and system security
Chassis design issues
– N+1 switching fabric
– Channel signaling for higher internal speeds
– Clean power routing architecture
– Reduced EMI (conducted through power, radiated into air)
– Cabling interfaces
Feasible Technology for 2009: Defining the Next Generation
Let’s start working on this now; standards will take about 4 or 5 years…
So, what are your bandwidth requirements in 2009?
– Higher than what you need now, for sure
– How much is that going to cost?
The architecture for next-generation ultra-high-speed interfaces should scale existing network architectures
Ethernet has scaled well, so use existing practices and concepts
– Topologies
– Deployment methods
– Distances and media
Feasible Technology for 2009: Interface Speeds
Feasible interface speeds for 2009:
– < 80 Gbps … not enough return on investment
– 100 Gbps (could be 100 GbE)
– 120 Gbps
– 160 Gbps (could be OC-3072/STM-1024)
Reasonable channel widths based on cost, efficiency and feasible technology (see the per-lane arithmetic sketched below):
– 16λ by 6.25 - 10 Gbps (best from ASIC perspective)
– 10λ by 10 - 16 Gbps
– 8λ by 12.5 - 20 Gbps
– 4λ by 25 - 40 Gbps (best from optics perspective)
– 1λ by 100 - 160 Gbps
Port density is impacted by channel width
– Fewer λs translate to higher port density and less power consumption
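As a quick sanity check on the lane options above, here is the per-lane rate needed to carry a 100 Gbps aggregate over each candidate lane count (coding overhead such as 64B/66B ignored; illustrative only):

```python
# Per-lane signaling rate for a 100 Gbps aggregate, ignoring coding overhead.
AGGREGATE_GBPS = 100
for lanes in (16, 10, 8, 4, 1):
    print(f"{lanes} lanes x {AGGREGATE_GBPS / lanes:.2f} Gbps per lane")
# 16 x 6.25, 10 x 10.00, 8 x 12.50, 4 x 25.00, 1 x 100.00
```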
Anatomy of a 100 Gbps Solution: Slot Capacity
Year system introduced and full-duplex raw slot capacity:
2000: 40 Gbps
2004: 60 Gbps
2006 – 2007 (in design now): 120 Gbps
2009: 500 Gbps (required for reasonable 100 GbE port density)
Anatomy of a 100 Gbps Solution: Memory Selection
Advanced Content-Addressable Memory (CAM)
– Less power per search
– Need 4 times more performance assuming 2 x 100 GbE ports/slot
– Enhanced, flexible table management schemes
Memories
– DRAMs to conserve cost when performance allows
– Quad Data Rate III SRAMs for speed (the PS and Xbox gaming industry is driving this technology)
Work in JEDEC (Joint Electron Device Engineering Council) to advance serial memory technology
– JEDEC is an international semiconductor engineering standardization body
– Force10 is a JEDEC member
Anatomy of a 100 Gbps Solution: ASIC Selection
High-speed interfaces
– Interfaces to MACs, backplane, and buffer memory will be SERDES
– SERDES are used to replace parallel busing for reduced pin and gate count
– Need new, higher-speed SERDES technology to do this
0.09 micron (90 nm) process geometry
– More gates (100% more gates than the current 0.13 micron process)
– Better performance (25% better)
– Lower power consumption (1/2 of the 0.13 micron process)
Anatomy of a 100 Gbps Solution: ASIC Selection
Hierarchical placement and layout of logic cells within the ASICs
– New ASICs will have on the order of 50,000,000 gates, compared to current technology using 30,000,000 gates
– That is a lot of gates for an ASIC designer
Flat placement is no longer a viable option
Requires new manufacturing technology
Anatomy of a 100 Gbps Solution: Backplane Design
Based on routing and connector complexity, the best backplane types are, in order:
1. N+1 switching fabric in a backplane
2. N+1 switching fabric in a midplane
3. A/B switching fabric in a backplane
4. A/B switching fabric in a midplane
N+1 fabric: fewer signals per card, spread out over a bigger area, means less noise
A/B fabric: more signals per card, concentrated in one area, means more noise
Anatomy of a 100 Gbps Solution: Channel Design Considerations
Channel Bit Error Rate (BER)
– Data is transmitted across the backplane channel in a frame with a header and payload
– The frame size can be anywhere from a few hundred bytes to 16 KB
– A typical backplane frame contains many PHY-layer frames (i.e., Ethernet frames)
It’s OK to have a bad frame once in a while because we can’t design perfect systems
– Expect a bad frame once a month
– Most people could live with a bad frame a couple of times a week
– Can’t live with 100 or 300 bad frames a week
– A BER of 10^-12 means we drop 2000 – 4000 frames a week
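A rough sanity check on the "2000 – 4000 frames a week" figure, assuming each bit error spoils one backplane frame and that an individual backplane channel runs at a few Gbps (the per-channel rates below are illustrative assumptions, not Force10 figures):

```python
# Expected bad frames per week from a given BER on one backplane channel,
# assuming one bit error ruins one frame.
SECONDS_PER_WEEK = 7 * 24 * 3600

def bad_frames_per_week(channel_gbps: float, ber: float) -> float:
    bit_errors_per_second = channel_gbps * 1e9 * ber
    return bit_errors_per_second * SECONDS_PER_WEEK

for gbps in (3.125, 6.25):
    print(f"{gbps} Gbps channel at BER 1e-12: ~{bad_frames_per_week(gbps, 1e-12):.0f} bad frames/week")
# ~1890 and ~3780 per week, i.e. the 2000 - 4000 ballpark on the slide
```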
Anatomy of a 100 Gbps Solution: Channel Design Considerations
Customers want to see a frame loss of zero
Systems architects want to see a frame loss of zero
Zero error is difficult to test and verify … none of us will live that long
Current BER standards of 10^-12 will result in a frame error rate of 10^-7 or less, depending on the error distribution
– That is a lot of frame loss
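The relationship between bit error rate and frame error rate can be sketched as follows, assuming independent bit errors and an n-bit frame (a back-of-the-envelope model, not taken from the slides):

```latex
\[
  \mathrm{FER} \;=\; 1 - (1 - \mathrm{BER})^{\,n} \;\approx\; n \cdot \mathrm{BER}
  \qquad \text{for } n \cdot \mathrm{BER} \ll 1
\]
% Example: a 16 KB backplane frame is n ~ 1.3e5 bits, so BER = 1e-12 gives
% FER ~ 1.3e-7; a BER of 1e-15 pulls FER down to roughly 1e-10 to 1e-12,
% depending on frame size.
```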
Anatomy of a 100 Gbps Solution: Channel Design Considerations
The BER goal should be 10^-15, which is a frame error rate of about 10^-12
– It can be tested and verified at the system design level
– Simulate to 10^-17
– Any frame loss beyond that will have minimal effect on current packet handling/processing algorithms
– Current SERDES do not support this
– An effective 10^-15 is obtained through both power noise control and channel model loss
Anatomy of a 100 Gbps Solution: Channel Signaling
Existing channel signaling methods between fabrics and line cards will not work (see the encoding arithmetic sketched below)
– NRZ, in general, breaks down after 12.5 Gbps
– 8B10B is not going to work at 25 Gbps
– 64B66B is not going to work at 25 Gbps
– Scrambling is not going to work at 25 Gbps
Need new signaling over the backplane
– Duo-binary: demonstrated to 33 Gbps
– PAM4 or PAMx: demonstrated to 33 Gbps
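For context on the 25 Gbps numbers above, this sketch computes the on-the-wire rate a 25 Gbps logical lane needs under different encodings, and the symbol rate if PAM4 carries two bits per symbol. It is illustrative overhead arithmetic only; it does not model the signal-integrity limits the slide is really about.

```python
# Wire rate for a 25 Gbps logical lane under different line encodings,
# plus the baud rate if PAM4 (2 bits/symbol) is used.
PAYLOAD_GBPS = 25

encodings = {"NRZ + 8B/10B": 10 / 8, "NRZ + 64B/66B": 66 / 64, "NRZ, scrambled": 1.0}
for name, overhead in encodings.items():
    print(f"{name}: {PAYLOAD_GBPS * overhead:.2f} Gbaud on the wire")

BITS_PER_SYMBOL = 2  # PAM4
print(f"PAM4 + 64B/66B: {PAYLOAD_GBPS * (66 / 64) / BITS_PER_SYMBOL:.2f} Gbaud")
# 8B/10B -> 31.25, 64B/66B -> 25.78, scrambled NRZ -> 25.00, PAM4 -> ~12.89
```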
Anatomy of a 100 Gbps Solution: Designing for EMI Compatibility
EMI is electromagnetic interference, i.e., noise
Too much noise can cause bit errors, so we need to minimize EMI
Concerned about two types of EMI
– Conducted EMI through power
– Radiated EMI through air
Anatomy of a 100 Gbps Solution: Power Design to Reduce EMI
We need clean power in order to avoid interference with chip operations
The power routing to cards inside the chassis is very important
– Bus bar (noisy)
– Power board (noisy)
– Cabling harness (noisy)
– Distribution through the backplane or midplane using copper foil is the cleanest method
Anatomy of a 100 Gbps Solution: Power Design to Reduce EMI
Design the power filter to filter in both directions
– Filter out noise coming in over power, which protects your own equipment
– Filter out noise going out over power, which protects all equipment on the power circuit
Design power distribution for 200% loading in case there is a manufacturing error in a power trace
– Unlikely, but required in carrier applications
Anatomy of a 100 Gbps Solution: Designing for EMI Compatibility
Each slot for a line card or module in the chassis must be a unique chamber
– Shielding effectiveness determines the maximum number of ports before exceeding emissions requirements
– Requires top and bottom seal using metal honeycomb
– Requires metal panel between cards to seal sides
Seal the backplane everywhere to keep noise inside it from getting out
Agenda
About Force10 (just 3 slides)
100 GbE Technology Requirements
– Requirements and Feasible Technology
– How is Force10 Going to Get There?
Ethernet Alliance
100 GbE Standards Update
– OIF
– IEEE
Upgrade History and Path to 100 GbE and Higher Density
January 2002 / October 2002: E-Series E1200 and E600 chassis introduced with a 5 Tbps passive copper backplane (100 GbE ready)
1999 – 2002: 1st generation line cards (“EtherScale”) – E1200 (1.6875 Tbps): 28 x 10 GbE, 336 x GbE; E600 (900 Gbps): 14 x 10 GbE, 196 x GbE
September 2004: 1st generation Switch Fabric Module (SFM), 112.5 Gbps/slot
April 2005: 2nd generation line cards (“TeraScale”) – E1200: 56 x 10 GbE, 672 x 1 GbE; E600: 28 x 10 GbE, 336 x GbE
October 2005 / March 2006: 2nd generation Switch Fabric Module (SFM3), 225 Gbps/slot – E1200 (3.375 Tbps), E600 (1.8 Tbps), 100 GbE ready
2007*: 3rd generation line cards (high density 10 GbE: 16-port; very high density GbE: 90-port) – E1200: 56/224 x 10 GbE, 672/1260 x GbE; E600: 28/112 x 10 GbE, 336/630 x GbE
2009*: 4th generation line cards (100 GbE) and 3rd generation Switch Fabric Module, 337.5 Gbps/slot
* planned
100 GbE Ready Chassis = No Forklift Upgrade
Investment protection – backplane needs to scale to support over 200 Gbps bandwidth per slot
Current backplane can scale to 5 Tbps with future line card components
– Designed and tested for 5 Tbps
– Advanced fiberglass materials improve transmission characteristics
– Unique conductor layers decouple 5 Tbps signal energy from power supplies
– Engineered trace geometry for clean signals
Force10 has more than 40 patents on its backplane technology
Backplane bandwidth capacity per slot (Gbps):
– Force10: 337.5
– Cisco: ?*
– Foundry: ?*
– Extreme: ?*
* No other vendor has openly discussed testing their backplanes for future growth…
100 GbE Ready Chassis = No Forklift Upgrade
Chassis designed for 100 GbE and high density 10 GbE
– Power capacity
– Power routing
– EMI
– Cooling
Agenda
About Force10 (just 3 slides)
100 GbE Technology Requirements
– Requirements and Feasible Technology
– How is Force10 Going to Get There?
Ethernet Alliance
100 GbE Standards Update
– OIF
– IEEE
The Ethernet Alliance: Promoting All IEEE Ethernet Work
20 companies at launch January 10, 2006
46 members as of April 2006
– Force10, Sun, Intel, Extreme, Foundry, Broadcom, Cisco
Promotes Ethernet industry awareness, acceptance, and advancement of technology
Opportunity for end-users to speak on their requirements
– Adam Bechtel (Yahoo!) spoke at the Management Forum Panel at DesignCon in February
The Ethernet Alliance: Promoting All IEEE Ethernet Work
Force10 is a founding member of the Ethernet Alliance and very actively involved in leadership positions
– Secretary: John D'Ambrosia, Scientist, Components Technology
– Board Member: Steve Garrison, VP, Corporate Marketing
More information is available at www.ethernetalliance.org
Agenda
About Force10 (just 3 slides)
100 GbE Technology Requirements
– Requirements and Feasible Technology
– How is Force10 Going to Get There?
Ethernet Alliance
100 GbE Standards Update
– OIF
– IEEE
The Push for Standards: Interplay Between the OIF & IEEE
OIF defines multi-source agreements within the Telecom Industry
– Optics and Electronic Dispersion Compensation (EDC) for signal integrity
– SERDES definition
– Channel models and simulation tools
– … the components inside the box
IEEE 802 covers LAN/MAN Ethernet
– 802.1 and 802.3 define Ethernet over copper cables, fiber cables, and backplanes
– 802.3 leverages efforts from OIF
– … the signaling and protocols on the wire
The Push for Standards: Ad Hoc HSSG (High Speed Study Group)
The Ad Hoc HSSG is an unofficial group of companies that are looking into the feasibility of higher Ethernet speeds
– We think the right speed is 100 GbE
– The standards body will investigate and standardize on a speed
Meetings for this effort are facilitated by the Ethernet Alliance
– Joel Goergen (Force10) and John D’Ambrosia (Force10) chair the Ad Hoc HSSG effort
– The anchor team is composed of key contributors from Force10, IBM, Quake, 3Com and Cisco (over 30 companies)
Opportunity for end-users to give input into standards
– Ad Hoc HSSG survey (next slide)
– Good input from AMS-IX, Comcast, Cox, Equinix, LINX, Level(3), and Yahoo!
– We need more of YOU to join the mailing list and to express your opinions and requirements
Ad Hoc HSSG Survey Respondents
Organization (Business Model): Bandwidth Drivers
– Yahoo! (Portal / Content): Broadband / Internet
– Comcast (Residential Broadband): On Demand, HDTV
– Cox Cable (Residential Broadband): P2P, Video
– AMS-IX, Equinix, LINX, IX in Japan (Internet Exchange) and Level(3) Communications (Long Haul, IXC): Broadband / Internet, 10 GbE Peering
– Brookhaven National Lab, Fermi National Lab, Lawrence Berkeley Lab, Lawrence Livermore Lab, NERSC (Research): Cluster and Grid Computing, Moving Large Data Sets over LAN and WAN Links
The Push for Standards: OIF
Force10 introduced three efforts within the OIF to drive a successful Call for Interest by the Ad Hoc HSSG
– Two interfaces for interconnecting optics, ASICs, and backplanes (in the process of approval)
– A 25 Gbps SERDES (approved!)
– Updates of design criteria to the Systems User Group that defines OIF standards (approved and in process)
Birth of an IEEE Standard: It Takes About 5 Years
Industry pioneers (Ad Hoc HSSG, about 1 year): ideas from industry, feasibility and research
IEEE (about 4 years): Call for Interest → Study Group → Task Force → Working Group Ballot → Sponsor Ballot → Standards Board Approval → Publication
High Speed Ethernet is here: CFI in July 2006
The Push for Standards: IEEE
Ad Hoc HSSG will introduce a Call for Interest (CFI) at the July 2006 IEEE 802.3 meeting
– Meetings will be held in the coming months to determine the CFI and the efforts required
– Market potential, economic feasibility and technical feasibility have been explored by the Ad Hoc HSSG
– Target July 2006 because of resources within IEEE
– Requires 50% approval vote by voting IEEE members
Summary
The industry has been successful at scaling speed since 10 Mbps in 1983
The efforts in GbE and 10 GbE have taught us a lot about interface technology
100 GbE success will depend on chips and optics
Significant effort is underway in both the OIF and the IEEE to define and invent interfaces to support next generation speeds
New updates will come in August after the CFI
Thank You