Cloud Ready Data Center Network Design Guide
8/4/2019 Cloud Ready Data Center Network DESIGN GUIDE
DESIGN GUIDE
Copyright 2011, Juniper Networks, Inc.
Table of Contents
Introduction
Scope
History of the Modern Data Center
Application Evolution
Server Platform Evolution
Infrastructure Evolution
Operational Models Evolution
Types of Data Centers
Transactional Production Data Center Network
Content and Hosting Services Production Data Center Network
High-Performance Compute (HPC) Production Data Center Network
Enterprise IT Data Centers
Small and Midsize Business IT Data Center
The New Role of the Network
Design Considerations
Physical Layout
Top of Rack/Bottom of Rack
Virtual Chassis
End of Row
Middle of Row
Cloud Data Center Network Design Guidance
Physical Network Topologies
Single Tier Topology
Multitier Topologies (Access-Core)
Access-Core Mesh Design
Resiliency Design and Protocols
Application Resiliency Design
Application Resource Pools
Critical Application Resources
Server Link Resiliency Design
Server Link Resiliency
Network Device Resiliency
Virtual Machine Mobility
Network Device Resiliency
Hot-Swappable Interfaces
Unified In-Service Software Upgrades
Unified ISSU Methodology
Redundant Switching Fabric
Redundant Routing Engine
Network OS Resiliency and Reliability Features
Routing and Forwarding on Separate Planes
Modular Software
Single Code Base
Graceful Routing Engine Switchover (GRES)
Nonstop Active Routing
Nonstop Bridging/Nonstop Routing
Network Resiliency Designs and Protocols
Access-Core Inverse U Loop-Free Design
Multichassis Link Aggregation Groups
Redundant Trunk Groups
Virtual Chassis at the Core
Layer 3 Routing Protocols
Multiple Spanning Tree Protocol
MPLS at Core Level for Large Deployments
Agility and Virtualization
Logical Systems
Virtual Routers
Logical Routers
Virtual Chassis
VLANs
Security Zones
MPLS VPNs
Capacity Planning, Performance, and Scalability
Throughput
Oversubscription
Latency
Modular Scalability
Port Capacity
Software Configuration
Solution Implementation: Sample Design Scenarios
Scenario 1: Enterprise Data Center
Scenario 2: Transactional Data Center
Summary
About Juniper Networks
Table of Figures
Figure 1. Top of rack deployment
Figure 2. Virtual Chassis in top of row layout
Figure 3. Dedicated Virtual Chassis daisy-chained ring
Figure 4. Virtual Chassis braided ring cabling
Figure 5. Extended Virtual Chassis configuration
Figure 6. End of row deployment
Figure 7. Middle of row deployment
Figure 8. Single tier network topology
Figure 9. Access-core hub and spoke network topology
Figure 10. Access-core inverse U network topology
Figure 11. Access-core mesh network topology
Figure 12. Application resource pools
Figure 13. Critical application resources
Figure 14. Server link resiliency overview
Figure 15. Separate control and forwarding planes
Figure 16. Resiliency with access-core inverse U
Figure 17. Multichassis LAG configuration
Figure 18. RTG configuration
Figure 19. Virtual Chassis core configuration
Figure 20. Layer 3 configuration
Figure 21. MSTP configuration
Figure 22. MPLS design
Figure 23. Network with logical systems
Figure 24. VPLS switching across data centers
Figure 25. Agility designs for the data centers
Figure 26. Traditional versus virtual appliance architecture
Figure 27. Simplified data center
Figure 28. Use Case: Enterprise data center
Figure 29. Use Case: Transactional data center
Introduction
Data centers have evolved over the past several decades from single point, concentrated processing centers to dynamic, highly distributed, rapidly changing, virtualized centers that provide myriad services to highly distributed, global user populations. A powerful new cloud computing paradigm has emerged in which users request and receive information and services dynamically over networks from an abstracted set of resources. The resources are "somewhere out there" in the cloud. Users don't care where or how the resources are provided; they only care that their applications, data, and content are available when needed, automatically, and at the desired level of quality and security.

As demands for cloud computing services grow and change at an ever increasing pace, it becomes critical for data center planners to address both current and evolving needs, and to choose a flexible, dynamic cloud data center design that can effectively meet the complex challenges of the global information age.

To aid in this process, it will be helpful to look back at how applications, computing platforms, infrastructure, and operations have changed since modern data centers were first introduced, and what those changes mean for today's data center designers. We will find a common theme: the network is the key to the new data center and the most critical element to consider in data center design.
Scope
At the beginning of this guide, we review the history of modern data centers. The remainder of this guide introduces the design principles, physical layouts, network topologies, and protocols that provide the foundation of the cloud-ready data center network. We present a comprehensive set of guidelines for data center design, and consider how the guidelines can be applied in practice to two different data center types: an enterprise data center and a transactional data center. Finally, we describe how to use Juniper products and solutions to design data centers that fully realize the promise of cloud computing.

This Design Guide is intended for the following personnel:
- Customers in the enterprise and public sector
- Service providers
- Juniper partners
- IT and network industry analysts
- Individuals in sales, network design, system integration, technical support, product development, management, and marketing who have an interest in data center design
History of the Modern Data Center
Application Evolution
Modern data centers began in the 1960s as central locations to house huge mainframe computers and the associated storage devices and other peripherals. A typical data center consisted of large, air conditioned rooms of costly equipment with highly trained staff assigned to keep the equipment running 24x7x365. Each compute job was handled by a single mainframe computer within a single data center. Users accessed the computing resources via timeshare through dumb terminals operating at very low bit rates over telephone lines. The data center was "about the computer." This remained true even when minicomputers were introduced in the 1970s and some data centers were downsized to serve individual departments.

Client/server systems were introduced in the 1980s as the first stage in distributed computing. Application processing was distributed between the server and client, with communication between them by way of proprietary protocols. Each company had its own application server and associated application clients (either running client software or operating as a client terminal). The server provided the application and associated database, while the client provided the presentation layer and local data processing. The network began to play an important role in client/server communications; however, within the data center itself, processing for a single application was still done on a single machine, and networking within the data center was of secondary concern.
The 1990s saw the explosion of Internet communications and the full arrival of the global information age. Orders-of-magnitude increases in bandwidth, ubiquitous access, powerful routing protocols, and dedicated hardware allowed applications to break out of the single computer or single server model. Applications started to function in multiple tiers, in which each tier provided specialized application services. The application tier performed application processing, drawing from the storage tier, while the presentation tier supported user interaction through a web interface. Application development relied on standard languages (Java), tiers communicated with each other using standard protocols (TCP/IP), and standard presentation protocols (HTTP) provided support across many types of platforms and web browsers. Back-end processing became modular, with greatly increased communications within LANs and WANs.

Application evolution has since seen the rise of service-oriented architectures (SOAs), in which each application client handles a piece of information collected from multiple systems, and multiple servers may deliver a single service. Each application is delivered as a menu of services, and new applications are constructed from existing applications in a process known as a mashup. For example, a credit transaction application may be constructed from a set of applications: one that handles the user interaction, another that performs a credit check, and another that processes the actual transaction. Each application may have an independent web presence that resides in multiple locations.

With SOAs, large, custom, single purpose applications have been replaced by smaller, specialized, generic functions that provide services to multiple applications. For example, an application service provider may offer a single credit check application to other application providers. And full applications, which consist of multiple single purpose applications, are now offered as a service to users. For example, a company no longer needs to purchase equipment and software to provide customer relationship management (CRM) functionality. Instead, it can purchase CRM services from a company such as Salesforce that in turn depends on other specialized application functions.
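The composition pattern described above can be sketched in a few lines of Python. The service names, credit limit, and logic here are hypothetical illustrations, not part of the guide; in a real SOA each function would be a network call to an independently hosted service.

```python
# Minimal sketch of an SOA "mashup": a credit transaction application
# composed from independent single-purpose services. All names and
# logic below are hypothetical, for illustration only.

def collect_order(user_id: str, amount: float) -> dict:
    """User-interaction service: gathers the order details."""
    return {"user": user_id, "amount": amount}

def credit_check(order: dict) -> bool:
    """Credit-check service: approves orders under an assumed limit."""
    CREDIT_LIMIT = 1000.0  # hypothetical per-order limit
    return order["amount"] <= CREDIT_LIMIT

def process_payment(order: dict) -> str:
    """Transaction service: settles the payment, returns a status."""
    return "settled"

def credit_transaction(user_id: str, amount: float) -> str:
    """Composite application: each step may run on a different server,
    or even be offered by a different application provider."""
    order = collect_order(user_id, amount)
    if not credit_check(order):
        return "declined"
    return process_payment(order)

print(credit_transaction("alice", 250.0))   # order within the limit
print(credit_transaction("bob", 5000.0))    # order over the limit
```

Because each step may live at a different provider, the availability and latency of the network between the components directly determines the behavior of the composite application.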
All of these developments result in complete dependence of applications on the network: within the data center, between data centers, and between data centers and users. As application demand increases, more nodes must be added to the compute cycle and nodes must be interconnected and remain synchronized. Application users now have high expectations of availability, performance, and overall user experience, and support for quality of service (QoS) and service-level agreements (SLAs) is a must. The network must be able to offer predictable performance and quality, while handling increasingly complex communication flows and patterns.
Server Platform Evolution
From the 1960s to the early 2000s, the story of platform evolution was one of staggering increases in compute power coupled with continual reduction in size, power requirements, and cost. It is difficult to overstate the impact of these changes on all aspects of business, commerce, government, and technology. However, throughout this period, a direct connection was maintained between logical systems and physical computer systems: an individual platform always corresponded to a single type of logical entity. It was possible to partition machines (disk or operating system partitions), but the partitions were always of the same type.

This changed in the early 2000s with the introduction of virtual machine technology. It became possible to take a single physical platform and slice it into multiple virtual machines, each of which runs its own operating system and is logically separate from other virtual machines on the same platform. It is also possible to take multiple physical platforms and combine them into a single logical platform to increase processing power and other capabilities. With x86-based virtual machines, data centers have been able to significantly reduce hardware costs and free up space for additional capacity.

The decoupling of logical and physical systems has had a major impact on the design and delivery of applications and services within the data center. It is no longer possible to design security and understand potential failure points based on the physical cable connections between machines; it is necessary to understand the relationship between platforms and services. It has become critically important to understand how the network links physical systems and how the physical systems operate relative to the logical systems to which they are mapped. Virtual systems affect the size of Layer 2 domains, cross-connecting servers and network systems, security, load balancing, high availability, and QoS for communication between nodes.
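One way to see why the platform-to-service relationship matters is to derive shared failure points directly from the logical-to-physical mappings. The following is a toy sketch with hypothetical VM, host, and switch names, not taken from the guide:

```python
# Toy sketch: with virtualization, two "separate" logical servers may
# share a physical failure point. All mappings below are hypothetical.

vm_to_host = {          # logical VM -> physical platform
    "web-vm": "host1",
    "db-vm": "host1",   # both VMs run on host1
    "app-vm": "host2",
}
host_to_switch = {      # physical platform -> top-of-rack switch
    "host1": "tor-a",
    "host2": "tor-a",   # both hosts uplink through tor-a
}

def shared_failure_points(vm_a: str, vm_b: str) -> set:
    """Return the physical elements whose failure takes down both VMs."""
    def physical_path(vm: str) -> set:
        host = vm_to_host[vm]
        return {host, host_to_switch[host]}
    return physical_path(vm_a) & physical_path(vm_b)

print(shared_failure_points("web-vm", "db-vm"))   # share host and switch
print(shared_failure_points("web-vm", "app-vm"))  # share only the switch
```

Two VMs that look independent at the logical layer may share a host, a switch, or both; that shared fate is what security zoning, load balancing, and high availability designs must account for.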
Infrastructure Evolution
In addition to individual platform evolution and the introduction of machine virtualization, there has been an evolution in how physical servers and network elements are designed and deployed.

On the server side, the advent of virtualization and low cost x86-based servers has led to the introduction of blade servers and server clusters. A blade server is a chassis that houses individual blade machines that are connected by an internal Ethernet switch. Server clusters use a back channel InfiniBand connection that allows separate physical servers to operate as a single node. To support these configurations, the network must extend itself at the control level to communicate with the proprietary networks within the blade server or cluster.

On the network side, converged I/O and Converged Enhanced Ethernet (CEE) are driving changes in how servers and storage systems connect within the data center. Converged I/O allows a server to have fewer physical interfaces, with each interface supporting multiple logical interfaces. For example, within a blade server, a few physical interfaces may support bridge connections, LAN connections, and compute connections for a variety of protocols. The evolving CEE standards effort is focused on adding lossless transport capabilities to Ethernet communications with the goal of extending converged I/O to server/storage links.

The physical communications infrastructure is also undergoing significant changes. As the current standard of 1/10 Gbps for communication within the data center grows to 40/100 Gbps over the decade, data centers will need to rely more and more on fiber capacity and will need to develop cost-effective cabling plans.
Operational Models Evolution
Driven by user demand and technology, applications, services, and the associated production and delivery mechanisms have grown in ways that would have been inconceivable decades ago. But much of this development has been at the cost of ever increasing complexity. As traditional boundaries erode within servers, among servers, between servers and storage systems, and across data centers, the job of managing the various components becomes exponentially more difficult. Business processes must be mapped to a complex and dynamic infrastructure. Security and accountability become major concerns, especially when sensitive information such as personal or financial data must be transmitted over inherently unsecure physical links through a menu of specialized application services offered by multiple application providers.

Initially, information systems development outpaced business process improvements, and essential governance and control functions were lacking. Accountability has now improved, thanks to changes such as those introduced with government regulation; however, many operational challenges remain. Information now flows from many directions, and is touched by many parties and systems. Data center operators must account for their own systems and also make sure that third-party application services are reliable and accountable.

For cloud computing to succeed, it must be possible to bring the management of applications, platforms, infrastructure, and operations under a common orchestration layer. The orchestration systems must support the myriad components that comprise the cloud solution and provide common management tools that can assure reliable and secure service production and delivery. The systems must meet existing and emerging standards and regulatory and compliance requirements. Overall, the orchestration layer must provide a robust framework to support the continually evolving cloud network.
Types of Data Centers
Not all data centers are the same. There are two general categories: production data centers and IT data centers. Production data centers are directly linked to revenue generation, whereas IT data centers provide support functions for business operations. In addition, data center requirements and design can vary widely according to use, size, and desired results. Some data centers are designed for the lowest possible latency and highest possible availability, while others require comprehensive attention to QoS, scale, and high availability. And some limit feature support to control costs. This section provides examples of the most common data center types.
Transactional Production Data Center Network
High-speed computing is critical to the success and profitability of today's financial institutions. At high scale, every
nanosecond of latency accounts for either profit or loss. Therefore, businesses such as financial services cannot
afford any performance or security risks.
Typically, a financial services network is extremely complex and employs numerous devices and services to support a
high-performance computing infrastructure. The design of the production data center must address specific
requirements to guarantee the predictability and low latency associated with trading platforms, algorithmic trading
applications, and market data distribution systems.
Content and Hosting Services Production Data Center Network
With the emergence of cloud computing and with more services being migrated to an IP-based delivery model,
production data centers are playing a more critical role for hosting content and services. For example, hosting providers
are now offering cloud-based infrastructure as a service. A broad array of online retail options have emerged that
position the Internet as a key business enabler. In addition, pure play cloud providers offer extensive development and
delivery platforms.
These business models impose a strict set of data center requirements. High availability is required to keep the online
services available 24x7x365, and the ability to deliver high volume traffic is required for the customer experience.
From the provider's perspective, functions in the network are needed to support new business models. For example,
the data center must support changing workload distribution and deliver new workloads to the end user. The security
infrastructure must support virtualization technology and be granular enough to handle specific applications and users.
The network must also support applications that run across multiple sites while retaining a consistent security policy
across the whole environment.
High-Performance Compute (HPC) Production Data Center Network
Scientific innovation in emerging fields such as nanotechnology, biotechnology, genetics, and seismic modeling is
driving production data center requirements. With the aid of general-purpose hardware, many organizations are finding
it more cost-effective to leverage grid computing (High Performance Compute Clusters, or HPCC) for their intensive
compute tasks. This technology is based primarily on grouping multiple CPUs together, distributing tasks among them,
and collectively completing large calculations. In the network, 10GbE offers distinct benefits, not just for performance
but also in terms of the low cost-to-bandwidth ratio.
Enterprise IT Data Centers
Enterprise data centers are found across a wide variety of industries, including healthcare, retail, manufacturing,
education, and energy and utilities. The enterprise data center has traditionally been designated as an IT cost center,
in which the primary objective is to be a business enabler, providing access to business applications, resources, and
services (such as Oracle, CRM, ERP, and others) for employees and other network users. Major requirements include
high availability and low latency performance to enhance productivity and the user experience.
Enterprise data center designers typically look for leading-edge innovative solutions, such as server virtualization, I/O
convergence, and MPLS/virtual private LAN service (VPLS) for multisite connectivity, and the data center network
must support these technologies.
Small and Midsize Business IT Data Center
With today's advanced, high priced networking technologies, most small and midsize businesses (SMBs) face
serious challenges in being able to afford the latest IT data center infrastructure technologies while still trying to
remain competitive and profitable. Challenges include the operational overhead associated with implementation
and adoption of new technologies. Scalability and security are also major concerns. SMBs need a reliable option
for build-out of the data center network that is cost-effective and easy to deploy and manage. Juniper recognizes
the importance of these SMB challenges by offering a low cost, less invasive approach for deploying a cloud-ready
common switching and routing infrastructure based on fewer required devices.
The New Role of the Network
The growing importance of the network has been a central theme in the history of modern computing. Now with cloud
computing, the network has become paramount. Everything now runs over a network, within and across systems in the
data center, between data centers, and between data centers and users. Within the data center, the network drives
and enables virtualization, consolidation, and standardization. Globally, the network serves as the delivery and access
vehicle. Without the network, there is no cloud.
Design Considerations
In the face of exponentially increasing complexity in compute and networking systems, it becomes critical to design
data centers that reduce complexity. Juniper addresses this concern with an approach and products that greatly
simplify data center design, deployment, and operation.
The Juniper strategy optimizes designs in the following dimensions, each of which enables data centers to meet
important application delivery objectives:
1. Simplify. Simplifying the data center network means minimizing the number of network elements required to
achieve a particular design, thus reducing both capital and operating costs. Simplifying also means streamlining
data center network operations with consistently implemented software and controls.
2. Share. Sharing the data center network means intelligently (and in many cases dynamically) partitioning the
infrastructure to support diverse applications and user groups, and interconnecting large pools of resources with
maximum agility. In many cases, this involves powerful virtualization technologies that allow multiple logical
operations to be performed on individual physical entities (such as switches, routers, and appliances).
3. Secure. Securing the data center network extends protection to support the rich, distributed architectures that many
applications currently use. This requires a robust, multidimensional approach that enhances and extends traditional
perimeter defenses. Increasing the granularity and agility of security policies enables trusted sharing of incoming
information and resident data within the data center, while complementing the functions embedded in operating
systems and applications.
4. Automate. Automating means the ability to capture the key steps involved in performing management, operational,
and application tasks, and embedding task execution in software that adds intelligence to the overall data center
operation. Tasks can include synchronizing configurations among multiple disparate elements, starting and stopping
critical operations under various conditions, and diagnosing or profiling operations on the dimensions that are
important for managers to observe.
With this high-level design framework in mind, we can now discuss the individual functional components of the cloud
data center and their associated requirements and enabling technologies.
Physical Layout
Planning the physical data center layout is an important first step in designing the data center. The data center is
usually divided into multiple physical segments that are commonly referred to as segments, zones, cells, or pods. Each
segment consists of rows of racks containing equipment that provides compute resources, data storage, networking,
and other services.
In this section, we consider various physical layout options. Major factors to consider include cabling requirements,
cable length restrictions, power and cooling requirements, operations, and management. After the basic segment is
specified, the same physical layout can be replicated across segments of the data center or in multiple data centers.
This modular design approach improves the scalability of the deployment, while reducing complexity and enabling
efficient management and operations.
The physical layout of networking devices in the data center must balance the need for efficiency in equipment
deployment with restrictions on cable lengths and other physical considerations. There are trade-offs to consider
between deployments in which network devices are consolidated in a single rack versus deployments in which devices
are distributed across multiple racks. Adopting an efficient solution at the rack and row levels ensures efficiency of the
overall design as racks and rows are replicated throughout the data center.
This section considers the following data center layout options:
Top of rack/bottom of rack
End of row
Middle of row
Top of Rack/Bottom of Rack
In a top of rack/bottom of rack deployment, network devices are deployed in each server rack (as shown for top of
rack in Figure 1). A single device (or a pair of devices for redundancy at the device level) provides switching for all of the
servers in the same rack. To allow sufficient space for servers, the devices in the rack should be limited to a 1 U or 2 U
form factor.
The Juniper Networks EX4200 Ethernet Switch and QFX3500 Switch support top of rack/bottom of rack
deployments.
Figure 1. Top of rack deployment
This layout places high-performance devices within the server rack in a row of servers in the data center. With devices in
close proximity, cable run lengths are minimized. Cable lengths can be short enough to accommodate 1GbE, 10GbE, and
future 40GbE connections. There is also potential for significant power savings for 10GbE connections when the cable
lengths are short enough to allow the use of copper, which operates at one-third the power of longer run fiber cables.
With top of rack/bottom of rack layouts, it is easy to provide switching redundancy on a per rack basis. However, note
that each legacy device must be managed individually, which can complicate operations and add expense since
multiple discrete 24- or 48-port devices are required to meet connectivity needs.
Both top of rack and bottom of rack deployments provide the same advantages with respect to cabling and switching
redundancy. Top of rack deployments provide more convenient access to the network devices, while bottom of rack
deployments can be more efficient from an airflow and power perspective, because cool air from under floor HVAC
systems reaches the network devices in the rack before continuing to flow upward.
Top of rack/bottom of rack deployments have some disadvantages, however. Because the devices serve only the
servers in a single rack, uplinks are required for connection between the servers in adjacent racks, and the resulting
increase in latency may affect overall performance. Agility is limited because modest increases in server deployment
must be matched by the addition of new devices. Finally, because each device manages only a small number of
servers, more devices are typically required than would otherwise be needed to support the server population.
Juniper has developed a solution that delivers the significant benefits of top of rack/bottom of rack deployments while
addressing the above mentioned issues. The solution, Virtual Chassis, is described in the next section.
Virtual Chassis
Juniper's approach of virtualizing network devices using Virtual Chassis delivers all of the benefits of top of rack/
bottom of rack deployments while also reducing management complexity, providing efficient forwarding paths for
server-to-server traffic, and reducing the number of uplink requirements.
A single Virtual Chassis supports up to 10 devices using cross-connects. From a management perspective, multiple
devices become one logical device. This approach simplifies management by reducing the number of logically
managed devices, and it offers agile options for the number and deployment of uplinks. It also allows servers to
support network interface card (NIC) teaming using link aggregation groups (LAGs) with multiple members of the
same Virtual Chassis configuration. This increases the total server network bandwidth, while also providing up to 9:1
server link redundancy.
Figure 2 illustrates Virtual Chassis using two devices in a top of rack deployment.
Figure 2. Virtual Chassis in a top of rack deployment
(The figure shows two switches joined by a 64-gigabit dedicated Virtual Chassis connection, each with 10-Gigabit Ethernet uplinks; one member acts as the Virtual Chassis master routing engine and the other as backup.)
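The LAG-based NIC teaming described above can be sketched in Junos-style configuration. This fragment is illustrative rather than taken from the guide; the interface names and the ae0 bundle are hypothetical, and it assumes a two-member Virtual Chassis in which member 0 and member 1 each contribute one link to the server-facing LAG:

```
# Illustrative sketch: server-facing LAG spanning two Virtual Chassis members
# (interface names and device count are hypothetical)
set chassis aggregated-devices ethernet device-count 1

# One member link on Virtual Chassis member 0, one on member 1
set interfaces ge-0/0/10 ether-options 802.3ad ae0
set interfaces ge-1/0/10 ether-options 802.3ad ae0

# LACP on the bundle; the server's NIC team runs LACP on its side as well
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching
```

Because the two links terminate on different Virtual Chassis members, the server keeps connectivity if either member fails, which is the redundancy behavior described above.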
Juniper supports flexible placement of EX4200 devices as part of a Virtual Chassis configuration. Possible
deployments include members in a single rack, across several racks, in the same wiring closet, or spanning wiring
closets across floors, buildings, and facilities. When interconnecting devices through dedicated Virtual Chassis ports,
the physical distance between two directly connected devices may not exceed 5 meters, which is the maximum Virtual
Chassis port cable length. A Virtual Chassis configuration can be extended by using uplink ports configured as Virtual
Chassis ports to allow a greater distance between two directly connected member devices.
There are three cabling methods for interconnecting devices in a Virtual Chassis configuration: daisy-chained ring,
braided ring, and extended Virtual Chassis configuration, as described in the following subsections. We recommend
that devices in a Virtual Chassis configuration be connected in a ring topology for resiliency and speed. A ring
configuration provides up to 128 Gbps of bandwidth between member devices.
Daisy-Chained Ring
In the daisy-chained ring configuration, each device in the Virtual Chassis configuration is connected to the device
immediately adjacent to it. Members at the end of the Virtual Chassis configuration are connected to each other to
complete the ring topology. Connections between devices can use either Virtual Chassis port on the back of a device
(for example, VCP 0 to VCP 0 or VCP 0 to VCP 1).
The daisy-chained ring configuration provides a simple and intuitive method for interconnecting devices. The maximum
height or breadth of the Virtual Chassis is 5 meters.
Figure 3. Dedicated Virtual Chassis daisy-chained ring
Braided Ring
In the braided ring cabling configuration, alternating devices in the Virtual Chassis configuration are connected to each
other. The two device pairs at each end of the Virtual Chassis configuration are directly connected to each other to
complete the ring topology. Connections between devices can use either Virtual Chassis port on the back of a device.
The braided ring configuration extends the Virtual Chassis height or breadth to 22.5 meters.
Figure 4. Virtual Chassis braided ring cabling
Extended Virtual Chassis Configuration
The extended Virtual Chassis configuration allows the interconnection of individual Virtual Chassis members or
dedicated Virtual Chassis configurations across distances of up to 40 km with redundant fiber links. This configuration
is used when deploying a Virtual Chassis configuration across wiring closets, data center racks, data center rows,
or facilities. In this configuration, optional EX-UM-2XFP or EX-UM-4SFP uplink modules or fixed small form-factor
pluggable transceiver (SFP) base ports in the EX4200-24F are used to interconnect the members of the Virtual
Chassis. Multiple uplinks can be used for additional bandwidth and link redundancy.
Note: Beginning with Juniper Networks Junos operating system 9.3, the 24 fixed Gigabit Ethernet SFP base ports in the
EX4200-24F device can be configured as Virtual Chassis ports to extend Virtual Chassis configurations.
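Converting uplink ports into Virtual Chassis ports is done with an operational-mode command rather than configuration. A minimal sketch, assuming the uplink module sits in PIC slot 1 of each interconnected member (the slot and port numbers are hypothetical):

```
# On one member: turn uplink port 0 in PIC slot 1 into a Virtual Chassis port
user@switch> request virtual-chassis vc-port set pic-slot 1 port 0

# Repeat on the member at the far end of the fiber link; a second uplink
# can be converted the same way for link redundancy
user@switch> request virtual-chassis vc-port set pic-slot 1 port 1
```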
Figure 5. Extended Virtual Chassis configuration
End of Row
In an end of row configuration, devices are deployed in a network device-only rack at the end of a row to support all of
the servers in the row. In this configuration, which is common in existing data centers with existing cabling, high-density
devices are placed at the end of a row of servers. End of row configurations can support larger form factor devices than
top of rack/bottom of rack configurations. They also require fewer uplinks and simplify the network topology. Because
they require cabling over longer distances than top of rack/bottom of rack configurations, they are best for deployments
that involve 1GbE connections and relatively few servers.
The EX4200 Ethernet Switch and Juniper Networks EX8200 line of Ethernet switches support end of row deployments.
Figure 6. End of row deployment
(Figure 5 shows EX4200 Virtual Chassis configurations at two locations, up to 50 km apart, interconnected by Gigabit Ethernet or 10-Gigabit Ethernet Virtual Chassis extension links; Figure 6 shows EX8200 switches deployed as the end of row network devices.)
Traditional modular chassis devices have commonly been used in end of row deployments, where cable lengths are
relatively long between servers and network devices. Cable lengths may exceed the length limits for 10GbE/40GbE
connections, so careful planning is required to accommodate high-speed network connectivity. Device port utilization
is suboptimal with traditional chassis-based devices, and most consume a great deal of power and cooling, even when
not fully configured or utilized. In addition, these large chassis-based devices may take up a great deal of valuable data
center space.
Middle of Row
A middle of row deployment is exactly like an end of row deployment, except that the devices are deployed in the
middle of the row instead of at the end. This configuration provides some advantages over an end of row deployment,
such as the ability to reduce cable lengths to support 10GbE/40GbE server connections. High-density, large form-factor
devices are supported, fewer uplinks are required in comparison with top of rack deployments, and a simplified network
topology can be adopted.
The EX4200 line and EX8200 line support middle of row deployments.
Figure 7. Middle of row deployment
You can configure a middle of row network device rack so that devices with cabling limitations are installed in the racks
that are closest to the device rack. While this option is not as flexible as the top of rack deployment, it supports greater
scalability and agility than the end of row deployment.
Cloud Data Center Network Design Guidance
Table 1. Juniper Products for Top of Rack, End of Row, and Middle of Row Deployments
LAYOUT                       BANDWIDTH   EX8200   EX4200   QFX3500
Top of rack/Bottom of rack   1G                   X
                             10G                           X
                             Mixed                         X
End of row                   1G          X        X
                             10G         X
                             Mixed       X
Middle of row                1G          X        X
                             10G         X
                             Mixed       X
Table 1 lists Juniper Networks products and shows how they support the physical layouts that we have discussed. Each
product has been designed for flexibility and easy integration into the data center.
The EX8200 line of Ethernet Switches delivers the performance, scalability, and carrier-class reliability required for
today's high-density enterprise data center and campus aggregation and core environments, as well as high-
performance service provider interconnects. EX8200 Ethernet line cards are specifically designed to optimize
enterprise applications.
The EX4200 Ethernet Switch combines the high availability and carrier-class reliability of modular systems with the
economics and flexibility of stackable platforms, delivering a high-performance, scalable solution for data center,
campus, and branch office environments. Offering a full suite of L2 and L3 switching capabilities as part of the base
software, the EX4200 satisfies a variety of high-performance applications, including branch, campus, and data
center access deployments as well as GbE aggregation deployments.
The high-performance Juniper Networks QFX3500 Switch addresses a wide range of deployment scenarios, which
include traditional data centers, virtualized data centers, high-performance computing, network-attached storage,
converged server I/O, and cloud computing. Featuring 48 dual-mode small form-factor pluggable transceiver
(SFP+/SFP) ports and four quad small form-factor pluggable plus (QSFP+) ports in a 1 U form factor, the QFX3500
Switch delivers feature rich Layer 2 and Layer 3 connectivity to networked devices such as rack servers, blade
servers, storage systems, and other switches in highly demanding, high-performance data center environments.
For converged server edge access environments, the QFX3500 is also a standards-based Fibre Channel over
Ethernet (FCoE) Transit Switch and FCoE to Fibre Channel (FCoE-FC) Gateway, enabling customers to protect their
investments in existing data center aggregation and Fibre Channel storage area network (SAN) infrastructures.
Physical Network Topologies
After addressing the physical layout, the next stage in data center design is to consider the topologies that will
connect the network devices. The decision about topology involves issues related to cabling (with associated distance
limitations), latency, network path resiliency to avoid single points of failure, and use of link management protocols for
resiliency (with loop detection and prevention, if needed).
This section considers four types of physical network topologies:
Single tier topology
Access-core hub and spoke topology
Access-core inverse U topology
Access-core mesh design topology
Single Tier Topology
In a single tier network topology, each server pool connects directly to each logical switch in a single tier, creating a
complete connection mesh. This simple design has low overhead and is highly efficient.
Because there is a single switching tier, loops cannot occur, and traffic forwarding decisions are highly optimized by
way of internal mechanisms. Traffic flow is controlled by configuration changes on the servers and devices. No special
resiliency protocols are required. Each device in the single tier topology must support L2 and L3 functions as well as
virtualization features such as VLANs and virtual routers for logical separation. This approach is highly agile, because
resources can move while retaining a connection to the same devices. The single tier topology supports easy integration
with services and edge connectivity, providing consistency, low latency, and simplified operations.
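The logical separation mentioned above, VLANs for L2 and virtual routers for L3, can be sketched in Junos-style configuration. The tenant name, VLAN ID, addressing, and interface here are hypothetical, shown only to illustrate the idea:

```
# Illustrative sketch: per-tenant L2/L3 separation on a single tier device
# (names, VLAN IDs, and addresses are hypothetical)
set vlans tenant-a-vlan vlan-id 100
set vlans tenant-a-vlan l3-interface vlan.100
set interfaces vlan unit 100 family inet address 10.1.0.1/24

# A virtual router keeps tenant-a routes in their own routing table,
# isolated from other tenants sharing the same physical switch
set routing-instances tenant-a instance-type virtual-router
set routing-instances tenant-a interface vlan.100
```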
In principle, any single device (or device pair for resiliency), such as the EX8200 line of Ethernet switches, that provides
complete network connectivity can operate effectively in a single tier design. With currently available technologies and
products, however, the single tier topology may not scale to meet the requirements of today's data centers.
Figure 8 shows a single tier topology that has been architected for resiliency. Each device has internal resiliency
capabilities, and the devices are connected through multiple redundant links.
Figure 8. Single tier network topology
Multitier Topologies (Access-Core)
Multitier designs are able to meet the scalability needs of large and expanding data centers. Each multitier design
includes an access tier that provides the connection to server/storage pools, and a core tier that provides the switching
infrastructure for access devices and the connection to the external network.
Each access-core configuration becomes a replicable pod within the data center, providing connectivity and resiliency
for the servers and storage that it services. Using Juniper's simplified design approach, the data center capacity can be
built out by adding as many pods as needed to support the required capacity and services.
To make the multitier design resilient, each device is connected to multiple devices in the other tier. While this increases
network resiliency, it also introduces loops in the network. Link management protocols are required to provide a logical
loop-free flow for traffic forwarding.
This section describes the physical multitier topologies; the associated link management protocols are discussed in the
Resiliency Design and Protocols section below.
Access-Core Hub and Spoke
The access-core hub and spoke design addresses the scalability concerns associated with a single tier topology while
retaining the same basic structure. This design includes pairs of access devices and a pair of core devices supporting
server/storage pools. Each server/storage pool connects to both access devices in the pair to provide resiliency for the
server-to-access device link. (See Server Link Resiliency Design below for a description of server link resiliency options.)
In the northbound direction, each access device connects to both core devices, and these are linked to each other.
There is no single point of failure at the access or core tier, because if an access or core device fails, traffic from the
server can still reach the other device. By adding additional access device pairs with similar uplink connections to the
core devices, this design can provide network connections for a greater number of compute/storage resources.
The access-core hub and spoke design effectively addresses the scale limitations of the single tier design, but at the
cost of greater complexity. Because each access device connects to each core device and the core devices are linked,
loops can occur within the access-core tiers. In this context, link management protocols are required for resiliency
and to provide loop detection and prevention. See Resiliency Design and Protocols for a discussion of traditional and
current Juniper approaches to loop detection and prevention.
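Traditional loop prevention in this topology typically means running a spanning tree variant. A minimal Junos-style sketch, with hypothetical interface names, that enables RSTP and steers the root bridge toward a core device:

```
# On a core device: a low bridge priority makes this switch the likely root
set protocols rstp bridge-priority 4k

# On an access device: uplinks toward the core participate in RSTP by
# default; server-facing ports are marked edge so they forward immediately
set protocols rstp interface ge-0/0/10.0 edge
```

With RSTP, one of the redundant access-to-core links is blocked in normal operation and unblocks on failure, which is exactly the bandwidth underutilization the later mesh design avoids.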
Figure 9 shows the access-core hub and spoke design. The EX8200, EX4200, and QFX3500 can serve as access
devices, and the EX8200 line and Juniper Networks MX480 and MX960 3D Universal Edge Routers can serve as core
devices. This design is very commonly deployed, as it produces highly resilient and scalable networks.
Figure 9. Access-core hub and spoke network topology
Access-Core Inverse Design
Because the access-core hub and spoke design relies on loops to ensure resiliency, it requires the use of link
management protocols for loop prevention and detection, which introduces complexity and increases latency in the
network.
To address this concern, the access-core inverse U design modifies the access-core hub and spoke layout to add
resiliency with a loop-free design. In this topology, a connected pair of core devices serves a pair of access devices,
which jointly provide network access to a pair of server/storage pools. There are two core devices (left and right)
and two access devices (left and right). Each server/storage pool connects to both the left and right access devices.
The left access device is linked to the left core device and the right access device is linked to the right core device.
Finally, the two core devices are connected with redundant links for link resiliency.
The key to this loop-free design is that each access device is connected to a different device in the core pair. The core
devices are connected to each other, but the devices in each access pair are not connected to each other. Resiliency
works through the links between the core devices. Loops are avoided, because traffic that travels from an access device
to the core tier (or vice versa) cannot directly return to the same access device.
Figure 10 shows the inverse U design. The EX4200 and EX8200 switches can serve as core devices. This design
provides network resiliency in a loop-free environment and results in increased agility, as VLANs can be expanded
anywhere in the data center network without the complexity of spanning tree. This design takes advantage of recent
improvements in server link resiliency and simplifies network design. Efficient load balancing can be achieved by
configuration of active/standby links on server network connections.
Figure 10. Access-core inverse network topology
Access-Core Mesh Design
The access-core mesh design provides another alternative to the access-core hub and spoke design and is ideal for
Layer 3 routing at the access tier. In the other access-core topologies, the access devices operate at L2, and the L2/L3
boundary is at the core tier. Because there is no routing at the access layer, link management protocols are supported
using L2 technology. With a link management protocol such as Spanning Tree Protocol (STP), some links are blocked
(unused) unless a failure occurs, and bandwidth is underutilized.
The access-core mesh design solves this problem by leveraging Layer 3 routing support at the access tier. The physical
layout is the same as in the access-core inverse U design; however, the access device pair is interconnected using a
Layer 2 link. The uplinks from access devices to core devices are L3 interfaces, and L3 routing protocols provide the
active/active link resiliency. The details of this approach are discussed in Layer 3 Routing Protocols below.
Figure 11 shows the access-core mesh design with the L2/L3 boundary at the access tier. The QFX3500, EX4200, and
EX8200 switches can serve as access devices, and the EX8200 and EX4500 can serve as core devices. This design
avoids the complexity of spanning tree by leveraging the full routing capabilities of Juniper Networks access devices.
Network bandwidth utilization is increasing rapidly, and effective load balancing is an important tool for increasing
link utilization and thereby making the best use of existing bandwidth. By including routing at the access tier, the
data center network can increase network resiliency through faster convergence and enable load balancing of traffic
between the access and core tiers using equal-cost multipath (ECMP).
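On Juniper devices, ECMP across the access-core uplinks is typically enabled by exporting a load-balancing policy to the forwarding table; the routing protocol (OSPF here, as one possibility) then installs the equal-cost next hops. The policy name and interface names in this sketch are hypothetical:

```
# Illustrative sketch: flow-based load balancing over equal-cost uplinks
# (policy name and interface names are hypothetical)
set policy-options policy-statement ecmp-lb then load-balance per-packet
set routing-options forwarding-table export ecmp-lb

# L3 uplinks from an access device to each of the two core devices
set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
```

Despite its name, the load-balance per-packet action on these platforms hashes traffic per flow, so packets within a flow stay in order while flows are spread across the equal-cost paths.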
Figure 11 includes a partial access-core mesh configuration on the left and a full mesh configuration on the right.
Figure 11. Access-core mesh network topology
Keeping in mind the basic topologies described in this section, we can now turn to a discussion of the associated
resiliency designs and protocols.
Resiliency Design and Protocols
With the convergence of high demand services onto IP infrastructures, network outages of any kind are no longer
acceptable. Even relatively small packet losses can have a negative impact on users' perceptions of service delivery,
while a major node, link, or interface failure can have serious consequences for the provider. The data center design
must minimize network failures whenever possible, and minimize the effects of failures that do occur.
As virtual data centers and cloud computing infrastructures evolve, they often require distribution of management
controls to multiple distributed sites to share responsibilities among distributed teams, extend controls to new and
distant locations, and support high availability and disaster recovery. The data center design must accommodate
distributed platforms and components, so connectivity is maintained regardless of location, and access to control
information is available despite changes in availability and performance.
In addition, to protect an enterprise's competitive edge, business applications must be highly available, and
productivity must not suffer when failures occur. When a disaster takes place, the organization must recover with
minimal disruption, bringing backup business applications online again quickly and ensuring that the associated user
data is protected and available.
The overall objective of resiliency design is to eliminate single points of failure. Because failures can occur at any level
(application, server, network device, network OS, and physical), the overall resiliency design must address resiliency
requirements at each level. We have already discussed how resiliency designs can be incorporated into physical
network topologies. In this section, we discuss how resiliency options are layered on top of the physical topology.
Application Resiliency Design
The goal of application resiliency is to maintain application availability in the event of a failure or disruption at any level.
This section considers application resiliency using application resource pools and protected critical application resources.
Application Resource Pools
In this approach, multiple application instances are grouped together into resource pools that are distributed across
the network. Resource pools can be multitiered, involving web servers, application software, and databases. During
normal operation, users may access any of the application instances, subject to load balancing or other types of
coordination. Because each application is installed on multiple systems in multiple locations, access to the application
is maintained when any single application resource or associated connection fails.
[Figure: two-tier topology with a core tier (EX Series/MX Series) and an access tier (EX Series/QFX3500), showing the Layer 2/Layer 3 boundary and compute and IP storage pods]
Application resource pools are an effective resiliency solution; however, they may require synchronous state and data replication among the nodes in the resource pool to ensure that the application instances remain synchronized. When designing such a solution, it is important to plan for any synchronous coordination and load balancing, as well as to consider the associated performance, connectivity, and latency effects.
Figure 12. Application resource pools
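The pool behavior described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation of any particular load balancer: the instance names are hypothetical, and real deployments would use health probes and a hardware or software load balancer rather than this in-process round-robin.

```python
import itertools

class ResourcePool:
    """Round-robin selection over replicated application instances,
    skipping any instance currently marked unhealthy."""

    def __init__(self, instances):
        self.health = {inst: True for inst in instances}
        self._cycle = itertools.cycle(instances)

    def mark_down(self, instance):
        self.health[instance] = False

    def mark_up(self, instance):
        self.health[instance] = True

    def pick(self):
        # Try each instance at most once per call.
        for _ in range(len(self.health)):
            candidate = next(self._cycle)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy instance in pool")

# Hypothetical instances of the same application in three pods.
pool = ResourcePool(["web-pod1", "web-pod2", "web-pod3"])
pool.mark_down("web-pod2")          # simulate a failed instance or link
picks = [pool.pick() for _ in range(4)]
# web-pod2 is never selected while it is down
```

The point of the sketch is the design property from the text: because the application is installed in multiple locations, a single resource failure removes one candidate from the pool without interrupting access.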
Critical Application Resources
Some applications have critical application resources that are not feasible or desirable to replicate. These may include high-end servers that are too costly to replicate, or mission-critical resources that should not be replicated for security or operational reasons. In these cases, a single server hosts the application resource, and could therefore become a single point of failure.
To minimize the risks of failure for such critical application resources, data center designers can deploy the applications on very high-end servers that have built-in resiliency and introduce high availability active/standby configurations for disaster recovery. Connectivity concerns can be addressed by multihomed network links between the server and the network devices, and multiple network paths for users and other resources.
This type of design requires the engineering of redundant links and backup systems, and may require periodic state and data replication to keep the active and backup systems synchronized.
Figure 13. Critical application resources
Server Link Resiliency Design
The objective of server link resiliency is to maintain application and service availability if access to an individual server is interrupted. Server link resiliency can operate at several levels within the cloud data center, as shown in Figure 14. The figure shows server pools that provide compute services within the data center. This simplified figure shows a single-tier topology; however, the same approach applies to all of the multitier access-core topologies.
Resiliency in this context can operate at the link level, network device level, and virtual machine mobility level, as described in the following subsections.
Figure 14. Server link resiliency overview
Server Link Resiliency
To avoid a single point of failure at the link level, servers can be deployed with dual-homed, resilient connections to network devices. In this arrangement, multiple physical connections are grouped in a single logical connection. The logical connection can be an active/active LAG in which multiple links are dual homed (combined into a single logical link to a core device) with load sharing, or an active/standby arrangement in which the standby link is used only if the active link fails.
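The difference between the two arrangements can be sketched as follows. This is a simplified model only: the interface names are hypothetical, and a real LAG hashes on packet header fields in hardware per IEEE 802.1AX rather than in software.

```python
import zlib

def lag_member(flow_id, members, failed=frozenset()):
    """Active/active LAG: hash each flow onto one of the surviving
    member links so traffic is load-shared across the bundle."""
    up = [m for m in members if m not in failed]
    if not up:
        raise RuntimeError("all LAG members down")
    # Hash-based selection keeps packets of one flow on one link.
    return up[zlib.crc32(flow_id.encode()) % len(up)]

def active_standby(active_up, active="xe-0/0/0", standby="xe-1/0/0"):
    """Active/standby: the standby link carries traffic only
    when the active link has failed."""
    return active if active_up else standby

members = ["xe-0/0/0", "xe-0/0/1"]
link = lag_member("10.1.1.5:443", members)              # load-shared
link_after = lag_member("10.1.1.5:443", members, failed={link})
# the flow moves to the surviving member when its link fails
```

Note the trade-off the text implies: active/active uses all link capacity in normal operation, while active/standby leaves half the capacity idle but makes failover behavior trivially predictable.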
Network Device Resiliency
In the server context, network device resiliency addresses the need to maintain connectivity if a network device fails. This can be achieved through dual homing to multiple network devices, which provides both link and network device resiliency. The principle is the same as for server link resiliency, except that the redundant active/active or active/standby links involve multiple network devices.
Virtual Machine Mobility
Virtual machine mobility addresses failures within the server itself. In this arrangement, a virtual machine on a server in one pod is paired with a virtual machine on a server in another pod, and the virtual machine on the second server takes over if the first server fails for any reason. Applications that are deployed on the virtual machine in the first pod are replicated on the virtual machine in the second pod. The second virtual machine can be deployed on any server within the data center or in another data center that has Layer 2 connectivity to the first server.
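The pairing logic above can be modeled in a short sketch. All names here are hypothetical, and the Layer 2 check is reduced to a flag; a real hypervisor platform performs live state replication and adjacency validation that this model omits.

```python
class VMPair:
    """A primary VM paired with a replica in another pod; the replica
    takes over when the primary's server fails (simplified model)."""

    def __init__(self, primary, replica, same_l2_segment=True):
        if not same_l2_segment:
            # Per the design, VM mobility requires Layer 2 connectivity
            # between the two servers hosting the pair.
            raise ValueError("replica must share Layer 2 connectivity")
        self.primary, self.replica = primary, replica
        self.active = primary

    def on_server_failure(self, failed_server):
        # Promote the replica only if the failed server hosts the
        # currently active VM.
        if self.active["server"] == failed_server:
            self.active = self.replica
        return self.active

pair = VMPair(
    primary={"vm": "app-vm", "server": "pod1-srv3"},
    replica={"vm": "app-vm", "server": "pod2-srv7"},
)
active = pair.on_server_failure("pod1-srv3")
# the replica in pod 2 is now the active instance
```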
Network Device Resiliency
The following elements can be used to minimize the effects of a single point of failure of a network device or a critical component of a device:
Hot-swappable interfaces
Unified in-service software upgrade (unified ISSU)
Redundant switching fabric
Redundant Routing Engine
Minimizing single points of failure is of major importance in the data center because modern service-level guarantees and five-nines uptime requirements preclude the traditional practice of scheduled downtime for maintenance. In some cases, maintenance window provisions in requests for proposals (RFPs) have disappeared altogether. Globalization is also a significant factor: with multiple customers and teams working around the clock, there are no off-peak traffic periods for the always-on network. The bottom line is this: modern network operating systems must enable in-service router changes and upgrades.
Hot-Swappable Interfaces
The routing community took the first step towards facilitating unified ISSU when vendors introduced hot-swappable interfaces for network devices. Many routers, including all of those in Juniper's product line, no longer need to be reset to insert or remove an interface card. Instead, the box dynamically recognizes the new interface and begins to communicate with it immediately. New components can thus be inserted and removed from the router without taking the system down.
Unified In-Service Software Upgrades
The ability to provide unified ISSU (replacement of an entire operating system without a planned outage) is unique to devices running Junos OS. Juniper Networks customers can upgrade a complete operating system, not just individual subsystems, without control plane disruption and with minimal disruption of traffic. Upgrading is a complex operation that requires extensive software changes, from the control plane code to microcode running on the forwarding cards, and Junos OS is unique in its ability to support these changes without bringing the network down or causing major traffic disruption.
An upgrade of this kind is impossible for users of other systems, who are forced to juggle multiple release trains and software versions when planning each upgrade. Careful planning and testing are required to choose the right release: one that includes the new functions but does not forego any existing features or hardware support. Also, only Junos OS provides an automatic configuration check before the upgrade. With other solutions, users are usually notified of an inconsistent, unsupported configuration after the upgrade, when it is too late to abort.
Due to these risks, many IT groups avoid upgrading their software unless it is absolutely necessary, and continue to run old versions of code. This can severely limit options when the network must support new requirements and when users are impatient for new features. The associated uncertainty can create havoc with quarterly budgets and project resources.
Unified ISSU Methodology
Considering the immense variation among today's IP network topologies, equipment, and services, it is not surprising that various router vendors have taken different approaches to unified ISSU. The right approach is one that addresses the practical problems in today's networks and has the flexibility to meet the needs of future networks. Although the unified ISSU process is complex and approaches to the problem vary, there are two major goals:
Maintain protocol adjacencies: a broken adjacency makes it necessary to recalculate routing paths. If this occurs, tens of thousands of protocol adjacencies must be reset and reinitiated, and as many as a million routes removed, reinstated, and processed to reestablish network-wide forwarding paths.
Meet SLA requirements: an upgrade mechanism should not affect network topology or interrupt network services. Noticeable packet loss, delay, and jitter can be extraordinarily expensive in terms of SLA penalties and damaged customer confidence.
Unified ISSU accomplishes these goals by leveraging nonstop active routing (NSR), which eliminates routing disruptions so that L2/L3 adjacencies can stay alive, and by minimizing packet loss to meet SLA requirements.
Redundant Switching Fabric
In a redundant configuration, a Switch Fabric module (SFM) is used with two switch fabrics and Routing Engines (REs) to achieve full bandwidth along with RE and switch control redundancy and switch fabric redundancy. The main function of the SFM is to provide a redundant switching plane for the device. For example, the SFM circuitry in the Juniper Networks EX8208 Ethernet Switch is distributed across three modules: two RE modules and one SFM module. Any two of these three modules must be installed and functional to provide a working switch fabric with no redundancy. The third module, when present, provides partial redundancy (2+1) for the switching functionality, such that if any one of the two functional modules becomes nonoperational, the third module takes over.
Working together, the RE and SFMs deliver the necessary switching capacity for the EX8208 switch. When the second RE module is present, the additional switch fabric serves in hot-standby mode, providing full 2+1 switch fabric redundancy. The SFMs are hot-swappable and field-replaceable, enabling failed units to be easily replaced without service interruption.
The two active, load-sharing switch fabrics on the RE and SFMs collectively deliver up to 320 Gbps (full-duplex) of packet data bandwidth per line-card slot, providing sufficient capacity to support future 100GbE deployments without requiring any forklift upgrades or changes to the network infrastructure. The EX8208 switch backplane is designed to support a maximum fabric bandwidth of 6.2 Tbps.
Redundant Routing Engine
Two to ten EX4200 switches can be interconnected to create a Virtual Chassis configuration that operates as a single network entity. Every Virtual Chassis configuration has a master and a backup. The master acts as the master RE and the backup acts as the backup RE. The Routing Engine provides the following functionality:
Runs various routing protocols
Provides the forwarding table to the Packet Forwarding Engines (PFEs) in member switches of the Virtual Chassis configuration
Runs other management and control processes for the entire Virtual Chassis configuration
The master RE, which is in the master of the Virtual Chassis configuration, runs Junos OS software in the master role. It receives and transmits routing information, builds and maintains routing tables, communicates with interfaces and PFE components of the member switches, and has full control over the Virtual Chassis configuration.
The backup RE, which is in the backup of the Virtual Chassis configuration, runs Junos OS software in the backup role. It stays in sync with the master RE in terms of protocol states and forwarding tables. If the master becomes unavailable, the backup RE takes over the functions that the master RE performs.
Network OS Resiliency and Reliability Features
Data center designers can significantly enhance resiliency in the data center simply by deploying Juniper devices. The Junos operating system, which runs on Juniper devices, incorporates the following important resiliency features to protect against the effects of internal failures:
Separation of the router's control plane and forwarding planes
Operating system modularity
Single code base
Graceful Routing Engine switchover (GRES) and restart
Nonstop active routing (NSR)
Juniper Networks introduced both of these features to the routing community, and both are in widespread use today.
The need to forward packets and process routes simultaneously presents a major challenge for network routers. If the traffic load through a router becomes very heavy, most resources might be used for packet forwarding, causing delays in route processing and slow reactions to changes in the network topology. On the other hand, a significant change in the network topology might cause a flood of new information to the router, causing most resources to be used in performing route processing and slowing the router's packet forwarding performance.
Routing and Forwarding on Separate Planes
Figure 15 depicts the pioneering Juniper architecture that enables continuous systems by clearly separating the control plane from the forwarding plane. The key to the architecture lies in internal resource allocation. If most resources are consumed by one of the two basic functions (control or forwarding), the other function suffers and the router is destabilized. The solution is to perform these functions in separate physical entities, each with its own resources.
Figure 15. Separate control and forwarding planes
The control plane is also known as the routing plane, and its primary component is the Routing Engine, which is redundant in many Juniper platforms. On all platforms, the Junos OS control plane is based on a BSD kernel. The forwarding plane is also known as the data plane, and its primary component is the PFE. The control plane maintains peer relationships, runs routing protocols, builds the routing table, maps destination IP addresses with physical router interfaces, and builds the FIB. The FIB is exported to the forwarding plane, which uses it to send packets out of the correct interface and on to the next-hop router. Having a copy of the forwarding table in the forwarding plane makes it possible for the router to continue forwarding packets even if a software bug or routing issue causes problems in the control plane.
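The essence of the exported FIB is a longest-prefix-match lookup that the forwarding plane can run independently of the control plane. The sketch below illustrates the lookup only; the prefixes and interface names are hypothetical, and a real PFE uses specialized hardware data structures, not a Python dictionary.

```python
import ipaddress

# A tiny FIB as exported by the control plane: prefix -> outgoing interface.
fib = {
    ipaddress.ip_network("0.0.0.0/0"):    "ge-0/0/0",  # default route
    ipaddress.ip_network("10.0.0.0/8"):   "ge-0/0/1",
    ipaddress.ip_network("10.20.0.0/16"): "ge-0/0/2",
}

def lookup(dst):
    """Longest-prefix match: of all FIB entries covering the destination,
    pick the most specific one."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return fib[best]

lookup("10.20.5.1")   # most specific match wins: "ge-0/0/2"
lookup("10.9.9.9")    # covered only by 10.0.0.0/8: "ge-0/0/1"
lookup("192.0.2.1")   # falls through to the default: "ge-0/0/0"
```

Because this table is a local copy, forwarding decisions like these keep working even while the control plane is restarting or misbehaving, which is the resiliency property the text describes.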
Modular Software
The division of labor between control and forwarding planes has its parallel in the next essential architectural characteristic of Junos OS: its fundamental modularity. A key advantage of modularity is the inherent fault tolerance that it brings to the software. Each module of Junos OS runs in its own protected memory space and can restart independently, so one module cannot disrupt another by scribbling on its memory. If there is a software problem with Junos OS production code, the problem can be quickly identified, isolated, and fixed without an interruption in service. Junos OS automatically restarts failed modules without having to reboot the entire device.
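The fault-containment property can be modeled abstractly: a supervisor restarts only the module that failed and leaves the rest of the system in service. This is a conceptual sketch with made-up module names, not a depiction of the actual Junos OS process manager.

```python
class Module:
    """One software module in its own protected memory space;
    a crash affects only this module."""
    def __init__(self, name):
        self.name, self.running, self.restarts = name, True, 0

class Supervisor:
    """Restart failed modules independently, without rebooting
    the whole device (simplified model of the idea)."""
    def __init__(self, names):
        self.modules = {n: Module(n) for n in names}

    def crash(self, name):
        self.modules[name].running = False

    def heal(self):
        for m in self.modules.values():
            if not m.running:        # restart only what failed
                m.running = True
                m.restarts += 1

sup = Supervisor(["routing", "interfaces", "snmp"])
sup.crash("snmp")
sup.heal()
# only "snmp" was restarted; the other modules were never touched
```

Contrast this with the monolithic case described next, where one malfunction forces a restart of the entire system.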
The modular Junos OS design is in stark contrast to monolithic architectures in which the operating system consists of a large single code set. In a monolithic architecture without isolation between processes, a malfunction may cause a full system crash, as one failure creates memory leaks and other problems that affect many other processes. The device must restart to correct the problem, putting the platform out of service for the restart period.
Single Code Base
Unlike other network operating systems that splinter into many different programs and images while remaining under a common name, Junos OS has remained a single, cohesive system throughout its life cycle.
Juniper Networks engineers develop each Junos OS feature only once, and then apply it to all devices and security platforms where it is needed without requiring a complete overhaul of the code. As a result, each new version of Junos OS is a superset of the previous version. Customers do not need to add separate packages when a feature is desired, but only need to enable it.
Juniper Networks methodically enhances the single Junos OS source base through a highly disciplined development process that follows a single release train. Developers ensure a single consistent code set for each feature, and the result is well understood, extensively tested code. The Junos OS testing process includes repeated testing with automated regression scripts. Developed over many years, these test scripts are key pieces of Juniper Networks' intellectual property. Through the extensive testing of each Junos OS release, bugs and other problems are likely to be found and corrected by Juniper engineers before customers ever see the new version.
Because the same code runs across all Juniper Networks routers, each feature provides a common user experience on all devices. A BGP or OSPF configuration works the same way on a branch router as it does in the core of a service provider network, and also uses the same diagnostic and configuration tools. When a network rolls out on multiple Juniper platforms, a single operations team already has the knowledge required to configure and monitor all of the new devices. This kind of efficiency can significantly reduce a network's operating expense.
Graceful Routing Engine Switchover (GRES)
Most routers today make use of redundant control plane processors (REs in Juniper Networks terminology), so that if one processor fails, the other can take over router operations. One RE serves as the master and the other as backup. The two REs exchange frequent keepalive messages to detect whether the other is operational. If the backup RE stops receiving keepalives after a specified interval, it takes over route processing for the master.
The limiting factor in this scenario lies in the fact that the data plane's PFE is reinitialized during the switchover from master to backup Routing Engine. All data plane kernel and forwarding processes are restarted, and traffic is interrupted. To prevent such disruption, control plane state information must be synchronized between the master and backup RE. This is where GRES comes in.
GRES provides stateful replication between the master and backup Routing Engines. Both REs maintain a copy of important entities in the Junos OS kernel, such as interfaces, routes, and next hops. In this way, the backup does not need to learn any new information before taking over from the master. The router's forwarding plane breaks its connection with the routing tables on the old master and connects to the new master. From the point of view of packet forwarding, switching the PFE connection from one RE to the other happens immediately, so no packet loss occurs.
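The interaction of keepalive detection and state replication can be sketched as follows. This is an abstract model only: the timeout value and kernel-state contents are illustrative assumptions, not Junos OS defaults, and real GRES replication is continuous and kernel-level.

```python
KEEPALIVE_TIMEOUT = 3  # missed keepalives before takeover (assumed value)

class RoutingEngine:
    def __init__(self, role):
        self.role = role
        self.kernel_state = {}   # interfaces, routes, next hops

def replicate(master, backup):
    """GRES-style stateful replication: keep the backup's copy of
    kernel state current so it can take over without relearning."""
    backup.kernel_state = dict(master.kernel_state)

def monitor(backup, missed_keepalives):
    """The backup promotes itself when keepalives stop arriving."""
    if missed_keepalives >= KEEPALIVE_TIMEOUT:
        backup.role = "master"
    return backup.role

master = RoutingEngine("master")
backup = RoutingEngine("backup")
master.kernel_state = {"10.0.0.0/8": "next-hop 192.0.2.1"}
replicate(master, backup)      # runs continuously in practice

role = monitor(backup, missed_keepalives=3)
# the backup is promoted already holding a current copy of kernel
# state, so the PFE can reconnect to it without packet loss
```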
Under GRES, the control plane routing protocol process restarts. Neighboring routers detect the restart and react to the event according to the specifications of each protocol. If there is an RE switchover in router X, for example, any neighboring router that has a peering session with router X sees the peering session fail. When router X's backup RE becomes active, it reestablishes the adjacency, but in the meantime the neighbor has advertised to its own neighbors that router X is no longer a valid next hop to any destination beyond it, and the neighbor routers start to look for an alternate path. When the backup RE comes online and reestablishes adjacencies, its neighbors advertise the information that router X is again available as a next hop and devices should again recalculate best paths.
These events, called routing flaps, consume resources on the control planes of all affected routers and can be highly disruptive to network routing.
To preserve routing during an RE failover, GRES must be combined either with graceful restart protocol extensions or NSR (Juniper's recommended solution).
Graceful restart protocol extensions provide a solution to the flapping problem, but not necessarily the best solution. Graceful restart is defined by the Internet Engineering Task Force (IETF) in a series of Requests for Comment (RFCs), each specific to a particular routing protocol. Graceful restart specifies that if router X's control plane goes down, its neighbors do not immediately report