TRANSCRIPT
Advanced Enterprise Campus Design: Routed Access
Johnny Tsao, CCIE No. 8759 – Sr. Systems Engineer
BRKCRS-3036
Housekeeping
• We value your feedback, please complete evaluation
• Visit the World of Solutions
• Please remember this is a 'non-smoking' venue!
• Please switch your mobile phones to stun
• Please make use of the recycling bins provided
• Please remember to wear your badge at all times
Abstract
This session starts with a quick review of the latest business and technology trends which drive the requirements for the design of campus networks, such as increased mobility, the need for collaboration, and pervasive unified communications, along with the possibility of running a virtualized network infrastructure. The objective of the session is to provide details for implementing a network design in the campus where L3 routing is leveraged straight from the access layer to maximize network resiliency and scalability. The routing techniques covered in this session can also be leveraged in the traditional Multilayer Campus design.
We will begin with a quick recap of the basic principles and tools for designing hierarchical campus networks, and then explain the major differences between the multilayer campus design approach and the routed access alternative. We will provide recommendations about the best topologies to use in the network design for routed access, and also provide a detailed analysis of the best practice configuration and convergence scenarios for both OSPF and EIGRP protocols. As part of our discussion of convergence, we will describe how to best leverage CEF load balancing and avoid issues such as polarization. The focus is on the goal of fast convergence to support a unified communications infrastructure while maximizing network utilization and availability.
The session then looks at ways to build more resiliency into the switching systems used in the network by utilizing features like Stateful Switchover (SSO) and Non-Stop Forwarding (NSF) and how to leverage them at the access layer, including the advantages of using In Service Software Upgrades (ISSU). We also discuss how to leverage Cisco's Catalyst 6500 Virtual Switch System (VSS) in the Routed Access design.
Finally, we also provide information about the considerations for running IPv6 in a campus built using the Routed Access model.
This session is applicable for those attendees responsible for the Design, Deployment, Operations, and Management of Enterprise Campus Networks.
It is recommended that those attending have at least a basic knowledge of routing and switching protocols as well as traditional Campus design. It is also recommended that anyone attending this session have previous knowledge or experience with EIGRP and OSPF, as well as notions of IP Multicast routing, in particular PIM.
This session builds on Multilayer Campus Architectures and Design Principles (BRKCRS-2031) and provides a design alternative to the one discussed in Advanced Campus Design: Resilient Campus Design (BRKCRS-3032).
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Some Loops are Fun ...
It’s 8:45 am…
“The whole network is down”
“Nothing seems to work”
“I can’t access anything”
“All systems are unreachable”
Many of us have suffered the consequences of a L2 loop
%IP-4-DUPADDR: Duplicate address 10.87.1.2 on Vlan100, sourced by 00d0.04e0.63fc
%IP-4-DUPADDR: Duplicate address 10.87.1.2 on Vlan100, sourced by 00d0.04e0.63fc
%IP-4-DUPADDR: Duplicate address 10.87.1.2 on Vlan100, sourced by 00d0.04e0.63fc
...
%C4K_EBM-4-HOSTFLAPPING: Host 00:02:A5:8A:8B:5E in vlan 60 is flapping between port Gi3/6 and port Po9
%C4K_EBM-4-HOSTFLAPPING: Host 00:02:A5:8A:8B:5E in vlan 60 is flapping between port Gi3/6 and port Po9
%C4K_EBM-4-HOSTFLAPPING: Host 00:02:A5:8A:8B:5E in vlan 60 is flapping between port Gi3/6 and port Po9
...
Number of topology changes 2433341 last change occurred 00:00:02 ago
%PM-SP-4-LIMITS: Virtual port count for module 5 exceeded the recommended limit of 1800
%PM-SP-4-LIMITS: Virtual port count for switch exceeded the recommended limit of 13000
The Problem? The Solution…
• L2 Fails Open – i.e. Broadcast and Unknowns flooded
• L3 Fails Closed – i.e. neighbour lost
L2 Control Plane Failure ... a loop and a network down
L3 Control Plane Failure ... some subnets down
This Is Not About... “L3 = GOOD, L2 = BAD”
This is about ...
A design alternative that leverages L3 routing all the way down to the access layer, to see where it brings an advantage while we analyze the trade-offs of using it.
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Hierarchical Network Design
Without a Rock Solid Foundation the Rest Doesn’t Matter
[Diagram: access / distribution / core / distribution / access building blocks, plus WAN and Internet blocks]
Offers hierarchy—each layer has a specific role
Modular topology—building blocks
Easy to grow, understand, and troubleshoot
Creates small fault domains—clear demarcations and isolation
Promotes load balancing and redundancy
Promotes deterministic traffic patterns
Incorporates a balance of both Layer 2 and Layer 3 technology, leveraging the strengths of both
Can be applied to both the multilayer and routed campus designs
Building a Highly Available Campus with Cisco
Downtime per Year – Using Reliability Block Diagram Methodology
• Traditional Core, 3-tier model: 99.9936% availability, 33 minutes of downtime per year (0:33/yr)
• Collapsed Core, 2-tier model: 99.979% availability, 1 hour 50 minutes of downtime per year (1:50/yr)
• Enhanced Core, Virtual Chassis model: 99.9994% availability, 3 minutes of downtime per year (0:03/yr)
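The availability-to-downtime conversion behind these figures is simple arithmetic; a quick sketch (not part of the session slides) of how the percentages map to minutes per year:

```python
# Quick arithmetic sketch: convert an availability percentage into the
# expected downtime per year, matching the figures on the slide.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes(availability_pct):
    """Expected minutes of downtime per year for a given availability %."""
    return (1.0 - availability_pct / 100.0) * MINUTES_PER_YEAR

downtime = {
    "3-tier traditional core (99.9936%)": downtime_minutes(99.9936),  # ~33.7 min
    "2-tier collapsed core (99.979%)": downtime_minutes(99.979),      # ~110 min
    "virtual chassis core (99.9994%)": downtime_minutes(99.9994),     # ~3.2 min
}
```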
Multilayer Campus Network Design
Layer 2 Access with Layer 3 Distribution
Loop-free topology:
• Each access switch has unique VLANs
• No Layer 2 loops
• Layer 3 link between distribution switches
• No blocked links
Looped topology:
• At least some VLANs span multiple access switches
• Layer 2 loops
• Layer 2 and 3 running over the link between distribution switches
• Blocked links
Multilayer Campus Network Design
• Mature, 10+ year old design
• Evolved due to historical pressures
• Cost of routing vs. switching
• Speed of routing vs. switching
• Non-routable protocols
• Well understood optimization of interaction between the various control protocols and the topology
• STP Root and HSRP primary tuning to load balance on uplinks
• Spanning Tree Toolkit
Well Understood Best Practices
BRKCRS-2031 – Multilayer Campus Architectures and Design Principles
[Diagram: distribution pair with the STP Root Bridge & HSRP Active role on one switch and HSRP Standby on the other; CISF, BPDU Guard, LoopGuard, and RootGuard applied at the edge]
Multilayer Campus Network Design
• Utilizes multiple Control Protocols
• Spanning Tree (802.1w, …)
• FHRP (HSRP, VRRP, GLBP…)
• Routing Protocol (EIGRP, …)
• Convergence is dependent on multiple factors
• FHRP - 900msec to 9 seconds
• Spanning Tree - 400msec to 50 seconds
• FHRP Load Balancing
• HSRP/VRRP – Per Subnet
• GLBP – Per Host
Good Solid Design Option
[Chart: FHRP convergence, time to restore VoIP data flows (seconds) vs. HSRP hello timers, from 250 msec to 3 secs]
Multilayer Campus Network Design
Layer 2 Loops and Spanning Tree
Campus Layer 2 topology has sometimes proven an operational or design challenge
The Spanning Tree protocol itself is not usually the problem; it’s the external events that trigger the loop or flooding
L2 has no native mechanism to dampen down a problem:
‒ L2 fails open, as opposed to L3 which fails closed
Implement looped Spanning Tree topologies only when you have to
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Best Practices—Campus Routing
Leverage Equal Cost Multiple Paths
Use routed pt2pt links; do not use SVIs
ECMP to quickly re-route around failed nodes/links with load balancing over redundant paths
Tune the CEF L3/L4 load-balancing hash to achieve maximum utilization of equal cost paths (CEF polarization)
Build triangles not squares for deterministic convergence
Ensure redundant L3 paths to avoid black holes
Summarize distribution to core to limit event propagation
Utilized in both Multilayer and Routed Access designs
Routed Interfaces Offer Best Convergence Properties
21:32:47.813 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet2/1, changed state to down
21:32:47.821 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet2/1, changed state to down
21:32:48.069 UTC: %LINK-3-UPDOWN: Interface Vlan301, changed state to down
21:32:48.069 UTC: IP-EIGRP(Default-IP-Routing-Table:100): Callback: route, adjust Vlan301
1. Link Down
2. Interface Down
3. Autostate
4. SVI Down
5. Routing Update
~150-200 msec loss with the L2 switchport and SVI

21:38:37.042 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/1, changed state to down
21:38:37.050 UTC: %LINK-3-UPDOWN: Interface GigabitEthernet3/1, changed state to down
21:38:37.050 UTC: IP-EIGRP(Default-IP-Routing-Table:100): Callback: route_adjust GigabitEthernet3/1
1. Link Down
2. Interface Down
3. Routing Update
~8 msec loss with the routed interface

Configuring L3 routed interfaces provides for faster convergence than a L2 switchport with an associated L3 SVI
IP Event Dampening to Reduce Routing Churn
• Prevents routing protocol churn caused by constant interface state changes
• Dampening is applied on a system: nothing is exchanged between routing protocols
• Supports all IP routing protocols
• Static routing, RIP, EIGRP, OSPF, IS-IS, BGP
• In addition, it supports HSRP and CLNS routing
• Applies on physical interfaces and can’t be applied on subinterfaces individually

interface GigabitEthernet1/1
 description Uplink to Distribution 1
 dampening
 ip address 10.120.0.205 255.255.255.254

[Diagram: a flapping interface’s repeated Up/Down transitions vs. the steady interface state perceived by EIGRP or OSPF while the interface is suppressed]
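The suppress/reuse behavior can be sketched with the standard exponential-decay penalty model; the parameter values below are illustrative assumptions (flap penalty 1000, suppress 2000, reuse 1000, half-life 5 s), not taken from the session:

```python
# Sketch of the exponential-decay penalty model behind IP event dampening.
# Parameter values are illustrative assumptions: each flap adds a fixed
# penalty; the penalty halves every HALF_LIFE_S seconds; the interface is
# suppressed above SUPPRESS and released again below REUSE.
HALF_LIFE_S, FLAP_PENALTY, SUPPRESS, REUSE = 5.0, 1000, 2000, 1000

def decay(penalty, elapsed_s):
    """Penalty remaining after elapsed_s seconds of exponential decay."""
    return penalty * 0.5 ** (elapsed_s / HALF_LIFE_S)

penalty = 0.0
for _ in range(3):            # three flaps, one second apart
    penalty = decay(penalty, 1.0) + FLAP_PENALTY

assert penalty >= SUPPRESS    # rapid flapping crosses the suppress threshold

# After 10 quiet seconds (two half-lives) the penalty drops below REUSE,
# so the interface state is reported to OSPF/EIGRP again.
assert decay(penalty, 10.0) < REUSE
```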
Best Practice—Build Triangles Not Squares
Deterministic vs. Non-Deterministic
Triangles (Model A): link/box failure does not require routing protocol convergence
Squares (Model B): link/box failure requires routing protocol convergence
Layer 3 redundant equal cost links provide fast convergence
Hardware based—fast recovery to remaining path
Convergence is extremely fast (convergence is local to system)
CEF ECMP—Optimize Convergence
ECMP Convergence Is Dependent on Number of Routes
[Chart: time for ECMP/MEC unicast recovery (sec) vs. number of routes in area (500 to 25000), Sup720; series: ECMP, ECMP (SXI2), MEC]
Until recently, the time to update the switch HW FIB was linearly dependent on the number of entries (routes) to be updated
Summarization and filtering will decrease RP load as well as speed up convergence
Cisco Express Forwarding (CEF) Load Balancing
Underutilized Redundant Layer 3 Paths
[Diagram: access, distribution, and core all using the default L3 hash; one side carries 70% of the load, the other 30%, and some redundant paths are ignored]
The default CEF hash ‘input’ is L3 source and destination IP addresses
• Imbalance/overload could occur
CEF polarization: in a multihop design, CEF could select the same left/left or right/right path
• Redundant paths are ignored/underutilized
Two solutions:
1. CEF Hash Tuning
2. CEF Universal ID
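The polarization effect itself is easy to reproduce with a toy model; the hash below is an illustrative stand-in (CRC32 over the flow key), not Cisco's actual CEF hash, and the "universal ID" value is an arbitrary assumption:

```python
# Toy model of CEF polarization: each router hashes (src, dst) to pick
# one of two equal-cost uplinks. Illustrative hash, not Cisco's algorithm.
import zlib
from collections import Counter

def pick_uplink(src, dst, unique_id=0):
    # Stand-in for a CEF-style hash over src/dst, optionally mixing in a
    # per-router "universal ID" seed.
    return zlib.crc32(f"{src}|{dst}|{unique_id}".encode()) % 2

flows = [(f"10.1.{i}.10", f"10.2.{j}.10") for i in range(16) for j in range(16)]

# Flows the first-hop router sends out its "left" uplink (hash == 0):
left_flows = [f for f in flows if pick_uplink(*f) == 0]

# Polarized: the next hop runs the identical hash on the same inputs, so
# every one of these flows hashes to 0 again -- its other uplink is idle.
assert {pick_uplink(s, d) for s, d in left_flows} == {0}

# With a distinct universal ID mixed in at the next hop, the load is
# re-split across both of its uplinks.
second_hop = Counter(pick_uplink(s, d, unique_id=42) for s, d in left_flows)
```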
CEF Load Balancing
1. Avoid Polarization with CEF Hash Tuning
[Diagram: distribution switches use an L3/L4 hash while core and access keep the default L3 hash; all redundant paths are used]
With defaults, CEF could select the same left/left or right/right paths and ignore some redundant paths
Alternating the L3/L4 hash and the default L3 hash gives better load-balancing results
The default is the L3 hash—no modification required in core or access
In the distribution switches use:
‒ mls ip cef load-sharing full
to achieve better redundant path utilization
CEF Load Balancing
2. Avoid Polarization with Universal ID
• Cisco IOS uses the “Universal ID” concept (also called Unique ID) to prevent CEF polarization
• Universal ID generated at bootup (32-bit pseudo-random value seeded by the router’s base IP address)
• Universal ID used as input to the ECMP hash; introduces variability of the hash result at each network layer
• Universal ID supported on Catalyst 6500 Sup-32, Sup-720, Sup-2T
• Universal ID supported on Catalyst 4500 SupII+10GE, SupV-10GE and Sup6E
Catalyst 4500 Load-Sharing Options:
• Original: Src IP + Dst IP
• Universal*: Src IP + Dst IP + Unique ID
• Include Port: Src IP + Dst IP + (Src or Dst Port) + Unique ID
Catalyst 6500 PFC3 Load-Sharing Options:
• Default*: Src IP + Dst IP + Unique ID
• Full: Src IP + Dst IP + Src Port + Dst Port
• Full Exclude Port: Src IP + Dst IP + (Src or Dst Port)
• Simple: Src IP + Dst IP
• Full Simple: Src IP + Dst IP + Src Port + Dst Port
* = Default Load-Sharing Mode
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Routed Access Design
Layer 3 Distribution with Layer 3 Access: No L2 Loop
Data 10.1.20.0/24
2001:DB8:CAFE:20::/64
Voice 10.1.120.0/24
2001:DB8:CAFE:120::/64
[Diagram: distribution and access switches all run EIGRP/OSPF; the Layer 3/Layer 2 boundary sits at the access switch edge ports]
SiSi SiSi
Data 10.1.40.0/24
2001:DB8:CAFE:40::/64
Voice 10.1.140.0/24
2001:DB8:CAFE:140::/64
Move the Layer 2/3 demarcation to the network edge
Leverages L2 only on the access ports, but builds a L2 loop-free network
Design Motivations: simplified control plane, ease of troubleshooting, high availability
Routed Access Advantages
Simplified Control Plane
‒ No STP feature placement (root bridge, loopguard, …)
‒ No default gateway redundancy setup/tuning (HSRP, VRRP, GLBP ...)
‒ No matching of STP/HSRP priority
‒ No asymmetric flooding
‒ No L2/L3 multicast topology inconsistencies
‒ No Trunking Configuration Required
L2 Port Edge features still apply:
‒ Spanning Tree Portfast
‒ Spanning Tree BPDU Guard
‒ Port Security, DHCP Snooping, DAI, IPSG
‒ Storm Control
‒ 802.1x, QoS Settings
‒ IPv6 FHS
Routed Access Advantages
Simplified Network Recovery
• Routed Access network recovery is dependent on L3 re-route
• Time to restore upstream traffic flows is based on ECMP re-route
• Time to detect link failure
• Process the removal of the lost routes from the SW RIB
• Update the HW FIB
• Time to restore downstream flows is based on a routing protocol re-route
• Time to detect link failure
• Time to determine new route
• Process the update for the SW RIB
• Update the HW FIB
Upstream Recovery: ECMP
Downstream Recovery: Routing Protocol
[Chart: upstream convergence times (0 to 2 seconds) compared for RPVST+/FHRP, OSPF, and EIGRP]
Routed Access Advantages
Faster Convergence Times
• RPVST+ convergence times dependent on FHRP tuning
• Proper design and tuning can achieve sub-second times
• EIGRP converges <200 msec
• OSPF converges <200 msec with LSA and SPF tuning
Both L2 and L3 Can Provide Sub-Second Convergence
Routed Access Advantages
A Single Router per Subnet: Simplified Multicast
[Diagram: with L2 access, the Designated Router (high IP address) and the IGMP Querier (low IP address) are different switches, and the non-DR has to drop all non-RPF traffic; with routed access, a single switch is both Designated Router and IGMP Querier]
Layer 2 access has two multicast routers per access subnet, RPF checks, and split roles between routers
Routed Access has a single multicast router, which simplifies the multicast topology and avoids the RPF check altogether
Routed Access Advantages Ease of Troubleshooting
• Routing troubleshooting tools
• Consistent troubleshooting: access, dist, core
• show ip route / show ip cef
• Traceroute
• Ping and extended pings
• Extensive protocol debugs
• IP SLA from the Access Layer
• Failure differences
• Routed topologies fail closed—i.e. neighbor loss
• Layer 2 topologies fail open—i.e. broadcast and unknowns flooded
switch#sh ip cef 192.168.0.0
192.168.0.0/24
nexthop 192.168.1.6 TenGigabitEthernet9/4
Routed Access Design Considerations
Design Constraints
• Can’t span VLANs across multiple wiring closet switches
+ Contained Broadcast Domains
+ But can have same VLAN ID on all closets
• RSPAN no longer possible
• Can use ER-SPAN on Catalyst 6500
• IP addressing—do you have enough address space and the allocation plan to support a routed access design? Should not be a problem for IPv6 ;-)
Routed Access Design Considerations
Platform Requirements
• Catalyst Requirements
• Cisco Catalyst 2960 XR (IP Lite)
• Cisco Catalyst 3850 and 3650
• Cisco Catalyst 4500
• Cisco Catalyst 6500/6800
• Catalyst IOS IP Base minimum feature set
• EIGRP-Stub – Edge Router
• PIM Stub – Edge Router
• OSPF for Routed Access
• 3x50 – 3.3.0SE – 1000 OSPF Routes
• 4500 – 3.5.0E – 1000 OSPF Routes
• Catalyst 6500 Series IOS 12.2(33)SXI4
Routed Access Design
Migrating from a L2 Access Model
[Diagram: DHCP/DNS server at 10.5.10.20; VLANs 20, 30 ... 120 (subnets 10.1.20.0/24, 10.1.30.0/24 ... 10.1.120.0/24) spanning multiple access closets for different user groups]
interface Vlan20
ip address 10.1.20.3 255.255.255.0
ip helper-address 10.5.10.20
standby 1 ip 10.1.20.1
standby 1 timers msec 200 msec 750
standby 1 priority 150
standby 1 preempt
standby 1 preempt delay minimum 180
interface GigabitEthernet1/1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 20-120
switchport mode trunk
switchport nonegotiate
Typical deployment uses Vlan/Subnet for different user groups
To facilitate user mobility, vlans extend to multiple closets
Routed Access Design
Migrating from a L2 Access Model
Before: L2 trunk from distribution to access
interface GigabitEthernet1/1
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 20-120
 switchport mode trunk
 switchport nonegotiate
After: routed point-to-point link
interface GigabitEthernet1/1
 description Distribution Downlink
 ip address 10.120.0.196 255.255.255.254
As the routing is moved to the access layer, trunking is no longer required
/31 addressing can be used on p2p links to optimize ip space utilization
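The /31 math (RFC 3021) can be checked with the Python standard library; the 10.120.0.0/24 block below reuses the example addressing from the slides:

```python
# Sketch of /31 point-to-point addressing (RFC 3021) using the stdlib;
# 10.120.0.0/24 is the example address block from the slides.
import ipaddress

block = ipaddress.ip_network("10.120.0.0/24")
assert len(list(block.subnets(new_prefix=31))) == 128  # 128 p2p links per /24
assert len(list(block.subnets(new_prefix=30))) == 64   # only 64 with /30s

# A /31 holds exactly two addresses, one per end of the link -- no
# network or broadcast address is burned.
link = ipaddress.ip_network("10.120.0.196/31")
assert [str(a) for a in link] == ["10.120.0.196", "10.120.0.197"]
```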
Routed Access Design
Migrating from a L2 Access Model
Before: distribution SVI with HSRP
interface Vlan20
 ip address 10.1.20.3 255.255.255.0
 ip helper-address 10.5.10.20
 standby 1 ip 10.1.20.1
 standby 1 timers msec 200 msec 750
 standby 1 priority 150
 standby 1 preempt
 standby 1 preempt delay minimum 180
After: access-layer SVI, no FHRP needed
interface Vlan20
 ip address 10.1.20.3 255.255.255.128
 ip helper-address 10.5.10.20
Each /24 is split across two closets: 10.1.20.0/25, 10.1.30.0/25 ... 10.1.120.0/25 on one and 10.1.20.128/25, 10.1.30.128/25 ... 10.1.120.128/25 on the other
SVI configuration at the access layer is simplified
Larger subnets are split into smaller ones and assigned to new DHCP scopes
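The subnet split described above is mechanical; a small stdlib sketch of turning each legacy /24 into two per-closet /25s:

```python
# Sketch of the migration subnet split: each legacy /24 becomes two /25s,
# one per access switch, using only the Python stdlib.
import ipaddress

legacy = [ipaddress.ip_network(f"10.1.{v}.0/24") for v in (20, 30, 120)]
per_closet = {str(n): [str(s) for s in n.subnets(new_prefix=25)] for n in legacy}

# Each /24 yields the two halves that get their own DHCP scopes.
assert per_closet["10.1.20.0/24"] == ["10.1.20.0/25", "10.1.20.128/25"]
```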
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• EIGRP Design to Route to the Access Layer
• OSPF Design to Route to the Access Layer
• Other Design Considerations
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
Deploying a Stable and Fast Converging EIGRP Campus Network
The key aspects to consider are:
1. Use EIGRP Stub at the access layer
2. Summarize routes at the distribution layer
3. Leverage route filters
4. Consider hello and hold timer tuning
EIGRP Neighbors
Event Detection
• EIGRP neighbor relationships are created when a link comes up and routing adjacency is established
• When a physical interface changes state, the routing process is notified
• Carrier-delay should be set as a rule because it varies based upon the platform
• Some events are detected by the routing protocol
• Neighbor is lost, but interface is UP/UP
• To improve failure detection
• Use routed interfaces and not SVIs
• Decrease interface carrier-delay to 0
• Decrease EIGRP hello and hold-down timers*
• Hello = 1
• Hold-down = 3
* Not recommended with NSF/SSO
interface GigabitEthernet3/2
ip address 10.120.0.50 255.255.255.252
ip hello-interval eigrp 100 1
ip hold-time eigrp 100 3
carrier-delay msec 0
EIGRP in the Campus
Conversion to an EIGRP Routed Edge
• The greatest advantages of EIGRP are gained from the use of summarization and stub routers
• EIGRP allows for multiple tiers of hierarchy, summarization and route filtering
• Relatively painless to migrate to a L3 access with EIGRP
• Deterministic convergence time in very large L3 topology
• EIGRP maps easily to campus topology
[Diagram: 10.10.0.0/17 and 10.10.128.0/17 summarized as 10.10.0.0/16 towards the core]
EIGRP Query Process
Queries Propagate the Event
• EIGRP relies on neighbors to provide routing information
• If a route is lost and no feasible successor is available, EIGRP actively queries its neighbors for the lost route(s)
• The router waits for replies from all queried neighbors before calculating a new path
• If any neighbor fails to reply, the queried route is stuck in active and the router resets the neighbor adjacency
• The fewer routers and routes queried, the faster EIGRP converges; the solution is to limit query propagation
[Diagram: without limits, queries fan out from the access through distribution and core to every other distribution block and access switch, each answered by a reply]
EIGRP Design Rules for HA Campus
Limit Query Range to Maximize Performance
• EIGRP convergence is dependent on query response times
• Minimize the number of queries to speed up convergence
• Summarize distribution block routes to limit how far queries propagate across the campus
• Upstream queries are returned immediately with infinite cost
• Configure access switches as EIGRP stub routers
• No downstream queries are ever sent

router eigrp 100
 network 10.0.0.0
 eigrp stub connected

interface TenGigabitEthernet 4/1
 ip summary-address eigrp 100 10.120.0.0 255.255.0.0 5

router eigrp 100
 network 10.0.0.0
 distribute-list Default out <mod/port>
ip access-list standard Default
 permit 0.0.0.0

No queries to the rest of the network from the core
Limiting the EIGRP Query Range
With Summarization
• Summarization from distribution to core for the subnets in the access limits the upstream query/reply process
• Queries will now stop at the core; no additional distribution blocks will be involved in the convergence event
• The access layer is still queried
[Diagram: the distribution switches advertise a summary route into the core and answer upstream queries with an infinite-cost reply]

interface gigabitethernet 3/1
 ip address 10.120.10.1 255.255.255.252
 ip summary-address eigrp 1 10.130.0.0 255.255.0.0
Limiting the EIGRP Query Range
With Stub Routers
A stub router signals (through hellos) that it is a stub and not a transit path
Queries are not sent towards the stub routers but marked as if a “No path this direction” reply had been received
D1 knows that stubs cannot be transit paths, so they will not have any path to 10.130.1.0/24
D1 will not query the stubs, reducing the total number of queries in this example to one
Stubs will not pass D1’s advertisement of 10.130.1.0/24 to D2
D2 will only have one path to 10.130.1.0/24
[Diagram: D1 queries only D2 for 10.130.1.0/24; the stub access switches are never queried (“Since you said you’re a stub, I’m not going to send you any queries”)]
EIGRP Query Process
With Summarization and Stub Routers
• When we summarize from distribution into core we can limit the upstream query/reply process
• Queries will now stop at the core; no additional routers will be involved in the convergence event
• With EIGRP stubs we can further reduce the query diameter
• Non-stub routers do not query stub routers—so no queries will be sent to the access nodes
• Only three nodes involved in the convergence event—no secondary queries
[Diagram: a single query and reply between the distribution pair; the core answers with infinite-cost replies against the summary route and the stub access switches are never queried]
EIGRP Route Filtering in the Campus
Control Route Advertisements
• Campus bandwidth is not a constraining factor, but it is recommended to limit the number of routes advertised
• Remove/filter routes from the core to the access and inject a default route with distribute-lists
• A smaller routing table in the access is simpler to troubleshoot
• Deterministic topology

ip access-list standard Default
 permit 0.0.0.0
router eigrp 100
 network 10.0.0.0
 distribute-list Default out <mod/port>

[Diagram: the access switches receive only a default route (0.0.0.0); the distribution pair exchanges the default and other routes]
EIGRP Routed Access Campus Design
Summary
• Detect the event:
• Set hello-interval = 1 second and hold-time = 3 seconds to detect soft neighbor failures *
• Set carrier-delay = 0, interface debounce “disable”
• Propagate the event:
• Configure all access layer switches as stub routers to limit queries from the distribution layer
• Summarize the routes from the distribution to the core to limit queries across the campus
• Process the event:
• Summarize and filter routes to minimize calculating new successors for the RIB and FIB
* Not recommended with NSF/SSO
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• EIGRP Design to Route to the Access Layer
• OSPF Design to Route to the Access Layer
• Other Design Considerations
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
Deploying a Stable and Fast Converging OSPF Campus Network
Key Objectives of the OSPF Campus Design:
1. Map area boundaries to the hierarchical design
2. Enforce hierarchical traffic patterns
3. Minimize convergence times
4. Maximize stability of the network
OSPF Design Rules for HA Campus
Where Are the Areas?
• Area size/border is bounded by the same concerns in the campus as in the WAN
• In the campus, the lower number of nodes and the stability of local links could allow you to build larger areas; however:
• Area design is also based on address summarization
• Area boundaries should define flooding/fault domains
• Keep area 0 for the core infrastructure; do not extend it to the access routers
[Diagram: Area 0 core connecting to Data Center, WAN, and Internet; distribution blocks in Areas 100, 110, and 120]
Hierarchical Campus Design
OSPF Areas with Router Types
[Diagram: backbone routers in Area 0 at the core; ABRs at the distribution layer into Areas 10, 20, 30, 100, 200, and 300; internal routers in the access; an ASBR running BGP towards the Data Center, WAN, and Internet block]
OSPF in the Campus
Conversion to an OSPF Routed Edge
• OSPF designs that utilize an area for each campus distribution building block allow for a straightforward migration to Layer 3 access
• Converting L2 switches to L3 within a contiguous area is reasonable to consider as long as the new area size is reasonable
• How big can the area be?
• It depends
• Switch type(s)
• Number of links
• Stability of fiber plant
[Diagram: Area 0 core with distribution blocks in Area 10 (Dist 1), Area 20 (Dist 2), and Area 200 (branches)]
When a Link Changes State
• Every router in the area hears the specific link LSA
• Each router computes the shortest path routing table
[Diagram: Router 1 floods the LSA to Router 2 in Area 1; Router 2 ACKs it, updates its link state table, runs the Dijkstra algorithm, and replaces the old routing table with the new one]
OSPF LSA Process
LSAs Propagate the Event
• OSPF is a link state protocol; it relies on all routers within an area having the same topology view of the network
• If a route is lost, OSPF sends out an LSA to inform its peers of the lost route
• All routers with knowledge of this route in the OSPF network will receive the LSA and run SPF to remove the lost route
• The fewer the number of routers with knowledge of the route, the faster OSPF converges
• The solution is to limit the LSA propagation range
[Diagram: without area boundaries, the LSA floods from the access through distribution and core to every router in the network, and each one runs SPF]
OSPF Regular Area
ABRs Forward All LSAs from the Backbone
The ABR forwards the following into an area:
• Summary LSAs (Type 3)
• ASBR Summary (Type 4)
• Specific Externals (Type 5)
External routes/LSAs are present in Area 120

Access Config:
router ospf 100
 network 10.120.0.0 0.0.255.255 area 120

Distribution Config:
router ospf 100
 area 120 range 10.120.0.0 255.255.0.0 cost 10
 network 10.120.0.0 0.0.255.255 area 120
 network 10.122.0.0 0.0.255.255 area 0
OSPF Stub Area
Consolidates Specific External Links—Default 0.0.0.0
The stub area ABR forwards:
• Summary LSAs (Type 3)
• Summary 0.0.0.0 Default
Eliminates the external routes/LSAs present in the area (Type 5)

Access Config:
router ospf 100
 network 10.120.0.0 0.0.255.255 area 120

Distribution Config:
router ospf 100
 area 120 stub
 area 120 range 10.120.0.0 255.255.0.0 cost 10
 network 10.120.0.0 0.0.255.255 area 120
 network 10.122.0.0 0.0.255.255 area 0
Backbone
Area 0
Area 120
A Totally Stubby AreaABR Forwards
Summary Default
OSPF Totally Stubby AreaUse This for Stable—Scalable Internetworks
Distribution Config:
router ospf 100
area 120 stub no-summary
area 120 range 10.120.0.0 255.255.0.0 cost 10
network 10.120.0.0 0.0.255.255 area 120
network 10.122.0.0 0.0.255.255 area 0
Access Config:
router ospf 100
network 10.120.0.0 0.0.255.255 area 120
Minimize the Number of LSAs and the Need for Any External Area SPF Calculations
Summarization, Distribution to Core: Reduce SPF and LSA Load in Area 0
ABRs forward summary 10.120.0.0/16
Access Config:
router ospf 100
network 10.120.0.0 0.0.255.255 area 120
Distribution Config:
router ospf 100
area 120 stub no-summary
area 120 range 10.120.0.0 255.255.0.0 cost 10
network 10.120.0.0 0.0.255.255 area 120
network 10.122.0.0 0.0.255.255 area 0
Minimize the Number of LSAs and the Need for Any SPF Recalculations at the Core
OSPF Design Considerations: What Area Should the Distribution Link Be In?
• Two aspects of OSPF behavior can impact convergence
• OSPF ABRs ignore LSAs generated by other ABRs learned through non-backbone areas when calculating least-cost paths
• In a stub area environment, the ABR will generate a default route when any type of connectivity to the backbone exists
• Ensure loopbacks are not in area 0
• When possible, configure the dist-to-dist link as a trunk using two subnets, one in area 0 and one in the stub area
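A minimal sketch of the dist-to-dist trunk carrying the two subnets described above (the interface numbers, VLANs, and addresses are assumed for illustration):

```
! Illustrative only -- interfaces, VLANs, and addresses are assumptions
interface TenGigabitEthernet4/1
 description Dist-to-dist link
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 900,901
!
interface Vlan900
 description Dist-to-dist subnet in area 0
 ip address 10.122.0.21 255.255.255.252
 ip ospf network point-to-point
!
interface Vlan901
 description Dist-to-dist subnet in area 120 (stub)
 ip address 10.120.0.21 255.255.255.252
 ip ospf network point-to-point
!
router ospf 100
 network 10.122.0.20 0.0.0.3 area 0
 network 10.120.0.20 0.0.0.3 area 120
```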
OSPF Timer Tuning: High-Speed Campus Convergence
• OSPF by design has a number of throttling mechanisms to prevent the network from thrashing during periods of instability
• Campus environments are candidates to utilize OSPF timer enhancements
• Sub-second hellos*
• Generic IP (interface) dampening mechanism
• Back-off algorithm for LSA generation
• Exponential SPF backoff
• Configurable packet pacing
• Hierarchy, summarization, and totally stubby areas that clearly define the boundary
Reduce the LSA and SPF intervals; reduce the hello interval
* Not recommended with NSF/SSO
Access Config:
interface GigabitEthernet1/1
 dampening
 ip ospf dead-interval minimal hello-multiplier 4
 ip ospf network point-to-point
!
router ospf 100
 timers throttle spf 10 100 5000
 timers throttle lsa all 10 100 5000
 timers lsa arrival 80
Subsecond Hellos: Neighbor Loss Detection with the Physical Link Up
• OSPF hello/dead timers detect neighbor loss in the absence of physical link loss
• Useful where an L2 device separates L3 devices (Layer 2 core designs)
• Fast timers quickly detect neighbor failure
• Not recommended with NSF/SSO
• Interface dampening is recommended with sub-second hello timers
• Use the OSPF point-to-point network type to avoid designated router (DR) negotiation
[Figure: routers A and B connected through an L2 device; a failure with the link still up is detected by OSPF hello processing]
timers throttle spf 10 100 5000
timers throttle lsa all 10 100 5000
timers lsa arrival 80
[Chart: time to restore voice flows in seconds: default 5.68; 10 msec SPF 0.72; 10 msec SPF and LSA 0.24]
OSPF Requires Sub-Second Throttling of LSA & SPF Timers to Speed Convergence
• OSPF has an SPF timer designed to dampen route recalculation
• After a failure, the router waits for the SPF timer to expire before recalculating a new route
• By default, there is a 500ms delay before generating router and network LSAs; the wait is used to collect changes during a convergence event and minimize the number of LSAs sent
• Propagation of a new instance of the LSA is limited at the originator
• Acceptance of a new LSA is limited by the receiver
• Make sure lsa-arrival < lsa-hold
OSPF Design Rules for HA Campus: LSA/SPF Exponential Back-off Throttle Mechanism
timers throttle spf <spf-start> <spf-hold> <spf-max-wait>
timers throttle lsa all <lsa-start> <lsa-hold> <lsa-max-wait>
[Figure: timeline of topology change events and the resulting SPF calculations, with hold intervals growing 100, 200, 400, 800, 1600 msec]
timers throttle spf 10 100 5000
timers throttle lsa all 10 100 5000
Sub-second timers without risk
1. spf-start (initial hold timer) controls how long to wait prior to starting the SPF calculation
2. If a new topology change event is received during the spf-hold interval, the SPF calculation is delayed until the hold interval expires, and the hold interval is temporarily doubled
3. The spf-hold interval can grow until the maximum period spf-max-wait is reached
4. After the expiration of any hold interval, the spf-hold timer is reset
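The interval growth described in steps 1–4 can be sketched as follows. This is an illustrative model of the back-off arithmetic for back-to-back events, not IOS's actual scheduler:

```python
def spf_wait_intervals(spf_start, spf_hold, spf_max_wait, events):
    """Delay (ms) before each SPF run for `events` back-to-back topology
    changes, each arriving while the previous hold interval is running."""
    delays = [spf_start]                 # first event waits spf-start
    hold = spf_hold
    for _ in range(events - 1):
        delays.append(min(hold, spf_max_wait))  # wait out the current hold
        hold = min(hold * 2, spf_max_wait)      # then double it, capped
    return delays

# "timers throttle spf 10 100 5000" under sustained churn:
print(spf_wait_intervals(10, 100, 5000, 8))
# [10, 100, 200, 400, 800, 1600, 3200, 5000]
```

After a quiet period (no events for a full hold interval), the hold timer resets to spf-hold, so an isolated failure is always processed after only spf-start milliseconds.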
OSPF Routed Access Campus Design: Summary
• Fast Convergence:
• Set hello-interval = 250 milliseconds and dead-interval = 1 second to detect soft neighbor failures *
• Tune LSA/SPF timers
• Set carrier-delay = 0, interface debounce “disable”
• Propagate the event:
• Configure all access layer switches as stub routers to limit the LSAs flooded from the distribution layer
• Summarize the routes from the distribution to the core to limit LSA flooding across the campus
• Process the event:
• Summarize and filter routes to minimize calculating new successors for the RIB and FIB
• * Not recommended with NSF/SSO
[Figure: summary route advertised toward the core; default 0.0.0.0, or default plus other routes, advertised toward the access]
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• EIGRP Design to Route to the Access Layer
• OSPF Design to Route to the Access Layer
• Other Design Considerations
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
Redundant Supervisors with L3: Non-Stop Forwarding with Stateful Switchover (NSF/SSO)
[Figure: active and standby supervisors; the configuration, IOS CEF tables, and hardware FIB/adjacency tables are synchronized from active to standby over the control path; the routing protocol process, RIB, and ARP table run on the active supervisor's RP CPU]
Design with Redundant Supervisors for NSF/SSO: Status of the Supervisor Uplinks
• Don’t use BFD with NSF
• Cisco Catalyst 4500: supervisor uplink ports are active and forward traffic as long as the supervisor is fully inserted
• Uplink ports do not go down when a supervisor is reset. There are restrictions on which ports can be active simultaneously in redundant systems
• Cisco Catalyst 6500: both the active supervisor and the standby supervisor uplink ports are active as long as the supervisors are up and running
• Uplink ports go down when the supervisor is reset
• Catalyst 6500 PFC3: all ports are active
• Catalyst 4500 Supervisor II+, Supervisor IV: 2 x GigE ports are active
• Catalyst 4500 Supervisor II+10GE: 2 x 10GE and 4 x GigE ports are active
An NSF/SSO switchover also modifies topology
Redundant Supervisors with L3: Non-Stop Routing (NSR) for OSPF
Stateful redundancy mechanism for intra-chassis route processor (RP) failover
NSR, unlike NSF with SSO:
– Allows the routing process on the active RP to synchronize all necessary data and states with the routing protocol process on the standby RP
– Useful when a peer doesn't support NSF/SSO, since it doesn't rely on neighbor capability
– Standards are not necessary, as NSR does NOT require additional communication with protocol peers
• SUP7E IOS XE 3.5.0E, SUP8E IOS XE 3.6.1E
• SUP720/2T: 15.1(1)SY
No route flaps during recovery; no database rebuild required
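Enabling NSR for OSPF is a one-line addition under the routing process; a minimal sketch (the process ID is assumed):

```
router ospf 100
 nsr
```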
[Figure: redundant route processors, active and standby, with line cards; the new active takes over the old active's state, avoiding rebuilding of adjacencies, routing exchange, and forwarding information]
[Figure: a StackWise stack (S1, S2, S3 with a master) at the access layer acting as a single logical switch, running EIGRP/OSPF to the distribution]
StackWise at the Access Layer
• Recommended Design:
• Configure priority for master and its backup for deterministic failures
• Avoid using master as uplink to reduce uplink related losses
• Use “stack-mac persistent timer 0” to retain the stack MAC address after a master failover and avoid gratuitous ARP changes, for:
• Best convergence
• Where GARP processing is disabled in the network, e.g. Security
• Where network devices/host do not support GARP, e.g. Phones
• Upstream traffic is not interrupted by master failure
• Downstream traffic is interrupted due to routing protocol restart and adjacency reset
• Run 12.2(37)SE or higher for NSF support
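The recommendations above can be sketched as follows (the stack member numbers and priority values are illustrative):

```
! Highest priority wins the master election; make the intended master
! and its backup deterministic, and keep the master off the uplinks
switch 1 priority 15
switch 2 priority 14
! Keep the stack MAC address indefinitely after a master failover
stack-mac persistent timer 0
```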
Routed Access Does Not Require a Switch Management VLAN
• In the L2 design it was considered a best practice to define a unique VLAN for network management
• In the routed access model, the best approach is to configure a loopback interface
• The /32 address should belong to the summarized route advertised from the distribution block
• The loopback interface should be configured as passive for the IGP
• ACLs should be used as required to ensure secure network management
interface Loopback0
description Dedicated Switch Management
ip address 10.120.254.1 255.255.255.255
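The passive-interface and ACL recommendations above can be sketched as additions to this snippet (the OSPF process ID, NMS subnet, and ACL number are assumptions):

```
router ospf 100
 passive-interface Loopback0
!
! Restrict management access to an assumed NMS subnet
access-list 10 permit 10.122.10.0 0.0.0.255
line vty 0 4
 access-class 10 in
```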
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Virtual Switch
• A Virtual Switching System consists of two Catalyst 6500s defined as members of the same virtual switch domain, running a VSL (Virtual Switch Link) between them
• Single Control Plane with Dual Active Forwarding Planes
• Extends NSF/SSO infrastructure to Two Switches
Catalyst 6500 Virtual Switching System (VSS)
[Figure: Switch 1 + Switch 2 = Virtual Switch Domain, joined by the Virtual Switch Link (VSL) into a Virtual Switch System]
• Multi-chassis Etherchannel (MEC) replaces spanning tree to provide link redundancy
• MEC allows the physical members of the Etherchannel bundle to be connected to two separate physical switches
• MEC links on both switches are managed by PAgP or LACP running on the Master Switch via internal control messages
• PAgP or LACP packets for all links in the MEC bundle are processed by the active supervisor
Multi-Chassis Etherchannel
Virtual Switch System: Impact on the Campus Topology
Physical network topology does not change
Still have redundant chassis
Still have redundant links
Logical topology is simplified as we now have a single control plane
Allows the design to replace the traditional topology control plane with Multi-chassis EtherChannel (MEC)
No reliance on an IGP protocol to provide link redundancy
Convergence and load balancing are based on Etherchannel
BRKCRS-3035 – Advanced Enterprise Campus Design: Virtual Switching System (VSS)
Leveraging EtherChannel: Time to Recovery
1. Link failure detection
2. Removal of the Portchannel entry in the software
3. Update of the hardware Portchannel indices
4. Notify the spanning tree and/or routing protocol processes of the path cost change
Layer 2 forwarding table:
VLAN | MAC | Destination Index
10 | AA | Portchannel 1
11 | BB | G5/1
A load-balancing hash selects the destination port among the members of Portchannel 1: G3/1, G3/2, G4/1, G4/2
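A toy model of the hash-based member selection described above (the hash function here is illustrative, not the Catalyst hardware hash):

```python
import zlib

def pick_member(src_ip, dst_ip, members):
    """Deterministically map a flow onto one physical member link of the
    bundle by hashing its addresses (illustrative stand-in for the
    hardware load-balancing hash)."""
    h = zlib.crc32(f"{src_ip}-{dst_ip}".encode())
    return members[h % len(members)]

members = ["G3/1", "G3/2", "G4/1", "G4/2"]
link = pick_member("10.120.2.10", "10.122.0.5", members)

# If one member fails, only the member list (the hardware Portchannel
# indices) changes; flows simply re-hash over the survivors with no
# routing protocol reconvergence.
survivors = [m for m in members if m != link]
relinked = pick_member("10.120.2.10", "10.122.0.5", survivors)
```

Because the hash is deterministic per flow, packets of one conversation always take the same member link, preserving packet order.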
VSS and Routed Access Design: Link Down Convergence Without VSS
Downstream traffic recovery is dependent upon the Interior Gateway Protocol reroute to the peer distribution switch
‒ Use Stub on the access devices, and proper summarization from distribution
‒ Tune IGP ... etc.
Upstream traffic recovery is dependent upon updates to the Access Switch’s Forwarding Information Base removing the adjacency for the lost link (ECMP)
Downstream: IGP reroute; Upstream: CEF ECMP
L3 ECMP
VSS and Routed Access Design: Link Down Convergence with VSS MEC
Access layer switch has one neighbor
Distribution switch has neighbor count reduced by half
Upstream and Downstream traffic convergence now is an Etherchannel link event
‒ No IGP reconvergence event
‒ No impact from the number of routes/VLANs
Fast IGP Timers not needed nor recommended (only 1 IGP peer)
Summarization rules still recommended
Achieves sub-second failover with no L2 loop in the topology
[Figure: downstream and upstream traffic both use the Multi-chassis EtherChannel (MC-EC)]
VSS and Routed Access Design: Enable MEC Links in the L3 Core for Best Multicast
Use MEC uplinks from the access in routed access environments with multicast traffic
VSS MEC local switch link preference avoids egress replication across the VSL link during normal conditions
In the event of link failure multicast traffic will pass across VSL link and will experience local switch replication
With a large-scale mroute and (S,G) topology the convergence may vary, but it is still much better than an ECMP-based topology
L3 MEC Uplinks
[Figure: MEC uplinks carrying PIM joins; SW1 ACTIVE, SW2 HOT_STANDBY]
BRKCRS-3035 – Advanced Enterprise Campus Design: Virtual Switching System (VSS)
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Routed Access Layer and IPv6
• IPv4 and IPv6 Dual Stack is the recommended deployment model
• In the RA model, the first-hop switch must be capable of routing IPv6
EIGRP-Stub and OSPFv3 Routed Access
• Catalyst IPv6 Routing
Cisco Catalyst 6500/6800 Series Switches
SUP32, SUP720, SUP2T (Recommended for IPv6)
Cisco Catalyst 4500 Series Switches
SUP6-E and higher
Cisco Catalyst 3x50 Series
Cisco Catalyst 3650 Series
Cisco Catalyst 2960 XR IP Lite
Support for Dual Stack Deployment
[Figure: dual-stack deployment; v6 enabled at the core, distribution, and access layers and at the DC aggregation and access layers, with IPv6/IPv4 dual-stack hosts and a dual-stack server]
ipv6 unicast-routing
ipv6 cef
!
[...]
interface Vlan2
description Data VLAN for Access
ipv6 address 2001:DB8:CAFE:2::CAC1:3750/64
ipv6 nd prefix 2001:DB8:CAFE:2::/64 no-advertise
ipv6 nd managed-config-flag
ipv6 nd other-config-flag
ipv6 dhcp relay destination 2001:DB8:CAFE:10::2
ipv6 ospf 1 area 2
ipv6 cef
!
[...]
ipv6 router ospf 1
router-id 10.120.2.1
log-adjacency-changes
auto-cost reference-bandwidth 10000
area 2 stub no-summary
passive-interface Vlan2
timers spf 1 5
Routed Access Layer and IPv6: Dual Stack Deployment Sample
For Your Reference
!
interface GigabitEthernet1/0/25
description To 6k-dist-1
ipv6 address 2001:DB8:CAFE:1100::CAC1:3750/64
no ipv6 redirects
ipv6 nd suppress-ra
ipv6 ospf network point-to-point
ipv6 ospf 1 area 2
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 3
ipv6 cef
!
interface GigabitEthernet1/0/26
description To 6k-dist-2
ipv6 address 2001:DB8:CAFE:1101::CAC1:3750/64
no ipv6 redirects
ipv6 nd suppress-ra
ipv6 ospf network point-to-point
ipv6 ospf 1 area 2
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 3
ipv6 cef
Routed Access Layer and IPv6: Dual Stack Deployment Sample
For Your Reference
Access Layer: Dual Stack (Routed Access)
ipv6 unicast-routing
ipv6 cef
!
interface GigabitEthernet1/0/25
description To 6k-dist-1
ipv6 address 2001:DB8:CAFE:1100::CAC1:3750/64
no ipv6 redirects
ipv6 nd suppress-ra
ipv6 ospf network point-to-point
ipv6 ospf 1 area 2
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 3
ipv6 cef
!
interface GigabitEthernet1/0/26
description To 6k-dist-2
ipv6 address 2001:DB8:CAFE:1101::CAC1:3750/64
no ipv6 redirects
ipv6 nd suppress-ra
ipv6 ospf network point-to-point
ipv6 ospf 1 area 2
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 3
ipv6 cef
interface Vlan2
description Data VLAN for Access
ipv6 address 2001:DB8:CAFE:2::CAC1:3750/64
ipv6 ospf 1 area 2
ipv6 cef
!
ipv6 router ospf 1
router-id 10.120.2.1
log-adjacency-changes
auto-cost reference-bandwidth 10000
area 2 stub no-summary
passive-interface Vlan2
timers throttle spf spf-start spf-hold spf-max-wait
timers throttle lsa start-interval hold-interval max- interval
timers lsa arrival milliseconds
timers pacing flood milliseconds
For Your Reference
BRKCRS-2301 – Enterprise IPv6 Deployments
Distribution Layer: Dual Stack (Routed Access)
ipv6 unicast-routing
ipv6 multicast-routing
ipv6 cef distributed
!
interface GigabitEthernet3/1
description To 3750-acc-1
ipv6 address 2001:DB8:CAFE:1100::A001:1010/64
no ipv6 redirects
ipv6 nd suppress-ra
ipv6 ospf network point-to-point
ipv6 ospf 1 area 2
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 3
ipv6 cef
!
interface GigabitEthernet1/2
description To 3750-acc-2
ipv6 address 2001:DB8:CAFE:1103::A001:1010/64
no ipv6 redirects
ipv6 nd suppress-ra
ipv6 ospf network point-to-point
ipv6 ospf 1 area 2
ipv6 ospf hello-interval 1
ipv6 ospf dead-interval 3
ipv6 cef
ipv6 router ospf 1
auto-cost reference-bandwidth 10000
router-id 10.122.0.25
log-adjacency-changes
area 2 stub no-summary
passive-interface Vlan2
area 2 range 2001:DB8:CAFE:xxxx::/xx
timers throttle spf spf-start spf-hold spf-max-wait
timers throttle lsa start-interval hold-interval max-interval
timers lsa arrival milliseconds
timers pacing flood milliseconds
For Your Reference
BRKCRS-2301 – Enterprise IPv6 Deployments
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Roaming Within an SPG (Campus)
• Now, let’s examine a few more types of user roams
• In this example, the user roams within their Switch Peer Group; since SPGs are typically formed around floors or other geographically-close areas, this is the most likely and most common type of roam
The user may or may not have roamed across an L3 boundary (depends on the wired setup); however, users are always* taken back to their PoP for policy application
Roaming within an SPG (L3 behaviour and default L2 behaviour)
Note: the traffic in this most common type of roam did not have to be transported back to, or via, the MC (controller) servicing the Switch Peer Group; traffic stayed local to the SPG only (i.e. under the distribution layer in this example, not back through the core).
This is an important consideration for Switch Peer Group, traffic flow, and Controller scalability.
Cisco Converged Access Deployment
[Figure: MC with MAs; the user’s PoP and PoA; a very common roaming case]
Converged Access – Traffic Flow and Roaming – Larger Site, L2/L3 Roam (within SPG)
As Noted:
• When a user roams in an L2 environment, an optional setting allows for both the user’s PoA and PoP to move.
• The benefits that accrue to a PoP move for an L2 user roam are reduced end-to-end latency for the user (fewer traffic hops), as well as a reduction of state held within the network (as the user needs to be kept track of only at the roamed-to switch).
• The drawback to a PoP move for an L2 user roam is likely increased roam times, as user policy may be retrieved from the AAA server and applied at the roamed-to switch. The combination of these two elements may introduce a level of non-deterministic behaviour into the roam times if this option is used.
• Default Behaviour:
• L2 Roams Disabled: by default, all roams (whether across an L3 boundary or not) carry the user’s traffic from their roamed-to switch (where the user’s PoA has moved to) back to the original switch the user associated through (where the user’s PoP remains). In this case, the user’s policy application point remains fixed, and roam times are more deterministic.
• However, if desired, this behaviour can be modified via a setting to allow for an L2 roam, assuming the network topology involved allows for the appropriate Layer 2 extension across the network.
Policy moves with the user’s move – follows the PoP
Cisco Converged Access Deployment
Converged Access – Traffic Flow and Roaming – L2 Roam (impact of policy moves)
BRKCRS-2889 – Intermediate – Converged Access System Architecture
Analyzing the Impact on Advanced Technologies
• Unified Communications Deployments work the same way. You still need to provision a voice vlan/subnet per wiring closet switch
• TrustSec (802.1x) solutions work the same: user vlan assignment still possible, as well as per user dACL (checkout BRKSEC-2005)
• Wireless LAN works seamlessly as well, since LWAPP runs over UDP and hence at L3
• We will take a closer look at:
• Network Virtualization
Network Virtualization: Functional Architecture
Access control techniques remain the same with a Routed Access Model
Path Isolation techniques remain the same, but there are provisioning implications by running routing at the access layer
[Figure: Access Control | Path Isolation (Ethernet VRFs, GRE VRFs, MPLS VPNs) | Services Edge, spanning WAN/MAN/Campus, Branch/Campus, and Data Center/Internet Edge/Campus]
BRKCRS-2033 – Deploying a Virtualized Campus Network Infrastructure
Path Isolation: Functional Components
Device virtualization
‒ Control plane virtualization
‒ Data plane virtualization
‒ Services virtualization
Data path virtualization
‒ Hop-by-hop: VRF-Lite end-to-end
‒ Multi-hop: VRF-Lite + GRE, MPLS-VPN
VRF (Virtual Routing and Forwarding): per VRF, a virtual routing table and a virtual forwarding table
Network Virtualization and Routed Access: Path Isolation Issues—VRFs to the Edge
Define VRFs on the access layer switches
One VRF dedicated to each virtual network (Red, Green, etc.)
Map device VLANs to corresponding VRF
Provisioning is more challenging, because multiple routing processes and logical interfaces are required.
The chosen path isolation technique must be deployed from the access layer devices
EVNs
VRF-lite Ethernet
VRF-Lite GRE
MPLS L3 VPNs
Virtualizing at the Access Layer: VLANs-to-VRF Mapping Configuration
ip vrf Red
!
ip vrf Green
!
vlan 21
name Red_access_switch_1
!
vlan 22
name Green_access_switch_1
!
interface Vlan21
description Red on Access Switch 1
ip vrf forwarding Red
ip address 12.137.21.1 255.255.255.0
!
interface Vlan22
description Green on Access Switch 1
ip vrf forwarding Green
ip address 11.137.22.1 255.255.255.0
Defining the VRFs; defining the VLANs (L2 and SVI) and mapping them to the VRFs
VRF-Lite – Routing Protocol Example
OSPF Example:
router ospf 1
network 10.0.0.0 0.255.255.255 area 0
passive-interface default
no passive-interface vlan 2000
!
router ospf 100 vrf green
network 11.0.0.0 0.255.255.255 area 0
no passive-interface vlan 2001
!
router ospf 200 vrf red
network 12.0.0.0 0.255.255.255 area 0
no passive-interface vlan 2002
router eigrp 100
network 10.0.0.0 0.255.255.255
passive-interface default
no passive-interface vlan 2000
no auto-summary
!
address-family ipv4 vrf green autonomous-system 100
network 11.0.0.0 0.255.255.255
no auto-summary
exit-address-family
!
address-family ipv4 vrf red autonomous-system 100
network 12.0.0.0 0.255.255.255
no auto-summary
exit-address-family
EIGRP Example
Defining the routing protocol within the VRFs
Network Virtualization and Routed Access: Path Isolation Issues—VRFs to the Edge (Cont.)
Catalyst 6500 supports all three path isolation techniques:
‒ 802.1Q Ethernet VRF-Lite
‒ GRE VRF-Lite
‒ MPLS VPNs
Catalyst 4500s support EVN and 802.1Q VRF-Lite
Catalyst 3000s only support 802.1Q Ethernet VRF-Lite
Convergence times increase
‒ ~800ms for 9 VRFs + Global
‒ Increased load from multiple routing processes and logical interfaces
Operational impact of managing multiple logical networks
Network Virtualization--Path Isolation Design Guide
http://www.cisco.com/en/US/docs/solutions/Enterprise/Network_Virtualization/PathIsol.html#wp277205
Enterprise Campus Design: Routed Access
• Introduction
• Cisco Campus Architecture Review
• Campus Routing Foundation and Best Practices
• Building a Routed Access Campus Design
• Routed Access Design and VSS
• Routed Access Design for IPv6
• Impact of Routed Access Design for Advanced Technologies
• Summary
Routed Access Campus Design: End-to-End Routing for Fast Convergence and Maximum Reliability
[Figure: STP-based redundant topology with STP-blocked links vs. routed access redundant topology with all links forwarding]
Summary
• Layer 2 designs remain valid
• Routed Access Design:
• Simplified control plane: no dependence on STP, HSRP, etc.
• Increased capacity: flow-based load balancing
• High availability: 200 msec or better recovery
• Simplified multicast
• No L2 loops: fails closed, no flooding
• Easy troubleshooting
• Flexibility to provide the right implementation for each requirement
Complete Your Online Session Evaluation
Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online
• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions
Thank you
BRKCRS-3036: Advanced Enterprise Campus Design
• Johnny Tsao
• Email: [email protected]
• 408.894.4060
Routed Access
BRKCRS-3036 Enterprise Campus Design: Routed Access
Recommended Reading