TRANSCRIPT
JUNIPER DATA CENTER EDGE CONNECTIVITY SOLUTIONS
Michael Pergament, Data Center Consultant EMEA (JNCIE2)
2 Copyright © 2012 Juniper Networks, Inc. www.juniper.net
AGENDA
Reasons to focus on Data Center Interconnect
MX as Data Center Interconnect
Connectivity options towards DC Interconnect
Providing L2 services across multiple DC locations with VPLS
EVPN Overview
Network support for Seamless VM Mobility
REASONS TO FOCUS ON DC-INTERCONNECT, REASON #1
Data-center consolidation and distribution: scalability, high availability, compliance, multi-tenancy
REASONS TO FOCUS ON DC-INTERCONNECT, REASON #2
Geo-clustering and disaster recovery need:
• Scalable L2 stretch
• Traffic engineering and resiliency
• Low latency and jitter
• Fault containment (no STP)
REASONS TO FOCUS ON DC-INTERCONNECT, REASON #3
L2 stretch and VM mobility enable:
• Server maintenance with no disruption to VMs
• Resource optimization
• DC disaster recovery and storage replication
• Hybrid cloud services, a strong SP trend
JUNIPER'S VISION: COMMON DATA CENTER MODEL
[Diagram: production data centers A and B (MX edge, GbE/10GbE servers, pooled NAS storage, EX/QFX access) interconnect with customer IT data centers A and B (SRX and MX, FC storage, pooled iSCSI/NAS storage) and with a hybrid cloud serving public cloud and SMB users. SRX provides NAT, FW, LB, and IPsec services; hybrid-cloud VPNs terminate on SRX; Junos Space manages each site.]
DATA CENTER REFERENCE ARCHITECTURE
• Best-of-breed platforms
• Single Junos
• Optimized L2, L3, L4-7 services delivery
• Junos and Junos Space SDK for 3rd-party integration
[Diagram: MX provides data center connectivity (L3VPN, E-VPN over IP); SRX in HA delivers the L4-7 services complex; EX/QFX fabric provides any-port-to-any-port L2/L3 connectivity for servers, virtual machines, pooled Ethernet storage (iSCSI/NAS), and FC storage; Junos Space provides orchestration.]
MX PROVIDING DC LAN & WAN CONNECTIVITY
LAN: MX supports an extensive set of LAN features; high scale, multi-tenancy, resiliency, deployment flexibility; inline services and stateful services.
WAN / CORE: MX provides market-leading WAN features.
Proven platform: over 24,000 chassis shipped, over $3B in revenue, over 2,500 customers.
[Diagram: MX deployed at the edge and as a collapsed core, facing both the LAN and the WAN/core.]
MX L2 INSTANCE OVERVIEW
Bridge-Domain: the L2 flooding domain
• Typically one BD per cloud tenant
• Assigned to a tenant WAN instance
• BD-level VLAN tag preserved over the WAN
• Automatic port-level VLAN manipulation; the BD VLAN-ID identifies the tenant
• 4K VLANs / one L2 learning domain per BD
• Extensive VLAN manipulations: swap, pop, push, pop-swap, swap-push, swap-swap
Multi-tenancy
• Interface tags are locally significant
• IRB per tenant for L3 connectivity
• WAN instances (VPLS, L3VPN) stitched per tenant
[Diagram: bridge-domain.0 (VLAN-ID 1001) aggregates IFL 0-3 on VLANs 100, 200, 300, and 400, with IRB.0 for L3 and VPLS.0 / L3VPN.0 as WAN instances.]
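The bridge-domain model above can be sketched in Junos configuration. This is a minimal illustration only: the names and numbers (bd-tenant1, VLAN 1001, ge-1/0/0) are hypothetical, and exact syntax varies by platform and Junos release.

```
bridge-domains {
    bd-tenant1 {
        vlan-id 1001;                 # BD-level VLAN tag identifies the tenant
        interface ge-1/0/0.100;       # IFL arriving on local VLAN 100
        interface ge-1/0/1.200;       # IFL arriving on local VLAN 200
        routing-interface irb.0;      # per-tenant IRB for L3 connectivity
    }
}
interfaces {
    ge-1/0/0 {
        flexible-vlan-tagging;
        encapsulation flexible-ethernet-services;
        unit 100 {
            encapsulation vlan-bridge;
            vlan-id 100;              # locally significant tag, normalized
        }                             #   to the BD VLAN-ID at the port
    }
}
```

Because the bridge-domain carries its own vlan-id, the locally significant interface tags are rewritten automatically at the port, which is the "automatic port-level VLAN manipulation" described above.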
VIRTUAL-SWITCH OVERVIEW
Virtual-Switch = L2 VRF: each L2 domain is independent of the others.
Each virtual-switch supports:
• Multiple bridge-domains
• A separate xSTP instance
• A separate 4K VLAN-ID space
• A separate VPLS instance
• Combining LAN and WAN switching in a single place
High scale: BD and virtual-switch combined; 8K virtual-switches supported.
[Diagram: Virtual Switch 0 (BD 0: VLANs 100-200 on IFL 1/2; BD 1: VLAN 300 on IFL 5) and Virtual Switch 1 (BD 0: VLANs 101-150 on IFL 10; BD 1: outer tag 400, inner tag 1001 on IFL 11/12), each an independent L2 domain with its own STP instance, 4K VLAN space, and VPLS instance.]
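A virtual-switch instance can be sketched as follows. The instance and interface names are hypothetical, and the xSTP stanza in particular varies by Junos release; treat this as an outline, not a definitive configuration.

```
routing-instances {
    VS0 {
        instance-type virtual-switch;   # independent L2 domain: own STP
        bridge-domains {                #   instance and own 4K VLAN space
            bd0 {
                vlan-id-list 100-200;   # a VLAN range inside this VS only
            }
            bd1 {
                vlan-id 300;
            }
        }
        protocols {
            rstp {
                interface ge-1/0/0;     # per-virtual-switch spanning tree
            }
        }
    }
}
```

A second instance (e.g. VS1) can reuse the same VLAN IDs without conflict, since each virtual-switch has its own 4K VLAN-ID space.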
MX HAS STRONG LAN FEATURES
TAG: single and double tags; extensive manipulation capabilities (push, pop, swap, multiple operations); local/global significance, label standardization
LAG: aggregated Ethernet interface support; LAG with 16 or 64 members; MC-LAG
IRB: Integrated Routing and Bridging on a single interface; IFL-level resolution
Scale: high-scale MAC table (1M MAC addresses, 1M ARPs); 128K IFLs (64-bit RE, Trio chipset); high-scale L2 filters; user-controlled MAC learning limits
Mirror: Layer 2 port mirroring; next-hop-group capable (L2 and L3 next hops)
Snoop: IGMP and PIM snooping; snooping with MC-LAG; further flooding optimization via Proxy ARP and DHCP relay
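Two of the scale and snooping knobs above can be sketched per bridge-domain. The values are illustrative, and the exact statement hierarchy can differ between Junos releases:

```
bridge-domains {
    bd-tenant1 {
        vlan-id 1001;
        bridge-options {
            mac-table-size {
                65536;                 # cap the MAC table for this BD
            }
            interface-mac-limit {
                1024;                  # user-controlled per-IFL learning limit
            }
        }
        protocols {
            igmp-snooping;             # constrain multicast flooding in the BD
        }
    }
}
```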
DCI TODAY: VIRTUAL PRIVATE LAN SERVICE (VPLS)
[Diagram: two data centers, each with GbE/10GbE servers on EX4200/QFX access switches and an SRX5800 services complex (NAT, FW, LB, IPsec), connect through MX Series edge routers to remote data centers over VPLS running on MPLS or IP.]
VPLS EMULATES AN ETHERNET SWITCH
Common characteristics:
• Forwarding of Ethernet frames
• Forwarding of unicast frames with an unknown MAC address
• Replication of broadcast and multicast frames
• Loop prevention
• Dynamic learning of MAC addresses
[Diagram: CEs at DC1-DC4 attach to PEs, which are interconnected across a P-router core.]
VPLS CHARACTERISTICS
Virtual Private LAN Service (VPLS) provides VLAN extension over a shared IP/MPLS network.
• Full mesh: any-to-any connectivity regardless of physical path
• VLAN separation: separate VPLS instances per VLAN allow network-wide segmentation at very large scale
• Provisioning: new-site auto-discovery, RSVP automatic mesh
• Multicast, broadcast, and flooding: point-to-multipoint LSP capabilities
• Availability: the underlying MPLS offers ECMP and Fast Reroute
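A BGP-signaled VPLS instance with auto-discovery can be sketched as below. The instance name, AS number, and site values are hypothetical; this is an outline of the shape of the configuration, not a deployable example.

```
routing-instances {
    vpls-tenant1 {
        instance-type vpls;
        vlan-id 1001;
        interface ge-1/0/0.100;
        route-distinguisher 65000:1001;
        vrf-target target:65000:1001;    # BGP auto-discovers new sites
        protocols {
            vpls {
                site-range 10;
                no-tunnel-services;      # use label-switched interfaces
                site dc1 {
                    site-identifier 1;
                }
            }
        }
    }
}
```

Adding a new data center then only requires configuring the new site locally; the existing PEs discover it through BGP rather than through per-pair provisioning.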
CONNECTIVITY OPTIONS TOWARDS MX
• VPLS multi-homing
• Multi-chassis LAG (MC-LAG)
• Standard LAG to an MX Virtual Chassis (VC)
[Diagram: in each option, a QFX fabric and SRX services complex (NAT, FW, LB, IPsec) attach to a pair of MX Series routers: via independent uplinks, via an MC-LAG, or via a LAG to an MX VC.]
OPTION 1: VPLS MULTI-HOMING
• QFabric has one uplink to each MX (each can be a LAG)
• The MX allows traffic forwarding for a particular VLAN on only one uplink
• Loop prevention is implemented in BGP on the MXs
• Traffic load-balancing: different VLANs can have different active uplinks
[Diagram: each MX is the active uplink and VRRP master for a different VLAN, so VLAN X is active on one uplink and VLAN Y on the other.]
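The BGP-based loop prevention in option 1 maps to VPLS multi-homing configuration along these lines. The site on the backup MX would carry the same site-identifier with a lower preference; names and numbers are illustrative.

```
routing-instances {
    vpls-tenant1 {
        instance-type vpls;
        route-distinguisher 65000:1001;
        vrf-target target:65000:1001;
        protocols {
            vpls {
                site-range 10;
                site dc1 {
                    site-identifier 1;        # same ID on both MXs: one
                    multi-homing;             #   logical multi-homed site
                    site-preference primary;  # BGP elects this PE active;
                }                             #   the other MX blocks the VLAN
            }
        }
    }
}
```

Per-VLAN load balancing follows from putting different VLANs in different VPLS instances with the primary preference on alternating MXs.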
OPTION 2: MULTI-CHASSIS LAG, A/P WITH VPLS
MC-LAG: Multi-Chassis Link Aggregation Group
• Allows a LAG interface to be established across multiple MX chassis: one logical interface across two chassis
• Provides node-level redundancy, multi-homing support, and a loop-free Layer 2 network without running Spanning Tree Protocol (STP)
• Uses the Inter-Chassis Control Protocol (ICCP) to exchange control information between the two MC-LAG nodes
• The client device terminates its physical links in a link aggregation group (LAG) and is not aware of the MC-LAG
[Diagram: QFX and SRX attach via MC-LAG 1 to two MX Series routers linked by ICCP.]
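An MC-LAG pair can be sketched as below; the mirror-image configuration goes on the peer MX. The addresses, IDs, and interface names are hypothetical, and knob availability varies by release.

```
protocols {
    iccp {
        local-ip-addr 10.255.0.1;
        peer 10.255.0.2 {                  # the other MC-LAG MX
            redundancy-group-id-list 1;
            liveness-detection {
                minimum-interval 1000;
            }
        }
    }
}
interfaces {
    ae0 {
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:00:00:00:00:01;  # identical on both chassis so
                admin-key 1;                  #   the client sees a single LAG
            }
            mc-ae {
                mc-ae-id 1;
                redundancy-group 1;
                chassis-id 0;                 # 1 on the peer MX
                mode active-standby;          # A/P, per the slide title
                status-control active;        # standby on the peer MX
            }
        }
    }
}
```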
OPTION 3: MX VIRTUAL CHASSIS, A/A LAG
Benefits of a Virtual Chassis:
• Performance and scale: ports and services scale beyond one chassis
• Easy to manage: single image, single configuration, one management IP address
• Single control plane: single protocol peering, single RT/FT
[Diagram: QFX and SRX attach via a standard active/active LAG to an MX Virtual Chassis.]
MPLS CONNECTIVITY: EVPN
Ethernet VPN (EVPN) is a new standards-based protocol to interconnect L2 domains over MPLS:
• Enhances industry-standard VPLS further
• Multi-vendor, open initiative; non-proprietary
• MPLS investment protection: builds easily on VPLS and L2/L3VPN environments
Enhancements delivered by EVPN:
• Active/active multi-homing
• Extended control-plane (MAC address) scaling
• Faster convergence from edge failures using local repair
• Flooding AND control-plane learning
• Increased granularity of MAC address reachability distribution: better support for host mobility and policy-based decisions
[Diagram: as a VM moves from DC-2 to DC-1, MAC updates propagate across the MPLS cloud between DC-1, DC-2, and DC-3, and traffic is load-balanced.]
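For orientation, an EVPN instance in Junos ends up looking roughly like the sketch below. Note that shipping EVPN configuration postdates this talk (EVPN was still an IETF draft in 2012), so treat every statement here as an assumption about later releases; names and numbers are hypothetical.

```
routing-instances {
    EVPN-1 {
        instance-type evpn;
        vlan-id 100;
        interface ge-1/0/0.100;
        route-distinguisher 65000:100;
        vrf-target target:65000:100;
        protocols {
            evpn;                       # locally learned MACs advertised in BGP
        }
    }
}
protocols {
    bgp {
        group evpn-peers {
            type internal;
            family evpn signaling;      # EVPN BGP address family
            neighbor 10.255.0.2;
        }
    }
}
```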
EVPN TERMINOLOGY
MES : MPLS Edge Switch
CE: Customer Edge Interface
ES: Ethernet Segment
ESI: Ethernet Segment Identifier (e.g. LAG Identifier)
EFI: EVPN Forwarding Instance
An E-VPN comprises CEs connected to MESes (PEs) that form the edge of the MPLS infrastructure. A CE may be a host, a router, or a switch.
EVPN REFERENCE MODEL
• MESes are connected by an IP/MPLS infrastructure
• Transport may be provided by MPLS P2P or P2MP LSPs (for "multicast")
• Transport may also be provided by IP/GRE tunnels
[Diagram: MES 1-4 interconnected through a route reflector (RR); Hosts A1, A3, A4, A5 and B1, plus Ethernet Switch B3, attach over Ethernet segments (ESI 1-5) and VLANs 1-2 to EVPN forwarding instances EFI-A (VPN A) and EFI-B (VPN B).]
EVPN LOCAL MAC ADDRESS LEARNING
• A MES must support local data-plane learning using vanilla Ethernet learning procedures, e.g. when a CE generates a data-plane packet such as an ARP request
• MESes may learn the MAC addresses of hosts in the control plane, using extensions to protocols such as LLDP that run between the MES and the hosts
• MESes may learn the MAC addresses of hosts in the management plane
EVPN REMOTE MAC ADDRESS LEARNING
EVPN introduces the ability for an MES to advertise
locally learned MAC addresses in BGP to other
MESes, using principles borrowed from IP VPNs
EVPN requires an MES to learn the MAC addresses
of CEs connected to other MESes in the control
plane using BGP
Remote MAC addresses are not learned in the data plane
ETHERNET AUTO-DISCOVERY (A-D) ROUTES
EVPN
• DCB auto-discovery through advertisement of Ethernet A-D routes
• Includes the Ethernet Segment Identifier (ESI) to allow multi-homing of the DCS to the DCBs
• Auto-discovery of Ethernet tags (VLANs) on Ethernet segments
[Diagram: a multi-homed DCS advertises its ESI to the DCBs, which distribute Ethernet A-D routes (RD, ESI, Ethernet Tag, Label) via a route reflector.]
KNOWN UNICAST FORWARDING: ACTIVE/ACTIVE LOAD BALANCING (DCS-DCB)
EVPN
• The redundant connection between DCS and DCB appears as a LAG to the DCS (no STP required)
• The DCS connection to the DCB(s) is referenced by the Ethernet Segment Identifier (ESI)
[Diagram: a dual-homed DCS load-balances known unicast traffic across both DCBs on the same ESI.]
BUM (BROADCAST, UNKNOWN UNICAST, MULTICAST) TRAFFIC: DF/BDF ELECTION (CE-PE)
EVPN
• The redundant connection between DCS and DCB appears as a LAG to the DCS (no STP required)
• A Designated Forwarder (DF) is elected (optionally per VLAN) using the Ethernet A-D route
• The other DCB becomes the backup designated forwarder (BDF)
• By default, the DCB with the highest IP address wins
• Split horizon is supported, to ensure that a multicast, broadcast, or unknown-unicast packet sent on one link by a dual-homed DCS isn't sent back on the other link
[Diagram: for the dual-homed DCS, one DCB is the DF and the other the BDF.]
CHALLENGES VM MOBILITY INTRODUCES
• L2 and L3 addresses no longer pinned to a site or interface
• Ingress and egress traffic convergence and optimization
• Learning and information-distribution control
• L2 and L3 interaction for the best user experience
• Fast convergence of network paths as a VM moves
Reference: draft-raggarwa-data-center-mobility
VM DEFAULT GATEWAY SOLUTION: FIRST MECHANISM
• Each VLAN/subnet uses anycast IP and MAC addresses for its default gateway; each VLAN/subnet has its own anycast addresses
• All VMs on a given VLAN/subnet are (auto-)configured with this anycast IP address
• The anycast default gateway IP and MAC address for a given VLAN/subnet must be configured on each MES that the VLAN/subnet could span
• This ensures that any particular MES can always perform IP forwarding on packets a VM sends to the anycast default gateway MAC address
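In Junos terms, the first mechanism amounts to configuring the same IRB address and MAC on every MES spanning the subnet; a minimal sketch, with hypothetical addresses (the MAC shown is from the VRRP range, chosen only for illustration):

```
interfaces {
    irb {
        unit 100 {
            family inet {
                address 10.1.1.1/24;    # same anycast gateway IP on every
            }                           #   MES that the subnet can span
            mac 00:00:5e:00:01:64;      # same virtual gateway MAC everywhere
        }
    }
}
```

A VM that moves between sites then keeps resolving the same gateway IP to the same MAC, and whichever MES is local can route its traffic.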
VM DEFAULT GATEWAY SOLUTION: SECOND MECHANISM
• Eliminates the need to configure the anycast addresses for a given VLAN/subnet on each MES that is part of that VLAN/subnet
• Each MES acting as the default gateway for a given VLAN/subnet propagates, in the E-VPN control plane, an E-VPN route that carries the MES's IP and MAC address
• The BGP "Default Gateway" community indicates that this E-VPN route is for the default gateway
• For a given VLAN/subnet, the distribution scope of this route is the set of MESes spanned by that VLAN/subnet
• Each MES that receives such an E-VPN route:
  • Creates MAC forwarding state that enables it to apply IP forwarding to packets destined to the MAC address carried in the route
  • Replies to ARP requests received from locally connected VMs for the default gateway IP address of the advertising MES
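In later Junos EVPN implementations this behavior maps to a knob along the lines sketched below. This postdates the talk and is an assumption about subsequent releases; the statement name and hierarchy may differ.

```
routing-instances {
    EVPN-1 {
        protocols {
            evpn {
                default-gateway advertise;  # tag the IRB MAC/IP route with the
            }                               #   BGP Default Gateway community
        }
    }
}
```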
MACVPN / L3VPN INTERACTION FOR INGRESS TRAFFIC STEERING
[Diagram: VM 1 moves to DC site 1; a MACVPN update inside the data center triggers an L3VPN update across the backbone, and ingress VPN traffic to VM1 from sites 2 and 3 steers to the new site.]
AVOIDING TRIANGULAR ROUTING FOR THE INTER-DC SCENARIO: NHRP SOLUTION
Next Hop Resolution Protocol (NHRP): RFC 2332 (1998)
• IETF Proposed Standard, implemented by multiple vendors (including Juniper and Cisco)
• Original application: eliminating extra IP hops when routing over ATM/FR and other Non-Broadcast Multiple Access (NBMA) media
• NHRP messages can be carried directly over IP (protocol 54) or over IP/GRE
• NHC (NHRP Client): originates NHRP Requests; receives NHRP Replies and NHRP Purge Requests
• NHS (NHRP Server): receives NHRP Requests; originates NHRP Replies and NHRP Purge Requests
NHRP EXAMPLE: STEP BY STEP
DCBR1/NHS1 advertises into IP routing the 10.1.1/24 route (the subnet of VM-A).
1. Client Site BR/NHC receives from Host-A a packet destined to VM-A (10.1.1.1).
2. Client Site BR/NHC originates an NHRP Request (carrying 10.1.1.1). The NHRP Request is routed, relying on plain IP routing, towards DCBR1/NHS1 (as DCBR1/NHS1 advertises a route for 10.1.1/24).
3. Meanwhile, the packet is forwarded towards VM-A using plain IP routing, first to DCBR1 (as DCBR1 advertises a route for 10.1.1/24), and then (using the E-VPN procedures) from DCBR1 to DCBR2, to ToR4, and ultimately to VM-A.
4. DCBR1/NHS1, relying on the information provided by E-VPN, determines that VM-A is in Data Center 2 and that DCBR2/NHS2 is the authoritative NHS for VM-A. So DCBR1/NHS1 (using E-VPN procedures) forwards the NHRP Request to DCBR2/NHS2.
5. When DCBR2/NHS2 receives the NHRP Request, it sends back to Client Site BR/NHC an NHRP Reply (as DCBR2/NHS2 is the authoritative NHS for VM-A). The Reply carries the IP address of DCBR2/NHS2.
6. Once Client Site BR/NHC receives the NHRP Reply, it installs in its FIB a host route to VM-A. This route requires encapsulation: the destination address in the outer header is the address of the originator of the NHRP Reply, DCBR2/NHS2.
7. From that moment, traffic to VM-A from Client Site BR/NHC goes directly to DCBR2/NHS2.
[Diagram: Host-A (192.9.20.1) at the client site reaches VM-A (10.1.1.1) behind ToR4 in Data Center 2; the resulting FIB entry on Client Site BR/NHC is Dest 10.1.1.1, Next-Hop DCBR2/NHS2, Encap GRE.]
NHRP EXAMPLE: VM MOVES, STEP BY STEP
DCBR1/NHS1 advertises into IP routing the 10.1.1/24 route (subnet X).
0. The initial state is the end state reached on the previous slide.
1. VM-A moves from ToR4 to ToR6 (from Data Center 2 to Data Center 3).
2. DCBR2/NHS2, relying on the information provided by E-VPN, determines that VM-A has moved to another DC. Therefore DCBR2/NHS2 sends an NHRP Purge to Client Site BR/NHC.
3. When Client Site BR/NHC receives the Purge message, it deletes from its FIB the route to 10.1.1.1.
4. Same as steps 2-7 on the previous slide.
[Diagram: Data Centers 1-3 with DCBR1/NHS1 through DCBR3/NHS3; after the move, the FIB entry on Client Site BR/NHC changes from Dest 10.1.1.1, Next-Hop DCBR2/NHS2, Encap GRE to Dest 10.1.1.1, Next-Hop DCBR3/NHS3, Encap GRE.]
SUGGESTED READING
1) EVPN - draft-ietf-l2vpn-evpn
2) Seamless VM Mobility - draft-raggarwa-data-center-mobility