8/3/2019 Multi Chassis Trunking GA IG 326 00
IP NETWORK
Implementing Multi-Chassis Trunking on
Brocade NetIron Platforms
Provides an overview of the Brocade Multi-Chassis Trunking (MCT)
technology and an in-depth discussion of the way it is implemented on
Brocade platforms such as Brocade MLX Series and Brocade NetIron
XMR, CER, and CES; also discusses detailed implementation and best
practices for typical deployment scenarios using MCT to remove
Spanning Tree Protocol (STP) as a loop management technology inlarge-scale Layer 2 Ethernet networks.
IP NETWORK IMPLEMENTATION GUIDE
CONTENTS
Overview
MCT Topologies
MCT Components
Traffic Flow
Single Link Aggregation Group Entity in MCT
MAC Database Update (MDUP) over Cluster Control Protocol
Layer 2 Protocol Support in MCT
  xSTP
  MRP
  Layer 2 xSTP BPDU Tunneling
  Keep-alive VLANs
VRRP/VRRP-E Implementation on MCT
MCT Topology A Implementation: Single-Level MCT on Brocade NetIron XMR/MLX and CER/CES
MCT Topology B Implementation: Multi-tier MCT on the Brocade NetIron XMR/MLX and CER/CES
MCT Topology C Implementation: Brocade NetIron XMR/MLX and CER/CES MCT Integration in a Layer 2 MRP Metro Ring
Conclusion
About Brocade
OVERVIEW
Multi-Chassis Trunking (MCT) is a trunk that originates on a single MCT-unaware server or switch and
terminates at two MCT-aware Brocade switches. MCT allows links that are physically connected to two
Brocade MCT-aware switches to appear to a downstream device as coming from a single device on a
single link aggregation trunk interface. Multi-Chassis Trunking is available on the Brocade MLX Series and
Brocade NetIron XMR, CER, and CES devices. At the time of writing, two peers can be configured
as an MCT system, and the two peers can be the same device type or a mix of any two of the platforms
listed above.
In a data center network environment, Link Aggregation (LAG) trunks are commonly deployed to provide
link-level redundancy and increase the link capacity between network devices. However, LAG trunks do not
provide switch-level redundancy. If the switch to which the LAG trunk is attached fails, the entire LAG trunk
loses network connectivity. With MCT, member links of the LAG are connected to two MCT-aware switches,
which are directly connected using an Inter-Chassis Link (ICL) to enable data flow and control messages
between them. In the MCT deployment scenario, all links are active and can be load shared using a hash
algorithm. If one MCT switch fails, a data path remains through the other switch, with traffic convergence
times in the millisecond range, which dramatically increases network resilience and performance.
MCT TOPOLOGIES
Brocade NetIron MCT topologies include the following:
Single-level MCT on the Brocade NetIron XMR/MLX and CER/CES (topology A). This topology
comprises access switches dual-homed to the Brocade NetIron XMR/MLX or CER/CES with a switch
link aggregation trunk interface with Gigabit Ethernet (GbE) or 10 GbE links. This topology can also
include a host link aggregation trunk interface, with each endpoint host connected by one or more
links to each XMR/MLX or CER/CES.
Figure 1. MCT topology A (access switches and hosts dual-homed over IEEE 802.3ad switch and host link aggregation to a pair of Brocade XMR/MLX or NetIron CER/CES MCT switches joined by an ICL, with a keep-alive VLAN and uplinks to a Layer 3 network; the MCT client edge ports face the clients)
Multi-tier MCT on the Brocade NetIron XMR/MLX and CER/CES (topology B). This topology comprises
a pair of access switches (typically Brocade NetIron CER/CES) in MCT mode with a unique LAG trunk
interface configured between the access MCT switches and a pair of aggregation/core layer switches,
which are also in MCT mode (typically Brocade NetIron XMR/MLX). This arrangement is often called
double-sided MCT.
Figure 2. MCT topology B (a Brocade NetIron CER/CES MCT pair at the access layer, with its own ICL, dual-homed to a Brocade XMR/MLX MCT pair at the aggregation/core layer, which also has its own ICL plus a keep-alive VLAN and uplinks to a Layer 3 network)
Brocade NetIron XMR/MLX and CER/CES MCT integration in a Layer 2 MRP metro ring (topology C).
This topology comprises pairs of Brocade NetIron XMR/MLX and CER/CES switches in MCT mode in a
Layer 2 Metro Ring Protocol (MRP) metro ring topology. The ICL between each pair of MCT switches
is part of the MRP ring and is designed to always be in non-blocking mode. This topology provides
one more layer of aggregation in the MRP ring topology and an active/active path to the dual-homed
servers.
Figure 3. MCT topology C (MCT switch pairs integrated into a Layer 2 metro ring, with servers dual-homed over LAGs to MCT client edge ports and the ring connecting through Brocade MLX aggregation switches to a routed Layer 3 network)
MCT COMPONENTS
To properly understand MCT, consider Figure 4, which shows an example MCT deployment and its
functions and features.
Figure 4. MCT components (cluster ABC: two MCT peer switches with bridge IDs 100 and 200 connected by an ICL; a client switch with bridge ID 300 attached over a LAG to the CCEPs; end stations attached to CEPs)
Figure 4 shows a number of MCT components:
MCT peer switches. A pair of switches connected as peers through the ICL. The LAG interface is spread
across two MCT peer switches and it acts as the single logical endpoint to the MCT client.
MCT client. The MCT client is the device that connects with the MCT peer switches through an IEEE
802.3ad link. It can be a switch or an endpoint server host in the single-level MCT topology, or another
pair of MCT switches in a multi-tier MCT topology.
MCT Inter-Chassis Link (ICL). A single-port or multi-port GbE or 10 GbE interface between the two MCT
peer switches. This link is typically a standard IEEE 802.3ad link aggregation interface. ICL ports
should not be untagged members of any VLAN. The ICL is a tagged Layer 2 link that carries
packets for multiple VLANs. MCT VLANs are the VLANs on which MCT clients operate. On the
Brocade NetIron XMR/MLX, non-MCT VLANs can coexist with MCT VLANs on the ICL. However, on the
Brocade NetIron CES/CER, only MCT VLANs are carried over the ICL.
NOTE: For MCT VLANs, MAC learning is disabled on ICL ports, while for non-MCT VLANs, MAC learning
is enabled on ICL ports.
MCT Cluster Client Edge Port (CCEP). A physical port on one of the MCT peer switches that is a member of the LAG interface to the MCT client. To have a running MCT instance, at least one link aggregation interface is needed, with a member port on each peer switch.
MCT Cluster Edge Port (CEP). A port on the MCT peer switches that is neither a Cluster Client Edge Port nor an ICL port.
MCT Cluster Communication Protocol (CCP). A Brocade proprietary protocol that provides reliable,
point-to-point transport to synchronize information between the peers. CCP comprises two main
components: CCP peer management and CCP client management. CCP peer management deals with
establishing and maintaining the TCP transport session between the peers, while CCP client management
provides event-based, reliable packet transport to CCP peers.
TRAFFIC FLOW
MCT configuration is optimized to ensure that traffic through an MCT-capable system is symmetric.
In Figure 5, for example, traffic from the server destined for the core, or for a server attached to another
access switch, reaches a Brocade MLX (Agg 1 on the left), and the receiving Brocade MLX routes it directly to
the core or switches it directly to the destination access switch without unnecessarily passing it to the peer
Brocade MLX. Similarly, traffic reaching the Brocade MLX (Agg 1 on the right) from the core is forwarded
toward the access switch without traversing the MCT peer Brocade MLX switch. This behavior is achieved
regardless of which Brocade MLX aggregation device is the primary Virtual Router Redundancy
Protocol (VRRP) device for a given VLAN.
Figure 5. MCT traffic flow (two Agg switches joined by an ICL below Core 1 and Core 2, with the Layer 3/Layer 2 boundary at the Agg layer; in both the upstream and downstream directions, traffic stays on the receiving Agg switch without crossing the ICL)
SINGLE LINK AGGREGATION GROUP ENTITY IN MCT
While still operating with two separate control planes, MCT ensures that neighboring client devices
perceive the MCT peers as a single link aggregation interface entity. Link Aggregation Control Protocol
(LACP) is always recommended over a static LAG configuration, so that the negotiation process is in place
before the LAG interface comes up, minimizing the likelihood of misconfiguration. According to the IEEE
specifications, to allow LACP to determine whether a set of links can connect to the same system and to
determine whether those links are compatible from the point of view of aggregation, it is necessary to be
able to establish a globally unique identifier for each system that participates in link aggregation. The
unique identifier in MCT peer switches comprises two components: the MCT system ID and the MCT LAG
group ID. These values are calculated as follows:
MCT base system ID = 0180.c200.0000
MCT system ID = MCT base system ID + cluster ID
The cluster ID is user configurable on each MCT peer and must be unique across the MCT system.
NetIron(config)# cluster TOR 1
Syntax: [no] cluster <cluster-name> <cluster-id>
where <cluster-name> specifies the cluster name (up to 64 characters) and <cluster-id> specifies the cluster ID (1-65535).
MCT base key = 30000
MCT LAG Group ID = MCT base key + client bridge ID
The client bridge ID is also user configurable on each MCT peer and must be unique for each client device
(switch or server).
NetIron(config-cluster-TOR)# client client-1
Syntax: [no] client <client-name>
where <client-name> specifies the client name (up to 64 characters).
NetIron(config-cluster-TOR-client-1)# rbridge-id 100
Syntax: [no] rbridge-id <id>
where <id> specifies the remote bridge ID; possible values are 1-35535.
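As a worked illustration using the values above (cluster ID 1 from the cluster TOR 1 example, and client bridge ID 100), the identifiers advertised in LACP would be:

MCT system ID = 0180.c200.0000 + 1 = 0180.c200.0001
MCT LAG group ID = 30000 + 100 = 30100

Both MCT peers derive the same values, so the MCT client sees a single IEEE 802.3ad partner system across its links to the two switches.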
MAC DATABASE UPDATE (MDUP) OVER CLUSTER CONTROL PROTOCOL
Each MCT cluster switch maintains multiple MAC Databases (MDBs) corresponding to the
cluster peers. The local MDB consists of MAC addresses that are learned locally, and the remote MDBs
are constructed from the MDUP messages received from the MCT peer. MDUP runs as an application
over CCP, and only the best MAC entry is installed in the Forwarding Database (FDB).
Figure 6. MCT MAC database tables (on Switch A, local MAC learning feeds the local MDB (RB-A) and MDUP messages from the peer feed the remote MDB (RB-B); Switch B mirrors this; on each switch, the best entry from the MDBs is installed in the FDB)
The following MDB resolution algorithm is used on all the MDBs in a given switch to identify which
MAC should be installed in the FDB. The algorithm works as follows:
1. MACs learned locally are given the highest priority, a cost of 0 (zero), so that they are always selected as the best MAC.
2. Each MAC is advertised with a cost; low-cost MACs are given preference over high-cost MACs.
3. If a MAC moves from an MCT MAC to a regular MAC, a MAC move message is sent to the peer, and the peer should also move the MAC from CCEP ports to the ICL link, adjusting its MDBs.
4. If the cost of a MAC is the same, the MAC learned from the lower RBridge ID wins and is installed in the FDB.
The following types of MACs are defined for the cluster:
Cluster Local MAC (CL). MACs that are learned locally, on VLANs that belong to the cluster VLAN range and on CEP ports. These MACs are synchronized to the cluster peer and are subject to aging.
Cluster Remote MAC (CR). MACs that are learned via MDUP messages from the peer (CL on the peer). These MACs are always programmed on the ICL port and do not age; they are deleted only when they are deleted from the peer. A remote MDB is created for these MACs with a cost of 1 (one).
Cluster Client Local MAC (CCL). MACs that are learned on VLANs that belong to the cluster VLAN range and on CCEP ports. These MACs are synchronized to the cluster peer and are subject to aging. A local MDB is created for these MACs with a cost of 0 (zero).
Cluster Client Remote MAC (CCR). MACs that are learned via MDUP messages from the peer (CCL on the peer). These MACs are always programmed on the corresponding CCEP port and do not age; they are deleted only when they are deleted from the peer. A remote MDB is created for these MACs with a cost of 1 (one).
LAYER 2 PROTOCOL SUPPORT IN MCT
xSTP
The STP algorithm has been modified so that the ICL never goes into the blocking state. The ICL guard
ensures this by starting the ICL guard timer as soon as superior BPDUs are received on the port and
running the timer for as long as superior BPDUs continue to arrive. While this timer runs on an interface,
the superior BPDUs are dropped. The modified STP algorithm also ensures that the CCEP STP state
information between MCT peers is synchronized using messages sent over CCP, and that the spanning
tree state of the CCEP interfaces is the same on both MCT peers.
Only one of the MCT peers, whichever is the designated bridge on the ICL, sends BPDUs toward the
MCT client. Three new STP states are added in the MCT implementation:
The BLK_BY_ICL state indicates that superior BPDUs were received on this interface, which could have led to blocking of the ICL interface, so the ICL port guard mechanism has been triggered on this port.
The FWD_BY_MCT state indicates that the MCT peer has set the CCEP state to forwarding.
The BLK_BY_MCT state indicates that the MCT peer has set the CCEP state to blocking.
MRP
Metro Ring Protocol (MRP) is a Brocade proprietary protocol that provides a scalable, loop-free Layer 2
ring topology, typically for Metropolitan Area Networks (MANs), with fast reconvergence compared to
spanning tree protocols. The pair of MCT switches can act as a single logical node in the MRP topology,
the only restriction being that the ICL interface cannot be configured as an MRP secondary interface,
since the ICL interface cannot be in the blocking state. MRP should not be enabled on an MCT CCEP port,
and vice versa. MCT-MRP integration provides a solution with active/active dual homing to the MRP
ring, high availability, and fast recovery.
Layer 2 xSTP BPDU Tunneling
One of the advantages of MCT is that it enables a loop-free Layer 2 network without spanning tree.
However, MCT does not restrict the use of spanning tree in the network. Spanning tree can be enabled
so that the MCT switches participate in the STP process as extra protection against any loop caused by
misconfiguration. If spanning tree is disabled on the MCT switches, by default the MCT switches act as
a hub for STP, RSTP, or MSTP. In that case, the BPDUs are flooded to the VLAN as regular multicast
packets. Note that a BPDU received from the ICL is forwarded to the CCEP port if the peer MCT switch
can reach the same cluster client. To prevent STP BPDUs on MCT VLANs from tunneling through the
MCT switch, configure the no cluster-l2protocol-forward command on a port or global basis.
To disable xSTP BPDU tunneling globally, enter a command such as the following:
NetIron(config)#no cluster-l2protocol-forward
To disable xSTP BPDU tunneling on an interface, enter a command such as the following:
NetIron(config-if-e1000-1/2)#cluster-l2protocol-forward disable
Figure 7. MCT xSTP BPDU tunneling (a Brocade MLX MCT switch pair acting as a single logical switch in the Layer 2 topology, attached over standard IEEE 802.3ad link aggregation)
Keep-alive VLANs
Using a LAG trunk interface for the ICL between the MCT peer switches is a best practice to provide
link redundancy. In addition, an optional keep-alive VLAN can be configured to carry connectivity
check messages when the ICL link fails. Only one VLAN can be configured as the keep-alive VLAN.
MCT operates in client isolation loose mode by default, which means that in the event that the CCP
fails because the ICL link fails:
If a keep-alive VLAN is configured, the MCT peers perform master/slave negotiation. After the negotiation, the client ports are active and forward traffic only on the master MCT switch.
If no keep-alive VLAN is configured, the client ports on both MCT peer switches remain active and forward traffic independently.
The MCT can also operate in client isolation strict mode. If the CCP fails, the client interfaces on both
MCT peer switches are administratively shut down. In this mode, the client is completely isolated from
the network when the CCP is not operational. The same isolation mode should be configured on both
MCT switches.
NetIron(config-cluster-TOR)#client-isolation strict
Syntax: [no] client-isolation strict/loose
The table below summarizes the behavior discussed above.

                            Keep-alive VLAN              No keep-alive VLAN (default)   Strict client isolation mode
ICL normal operation        Client ports on both         Client ports on both           Client ports on both
                            MCT peers active             MCT peers active               MCT peers active
ICL failure                 Client ports active only     Client ports on both           All client ports
                            on master MCT node           MCT peers active               shut down
ICL failure in an           Client ports active only     Not recommended                All client ports
MCT-MRP topology            on master MCT node           or supported                   shut down
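As an illustrative sketch, a keep-alive VLAN and strict isolation might be configured as follows (the VLAN ID 4091 and port 2/1 are assumptions for illustration; only the cluster and client-isolation commands appear in this guide):

NetIron(config)# vlan 4091 name keep-alive-vlan
NetIron(config-vlan-4091)# tagged ethernet 2/1
NetIron(config-vlan-4091)# exit
NetIron(config)# cluster TOR 1
NetIron(config-cluster-TOR)# keep-alive-vlan 4091
NetIron(config-cluster-TOR)# client-isolation strict

The keep-alive VLAN should use a physical path separate from the ICL, so that the connectivity check still works when the ICL itself fails.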
VRRP/VRRP-E IMPLEMENTATION ON MCT
Virtual Router Redundancy Protocol (VRRP) is a standard redundancy protocol that increases the
availability of the default gateway serving hosts on the same subnet by advertising a "virtual router"
as the default gateway to the hosts instead of one physical router. VRRP-E is the Brocade proprietary
version of VRRP with a number of enhancements. Both VRRP and VRRP-E are supported with MCT,
which provides switch-level redundancy for VRRP/VRRP-E.
The MCT switch that acts as the backup router needs to ensure that packets sent to a VRRP-E virtual IP
address can be Layer 2 switched to the VRRP-E master router for forwarding. The MCT switch that acts
as the master router syncs the VRRP-E MAC to the MCT switch that acts as the backup router. Both data
traffic and VRRP-E control traffic travel through the ICL unless the short-path forwarding feature is
enabled.
With the VRRP-E server virtualization feature, short-path forwarding, enabled, the MCT VRRP-E backup
switch can forward both Layer 2 and Layer 3 packets directly, without sending them through the ICL to
the VRRP-E master switch, which provides a VRRP-E active/active topology.
MCT Topology A Implementation:
Single-Level MCT on Brocade NetIron XMR/MLX and CER/CES
Figure 8. MCT topology A deployment example (MLX1, the VRRP-E master, and MLX2, the VRRP-E backup, connected by an ICL, with access switches CES1 and CES2 and a host dual-homed over IEEE 802.3ad switch and host link aggregation; uplinks to a Layer 3 network above)
Create the VLANs (including the session VLAN used by the CCP) and assign ports to them. Only ICL ports
should be assigned to the session VLAN.
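As an illustration, the VLAN configuration on MLX1 might look like the following sketch (the session VLAN ID 4090 is an assumption; this guide specifies only MCT client VLAN 2, and the port assignments are illustrative, with ports 1/1 to 1/2 assumed to form the ICL and ports 1/3 to 1/6 the client-facing CCEPs):

NetIron(config)# vlan 4090 name session-vlan
NetIron(config-vlan-4090)# tagged ethernet 1/1 to 1/2
NetIron(config-vlan-4090)# exit
NetIron(config)# vlan 2 name mct-client-vlan
NetIron(config-vlan-2)# tagged ethernet 1/1 to 1/2
NetIron(config-vlan-2)# tagged ethernet 1/3 to 1/6

The client VLAN is tagged on both the ICL and the client ports, while the session VLAN is tagged only on the ICL ports.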
Create the LAGs on the MCT switches; in this example, there are four LAGs on each MCT switch:
LAG 1 serves as the ICL, and LAG 2 through LAG 4 are the connections from the MCT switch to the clients
(access switches and the server host).
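A hedged sketch of this LAG configuration on MLX1 follows (the LAG names, LAG IDs, and port assignments are assumptions for illustration):

NetIron(config)# lag "ICL" static id 1
NetIron(config-lag-ICL)# ports ethernet 1/1 to 1/2
NetIron(config-lag-ICL)# primary-port 1/1
NetIron(config-lag-ICL)# deploy
NetIron(config)# lag "client-1" dynamic id 2
NetIron(config-lag-client-1)# ports ethernet 1/3
NetIron(config-lag-client-1)# primary-port 1/3
NetIron(config-lag-client-1)# deploy

Dynamic (LACP) LAGs are used toward the clients, as recommended earlier in this guide; the remaining client LAGs would be configured the same way on their respective ports.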
Configure the MCT cluster operation mode and the MCT cluster clients. One MCT cluster client
corresponds to each access switch or host, respectively. Note the following:
If the ICL or a client interface needs to be configured as a LAG interface, only the primary port of the LAG is specified in the ICL or client configuration.
An ICL interface cannot be configured as the CCEP port in any client.
Once the cluster is deployed, only the cluster member VLANs can be modified. No other configuration changes are allowed.
Once a client is deployed, no configuration under the client can be changed.
Clients can be added or deleted even when the cluster is deployed.
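Pulling these pieces together, the cluster configuration on MLX1 might be sketched as follows (the peer address 10.1.1.2, rbridge IDs, VLAN numbers, and port choices are assumptions for illustration; the peer address is assumed reachable over the session VLAN via a VE interface, not shown):

NetIron(config)# cluster TOR 1
NetIron(config-cluster-TOR)# rbridge-id 100
NetIron(config-cluster-TOR)# session-vlan 4090
NetIron(config-cluster-TOR)# member-vlan 2
NetIron(config-cluster-TOR)# icl ICL ethernet 1/1
NetIron(config-cluster-TOR)# peer 10.1.1.2 rbridge-id 200 icl ICL
NetIron(config-cluster-TOR)# deploy
NetIron(config-cluster-TOR)# client CES1
NetIron(config-cluster-TOR-client-CES1)# rbridge-id 300
NetIron(config-cluster-TOR-client-CES1)# client-interface ethernet 1/3
NetIron(config-cluster-TOR-client-CES1)# deploy

The rbridge-id under the cluster identifies the local MCT peer, while the rbridge-id under each client must match on both peers so that the client is treated as a single entity.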
Configure VRRP-E on the MCT client VLAN 2: switch MLX1 is the master and switch MLX2 is the backup.
Note that if short-path-forwarding is enabled, the backup VRRP-E switch forwards both Layer 2 and Layer 3
traffic.
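A hedged sketch of the VRRP-E configuration on MLX1 for client VLAN 2 follows (the IP addresses, VE number, VRID, and priority are assumptions for illustration):

NetIron(config)# router vrrp-extended
NetIron(config)# interface ve 2
NetIron(config-vif-2)# ip address 10.2.2.2/24
NetIron(config-vif-2)# ip vrrp-extended vrid 2
NetIron(config-vif-2-vrid-2)# backup priority 110
NetIron(config-vif-2-vrid-2)# ip-address 10.2.2.1
NetIron(config-vif-2-vrid-2)# short-path-forwarding
NetIron(config-vif-2-vrid-2)# activate

On MLX2, the same vrid would be configured with a lower backup priority so that MLX1 becomes the master; with short-path-forwarding enabled, the backup forwards traffic sent to the virtual gateway itself rather than relaying it over the ICL.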
Layer 3 interfaces and routing protocols need to be configured and enabled on the interfaces facing the
Layer 3 core so that the subnets of the access layer can be advertised. In an MCT implementation,
Brocade recommends redistributing the related routes into the routing protocols.
MCT Topology B Implementation:
Multi-tier MCT on the Brocade NetIron XMR/MLX and CER/CES
Figure 9. MCT topology B deployment example (a CES1/CES2 MCT pair at the access layer connected over a LAG to an MLX1/MLX2 MCT pair at the aggregation/core layer, with MLX1 as VRRP-E master and MLX2 as VRRP-E backup; each pair has its own ICL, and the MLX pair connects to a Layer 3 network)
Create the VLANs (including the session VLAN used by the CCP) and assign the ports to them. Layer 2
VLANs span from the Brocade NetIron CES access switches up to the Brocade MLX aggregation/core
switches. The VE of MCT VLAN 2 is configured only on the Brocade MLX aggregation/core switches.
Create LAGs on the MCT switches, both on the Brocade MLX pair and on the Brocade NetIron CES pair.
The LAG between the Brocade MLX MCT pair and the Brocade NetIron CES MCT pair behaves as a single
LAG interface entity.
Configure the cluster operation mode and cluster clients. The Brocade MLX pair of MCT switches is a
client of the Brocade NetIron CES pair of MCT switches, and the Brocade NetIron CES pair is in turn a
client of the Brocade MLX pair. The Brocade NetIron CES pair of MCT switches also has another client:
the server connected to the Brocade NetIron CES MCT switches through standard IEEE 802.3ad link
aggregation.
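In this double-sided arrangement, each MCT pair is declared as a single client of the other. An illustrative sketch on CES1 follows (the cluster name, client name, rbridge ID, and port are all assumptions):

NetIron(config)# cluster ACCESS 2
NetIron(config-cluster-ACCESS)# client MLX-pair
NetIron(config-cluster-ACCESS-client-MLX-pair)# rbridge-id 400
NetIron(config-cluster-ACCESS-client-MLX-pair)# client-interface ethernet 1/3
NetIron(config-cluster-ACCESS-client-MLX-pair)# deploy

A mirror-image client entry for the CES pair would be configured under the cluster on the MLX switches.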
Only the aggregation/core Brocade MLX MCT switches need to be configured with VRRP/VRRP-E.
Switch MLX1 is the master and switch MLX2 is the backup. Note that if short-path-forwarding is enabled as
recommended, the backup VRRP-E switch will forward both Layer 2 and Layer 3 traffic.
Layer 3 interfaces and routing protocols need to be configured and enabled on the interfaces facing the
Layer 3 core so that the subnets of the access layer can be advertised. The Layer 2/Layer 3 boundary
sits on the Brocade MLX MCT switches in the aggregation/core layer. In the MCT implementation,
Brocade recommends redistributing the related routes into the routing protocols.
MCT Topology C Implementation:
Brocade NetIron XMR/MLX and CER/CES MCT Integration in a Layer 2 MRP Metro Ring
Figure 10. MCT topology C deployment example (Brocade MLX MCT pairs in the metro ring, with servers dual-homed over LAGs and a Brocade MLX aggregation layer above the ring)
Create VLANs and enable MRP on the VLANs. If the MCT switches are configured as MRP masters, make
sure that the ICL ports on the MCT switches are not configured as secondary ports.
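An illustrative MRP sketch on one of the MCT switches follows (the ring ID, VLAN, and ring ports are assumptions for illustration):

NetIron(config)# vlan 2
NetIron(config-vlan-2)# metro-ring 1
NetIron(config-vlan-2-mrp-1)# master
NetIron(config-vlan-2-mrp-1)# ring-interfaces ethernet 1/1 ethernet 1/2
NetIron(config-vlan-2-mrp-1)# enable

If the ICL port is one of the ring interfaces on the master node, it must be the primary interface rather than the secondary one, since the ICL can never be placed in the blocking state.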
Create LAGs on the MCT switches.
Configure the cluster operation mode and cluster clients. One cluster client corresponds to each access
switch or host, respectively.
CONCLUSION
Brocade MCT provides a number of important benefits for a Layer 2 network, in addition to a set of
enhancements for Layer 3 interconnect resulting directly from the Layer 2 capabilities. With MCT,
customers can achieve enhanced system availability through redundant systems, loop management
without the use of Spanning Tree Protocol, full use of system bandwidth, high availability, rapid
link-failure recovery, and link aggregation to any IEEE 802.3ad-capable edge device.
ABOUT BROCADE
Brocade provides innovative, end-to-end network solutions that help the world's leading organizations
transition smoothly to a virtualized world where applications and information can reside anywhere. These
solutions deliver the unique capabilities for a more flexible IT infrastructure with unmatched simplicity,
non-stop networking, optimized applications, and investment protection. As a result, organizations in a
wide range of industries can achieve their most critical business objectives with greater simplicity and a
faster return on investment.
For more information about Brocade products and solutions, visit www.brocade.com.
© 2010 Brocade Communications Systems, Inc. All Rights Reserved. 09/10 GA-IG-326-00
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, TurboIron, and
Wingspan are registered trademarks, and Brocade Assurance, Brocade NET Health, Brocade One, Extraordinary Networks,
MyBrocade, and VCS are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries.
Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning
any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes
to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes
features that may not be currently available. Contact a Brocade sales office for information on feature and product availability.
Export of technical data contained in this document may require an export license from the United States government.