In today’s world, multicast senders and receivers are not limited to a single network. They can be spread across enterprise and data center locations. Multicast can be generated or consumed anywhere and can be present in various security contexts – be it a tenant of a VXLAN EVPN-based data center or a traditional IP multicast network.
Applications expect transparency to the underlying transport architecture, while security compliance demands segmentation. Networks should enable seamless connectivity without compromising security or performance. The focus of this innovation is the border device that interconnects multicast network domains: the seamless integration of VXLAN EVPN with TRM (Tenant Routed Multicast) and MVPN (Multicast VPN), two flavors of the same kind.
The Two-Node Approach
An integration in which each node acts as a border to its own domain requires a two-node approach. This incurs both CapEx costs and an operational burden for customers, who must manage two devices. The complexity multiplies if the integration must span traditional multicast networks, VXLAN EVPN (multicast) networks, and MVPN networks.
To keep OpEx and CapEx costs to a minimum, we need a simpler, single-node approach.
We followed a step-by-step approach to provide a solution addressing all these challenges.
- Cisco innovated Tenant Routed Multicast (TRM) as the first shipped solution delivering Layer-3 multicast overlay forwarding in VXLAN EVPN networks, with an Anycast Designated Router (DR) for end-points.
- Cisco introduced Multicast VPN (Draft Rosen PIM/GRE) support on the Cisco Nexus 3600-R and 9500-R as a stepping stone.
- The Cisco NX-OS 9.3(5) release delivered seamless integration between EVPN (TRM) and MVPN (Draft Rosen). Because these edge-devices implement both TRM and MVPN functions, they act as seamless hand-off nodes, forwarding multicast between VXLAN EVPN networks and MVPN networks.
In our prior blog, Cisco NX-OS VXLAN Innovations Part 1: Inter-VNI Communication Using Downstream VNI, we covered VXLAN EVPN DSVNI. In this blog, we cover the integration between VXLAN BGP EVPN (TRM) and MVPN (Draft Rosen).
Tenant Routed Multicast
Cisco Tenant Routed Multicast (TRM) efficiently delivers overlay Layer-3 multicast traffic in a multi-tenant VXLAN BGP EVPN data center network. Cisco TRM is based on the standards-based, next-generation multicast VPN control plane (ngMVPN) described in IETF RFC 6513 and RFC 6514, plus the extensions published in the IETF draft “draft-bess-evpn-mvpn-seamless-interop”. In a VXLAN EVPN fabric, every edge-device acts as a Distributed IP Anycast Gateway for unicast traffic and as a Designated Router (DR) for multicast. On top of scalable unicast and multicast routing, multicast forwarding is optimized by leveraging IGMP snooping on every edge-device, sending traffic only to interested receivers.
TRM leverages Multicast Distribution Trees (MDT) in the underlying transport network and provides multi-tenancy through VXLAN encapsulation. A default MDT is built per VRF, and individual multicast group addresses in the overlay are mapped to respective underlay multicast groups for efficient replication and transport. TRM can leverage the same multicast infrastructure as VXLAN BUM (Broadcast, Unknown Unicast, and Multicast) traffic. Even when sharing that infrastructure, the Rendezvous Point (RP) and the multicast groups used for BUM and for the MDT remain separate. The combination of TRM and Ingress Replication is also supported. In the overlay, TRM operates as a fully distributed overlay Rendezvous Point (RP), with seamless RP presence on every edge-device. The whole TRM-enabled VXLAN EVPN fabric acts as a single multicast router.
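As a sketch of how this maps to configuration, a TRM-enabled tenant VRF on an NX-OS edge-device ties the Layer-3 VNI to an underlay default-MDT group under the NVE interface, while the BUM group for the Layer-2 VNI stays separate. The VRF name, VNIs, VLANs, and group addresses below are illustrative placeholders, not values from this text:

```
feature ngmvpn                      ! TRM (ngMVPN) control plane
ip igmp snooping vxlan              ! IGMP snooping on VXLAN-enabled VLANs
ip multicast overlay-spt-only       ! fabric-wide SPT-only mode for the overlay

vrf context Tenant-A                ! illustrative tenant VRF
  vni 50001                         ! Layer-3 VNI for this VRF
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

interface nve1
  host-reachability protocol bgp
  source-interface loopback1
  member vni 30001                  ! Layer-2 VNI
    mcast-group 239.1.1.101         ! underlay group for BUM traffic
  member vni 50001 associate-vrf    ! Layer-3 VNI
    mcast-group 239.2.2.1           ! per-VRF default MDT, separate from BUM

interface Vlan100                   ! SVI backing the Layer-3 VNI
  vrf member Tenant-A
  ip forward
  ip pim sparse-mode
```

Refer to the VXLAN configuration guide cited below for the complete, release-specific procedure.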
In multicast networks, the sources, receivers, and Rendezvous Point (RP) can reside within the fabric, across sites, in campus locations, or across the WAN. TRM allows seamless integration with existing multicast networks regardless of where the sources, receivers, and RP are located. TRM allows tenant-aware external connectivity using Layer-3 physical interfaces or sub-interfaces.
TRM Multi-Site – DCI with Multicast
Multi-site architecture
Data and application growth compelled customers to look for scale-out data center architectures, as one large fabric per location brought challenges in operation and fault isolation. To shrink fault and operational domains, customers started building smaller compartments of fabrics with Multi-Pod and Multi-Fabric architectures, interconnected with Data Center Interconnect (DCI) technologies. The complexity of interconnecting these compartments, once Layer-2 and Layer-3 extensions were introduced, hindered the rollout of such designs. With a single overlay domain (end-to-end encapsulation), Multi-Pod introduced challenges with scale, fate sharing, and operational restrictions. Although Multi-Fabric improved on Multi-Pod by isolating both the control and the data plane, it introduced additional operational complexity through a confusing mix of different DCI technologies to extend and interconnect the overlay domains.
Please refer to Cisco Live Session VXLAN BGP EVPN based Multi-Site – BRKDCN-2035 presented by Lukas Krattiger, Principal Engineer, Cisco Systems, Inc. for more information on VXLAN EVPN Multisite.
TRM Multi-site
For unicast traffic, the VXLAN EVPN Multi-Site architecture was introduced to address the above concerns. It allows the interconnection of multiple distinct VXLAN BGP EVPN fabrics or overlay domains and enables new approaches to fabric scaling, compartmentalization, and DCI. At the DCI, Border Gateways (BGWs) were introduced to retain network control points for overlay traffic, giving organizations a control point to steer and enforce network extension within and beyond a single data center.
Further, the Multi-Site architecture was extended with TRM in NX-OS 9.3(1) for seamless communication between sources and receivers spread across multiple VXLAN EVPN networks, delivering benefits similar to those of the VXLAN EVPN Multi-Site architecture for unicast.
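On the BGW, TRM Multi-Site rides on the existing Multi-Site configuration. A minimal sketch of the BGW building blocks follows; the site-id, addresses, interfaces, and VNIs are illustrative placeholders:

```
evpn multisite border-gateway 100          ! illustrative site-id

interface loopback100                      ! shared virtual IP for the BGW function
  ip address 10.10.10.10/32

interface nve1
  source-interface loopback1
  multisite border-gateway interface loopback100
  member vni 30001
    multisite ingress-replication          ! ingress replication toward remote sites

interface Ethernet1/1                      ! DCI-facing link
  evpn multisite dci-tracking
interface Ethernet1/2                      ! fabric-facing link
  evpn multisite fabric-tracking
```

The TRM-specific VRF and MDT configuration from the previous section applies unchanged on top of this; check the release notes for the exact Multi-Site plus TRM combinations supported in your NX-OS release.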
Tenant Routed Multicast to MVPN
Multicast VPN (Draft Rosen – PIM/GRE)
MVPN (PIM/GRE), defined in the IETF draft “draft-rosen-vpn-mcast-10”, is an extension of BGP/MPLS IP VPN [RFC 4364] and specifies the protocols and procedures needed to support IPv4 multicast. Like unicast IP VPN, MVPN allows enterprises to transparently interconnect their private networks across the provider backbone and stream multicast data without any change to enterprise network connectivity or administration.
The NX-OS 9.3(3) release introduced MVPN (PIM/GRE) support on Cisco Nexus 9000 R-Series and Nexus 3000 R-Series switches.
Seamless integration between EVPN (TRM) and MVPN (Draft Rosen)
Brand new in Cisco NX-OS 9.3(5), we introduced seamless integration between TRM-capable edge-devices and Multicast VPN networks. The functionality of VXLAN VTEP and MVPN PE is brought together on the Nexus 9500-R and Nexus 3600-R Series. A Border PE (a combination of VXLAN border and MPLS PE) plays the VTEP role in the VXLAN EVPN (TRM) network and the PE role in the MVPN network. This gateway node enables packets to be handed off between a VXLAN network (TRM or TRM Multi-Site) and an MVPN network, acting as a central node that performs the necessary forwarding, encapsulation, and decapsulation to deliver multicast traffic to the respective receivers. The Rendezvous Point (RP) for the customer (overlay) network can be in any of the three networks: VXLAN, MVPN, or IP multicast.
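Conceptually, the Border PE carries both personalities inside one tenant VRF: the Layer-3 VNI faces the fabric, and the default MDT faces the MPLS core. The sketch below illustrates that idea; the VRF name, VNI, and group address are placeholders, and the exact MDT command syntax should be verified against the NX-OS MVPN configuration guide for your release:

```
feature ngmvpn                     ! TRM toward the VXLAN EVPN fabric
feature mvpn                       ! Draft Rosen (PIM/GRE) toward the MPLS core

vrf context Tenant-A
  vni 50001                        ! Layer-3 VNI on the VXLAN side
  rd auto
  mdt default 232.1.1.1            ! default MDT group toward the MVPN core
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
```

With both functions in one VRF on one device, the hand-off needs no external cabling or second node between the two domains.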
Customers reap the benefits of lower OpEx and CapEx costs with a single-node approach at the border for hand-off functionality.
Customers achieve the benefits of standards-based data center fabric deployments using VXLAN EVPN technology – scalability, performance, agility, workload mobility, and security. As data crosses multiple domains or boundaries, it becomes critical to retain these benefits without increasing cost or operational complexity. Customers are looking for a simple, flexible, manageable approach to data center operations, and Cisco’s single-box solution (both the VXLAN EVPN (TRM) and MVPN functions on the same device) offers that operational flexibility.
For more information, please refer to Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide, Release 9.3(x).