So, let's take a closer look at OTV and how it works. As a reminder, OTV is an NX-OS feature that allows us to extend Ethernet LANs between data centers. One of the nice things about OTV is that it is transport agnostic: the connectivity between data centers can be L2-based, L3-based, IP-switched or label-switched, pretty much anything that can transport IP.
OTV works by creating an OTV control plane over authenticated links between the Nexus 7000 switches at each of your data centers (called edge devices in OTV parlance). You can then “route” your LAN traffic by encapsulating it and forwarding it through this IP infrastructure. Routing decisions are made by associating a MAC address with a next-hop IP address. The process is fully dynamic, so there is no need to establish and manage tunnels and virtual wires. This approach simplifies management and administration compared with existing approaches, and it also lets you take full advantage of your IP core's capabilities, such as optimal routing, load balancing, multicast traffic replication, and fast failover.
The tables themselves are built transparently in the background once OTV is configured, by proactively advertising MAC reachability. If the IP core supports multicast, the edge device advertises each address, along with some extended attributes, in a single update that reaches all neighbors. The additional information, including VLAN ID, site ID, and associated IP addresses, makes OTV a significantly smarter solution than other approaches, giving it the ability to support loop-free multi-homing, load balancing, first-hop redundancy protocols, and ARP containment without increased complexity, processing overhead, or the need for ancillary protocols. If the core does not support multicast, OTV can run in an “adjacency server” mode, where one of the edge devices is responsible for collecting and disseminating reachability information via unicast traffic. In either case, OTV's approach is an improvement over alternatives that require explicit configuration of each connection or depend upon flooding information.
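Switching between the two modes is just a couple of lines on the overlay interface. Here's a minimal sketch of the adjacency server setup; the interface name and the server address 10.1.1.1 are purely illustrative:

```
! On the edge device acting as the adjacency server:
interface Overlay1
  otv adjacency-server unicast-only

! On every other edge device, pointing at that server:
interface Overlay1
  otv use-adjacency-server 10.1.1.1 unicast-only
```

With no multicast groups to coordinate with the transport provider, this mode trades a little head-end state on the server for a core that only ever sees unicast IP.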
As noted above, the extra information OTV uses makes it a much more intelligent connectivity solution. For example, OTV provides improved L2 fault isolation between locations: Spanning Tree BPDUs are not forwarded into the core, and neither are unknown unicasts. Cross-site ARP traffic is reduced via proxy ARP capabilities in the edge devices, and broadcast traffic in general can be controlled via rate limiting and white lists.
So, what does all this technical goodness net you (yes, pun intended):
- One-touch add/drop of sites without reconfiguration of other sites (point-to-cloud model)
- Embedded intelligence obviates the need for ancillary protocols such as VPLS
- Dynamic design avoids the exponential complexity of virtual wires and the overhead and risk of flooding
- When multihoming sites, OTV supports multiple active links on multiple edge devices (up to 16 per site) while avoiding loops, for effective use of bandwidth and increased resiliency. OTV also supports both vPC and TRILL in this scenario, as well as any multipathing offered by the underlying IP transport
- OTV leverages IP multicast capabilities for optimal traffic replication and avoids head-end replication overhead
- Fast failover based on an interior gateway protocol running between edge devices, equal-cost multipath, and bidirectional forwarding detection extensions
- Consolidation of traffic from multiple VLANs over one overlay, with support for over 4,000 VLAN IDs in a single 802.1Q domain
- Ridiculously simple configuration
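To back up that last claim, here's a minimal sketch of a multicast-mode configuration on a single edge device. The interface names, VLAN numbers, site identifier, and group addresses are all illustrative; substitute the ranges your core actually provides:

```
feature otv
otv site-vlan 99                    ! VLAN used to discover other edge devices at this site
otv site-identifier 0x1             ! unique per data center (NX-OS 5.2 and later)

interface Overlay1
  otv join-interface Ethernet1/1    ! L3 interface facing the IP core
  otv control-group 239.1.1.1       ! multicast group carrying the OTV control plane
  otv data-group 232.1.1.0/28       ! SSM range used to replicate multicast data traffic
  otv extend-vlan 100-150           ! the VLANs stretched between sites
  no shutdown
```

Repeat essentially the same block at each site (with its own site identifier) and the MAC reachability tables described above build themselves.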
That’s it for now. If you have questions, post them in the comments. In the next post, I’ll look at some use cases for OTV.