It is amazing how the data centre world has changed in the last few years. A data centre used to be a collection of network elements interconnecting static servers (and their associated storage), with traffic patterns that were highly predictable and mostly north-south. Cloud and virtualization have changed all of this: a data centre is now a collection of compute and storage resources that can be securely sliced into virtual networks and placed anywhere according to real-time needs, interconnected by a fabric. The virtualization of servers, of network services such as firewalls and load balancers, and even of network devices such as switches and routers, has created a very dynamic landscape: virtual networks can be configured very quickly, location should no longer matter, and compute and storage resources can be added on the fly, based on demand.

Multi-tenant data centres, such as those used to deploy Virtual Private Clouds, need to support tens of thousands of these virtual networks. And every one of these virtual networks needs many different service instances to stitch it together across virtual servers, virtual switches, virtual firewalls, virtual load balancers and virtual routers. Traffic patterns have shifted to east-west, both because new applications spread processing across many hosts, and because of the 'location freedom' that virtualization allows. The network infrastructure needs to handle all this traffic cost-effectively, yet the larger lookup tables caused by any-to-any traffic patterns have traditionally driven cost up.
Traditionally, Ethernet-based forwarding has been deployed to cope with the mobility and agility aspects. Ethernet is great here because of its plug-and-play behaviour: no addresses have to be configured or provisioned to identify the servers, and the location of the servers is learned dynamically. The industry has been creating next-generation Ethernet solutions to add more functionality in this realm, such as Equal Cost Multi-Path (ECMP) and SAN/LAN convergence, while trying to retain the plug-and-play behaviour. However, these solutions did not take the shift described above into account.
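The plug-and-play behaviour described above boils down to flood-and-learn forwarding. A minimal sketch (all class and port names are hypothetical, purely for illustration):

```python
# Minimal sketch of Ethernet flood-and-learn forwarding.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port, learned dynamically

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: the source MAC is reachable via the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out one port; an unknown
        # destination is flooded to every other port.
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}
        return self.ports - {in_port}
```

No address was ever provisioned: the switch discovers where `A` and `B` live simply by watching traffic. The downside, at data-centre scale, is exactly the flooding and the per-MAC table growth the article discusses.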
Virtualization vendors could no longer wait for the networking industry to support these needs, and therefore started creating overlays. An overlay is created when a virtual switch encapsulates Ethernet packets into some form of IP-based encapsulation; the fabric only needs to be IP-aware from that point on. Initially, overlays like VXLAN and NVGRE still used Ethernet flooding and learning to associate the location with the identity of end-stations. But there is a growing trend to add a control plane that maps end-stations to locations, where the end-station identity can be an Ethernet address as well as an IP address, effectively creating overlays that can use IP routing for IP traffic, and Ethernet forwarding for non-IP or non-routable traffic.
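To make the encapsulation idea concrete, here is a sketch of the 8-byte VXLAN header (per the RFC 7348 layout) being prepended to an inner Ethernet frame; in a real deployment this would then travel inside an outer UDP/IP packet whose addresses identify the tunnel endpoints, i.e. the *location*:

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header to an inner Ethernet frame (sketch).

    Header layout: 8 bits of flags, 24 reserved bits, a 24-bit VXLAN
    Network Identifier (VNI) naming the virtual network, 8 reserved bits.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    flags = 0x08  # 'I' flag set: the VNI field is valid
    header = struct.pack("!B3x", flags) + vni.to_bytes(3, "big") + b"\x00"
    return header + inner_frame
```

The 24-bit VNI is what makes tens of thousands of virtual networks possible on one fabric: it identifies the tenant segment, while the outer IP header is all the underlay ever has to look at.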
Taking a step back, such overlays should:
- Reduce operational complexity: the underlying network is a fairly static IP-based network, while only the edge needs to track where end-stations have moved.
(Note: the underlay could leverage protocols that allow very easy 'self-clustering' of network nodes, and individual network nodes could then leverage network-wide 'intent' to form the underlay.)
- Support an orchestration-driven approach to mapping end-station identifiers to locations.
- Work with existing Ethernet L2 and L3 switches.
- Support concurrent L2 and L3 adjacencies between end-stations, as the control plane that maps locations to end-station identifiers can leverage multiple address families in both the 'identity' namespace and the 'location' namespace. This also eases migration from IPv4 to IPv6.
- Support network infrastructures with a large number of access devices, each serving many VMs.
- Support the creation of a large number of virtual networks.
- Support VM mobility and server clustering.
- Support both network-based solutions and hypervisor/virtual-switch-based solutions, in a unified manner.
- Enable scalable table sizes and scalable control planes to create and maintain these forwarding tables, as well as to maintain the policies associated with certain destinations.
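The heart of the requirements above is a single identity-to-location mapping that works for both L2 and L3 identifiers, per virtual network. A hypothetical sketch (class and key names are invented for illustration):

```python
# Hypothetical sketch: one mapping table that accepts a MAC, IPv4 or
# IPv6 string as the end-station identity, scoped per virtual network.
class IdentityLocationMap:
    def __init__(self):
        self._map = {}  # (vn_id, identity) -> location

    def register(self, vn_id, identity, location):
        """Orchestration- or edge-driven registration of an end-station.
        The identity's address family does not matter to the table."""
        self._map[(vn_id, identity)] = location

    def lookup(self, vn_id, identity):
        return self._map.get((vn_id, identity))
```

Because the identity is opaque, a MAC address and an IPv4 or IPv6 address of the same VM can coexist in one table, which is exactly what concurrent L2 and L3 adjacencies (and an IPv4-to-IPv6 migration) require; scoping by virtual network keeps tenants isolated.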
The IP Underlay network will:
- Allow efficient use of the entire topology, in other words ECMP.
- Enable optimal forwarding for both unicast and multicast.
One technology which can meet these needs is LISP, the Locator/Identifier Separation Protocol. LISP has already been used successfully to deploy overlays that map IP end-station identifiers to IP locations, enabling applications such as multi-homing, high-scale multi-tenancy and seamless mobility (including VM mobility). LISP uses a centralized mapping system to achieve this: the edge devices are responsible for populating the mapping system as soon as a new device is discovered, or a device is discovered to have moved. Edge devices can query the mapping system for the location of a given end-device, and the replies are cached for further use until they age out. Entries that point to 'old' locations are updated dynamically through an interaction between the mapping system and the edge device owning the 'new' location of the end-station. LISP can scale to an essentially unlimited number of instances with global, cross-organizational reach. The mapping system is modular and can be changed without changing the sites that run LISP. The existing mapping database transport system follows the same design principles as DNS: one can deploy private or public mapping databases, run multiple instances of the mapping system, or support multiple tenants with a private or public mapping system.
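The register/request/cache interaction described above can be modelled in a few lines. This is a toy sketch, not the LISP wire protocol; the class names, the TTL value and the `now` parameter (used to make aging testable) are all invented:

```python
import time

class MapServer:
    """Toy mapping system: edge devices register EID-to-RLOC mappings
    (identity -> location) and query them on demand."""
    def __init__(self):
        self.db = {}

    def map_register(self, eid, rloc):
        self.db[eid] = rloc  # a move simply overwrites the old location

    def map_request(self, eid):
        return self.db.get(eid)

class EdgeDevice:
    """Caches replies and ages them out, mimicking the pull-based model."""
    def __init__(self, map_server, ttl=60.0):
        self.ms, self.ttl = map_server, ttl
        self.cache = {}  # eid -> (rloc, expiry time)

    def locate(self, eid, now=None):
        now = time.monotonic() if now is None else now
        hit = self.cache.get(eid)
        if hit and hit[1] > now:
            return hit[0]                # fresh cache entry: no lookup
        rloc = self.ms.map_request(eid)  # miss or aged out: pull again
        if rloc is not None:
            self.cache[eid] = (rloc, now + self.ttl)
        return rloc
```

Note what this toy model leaves out: in real LISP, a stale cache entry pointing at an 'old' location is corrected proactively (via solicit-map-request signalling from the new site), not only by waiting for the TTL to expire as the sketch does.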
LISP can be made to work very easily with the currently proposed VXLAN overlay data plane, as the encapsulations are very similar. Other proposed data planes, such as NVGRE, can also work with a LISP control plane very effectively. This allows concurrent support of IP and MAC-address end-station identifiers across the IP underlay, covering all the different use cases. Because of its pull-based, on-demand nature, the LISP control plane scales very well in environments where table space is limited, extending its use to both physical switches with limited table space and virtual switches. Moreover, the mapping system can hold more state than just the mapping between end-station and location. It could also hold policy information for certain locations or groups of end-station identifiers, or service-path information, where the mapping system recursively resolves the services a given flow needs to be pushed through before it eventually reaches its destination. And because the mapping system is an open system, north-bound interfaces can be created to write policies, mappings and service paths into it, or to read network state from it in a centralized but network-wide manner.
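The recursive service-path idea can be sketched as follows: a lookup may return a service location plus a pointer to the next name to resolve, and resolution repeats until the final destination is reached. Everything here (the dictionary layout, the `next` key, all names) is hypothetical, purely to illustrate the recursion:

```python
# Hypothetical sketch of recursive service-path resolution in a mapping
# system: each entry carries a location, and optionally the next name in
# the service chain to resolve.
def resolve_path(mapping, name):
    path, hop = [], name
    while hop is not None:
        entry = mapping[hop]
        path.append(entry["location"])   # where this hop lives
        hop = entry.get("next")          # next service, or None at the end
    return path

# Example chain: load balancer -> firewall -> the actual web VM.
service_map = {
    "flow-to-web": {"location": "lb-leaf2", "next": "fw-hop"},
    "fw-hop":      {"location": "fw-leaf3", "next": "web-hop"},
    "web-hop":     {"location": "leaf4"},
}
```

The edge device only ever asked one question ("where does this flow go?") yet received an ordered list of locations to steer the flow through, which is the service-chaining behaviour the paragraph describes.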
LISP has been designed by the LISP IETF Working Group, with representation and collaboration not only from Cisco, but also from operators, researchers and other vendors. The experimental nature of the WG is very convenient in allowing the technology to mature quickly and flexibly, as drafts can be adjusted rapidly as we learn from new developments. Don't let the word 'experimental' mislead you: after six years of deployments and testing, LISP is indeed fit for production, regardless of the technicalities the IETF process may impose on the documents. As a matter of fact, VXLAN is also an experimental draft, and its adoption hasn't been hindered by that. And adding 'warning messages' in other drafts about how 'bad' experimental RFCs are is, unfortunately, part of the politics amongst equipment vendors.
A new working group in the IETF (Network Virtualization Overlays, or NVO3) has been created to investigate whether new control planes have to be created for the functions described above, or whether existing control planes can be used. LISP is a very good candidate for that control plane, effectively creating a unified L2 and L3 IP-based overlay solution that works across both physical and virtual network equipment, as can be seen in this draft.
LISP brings mobility, scale and segmentation to the global network, without resorting to elaborate BGP policies or 'flat-earth' network designs. The global network includes any private portion of the network (campus, data centre, WAN, metro) as well as the public Internet. So although we have talked about the data centre here, the benefits of LISP are easily realized seamlessly across all places in the network. Think of Bring Your Own Device (BYOD) in enterprise networks, and the network and mobility challenges it brings. Another use case for this unified solution based on LISP? Most definitely!