Scaling OpenStack L3 using Cisco ASR1K platform
Cisco has developed a plug-in that integrates the ASR 1000 Series Router (ASR1K) into OpenStack, offloading L3 capabilities onto dedicated routing hardware. The plug-in was demonstrated in a proof-of-concept environment at Cisco Live, and we are planning demos of a cloud solution based on the ASR1K plug-in at the OpenStack Summit in Vancouver. The plug-in is open source and will be submitted upstream into OpenStack; it will also be available from Cisco's Neutron Tech-Preview repository for Juno.
OpenStack offers a reference software implementation for Layer 3 functionality: routing, static NAT (floating IPs), and dynamic NAT/SNAT (VM "Internet" access) are handled by the L3 agent that runs as part of the Neutron component. The L3 agent relies on Linux iptables to define forwarding rules, and with that comes a critical scalability issue, as iptables has inherent scaling shortcomings. For highly scalable clouds with many route and NAT operations, this becomes a serious bottleneck.
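To see where the bottleneck comes from, consider the per-floating-IP NAT rules the reference L3 agent programs. The sketch below is illustrative only, not the agent's actual code; the chain names follow neutron conventions but are simplified here. Every floating IP appends rules to flat chains that iptables evaluates linearly, so rule counts grow with tenant count:

```python
# Illustrative sketch of the NAT rules the reference L3 agent installs
# per floating IP via iptables. Chain names are simplified/hypothetical.

def floating_ip_rules(floating_ip: str, fixed_ip: str) -> list[str]:
    """Return the pair of iptables NAT rules for one floating-IP mapping."""
    return [
        # Inbound: traffic addressed to the floating IP is DNAT'ed to the VM.
        f"-A neutron-l3-agent-PREROUTING -d {floating_ip}/32"
        f" -j DNAT --to-destination {fixed_ip}",
        # Outbound: traffic from the VM is SNAT'ed to the floating IP.
        f"-A neutron-l3-agent-float-snat -s {fixed_ip}/32"
        f" -j SNAT --to-source {floating_ip}",
    ]

for rule in floating_ip_rules("198.51.100.10", "10.0.0.5"):
    print(rule)
```

Each mapping adds rules to chains that are traversed sequentially per packet, which is exactly the cost the ASR1K avoids by doing NAT in hardware.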
Cisco offers the ASR1K routing platform, typically used in data centers for WAN edge operations. It performs NAT and L3 forwarding in hardware and provides L3 high availability (HSRP). The ASR Config Agent builds upon the same technology used to integrate the Cisco Cloud Services Router (CSR1000v) into OpenStack.
The ASR1K plug-in is transparent and does not interfere with the user's experience of configuring their private cloud: to a user of OpenStack, the way tenants, networks, subnets, and VMs with floating IPs are created is not modified. HSRP is implemented to provide gateway redundancy. A reference architecture of a cloud environment leveraging the ASR1K plug-in is shown below:
Each OpenStack network configuration corresponds to a set of CLI commands on the ASR1K. Here, I will highlight how the plug-in is used. A router in OpenStack is defined using "neutron router-create <name>"; this corresponds to a VRF definition on the ASR1K. An internal network (interface) can be added to the OpenStack router using "neutron router-interface-add <router-name> <subnet-ID>". On the ASR this is realized by adding a sub-interface to the upstream port (be that a port-channel or a physical interface, depending on the upstream network configuration). Each sub-interface is configured with an IP address, HSRP group details, and the VRF specific to the defined network. Floating IPs are used in OpenStack to enable access to VMs from outside; on the ASR platform, a floating IP is realized by defining static NAT entries.
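The mapping can be sketched as simple config generators. This is a hedged illustration, not the driver's actual code: the function names, the Port-channel10 upstream port, the /24 mask, and the HSRP parameters are all hypothetical, and the exact IOS-XE syntax emitted by the real driver may differ:

```python
# Illustrative sketch of the OpenStack-to-ASR1K mapping described above.
# Function names and interface/HSRP parameters are hypothetical.

def router_create(name: str) -> list[str]:
    """neutron router-create <name>  ->  a VRF definition on the ASR1K."""
    return [
        f"vrf definition {name}",
        " address-family ipv4",
        " exit-address-family",
    ]

def router_interface_add(vrf: str, vlan: int, gw_ip: str,
                         hsrp_group: int, hsrp_vip: str) -> list[str]:
    """neutron router-interface-add  ->  a sub-interface on the upstream port."""
    return [
        f"interface Port-channel10.{vlan}",      # upstream port is an assumption
        f" encapsulation dot1Q {vlan}",
        f" vrf forwarding {vrf}",                # tie the interface to the router's VRF
        f" ip address {gw_ip} 255.255.255.0",
        f" standby {hsrp_group} ip {hsrp_vip}",  # HSRP gateway redundancy
    ]

def floating_ip_associate(vrf: str, fixed_ip: str, floating_ip: str) -> list[str]:
    """Associating a floating IP  ->  a static NAT entry."""
    return [f"ip nat inside source static {fixed_ip} {floating_ip} vrf {vrf}"]
```

For example, `router_create("tenant-r1")` yields the VRF stanza, and `floating_ip_associate("tenant-r1", "10.0.0.5", "198.51.100.10")` yields the single static NAT line that replaces the iptables rule pair of the reference implementation.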
To better understand the internals of the ASR1k plug-in, the figure below shows a typical workflow of adding a network to a router:
- User adds a network to the router by attaching a new interface in Horizon Dashboard or using the neutron CLI
- OpenStack’s neutronclient triggers a REST API call to the neutron-server requesting to add a router interface
- The neutron-server updates the DB with the Router/Port details
- A "routers updated" message is sent across the AMQP message bus to the CiscoCFGAgent
- On receiving the update, the CiscoCFGAgent fetches the new data from the DB to stay in sync
- CiscoCFGAgent updates the ASR1K driver with the newly added internal network port
- The driver pushes the required configuration to the ASR1K HSRP pair
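The steps above can be sketched in a few lines of Python. All class and method names here are hypothetical stand-ins; the real CiscoCFGAgent consumes AMQP notifications, fetches router state from neutron-server over RPC, and the real driver pushes IOS-XE CLI to the ASR1K HSRP pair:

```python
# Minimal sketch of the "routers updated" workflow above (names hypothetical).

class Asr1kDriver:
    def __init__(self):
        self.pushed = []  # stands in for config pushed to the ASR1K HSRP pair

    def configure_interface(self, router, port):
        # The real driver renders and pushes CLI to both ASR1K chassis.
        self.pushed.append((router["id"], port["id"]))

class CiscoCfgAgent:
    def __init__(self, driver, db):
        self.driver = driver
        self.db = db      # stands in for an RPC client talking to neutron-server

    def routers_updated(self, router_ids):
        """AMQP callback: re-fetch the named routers and push their config."""
        for rid in router_ids:
            router = self.db[rid]            # fetch fresh data to stay in sync
            for port in router["ports"]:     # includes the newly added port
                self.driver.configure_interface(router, port)

# Usage: a router gains a new internal port, triggering a "routers updated" event.
db = {"r1": {"id": "r1", "ports": [{"id": "p1"}]}}
agent = CiscoCfgAgent(Asr1kDriver(), db)
agent.routers_updated(["r1"])
```

Note that the agent re-reads the router's full state rather than acting on the notification payload alone, which keeps the ASR1K configuration in sync even if intermediate updates were missed.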
Stay tuned for more details at the OpenStack Summit in Vancouver in May.