Today I want to bring up a DCI use case that I’ve been thinking about: capacity expansion. As you know, the purpose of DCI is to connect two or more Data Centers together so that they can share resources and deliver services. The capacity expansion use case covers temporary traffic bursts (cloud bursts, either planned or unplanned), maintenance windows, migrations, or really any temporary service event that requires additional service capacity.
To start addressing these planned and unplanned cloud burst and capacity expansion requirements, check out the recently announced ACE + OTV feature called Dynamic Workload Scaling (DWS).
In this topology, the clients sit outside the Data Centers and are attempting to access resources in the Primary DC. Note that the Primary DC contains an Application Control Engine (ACE); it can be either a standalone appliance or a module in the Catalyst 6500. The ACE has a Virtual IP (VIP) and load balances the client-server transactions to the local virtual server farm, i.e., the virtual machines on the VMware ESX Host in the Primary DC. Since it’s a virtual server farm, these virtual machines are all in the same Layer 2 domain and IP subnet.
With the DWS feature, the ACE has an API-based integration with VMware vCenter and can monitor virtual machine memory and CPU levels to make resource allocation decisions for the virtual server farm pool. That is, if memory or CPU utilization exceeds a threshold that you configure, the ACE can bring additional virtual machines from the Secondary DC into the virtual server farm.
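To make the threshold behavior concrete, here’s a minimal sketch in Python of the kind of decision DWS makes. This is a hypothetical illustration only, not the actual ACE implementation or the real vCenter API; the stats format and threshold values are assumptions.

```python
def should_burst(vm_stats, cpu_threshold=80, mem_threshold=80):
    """Decide whether to expand the server farm with remote VMs.

    vm_stats: list of dicts with per-VM utilization percentages,
    e.g. {"cpu": 95, "mem": 60} (hypothetical format).
    Returns True when average CPU or memory utilization of the
    local farm exceeds the configured threshold.
    """
    avg_cpu = sum(vm["cpu"] for vm in vm_stats) / len(vm_stats)
    avg_mem = sum(vm["mem"] for vm in vm_stats) / len(vm_stats)
    return avg_cpu > cpu_threshold or avg_mem > mem_threshold

# Two local VMs: average CPU is 90%, above the 80% threshold,
# so the ACE would start including Secondary DC VMs in the farm.
local_vms = [{"cpu": 95, "mem": 60}, {"cpu": 85, "mem": 55}]
print(should_burst(local_vms))  # True
```

The key point is that the trigger is resource utilization of the local VMs, not connection counts on the ACE itself.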
The DWS feature also integrates with OTV on the Nexus 7000, which is really interesting. The ACE interacts with the Nexus 7000 by continuously polling it (every 60 seconds) for the “locality” of the virtual machines. (Note that you can also see VM locality via the command “show otv route” executed on the Nexus 7000.) The end result is that the ACE knows which VMs are local (Primary DC) and which VMs are remote (Secondary DC), and you can configure your virtual server farm to primarily use the local VMs and then burst to the remote VMs when thresholds on the local VMs are exceeded.
Looking at figure 2, we can see that the ACE VIP is sending client-server traffic to the VM in the remote location. In this case, a threshold (your choice, either CPU or memory) has been exceeded on the virtual machines in the Primary DC, so the ACE is now including the VMs in the Secondary DC as part of the virtual server farm rotation. It’s important to be clear that new transactions are not all being sent to the Secondary DC; rather, new transactions are still being load balanced across the whole pool, which now includes the VMs in the Secondary DC.
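The farm rotation described above can be sketched as follows. Again, this is a hedged Python illustration of the concept, not ACE’s actual algorithm: the pool is local-only under normal load, and once the burst condition trips, the load balancer simply rotates across the combined local + remote pool.

```python
import itertools

def build_rotation(local_vms, remote_vms, burst):
    """Return the active server-farm pool: local VMs only under
    normal load, local + remote VMs once a burst threshold trips."""
    return local_vms + remote_vms if burst else list(local_vms)

def round_robin(pool):
    """A simple round-robin load balancer over the active pool."""
    return itertools.cycle(pool)

# Hypothetical VM names; locality would come from polling OTV.
pool = build_rotation(["vm1-local", "vm2-local"], ["vm3-remote"], burst=True)
lb = round_robin(pool)
first_three = [next(lb) for _ in range(3)]
# first_three == ["vm1-local", "vm2-local", "vm3-remote"]
```

Note how the remote VM receives only its share of new transactions; the Secondary DC augments the pool rather than taking it over.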
OTV (Overlay Transport Virtualization), a dynamic MAC-in-IP technology, is the critical DCI link that enables this capacity expansion use case. Remember, the servers in the server farm need to be part of the same Layer 2 domain, and OTV provides the necessary LAN extension between the Data Centers.
We’re currently testing some use case ideas in SASU (Systems Architecture and Strategy Unit) that include the ACE + OTV Dynamic Workload Scaling feature, and we expect to circle back on this topic shortly. If you have any thoughts or feedback on this new feature and the capacity expansion use case, please feel free to comment below.
Thanks for reading.
Tags: ACE, Burst, Capacity Expansion, Cisco, cloud, Cloud Burst, data center, Data Center Interconnect, DC, DCI, DWS, Dynamic Workload Scaling, locality, Nexus 7000, OTV, SASU, Systems Architecture and Strategy Unit, virtual machine, VM, VM Locality