Scaling Networks Automatically with ACI
There are many things we think about when considering Software Defined Networking. Mostly it's the controllers, and the ability to apply configurations and policies to our networks. In fact, we almost forget about the hardware, because as long as it's set up properly, we don't NEED to think about it. However, as any good network engineer knows, that is not the end of the hardware story.
Applications are using more and more bandwidth, especially in an East/West traffic flow. We have more applications than we used to. We have things like Big Data and analytics taking up bandwidth. And, if we're lucky, we gain more and more customers who are using our applications or driving up usage of our internal apps. That's actually a good problem to have, but what it means for the network folks is that we need to add more hardware.
How Does ACI Help?
ACI helps in many ways, but in this blog we're concentrating on hardware scalability. So let's first talk about the initial setup, referred to as Automatic Fabric Discovery. It couldn't be easier. Once your switches are racked and cabled, you connect your APICs (Application Policy Infrastructure Controllers) and answer about 10 questions in a CLI wizard: simple things like usernames, passwords, and VTEP (VXLAN Tunnel Endpoint) IP pools. Then just log into the GUI. The GUI will automatically discover the first leaf switch it finds (it uses LLDP for discovery), and you just need to enter a name and node number for that switch. It will then find all the other leaf and spine switches automatically. Again, you just enter a name and node number, and in minutes you'll have a full network topology.
You could even do this programmatically! Make a REST call with a client like Postman, or use a Python script that reads a pre-populated list of node names and numbers.
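As a rough sketch of that scripted approach, the snippet below builds the JSON bodies that register switches with the fabric from a pre-populated list. The `fabricNodeIdentP` class and the `nodeidentpol` endpoint come from the APIC REST API, but the APIC hostname, credentials handling, serial numbers, and node names here are placeholders; treat this as an illustration under those assumptions, not a drop-in script.

```python
import json
import urllib.request

# Placeholder APIC address -- replace with your controller's address.
APIC = "https://apic.example.com"

# Pre-populated list of switches to register: serial, node ID, name.
NODES = [
    {"serial": "SAL1234ABCD", "nodeId": "101", "name": "leaf-101"},
    {"serial": "SAL1234ABCE", "nodeId": "102", "name": "leaf-102"},
    {"serial": "SAL5678WXYZ", "nodeId": "201", "name": "spine-201"},
]

def node_payload(node):
    """Build the JSON body that registers one switch with the fabric."""
    return {
        "fabricNodeIdentP": {
            "attributes": {
                "serial": node["serial"],
                "nodeId": node["nodeId"],
                "name": node["name"],
            }
        }
    }

def register(node, token):
    """POST one node registration to the APIC, using a login token
    obtained earlier from the aaaLogin endpoint."""
    url = APIC + "/api/node/mo/uni/controller/nodeidentpol.json"
    req = urllib.request.Request(
        url,
        data=json.dumps(node_payload(node)).encode(),
        headers={"Cookie": "APIC-cookie=" + token},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Preview the payloads without touching the network.
for n in NODES:
    print(n["name"], "->", json.dumps(node_payload(n)))
```

The same loop works for the initial discovery and for later growth: adding switches is just appending entries to the list and re-running the registrations.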
We were talking about growing our networks to meet bandwidth needs at the beginning of this blog. That's even easier than the initial discovery: we just add leaf switches to get more end ports, or add spine switches in the same way for more fabric bandwidth. Give these names and node numbers, and we're off and running.
Again, we could do this programmatically as well. We can also add descriptions of which rack or building each switch is located in, which helps with later troubleshooting and eases the burden of creating detailed network diagrams.
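To sketch that idea, the snippet below composes location descriptions from an inventory list. The `building` and `rack` fields are our own hypothetical labels, not specific APIC attributes; the resulting strings could be pushed into each node's description through the same REST workflow used for registration, or fed into diagramming tools.

```python
# Hypothetical inventory records -- the building/rack fields are labels
# we chose for illustration, not APIC object attributes.
INVENTORY = [
    {"name": "leaf-101", "building": "DC-East", "rack": "R12"},
    {"name": "leaf-102", "building": "DC-East", "rack": "R13"},
    {"name": "spine-201", "building": "DC-West", "rack": "R02"},
]

def location_descr(rec):
    """Compose a human-readable location string for one switch."""
    return f"{rec['building']} / rack {rec['rack']}"

for rec in INVENTORY:
    print(rec["name"], "->", location_descr(rec))
```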
Best of all, every switch is configured properly. There's no box-by-box configuration, no human-error configuration problems, and no IP addresses to configure, because it all comes from the IP pools and configs set in the APIC.
Cloud-Like Configuration On-Premises
Now we can scale our networks with a few clicks to support application demands. When we combine that with a converged compute system like UCS, or even a hyperconverged solution like HyperFlex, we get even more automated scaling for operational needs. Again, this concentrates heavily on the hardware, but bring in something like CloudCenter to deploy and orchestrate applications from a self-service GUI, and we truly get cloud-like ease of use, but with the control, security, and ROI we want.
For more information on ACI go to http://www.cisco.com/go/aci