If you come to Cisco’s corporate headquarters, chances are good (especially if you’re traveling internationally) that you will fly into SFO, the airport code for San Francisco International Airport. That point has virtually nothing to do with the rest of what you’re about to read, other than the fact that those same three letters, SFO, capture three key takeaways from an outstanding InfoWorld product review of Application Centric Infrastructure (ACI). When you think about ACI, think about SFO:
Simple. Fast. Open.
I won’t spend much space on this, as I’d much rather you go and read Paul Venezia’s comprehensive and detailed look at ACI. But I do want to highlight a few brief comments on how ACI is Simple, Fast and Open.
“Implementing ACI is surprisingly simple, even in the case of large-scale buildouts.”
“Assuming the cabling is complete, the entire process of standing up an ACI fabric might take only a few minutes from start to finish.”
“Not only is ACI an extremely open architecture…”
“Cisco is actively supporting a community gathering around ACI, and the community is already reaping the rewards of Cisco’s open stance.”
“This is only one example of ACI’s openness and easy scriptability. The upshot is it will be straightforward to integrate ACI into custom automation and management solutions, such as centralized admin tools and self-service portals.”
“This should be made abundantly clear: This isn’t an API bolted onto the supplied administration tools, or running alongside the solution. The API is the administration tool.”
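Because the API is the administration tool, everything the GUI does ultimately comes down to REST calls. As a minimal sketch, here is what an authentication request to the APIC looks like; the controller hostname and credentials are placeholders, and a real client would POST this body and reuse the token it gets back:

```python
import json

# Hypothetical controller address, used only for illustration.
APIC = "https://apic.example.com"

def build_login_request(username, password):
    """Build the URL and JSON body for an APIC REST login.

    ACI authenticates via the aaaLogin endpoint; every subsequent
    API call carries the token that the login response returns.
    """
    url = f"{APIC}/api/aaaLogin.json"
    body = {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    return url, json.dumps(body)

url, body = build_login_request("admin", "secret")
print(url)  # https://apic.example.com/api/aaaLogin.json
```

From there, any script, portal, or automation tool is on equal footing with the supplied admin tools, because they all speak to the same API.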
Simple. Fast. Open.
Whether you’re traveling to Northern California or not, if you’re considering a better way to do networking, think about SFO and ACI.
The OpenStack community gathered in Tokyo for the 12th release of the OpenStack platform, Liberty. The Foundation reported that more than 5,000 people attended the conference, 50% of them for the first time. Attendees came from across the globe, with 46% from APAC and 38% from North America. Job roles varied and included developers (28%), users/operators (25%), managers/architects (19%), sales/marketing (11%), and CxOs (10%).
OpenStack has entered the post-excitement phase, which may appear slow-moving, but reflects deeper customer engagement and a focus on the operationalization of OpenStack. Hundreds of interesting sessions were presented by community members and recorded for those who could not be there. Check out the OpenStack Foundation Summit site for the full schedule. Common themes included overcoming the complexity of configuring, deploying and maintaining OpenStack; retaining workload flexibility; and various approaches to manageability, scalability and extensibility. Having the Summit in Japan was an opportunity to highlight Asia-based users of OpenStack, including Kirin Brewing, Yahoo Japan, NEC, NTT Resonant, GMO Internet, CyberAgent, and Rakuten.
Below are links to the strategic and technical sessions presented on Cisco solutions at the Summit.
Because of the nature of SDN, and specifically the automation available with Cisco’s Application Centric Infrastructure, ACI works very well with cloud orchestration tools such as OpenStack. I attended the OpenStack Summit in Tokyo last week and gave a vBrownBag TechTalk about why Cisco ACI makes OpenStack even better.
So, how does ACI work with OpenStack, and perhaps even make it better? First, ACI offers distributed and scalable networking. It supports floating IP addresses in OpenStack. If you’re not familiar with floating IPs, they are essentially a pool of publicly routable IP addresses, purchased from an ISP, that you assign to instances; they are especially useful for instances (VMs) such as web servers. ACI also saves CPU cycles by putting the actual networking in the Nexus 9000 switches that make up the ACI fabric. Since those switches are built to forward packets, and that’s what they’re good at, why not save the server’s CPU cycles for other things, like running instances?
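To make the floating IP idea concrete, here is a toy Python model of a pool. The addresses and instance names are invented, and a real deployment would manage this through Neutron, but the mechanics are the same: a public address is allocated from a pool and mapped onto an instance, and it can be remapped later without touching the instance itself.

```python
# Toy model of a floating IP pool, purely to illustrate the concept.
class FloatingIPPool:
    def __init__(self, addresses):
        self.available = list(addresses)   # publicly routable addresses
        self.assignments = {}              # floating IP -> instance name

    def associate(self, instance):
        """Take a free floating IP and map it to an instance."""
        ip = self.available.pop(0)
        self.assignments[ip] = instance
        return ip

    def disassociate(self, ip):
        """Return a floating IP to the pool; the instance keeps running."""
        del self.assignments[ip]
        self.available.append(ip)

pool = FloatingIPPool(["203.0.113.10", "203.0.113.11"])
ip = pool.associate("web-server-1")
print(ip)  # 203.0.113.10
```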
OpenStack doesn’t natively orchestrate Layer 4-7 devices such as firewalls and load balancers. With ACI we can stitch these necessary network services into the traffic path in an automated, repeatable way, and we can do it without sacrificing visibility. Automation is important, especially in a private or public cloud that is constantly changing and updating, but if we lose visibility, we lose the ability to troubleshoot easily. In the demo shown in the video above, you will see just how easy it is to troubleshoot problems in ACI. We also gain the ability to act before a problem disrupts the network, because ACI offers easily interpreted health scores for the entire fabric, including hardware and endpoint groups.
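As a sketch of how those health scores can be pulled programmatically, the snippet below builds a class-level query against the APIC REST API. The hostname is a placeholder, and the class name and query form here are assumptions based on ACI’s object model, so treat this as an illustration rather than a reference:

```python
# Sketch of querying fabric health over the APIC REST API using only
# the standard library. The APIC address and token are placeholders;
# a real client would authenticate first and reuse the session cookie.
import urllib.request

APIC = "https://apic.example.com"

def health_query_url(target_class="healthInst"):
    # A class-level query returns every object of that class in the
    # fabric, so one call can sweep health scores fabric-wide.
    return f"{APIC}/api/class/{target_class}.json"

def fetch_health(url, token):
    # The APIC expects the auth token in the APIC-cookie header.
    req = urllib.request.Request(
        url, headers={"Cookie": f"APIC-cookie={token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

print(health_query_url())
```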
ACI also has a very secure model. Not only does it use a white-list model, where traffic is denied by default and allowed only when explicitly configured, it also provides stronger security for multi-tenancy. In a strict overlay solution, if a hypervisor is attacked or compromised, the multi-tenancy model could be deemed insecure. In the ACI fabric, security is enforced at the port level, so even if a hypervisor is attacked, the other tenants remain safe.
In recent versions of ACI we can use OpFlex as a southbound protocol to communicate between OpenStack and ACI. OpFlex gives us deeper integration and more visibility into the OpenStack virtual environment. Instead of attaching hypervisor servers to a physical domain in ACI, we can attach them to a VMM (Virtual Machine Manager) domain. This lets us learn which instances, or VMs, are on which physical server, and it automatically learns IP addresses, MAC addresses, states, and other information. We can also see which networks or port groups contain which hypervisors and instances within our OpenStack environments.
For more information on how Cisco ACI works with OpenStack, go to http://cisco.com/go/aci.
Server load balancing (SLB) has become very common in network deployments as data and video traffic expand at a rapid rate. There are various modes of SLB deployment today, and application load balancing with network address translation (NAT) has become a necessity because of the benefits it provides.
With our latest NX-OS software release, 7.2(1)D1(1) (also known as Gibraltar MR), ITD (Intelligent Traffic Director) supports SLB NAT on the Nexus 7000 Series switches.
In an SLB NAT deployment, clients send traffic to a virtual IP address and need not know the IP addresses of the underlying servers. NAT provides additional security by hiding the real server IPs from the outside world. In virtualized server environments, this NAT capability provides increased flexibility to move real servers across different server pools without their clients noticing. With respect to health monitoring and traffic reassignment, SLB NAT helps applications work seamlessly, without the client being aware of any IP change.
ITD won the Best of Interop 2015 award in the Data Center category.
ITD provides:
Zero-latency load balancing.
CAPEX savings: no service module or external L3/L4 load balancer is needed, and every Nexus port can be used for load balancing.
Resilient, consistent hashing (similar to resilient ECMP).
Bi-directional flow coherency: traffic from A->B and B->A goes to the same node.
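To make this concrete, here is a rough sketch of what an ITD service with destination NAT might look like. The device-group name, service name, addresses, and interface are invented, and exact command syntax varies by NX-OS release, so check the configuration guide for your platform:

```
feature itd

itd device-group WEB-SERVERS
  node ip 10.10.10.11
  node ip 10.10.10.12
  probe icmp

itd WEB-SLB
  device-group WEB-SERVERS
  virtual ip 192.0.2.100 255.255.255.255
  ingress interface Ethernet1/1
  nat destination
  no shutdown
```

Clients see only the virtual IP (192.0.2.100 here); ITD hashes each flow to one of the nodes in the device group and rewrites the destination address toward the chosen real server.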
Learning new skills and using new tools to automate your network can appear to be scary if you don’t have a coding background. But that doesn’t need to be the case…
In a previous blog post, I discussed Cisco’s SDN strategy for the data center. I mentioned that it is built on three key pillars: Application Centric Infrastructure, Programmable Fabric, and Programmable Network. Regarding the third pillar, I wrote that network programmability has largely been the domain of big web service providers, and/or those whose propellers seem to spin faster than others. The reality, however, is that useful tools are available for networks of pretty much any size, and they are within reach of pretty much everybody.
Rather than rattle off a list of cool features that are part of Programmable Network (some of which are summarized here), I thought it more useful to consider common things network people actually do on a daily basis, then show how we can apply programmability tools to do those things with, for lack of a better phrase, “the 3 S’s”:
Speed – enabling you to do things much faster;
Scale – enabling you to do things to a much larger group of devices; and
Stability – enabling you to make far fewer errors (thereby also increasing Security…oops, now that’s 4 S’s…)
In upcoming posts, we will consider use cases such as switch provisioning. For example, say you need to put a bunch of VLANs on a bunch of switches. Unless you have a battalion of minions to carry out your wishes, this is a tedious, time-consuming task. There is a better way, and we’ll show you how.
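As a taste of what those posts will cover, here is a small Python sketch of the VLAN scenario. The switch names and VLANs are invented; a real script would push the rendered commands with a library such as Netmiko rather than just building them, but even this much shows the 3 S’s at work: one loop is faster than typing, it covers every switch, and every device gets an identical, typo-free config.

```python
# Generate the same VLAN configuration for a whole list of switches
# instead of typing it by hand on each one. Names are made up.
SWITCHES = ["leaf-101", "leaf-102", "leaf-103"]
VLANS = {10: "web", 20: "app", 30: "db"}

def vlan_config(vlans):
    """Render the CLI lines that create each VLAN, identically every time."""
    lines = []
    for vlan_id, name in sorted(vlans.items()):
        lines += [f"vlan {vlan_id}", f"  name {name}"]
    return lines

# One identical, error-free config per switch.
jobs = {switch: vlan_config(VLANS) for switch in SWITCHES}
print(len(jobs))  # prints 3
```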
What’s that? You say you’re a network geek, but you moonlight as a server admin? You’ve been using Linux tools to monitor and troubleshoot servers and want to use the same tools for the network? Okay, we can cover that too because tools like ifconfig and tcpdump are all part of the party.
If you can’t wait for the future posts and/or you want to dive deep, this recorded webinar should tide you over.
Anyhow, I need to go carve a pumpkin now…Happy Halloween!
*For music aficionados…Yeah, I know – the link was Heavy Metal not Death Metal, but I used one of my own songs…and this is about as close to Death Metal as I get. That whole guttural screaming thing never worked for me…