You’ve seen the data points: 30 million new devices connected to the Internet each week. A whopping 50 billion connected by 2020. This surge of connectivity – driven largely by the Internet of Everything – is creating vast new opportunities for digitization as industries transform.
This tidal wave of connected devices is also reshaping the data center. Why? Because every single thing connected to the Internet has a MAC and IP address, and this enormous growth will unleash more addresses than anyone can imagine. These addresses need, feed, and breed applications, whether by running an app or providing it data. And as this happens at an exponential scale, the data center becomes the key to making it all work.
We know that the applications will be everywhere, and that’s a good thing. Apps will continue to be in the enterprise data center – the private cloud—where they’ve been running for a long time. And they’ll run in cloud-based data centers. They’ll also run at the edge – whether the edge is a branch office, your home, or even a part of your body.
For applications to perform optimally no matter where they are, the infrastructure has to understand the language of applications. We have to teach it. And this is where policy comes in. For us, policy is teaching the infrastructure the language of the application so that the application can tell the infrastructure, “Here is what I need to run at my best.”
This is an area where Cisco has a lot of skin in the game. After all, no one knows data center infrastructure better than we do.
People sometimes view SDN as addressing rather esoteric use cases. In reality, while some of that "out there" work is happening, far more often we see customers use the technology to tackle pretty straightforward issues – and those issues tend to be similar across businesses, verticals, and customer types.
Aftab Rasool is Senior Manager, Data Center Infrastructure and Service Design Operations for Du. I recently had the chance to talk with him about Cisco’s flagship SDN solution – Application Centric Infrastructure (ACI) – and Du’s experience with it. I found there were many instances of Du using ACI to simply make traditional challenges easier to deal with.
Du is an Information & Communications Technology (ICT) company based in Dubai. They offer a broad range of services to both consumer and business markets, including triple play to the home, mobile voice/data, and hosting. The nature of their business means the data center – and thus the data center network – is critical to their success. They need a solution that handles the challenges of both deployment and operations effectively…and that's where ACI comes in.
I'll use a driving metaphor to summarize the challenges Aftab covers in the video. He addresses issues that are both "in the rear view mirror" and "in the windshield" – both of which generalize to many other customers. There are issues from the past that, though largely behind the car and visible only in the mirror, still affect the driving experience. And there are issues on the horizon, visible through the windshield, that are just now coming into focus and starting to have an effect.
Rear view mirror issues – These are challenges as basic as the scalability limits of spanning tree, or the suboptimal use of bandwidth caused by spanning tree blocking redundant links. ACI addresses them directly: there is no spanning tree in the fabric, and Equal Cost Multi-Pathing (ECMP) puts every link to work. Additionally, BiDi (bidirectional) optics let Du run 40G over its existing 10G fiber plant, obviating the expense and hassle of a fiber upgrade. As a result, the ACI fabric, based on Nexus 9000s, provides all the performance and capacity Du needs.
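To make the ECMP point concrete, here is a minimal sketch of the idea in Python. Real switches compute the hash in hardware, and the exact fields and hash function vary by platform – this toy function is an illustration only – but the principle is the same: each flow's 5-tuple is hashed to pick one of the equal-cost uplinks, so every link carries traffic while packets within a flow stay in order.

```python
import hashlib

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto, num_uplinks):
    """Hash a flow's 5-tuple to one of N equal-cost uplinks.

    Illustrative only: hardware uses different hash functions, but the
    idea is the same -- every packet of a given flow takes the same
    path, and distinct flows spread across all available links (none
    sit idle, as they would under spanning tree's blocked ports).
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_uplinks

# The same flow always lands on the same uplink, preserving packet order.
link_a = pick_uplink("10.0.0.1", "10.0.1.5", 33001, 443, "tcp", 4)
link_b = pick_uplink("10.0.0.1", "10.0.1.5", 33001, 443, "tcp", 4)
assert link_a == link_b

# Many distinct flows spread across the uplinks.
links_used = {pick_uplink(f"10.0.0.{i}", "10.0.1.5", 30000 + i, 443, "tcp", 4)
              for i in range(100)}
print(f"uplinks in use: {sorted(links_used)}")
```

Contrast this with spanning tree, which avoids loops by disabling redundant links entirely: with four uplinks, three would carry no traffic at all.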
Windshield issues – These stem from the business's need for speed, which runs head-on into the complexity of most data centers. Speed through automation is becoming ever more critical, as is simplifying the operating environment, particularly as the business scales. Within this context, Aftab mentioned both provisioning and troubleshooting.
Provisioning: Without ACI, provisioning meant logging into each individual switch and making the requisite changes – configuring VLANs, L3, and so on. It also meant going into the L4-7 services devices to ensure they were configured properly and worked in concert with the L2 and L3 configuration. This device-by-device approach was not only time-consuming but also prone to human error. With ACI, these and other activities are automated and happen with a couple of clicks.
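The same clicks can also be driven through the APIC's REST API. As a rough sketch of what that declarative style looks like, the snippet below builds a single JSON document describing a tenant, an application profile, and its endpoint groups, using class names from the ACI object model (fvTenant, fvAp, fvAEPg). The controller URL, tenant, and EPG names here are hypothetical, and a real script would first authenticate and then POST the payload; the point is that one document replaces switch-by-switch VLAN and L3 configuration.

```python
import json

# Hypothetical controller endpoint; a real client would first log in
# via the APIC's authentication API and reuse the session token.
APIC_URL = "https://apic.example.com/api/mo/uni.json"

def tenant_payload(tenant, app_profile, epgs):
    """Build one declarative payload: a tenant, an application profile,
    and its endpoint groups (EPGs). Posting a document like this to the
    APIC provisions the whole fabric in one step."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [{
                "fvAp": {
                    "attributes": {"name": app_profile},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": epg}}}
                        for epg in epgs
                    ],
                }
            }],
        }
    }

payload = tenant_payload("WebShop", "storefront", ["web", "app", "db"])
print(json.dumps(payload, indent=2))
# e.g. requests.post(APIC_URL, json=payload, cookies=session, verify=False)
```

Because the intent lives in one place, a change to the application means editing one document rather than touching every switch it spans.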
Troubleshooting: Before ACI, troubleshooting was complicated and time-consuming, in part because the team had to trawl through each switch, examining link-by-link characteristics to check for errors. With ACI, health scores make it fast and easy to pinpoint where the problem is.
Please take a few minutes to check out what Aftab has to say about these, and other aspects of his experience with ACI at Du.
Yesterday, Chuck Robbins tweeted that we hit our 1000th customer – Danske Bank, the largest financial institution in Denmark.
Our momentum and success with the Nexus 9000 (N9K) and ACI is largely due to our continued focus on customer needs – both now and well into the future. And our broad ecosystem of industry leaders remains committed to delivering integrated solutions for our mutual customers.
Just a little over three years ago, the team behind the N9K and ACI – Insieme Networks – began by listening to a variety of customers on what their business requirements were at the time and into the foreseeable future. What we learned is that modern enterprises were looking for an application-centric approach using open standards to deliver today’s business services.
This June in San Diego, I had the pleasure of meeting Dan Stanton, Trainer and Subject Matter Expert at NterOne, a global IT training and consulting company. Dan shared the challenges he faces in creating great digital experiences for NterOne's students. He and his team support virtual IT training across many time zones and undertake twenty or so dynamic reconfigurations every week. NterOne is like many enterprise customers – just sped up to a much higher rate of change.
Dan runs a multi-hypervisor environment, which makes ACI a natural fit. Please listen to Dan share his use cases and how they positively impact NterOne's business in the interview below:
As application performance, security, and delivery become more critical, and as the need for network automation grows, the vision of an architecture that allows easy integration of L4-7 services into the data center fabric is increasingly being validated. We've seen at least two services – load balancers and firewalls – in every application tier our customers deploy. Traditional deployment models are also shifting: traffic has evolved from north-south (perimeter-based approaches) to east-west patterns, bringing new requirements for scale, security, and application performance.
The Cisco Application Centric Infrastructure (ACI) architecture was designed to make network services both easy to integrate and easy to scale. ACI can manage physical switches, virtual switches in hypervisors, and L4-7 services from multiple vendors, stitching everything together under the umbrella of applications. Recognizing that customers have a choice of L4-7 vendors, ACI takes an open approach to automating network services from multiple vendors – in both physical and virtual form factors – through its policy-driven architecture, delivering greater operational simplicity to customers.
The traditional way to insert L4-L7 devices, from any vendor, into the network is to manually steer traffic through them and configure each device independently. Today, that manual steering is done by a network administrator carefully provisioning VLANs, VRFs, subnets, and so on.
While ACI supports the traditional mode of L4-L7 insertion for any vendor's device, it also provides capabilities for automating the entire workflow and tying it to applications. Automating L4-L7 integration through the Application Policy Infrastructure Controller (APIC) involves two steps:
1. Automatically steering traffic from one application tier through a chain of L4-L7 service devices, and finally back to another application tier.
2. Automatically configuring every L4-L7 device in the chain as applications are deployed and modified.
Step (2) is the ultimate level of automation: configuring all L4-L7 devices as the application requires and keeping that configuration current as the application's life cycle changes. For example, customers add security policies to their firewalls but rarely clear them, since it is hard to correlate which policies to remove when an application goes away, or when organizational changes mean the relevant SME has moved on. With APIC managing both the application tiers and the configuration on the L4-7 devices, configuration is added and removed dynamically as applications come and go.
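The stale-firewall-rule problem above comes down to bookkeeping: if every rule is recorded against the application that needed it, retiring the application can remove exactly its rules. This toy Python model (not APIC code – all names here are invented for illustration, and it assumes rules are not shared between applications) shows the lifecycle-tied behavior the controller provides:

```python
class PolicyLifecycle:
    """Toy model of lifecycle-tied policy management: each firewall
    rule is recorded against the application that required it, so
    retiring the application removes its rules and nothing is left
    behind for a future admin to puzzle over.

    Simplifying assumption: no rule is shared by two applications."""

    def __init__(self):
        self.rules_by_app = {}   # app name -> rules created for it
        self.firewall = set()    # rules currently programmed

    def deploy_app(self, app, rules):
        self.rules_by_app[app] = list(rules)
        self.firewall.update(rules)

    def retire_app(self, app):
        # Remove exactly the rules this app brought in.
        for rule in self.rules_by_app.pop(app, []):
            self.firewall.discard(rule)

fw = PolicyLifecycle()
fw.deploy_app("crm", ["permit tcp any crm-web 443",
                      "permit tcp crm-web crm-db 1433"])
fw.deploy_app("hr", ["permit tcp any hr-web 443"])
fw.retire_app("crm")

# Only the hr rule remains; the crm rules were cleaned up automatically.
assert fw.firewall == {"permit tcp any hr-web 443"}
```

Without that mapping from application to configuration, the safe default is to leave old rules in place – which is exactly how firewalls accumulate policies nobody dares delete.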
Since day one, APIC has supported the traditional, manual way of inserting L4-L7 services from any L4-L7 vendor. ACI also supports a fully automated mode, called "Managed," in which both the network service stitching and the device configuration are performed as described in steps 1 and 2 above. Managed mode requires a "device package," typically provided by the L4-L7 ecosystem partner and jointly qualified for ACI by Cisco and the partner.
A second automation mode, called "Unmanaged," will be introduced; it performs network stitching only, as described in step 1. Customers have found the traditional manual mode error-prone and hard to automate as workloads move around, so "Unmanaged" mode provides a middle ground between the traditional L4-L7 approach and the fully automated "Managed" mode.