Cisco IT has, as you may have heard, been shutting down some of its Data Centers. We’ve closed dozens of older facilities – some large, many small – in the past 10 years, consolidating into purpose-built rooms that better enable our business.
The latest to close was the company’s longest-running Development Data Center, located at Cisco headquarters in San Jose. It’s one of two server environments that I worked in daily when I joined Cisco in the late 1990s. Even after designing, working in, and touring many other facilities since, when someone talks about what a Data Center is that room still appears in my mind’s eye.
I took a final walk around the mostly-empty space recently. I hadn’t been in there in years and it was a bit like visiting my old high school. Things looked slightly smaller than I remembered, and several items triggered unexpected memories.
Cisco recently started shipping the newest member of the UCS family – the storage-optimized UCS C3260 Rack Server. Data centers these days are bursting at the seams with unstructured data from emerging applications and services. According to IDC, 80% of data is unstructured and continues to grow at a 16.2% CAGR. The ability for data centers to economically and rapidly ingest, index, analyze, and archive all this data is top of mind.
Cisco first introduced the C3000 family of storage-optimized UCS rack servers in 2014 with the launch of the UCS C3160 rack server last fall. While on the surface the two server models may look …
You’ve seen the data points: 30 million new devices connected to the Internet each week. A whopping 50 billion connected by 2020. This surge of connectivity – driven largely by the Internet of Everything – is creating vast new opportunities for digitization as industries transform.
This tidal wave of connected devices is also reshaping the data center. Why? Because every single thing connected to the Internet has a MAC and IP address, and this enormous growth will unleash more addresses than anyone can imagine. These addresses need, feed, and breed applications, whether by running an app or providing it data. And as this happens at an exponential scale, the data center becomes the key to making it all work.
We know that the applications will be everywhere, and that’s a good thing. Apps will continue to be in the enterprise data center – the private cloud—where they’ve been running for a long time. And they’ll run in cloud-based data centers. They’ll also run at the edge – whether the edge is a branch office, your home, or even a part of your body.
For applications to perform optimally no matter where they are, the infrastructure has to understand the language of applications. We have to teach it. And this is where policy comes in. For us, policy is teaching the infrastructure the language of the application so that the application can tell the infrastructure, “Here is what I need to run at my best.”
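To make that idea concrete, here is a minimal, hypothetical sketch in Python of what it looks like when an application declares its needs as policy instead of as device-by-device configuration. The names and fields here are invented for illustration; they are not Cisco's actual policy model.

```python
# Hypothetical sketch: an application declares its intent as policy,
# and a translation step turns that intent into infrastructure behavior.
# All names and fields below are illustrative, not Cisco's policy model.

app_policy = {
    "application": "order-entry",
    "tiers": {
        "web": {"exposure": "internet"},
        "db":  {"exposure": "internal"},
    },
    # Contracts: who may talk to whom, and what the app needs to run well.
    "contracts": [
        {"from": "web", "to": "db", "port": 5432, "latency_budget_ms": 5},
    ],
}

def render(policy):
    """Translate application intent into infrastructure requirements."""
    for c in policy["contracts"]:
        print(f'{policy["application"]}: permit {c["from"]} -> {c["to"]} '
              f'on port {c["port"]}, latency budget {c["latency_budget_ms"]} ms')

render(app_policy)
```

The point is the direction of the conversation: the application states what it needs, and the infrastructure works out the how.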
This is an area where Cisco has a lot of skin in the game. After all, no one knows Data Center infrastructure better than we do.
As I’ve mentioned in more than one post, I enjoy touring Data Centers. One detail I pay attention to during these visits is signage.
Are patch panels and structured cabling labeled, so it’s easy to trace connections between devices? Is electrical circuit information provided at each cabinet location, so power feeds can be quickly identified? Do alarms (fire and otherwise) have instructions nearby, telling visitors what their various audio tones and light patterns mean?
It seems people sometimes view SDN as addressing rather esoteric use cases and situations. The reality is that while there are instances of 'out there' stuff happening, there are many situations where we see customers leverage the technology to address pretty straightforward issues. And these issues are often similar across different businesses, verticals, and customer types.
Aftab Rasool is Senior Manager, Data Center Infrastructure and Service Design Operations for Du. I recently had the chance to talk with him about Cisco’s flagship SDN solution – Application Centric Infrastructure (ACI) – and Du’s experience with it. I found there were many instances of Du using ACI to simply make traditional challenges easier to deal with.
Du is an Information & Communications Technology (ICT) company based in Dubai. They offer a broad range of services to both consumer and business markets, including triple play to the home, mobile voice/data, and hosting. The nature of their business means the data center, and thus the data center network, is critical to their success. They need a solution that effectively handles the challenges of both deployment and operations…and that's where ACI comes in.
I’ll quickly use the metaphor of driving to summarize the challenges Aftab covers in the video. He addresses issues that are both ‘in the rear view mirror’ as well as ‘in the windshield’ – with both being generalizable to lots of other customers. What I mean is that there are issues from the past that, though they are largely behind the car and visible in the mirror, still impact the driving experience. There are also issues on the horizon that are visible through the windshield, but are just now starting to come into focus and have effect.
Rear view mirror issues – These are challenges as basic as the scalability limits associated with spanning tree, or suboptimal use of bandwidth, also due to spanning tree limitations. ACI addresses these issues: there is no spanning tree in the fabric, and Equal Cost Multipathing (ECMP) allows use of all links. Additionally, BiDi optics allow the existing 10G fiber plant to be reused for 40G upgrades, obviating the expense and hassle of fiber upgrades. As a result, the ACI fabric, based on Nexus 9000 switches, provides all the performance and capacity Du needs.
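To illustrate the ECMP point, here is a small Python sketch of 5-tuple flow hashing. It is illustrative only, not how the Nexus 9000 actually implements it: where spanning tree would block every redundant uplink, ECMP hashes each flow onto one of several equal-cost links so that all of them carry traffic.

```python
# A minimal sketch of why ECMP uses all links where spanning tree cannot:
# spanning tree blocks redundant paths, while ECMP hashes each flow onto
# one of N equal-cost uplinks. Details are illustrative, not Nexus internals.
import hashlib

UPLINKS = ["leaf1-spine1", "leaf1-spine2", "leaf1-spine3", "leaf1-spine4"]

def ecmp_pick(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow 5-tuple so packets of one flow stay on one link."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return UPLINKS[int.from_bytes(digest[:4], "big") % len(UPLINKS)]

# Different flows hash onto different uplinks, so every link carries
# traffic instead of spanning tree leaving all but one blocked.
for port in (49152, 49153, 49154, 49155):
    print(port, "->", ecmp_pick("10.0.0.5", "10.0.1.9", port, 443))
```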
Windshield issues – These are represented by a range of things that stem from the business's need for speed, a need that runs headlong into the complexity of most data centers. Speed through automation is becoming more and more critical, as is simplifying the operating environment, particularly as the business scales. Within this context, Aftab mentioned both provisioning and troubleshooting.
Provisioning: Without ACI, provisioning involved getting into each individual switch and making the requisite changes – configuring VLANs, L3, and so on. It also required going into L4-7 services devices to ensure they were configured properly and worked in concert with the L2 and L3 configurations. This device-by-device configuration was not only time consuming, it also created the potential for human error. With ACI, these and other types of activities are automated and happen with a couple of clicks.
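As a rough illustration of that shift (not Du's actual workflow), here is what controller-driven provisioning can look like against the APIC REST API in Python. The controller address and credentials are placeholders, and while the object classes (fvTenant, fvAp, fvAEPg) follow the APIC REST object model, the details should be verified against the APIC documentation.

```python
# A rough sketch of ACI-style automated provisioning: instead of logging in
# to each switch, one API call to the APIC controller describes the intent
# (tenant / application profile / endpoint groups) and the fabric configures
# itself. Host name and credentials are placeholders.
import requests

APIC = "https://apic.example.com"   # placeholder controller address

session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json", verify=False, json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
})

# One declarative POST replaces box-by-box VLAN/L3/services configuration.
tenant = {
    "fvTenant": {
        "attributes": {"name": "OrderEntry"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "web-app"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "web"}}},
                    {"fvAEPg": {"attributes": {"name": "db"}}},
                ],
            }
        }],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", verify=False, json=tenant)
print(resp.status_code)
```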
Troubleshooting: Before ACI, troubleshooting was complicated and time consuming, in part because the team had to trawl through each switch and examine various link-by-link characteristics to check for errors. With ACI, health scores make it easy and fast to pinpoint where the problem is.
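Along the same lines, purely as a sketch, a health score query against the APIC might look like the following. It reuses the session from the provisioning sketch above; the class and query parameter names follow the APIC REST API, but verify them against the documentation.

```python
# A sketch of pulling fabric health from the controller instead of trawling
# switch by switch: ask the APIC for tenants with health scores rolled in.
# Class and parameter names should be confirmed against the APIC docs.
resp = session.get(
    f"{APIC}/api/class/fvTenant.json",
    params={"rsp-subtree-include": "health"},
    verify=False,
)
for obj in resp.json().get("imdata", []):
    attrs = obj["fvTenant"]["attributes"]
    kids = obj["fvTenant"].get("children", [])
    score = next((k["healthInst"]["attributes"]["cur"]
                  for k in kids if "healthInst" in k), "n/a")
    print(f'tenant {attrs["name"]}: health {score}')
```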
Please take a few minutes to check out what Aftab has to say about these, and other aspects of his experience with ACI at Du.