


With the opening of the new Cisco Datacenter in RTP, I thought it would be cool to reach out to a few of the guys responsible for the design and ask them a few questions. So, I got together with Jag Kahlon (Cisco IT Architect) and John Banner (Cisco IT Network Engineer) for a quick chat.

Me: What were the primary objectives for the new datacenter?

Jag Kahlon (Cisco IT Architect): The new RTP1 Data Center is being built with two primary objectives in mind. First is the need for a DR capability and second is the need for a non-Production environment. The traditional problem that we get into when we try to build a DR environment is: How do we justify spending a whole lot of money to buy, install and maintain a bunch of machines that you hope you will never have to use? The answer to that is to build a non-Prod environment that we can quickly and efficiently repurpose to provide DR services if and when the need arises.

Me: This is very similar to the DR design we are doing as part of the Cisco Validated Design Zone. We are currently looking to provide customers with this exact model, enabling them to build a DR site that takes advantage of Cisco LISP technology to help automate and speed up the repurposing process, with the secondary site playing the non-Production/DR role.

Me: How does the new datacenter differ from the traditional disaster recovery model?

John Banner (Cisco IT Network Engineer): In our traditional DC, we use a pretty standard model that has not significantly changed for probably 5 to 10 years. Within this model, we have our hosts connected to a set of L2 Switches. The L2 switches are aggregated together onto a pair of Routers. In our larger DCs, several equipment groups are further aggregated together onto a pair of DC gateways which in turn connect to the core network for the site. In addition, the DMZ has a completely parallel infrastructure, typically consisting of a separate group of systems which connects to the DMZ core. In this model, each system set is a self-contained unit that can be replicated as needed for scale. Each group is assigned a function, such as Production, non-Production, DMZ, simulated DMZ (non-Prod), etc.
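To make that layout concrete, here is a rough sketch of the traditional model in Python. It is my own illustration with made-up group names, not Cisco IT's actual inventory:

```python
from dataclasses import dataclass, field

# Illustrative only: in the traditional model each physical group is a
# self-contained unit (hosts behind a set of L2 switches, aggregated onto
# a router pair) and is dedicated to exactly one function.
@dataclass
class PhysicalGroup:
    name: str
    function: str                 # e.g. "Production", "non-Production", "DMZ"
    l2_switches: int = 2
    router_pair: str = "agg-router-pair"
    hosts: list = field(default_factory=list)

# Every function needs its own parallel hardware, so spare capacity in one
# group cannot help another group that is running hot.
groups = [
    PhysicalGroup("group-1", "Production"),
    PhysicalGroup("group-2", "non-Production"),
    PhysicalGroup("group-3", "DMZ"),
    PhysicalGroup("group-4", "simulated DMZ (non-Prod)"),
]
```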

Me: What kinds of problems did you run into with the standard DR model?

John Banner (Cisco IT Network Engineer): The problem with this model is that you can’t share resources between groups or tenants. Grouping systems is great in the same way that servers are great -- if a server fails it only impacts what’s on that server. Grouping systems is bad in the same way that servers are bad -- one may be sitting relatively idle while the one next door doesn’t have enough resources to do its job.

The answer with servers was to implement virtualization using VMware. The answer on the network is similar.

Me: Yes, the enterprise and service provider markets are adopting this model. We see a large number of them adopting UCS with VMware, either standalone or in partner stack offerings like vBlock or FlexPod.

Me: So these two concepts were built into the new datacenter?

Jag Kahlon (Cisco IT Architect): In RTP1, instead of building a large number of smaller switching and routing groups, each being relatively under-utilized in terms of what the network hardware can support, we built two large aggregated groups, which we have then virtually segmented into logical tenants using VRF [Virtual Routing and Forwarding] technology. In this way, we can provide the same security and logical containment that we get with multiple physical tenants, but the resources can be shared across multiple environments. A single UCS chassis, which can only be connected to a single physical group, can serve any logical group that exists within the physical stack. As a result, we can build our DR, non-Prod, DMZ and simulated DMZ networks all within the single physical stack, allocate resources where they are needed, and reallocate those resources as needed without having to physically touch the devices. In addition, we can now make better use of the network resources (just as we do with VMware on the UCS chassis), and save money on hardware, power, cabling, etc. at the same time.
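As a rough illustration of the idea (my own sketch with made-up VRF and chassis names, not the actual RTP1 configuration), a single physical aggregation stack carries several logical tenants, and a UCS chassis cabled to that stack can be pointed at any of them:

```python
# Sketch of VRF-based logical tenants sharing one physical aggregation
# stack. Names are hypothetical; a real deployment would carry this in
# device VRF configuration, not Python.
physical_stack = {
    "name": "rtp1-agg-1",
    "vrfs": {
        "vrf-dr":      "DR (repurposed from non-Prod when invoked)",
        "vrf-nonprod": "non-Production",
        "vrf-dmz":     "DMZ",
        "vrf-sim-dmz": "simulated DMZ (non-Prod)",
    },
}

def assign_chassis(chassis: str, vrf: str) -> str:
    """A UCS chassis is cabled to the physical stack once; mapping it to a
    logical tenant is just a (re)assignment to a VRF, with no re-cabling."""
    if vrf not in physical_stack["vrfs"]:
        raise ValueError(f"unknown tenant {vrf}")
    return f"{chassis} now serves {vrf} on {physical_stack['name']}"

print(assign_chassis("ucs-chassis-07", "vrf-nonprod"))
```

The point of the sketch is simply that tenant membership becomes a logical setting rather than a cabling decision.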

Me: This is very similar to the Cisco® Virtualized Multi-Tenant Data Center (VMDC). It’s great to see Cisco on Cisco.

Me: What about the storage side of the house?

Jag Kahlon (Cisco IT Architect): In addition to the above, we have also consolidated all the storage into another physical group that is reachable via either the Storage or Data network. Once we have had time to ensure that the Operations teams are comfortable with the new architecture and design, we plan to combine the two physical storage groups into a single logical group, probably using Cisco’s FabricPath technology. We are also looking to significantly reduce the time to provision non-Prod instances that need to be refreshed from Production, using a storage-based refresh mechanism. This will help us reduce the storage footprint compared to the solution we have in place today.
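The storage-based refresh Jag describes could look roughly like the snapshot-and-clone flow sketched below. The function names are hypothetical placeholders, not EMC or NetApp API calls:

```python
# Hedged sketch of a storage-based non-Prod refresh: present a clone of a
# production snapshot instead of copying data host by host. Every function
# here is a hypothetical stand-in for an array-side operation.
def take_snapshot(volume: str) -> str:
    return f"{volume}@refresh-snap"

def clone_to(snapshot: str, target_volume: str) -> str:
    # A space-efficient clone keeps the storage footprint small, which is
    # the footprint reduction Jag mentions.
    return f"clone of {snapshot} presented as {target_volume}"

def refresh_nonprod(prod_volume: str, nonprod_volume: str) -> str:
    snap = take_snapshot(prod_volume)
    return clone_to(snap, nonprod_volume)

print(refresh_nonprod("prod-app-data", "nonprod-app-data"))
```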

Me: What storage vendors do we have in this new datacenter?

Jag Kahlon (Cisco IT Architect): Our strategic partners, EMC and NetApp.

Me: So what was the key to achieving a multipurpose DR datacenter?

Jag Kahlon (Cisco IT Architect): In order to efficiently meet the objectives for which this DC has been built, we moved to aggressively virtualize the non-Production and DR infrastructure to ensure we can move a significant portion of Cisco IT compute onto a virtual solution. A key use case we are aiming for is capacity sharing between non-Production and DR; here, VRF and virtualization help to make sure an ESX farm can provision VMs on different logical VRF segments. This allows us to use the same UCS blades to provision VMs for both non-Production and DR. Since only one of the two, non-Production or DR, will be in use at any particular time, we can achieve better compute utilization than with existing models. The ability to repurpose non-Production for DR is key.
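Here is a rough sketch of that repurpose step, with hypothetical names throughout; the real workflow would drive UCS service profiles, VMware, and VRF mappings rather than a Python dictionary:

```python
# Sketch: an ESX farm on shared UCS blades normally carries non-Production
# VMs; when DR is invoked, the same capacity is reassigned to the DR VRF
# segment. Everything here is illustrative.
farm = {
    "name": "rtp1-esx-farm-1",
    "blades": ["blade-1", "blade-2", "blade-3", "blade-4"],
    "tenant_vrf": "vrf-nonprod",   # steady state: non-Production
}

def invoke_dr(esx_farm: dict) -> dict:
    """Repurpose the non-Prod capacity for DR: same blades, same cabling,
    only the logical (VRF) assignment and the workloads change."""
    repurposed = dict(esx_farm)
    repurposed["tenant_vrf"] = "vrf-dr"
    return repurposed

print(invoke_dr(farm)["tenant_vrf"])   # -> 'vrf-dr'
```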

Here are a few links for the new RTP Datacenter.
