
Update: Cisco IT and the Nexus Deployment

March 9, 2009 at 12:00 pm PST

Here is a quick update from Sidney on the progress with the Nexus and unified fabric deployment in Cisco’s production data centers. Beyond Sidney’s update, you can get more details on the project here.

New Thoughts on Data Center Temperature Management

March 9, 2009 at 12:00 pm PST

Here are some interesting thoughts from Doug Alger on how some companies are looking to deal with their excess data center heat. This is a good example of how thinking is evolving on the topic. While we expect power and cooling to continue to be a hot topic, as we get smarter about the subject, we will have an easier time getting a handle on things.

Data Center Interconnect

So, you have moved your server virtualization pilots into limited production and are in the process of scaling this out such that, at some point, it will move from limited production to a massive rollout across the entire enterprise. Or maybe you are already there. In any case, the time has probably come, or will soon, when you realize that doing VMotion or DRS or otherwise moving workloads across physical servers in the Data Center (DC) is extremely useful, but wouldn’t it be cool to be able to do that across servers in different DCs? There are many reasons this becomes interesting, now and in the future. Maybe power is cheaper in DC B than in DC A. Maybe there are no more available resources for that next workload in DC A, but there are in DC B. Maybe if you move workload from A to B you can shut down servers, heck maybe entire racks, in A for significant segments of time. You get the point: there are and will be compelling reasons to want to move workloads between DCs.

Great, but how do you do it? Typically, DCs are connected via routers, which implies a Layer 3 boundary between them. If you are reading this, I’m going to assume you have some involvement in running a network, which means you probably don’t have a lot of spare time, which means you are probably not terribly interested in readdressing lots of IP devices. Thus, we have a bit of an issue. The issue is handled by extending the Layer 2 domain across DCs, and there are several different ways to accomplish this.

Before we discuss the mechanism by which we can transport that L2 domain across DCs, I want to first mention that this whole DC Interconnect topic is about much more than just transport. When a given workload moves from DC A to DC B, what implications does that have for its storage target? If a user located near DC A has her application dynamically moved to DC B, she will probably never know if DC B is connected via 10GbE over dark fiber. But if DC B is far away over slow links, she could have an issue. My point here is that incorporating intelligent Fibre Channel switching, such that the SAN is aware of VM activity, is important. Incorporating WAAS solutions as part of the overall design (particularly in scenarios like the one above) is important. There is a lot to consider as we move forward with these designs.

Before we go into any detail on those topics, we need to think through the basic transport between the DCs, which can be dumped into one of three different buckets:

1) Dark fiber
2) Vanilla IP
3) MPLS

I don’t have time to go into all of these in detail now, but let’s consider dark fiber, the option that provides us with the most flexibility. Before proceeding, however, we should stop to recognize that in extending the L2 domain between sites we are, theoretically, undoing what we have spent the past couple of decades building. Many years ago at my first networking job (yes, I was once a customer, too), I remember broadcast storms emanating from one of our Midwest sites taking down a West Coast site. Not too long after this happened a few times, we decided introducing L3 between sites might be a good idea! So, this notion of extending L2 seems to be a reversion to the dark ages of networking: Spanning Tree architectures become complex as the diameter/topology increases, Spanning Tree convergence or failure within one DC affects all other DCs connected at L2, Spanning Tree becomes fragile during link flaps, and so on. STOP the madness!
Craig, what in the name of all that is sacred in networking are you suggesting this for?! Well, luckily we have progressed a bit since Radia Perlman did her thing and 802.1d was blessed. If you have Catalyst 6500s or Nexus 7000s interconnecting your DCs, you can use the Virtual Switching System (VSS) or virtual port channels (vPCs). VSS basically aggregates two Catalyst 6500 switches into a single virtual switch, so you have a single logical switch running across two physical DCs: pretty cool, and a radically different scenario than just schlepping Spanning Tree across them; it is simpler, more scalable, and more resilient. A vPC allows links that are physically connected to two different Nexus 7000s to appear as a single port channel to a third downstream device (e.g., another switch or even a server). Among other things, vPCs increase usable bandwidth by allowing traffic to flow over the otherwise blocked ports/links in a standard dual-homed STP design (a minimal configuration sketch follows at the end of this post). In any event, VSS and vPCs are clearly simpler than running VPLS or MPLS, though those are certainly viable options as well. If you don’t have the option of dark fiber, and your SP is already providing MPLS between sites, then that is an obvious scenario in which you would want to use MPLS.

Are you already doing DC Interconnect? What are your experiences? What are the big wins, and what are the pain points? Thanks for whatever you can share with the rest of the readers.
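Since vPC keeps coming up in this context, here is a minimal, illustrative sketch of what the Nexus 7000 side of such a design might look like. This is not a validated design: the domain ID, addresses, port-channel numbers, and interface names below are invented for the example, and the mirror-image configuration would go on the vPC peer.

    ! Illustrative values only: enable the vPC and LACP features
    feature vpc
    feature lacp

    ! vPC domain shared by the two Nexus 7000s; the keepalive runs over a separate path, not the peer-link
    vpc domain 10
      peer-keepalive destination 192.0.2.2 source 192.0.2.1

    ! Peer-link carrying vPC control traffic and VLANs between the two chassis
    interface port-channel 1
      switchport
      switchport mode trunk
      vpc peer-link

    ! Dual-homed port channel toward the downstream switch or server
    interface port-channel 20
      switchport
      switchport mode trunk
      vpc 20

    ! Physical member link bundled into that port channel with LACP
    interface Ethernet1/1
      switchport
      switchport mode trunk
      channel-group 20 mode active

The downstream device simply sees one ordinary port channel, which is why both uplinks forward traffic instead of one of them sitting in an STP blocking state.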

Do You Want Cisco Nexus Today…or Yesterday’s Technology Tomorrow?

March 6, 2009 at 12:00 pm PST

I have to admit that it’s gratifying to see our competitors validate decisions we made a couple of years ago with regard to the need for a unified fabric in the data center. After dismissing FCoE, or worse, completely missing the boat, they seem to be getting religion. However, even though they are newly converted, it is useful to examine exactly how far behind the curve some of these companies are in delivering a unified fabric switch. After all, marketectures are easy; shipping product is a bit trickier.

First off, we have a networking company that announced a switch about the same time we announced the Nexus 7000 last year. While that switch has yet to ship, the company has apparently had to call a mulligan and pre-announce a newer version of that switch that has not shipped yet, kinda like calling the mulligan before you even swing. This time, this new, new switch will offer a unified fabric. Now you have to ask yourself how they are going to do that: where are they going to get the storage networking expertise? Well, you can either buy a Fibre Channel stack or build one yourself. If you look around, the “buy” options are limited. The Fibre Channel director space is pretty evenly split between Cisco and Brocade, and I don’t think either company is selling. :) So, perhaps it makes more sense to build your own FC stack. An admirable effort, but the follow-up challenge is finding a customer who wants to be the guinea pig for your company’s first foray into storage networking. Trust us, we have been there and, lucky for us, we built a solid product and had success in the enterprise switching market to leverage.

Next, we have a company that declared little customer interest in FCoE and then, seven months later, spent $3B to buy a network switch vendor, which got them parity with Cisco… well, Cisco in 2002. That was the year we acquired Andiamo Systems and had an Ethernet switch and a Fibre Channel switch that shared a nameplate, but little else in terms of hardware architecture or software. While this is certainly a step in the right direction for this competitor, it is still a full generation behind the benchmark.

Which brings us to the Nexus family. Here is shipping hardware that is purpose-built to allow customers to evolve from GbE to 10GbE to FCoE with full investment protection. But, as impressive as the hardware is (and it is), the real secret sauce is NX-OS. This is the only shipping operating system that includes both storage networking and data networking code. The fact that NX-OS initially shipped as version 4.x was not a marketing exercise but rather an acknowledgement that NX-OS is the synthesis of existing, battle-tested operating systems. The hard reality is that while competitors may be able to shorten their hardware development cycle by taking advantage of merchant silicon and the like, there is no hurrying the software side. Good software takes time.

Well, that’s it for now. In my next post, I’ll dig into some of the FCoE FUD that is out there.

Cisco, WAN Optim, and What People Are Saying

Brad Reese of Network World made a short and very clear posting today about Infonetics’ most recent market share report on WAN optimization. While I wouldn’t normally re-post or call out an industry observer covering Cisco’s (or whichever vendor’s) current market success, I think this posting is worth citing to set the record of the last 24+ months straight…
