Cisco Blogs



Forrester Report: Mainstream Production Virtualization Exposes New Data Center Needs

A study recently released by Forrester Research (commissioned by Cisco -- PDF) reveals that virtualization has taken hold in production data centers, but that there are still many hurdles to be addressed, especially when it comes to the relationship between abstracted and physical resources. Titled “How Server And Network Virtualization Make Data Centers More Dynamic”, the 14-page study asked 240 randomly selected firms with experience managing medium to large virtualized server environments about the barriers that keep them from doing more.

The results are quite interesting.


Update: Cisco IT and the Nexus Deployment

March 9, 2009 at 12:00 pm PST

Here is a quick update from Sidney on the progress with the Nexus and unified fabric deployment in Cisco’s production data centers. Beyond Sidney’s update, you can get more details on the project here.

New Thoughts on Data Center Temperature Management

March 9, 2009 at 12:00 pm PST

Here are some interesting thoughts from Doug Alger on how some companies are looking to deal with their excess data center heat. This is a good example of how thinking is evolving on the topic. While we expect power and cooling to continue to be a hot topic, as we get smarter about the subject we will have an easier time getting a handle on things.

Data Center Interconnect

So, you have moved your server virtualization pilots into limited production and are scaling them out such that, at some point, they will move from limited production to a massive rollout across the entire enterprise. Or maybe you are already there. In any case, the time has probably come, or will soon, when you realize that doing VMotion or DRS or otherwise moving workloads across physical servers in the Data Center (DC) is extremely useful, but wouldn’t it be cool to be able to do that across servers in different DCs? There are many reasons this becomes interesting, now and in the future. Maybe power is cheaper in DC B than in DC A. Maybe there are no more available resources for that next workload in DC A, but there are in DC B. Maybe if you move workload from A to B you can shut down servers, heck maybe entire racks, in A for significant segments of time. You get the point: there are and will be compelling reasons to want to move workloads between DCs.

Great, but how do you do it? Typically, DCs are connected via routers, which implicitly means there is a Layer 3 boundary between them. If you are reading this, I’m going to assume you have some involvement in running a network, which means you probably don’t have a lot of spare time, which means you probably are not really interested in readdressing lots of IP devices. Thus, we have a bit of an issue. This issue is handled by extending the Layer 2 domain across DCs, and there are several different ways to accomplish that.

Before we discuss the mechanism by which we can transport that L2 domain across DCs, I want to first mention that this whole DC Interconnect topic is about much more than just transport. When a given workload moves from DC A to DC B, what implications does that have for its storage target? If a user collocated with DC A has her application dynamically moved to DC B, she will probably never know, provided DC B is connected via 10GbE over dark fiber. But if DC B is far away over slow links, she could have an issue. My point here is that incorporating intelligent Fibre Channel switching, such that the SAN is aware of VM activity, is important. Incorporating WAAS solutions as part of the overall design (particularly in scenarios like the one above) is important. Lots to consider as we move forward with these designs.

Before we go into any detail on those topics, we need to think through the basic transport between the DCs, which can be dumped into one of three buckets:

1) Dark Fiber
2) Vanilla IP
3) MPLS

I don’t have time to go into all of these in detail now, but let’s consider Dark Fiber, the option that provides us with the most flexibility and options. Before proceeding, however, we should stop to recognize that in extending the L2 domain between sites we are, theoretically, undoing what we have spent the past couple of decades building. Many years ago, at my first networking job (yes, I was once a customer, too), I remember broadcast storms emanating from one of our Midwest sites taking down a West Coast site. Not too long after this happened a few times, we decided introducing L3 between sites might be a good idea! So this notion of extending L2 seems like a reversion to the dark ages of networking: Spanning Tree architectures become complex as the diameter and topology grow, a Spanning Tree convergence event or failure within one DC affects every other DC connected at L2, Spanning Tree becomes fragile during link flaps, and so on. STOP the madness!
Craig, what in the name of all that is sacred in networking are you suggesting this for?! Well, luckily we have progressed a bit since Radia Perlman did her thing and 802.1D was blessed. If you have Catalyst 6500s or Nexus 7000s interconnecting your DCs, you can use the Virtual Switching System (VSS) or virtual port channels (vPCs). VSS basically aggregates two Catalyst 6500 switches into a single virtual switch, so you have a single logical switch running across two physical DCs. Pretty cool, and a radically different scenario than just schlepping Spanning Tree across them: simpler, more scalable, more resilient. A vPC allows links that are physically connected to two different Nexus 7000s to appear as a single port channel to a third downstream device (e.g., another switch or even a server). Among other things, vPCs increase usable bandwidth by allowing traffic to flow over the otherwise blocked ports and links in a standard dual-homed STP design (there is a rough configuration sketch at the end of this post). In any event, VSS and vPCs are clearly simpler than running VPLS or MPLS, though those are certainly viable options as well. If you don’t have the option of dark fiber and your SP is already providing MPLS between sites, then that is an obvious scenario in which you would want to use MPLS.

Are you already doing DC Interconnect? What are your experiences? What are the big wins and what are the pain points? Thanks for whatever you can share with the rest of the readers.
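For readers who have not set up vPC before, here is a minimal, hypothetical sketch of what one Nexus 7000 vPC peer might look like. It is not taken from any deployment described above; the commands are standard NX-OS, but the domain ID, keepalive addresses, and interface numbers are made up purely for illustration.

  ! First Nexus 7000 vPC peer (addresses and port numbers are illustrative)
  feature lacp
  feature vpc
  !
  vpc domain 10
    peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
  !
  ! Peer link between the two Nexus 7000s
  interface port-channel 1
    switchport
    switchport mode trunk
    vpc peer-link
  !
  ! Port channel facing the downstream switch or server
  interface port-channel 20
    switchport
    switchport mode trunk
    vpc 20
  !
  interface Ethernet1/10
    switchport
    switchport mode trunk
    channel-group 20 mode active

The second peer gets essentially the same configuration with the peer-keepalive source and destination addresses swapped; the downstream device simply sees one ordinary port channel and never knows it is dual-homed to two physical switches.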

Cisco, WAN Optim, and What People Are Saying

Brad Reese of Network World made a short and very clear posting today about Infonetics’ most recent market share report on WAN optimization. While I wouldn’t normally re-post or call out an industry observer covering Cisco’s (or any other vendor’s) current market success, I think this posting is worth citing to set the record of the last 24+ months straight…
