In the last article, we looked at the big picture of what is involved in creating a SAN distance extension. In this article, we’re going to take a slightly closer look at the physical requirements and with luck we’ll be able to clear up some general confusion and misconceptions along the way.
There is plenty of information about these different elements available via a quick search on your favorite search engine. What I find, though, is that the descriptions usually come with very little context, or the authors assume you already understand more about some of these technologies than you do. If I'm going to err here, it will likely be on the side of making things too accessible and too Plain English, which is something I can live with.
As usual, this is a mid-level view. There are plenty of deep dives on the web that go through each subject with a fine-tooth comb, but we’re going to stay focused on what you need to know for extending SANs across distance.
Again, this is a rather long post, but hopefully it will be useful as a reference point for you.
The problem is, whenever you start talking about extending your storage connectivity over distance, there are many things to consider, including some that storage administrators (or architects) may not always remember to think about. The more I thought about this (and the longer it took to write down the answers), the more I realized that a good explanation of how this works was needed.
Generally speaking, the propeller spins the ‘other way’ when it comes to storage distance.
To that end, I began writing down the things that affect the choice of a distance solution, which involves more than just a storage protocol. And so the story grew. And grew. And then grew some more. And if you’ve ever read any blogs I’ve written on the Cisco site, you’ll know I’m not known for my brevity to begin with! So, bookmark this article as a reference instead of general “light reading,” and with luck things will be clearer than when we started.
Welcome to another episode of Engineers Unplugged! This week features Cisco’s Andrew Levin (@AndLevin) discussing the use cases for FCoE with Nexus IS’s Paul Sferratore (@MadItalianATL). This is a detailed, nuanced debate of the pros and cons across a variety of scenarios, from large-scale to smaller deployments.
Listen in and let us know what you think about efficiency and cost savings:
Andrew Levin and Paul Sferratore show off their unicorns. Do not try this at home.
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard, with a big increase in VM density. There is a move toward a world of many clouds. Then there is the massive data growth: some studies show that data is doubling every two years, alongside increased adoption of solid-state drives (SSDs). All of these megatrends demand new solutions in the SAN market. To meet these needs, Cisco is introducing its next generation of storage networking innovations with the new MDS 9710 Multilayer Director and the new MDS 9250i Multiservice Switch. These new multiprotocol, services-rich MDS innovations redefine storage networking with superior performance, reliability, and flexibility!
We are, once again, demonstrating Cisco’s extraordinary capability to bring to market innovations that meet our customers’ needs today and tomorrow.
For example, with the new MDS solutions, we are announcing 16 Gigabit Fibre Channel (FC) and 10 Gigabit Fibre Channel over Ethernet (FCoE) support. But guess what? That is just a couple of the many innovations we are introducing. In other words, we bring 16 Gigabit FC and beyond to our customers:
A NEW BENCHMARK FOR PERFORMANCE
We design our solutions with future requirements in mind. We want to create long-term value for our customers and provide investment protection going forward.
The switching fabric in the MDS 9710 is one example of this design philosophy. The MDS 9710 chassis can accommodate up to six fabric cards, delivering:
1.536 Tbps per slot for Fibre Channel, or 24 Tbps of capacity per chassis
Only three fabric cards required to support full 16G line-rate capacity
Support for up to 384 line-rate 16G FC or 10G FCoE ports
Room to grow into higher throughput in the future, without forklift upgrades
This is more than three times the bandwidth of any Director in the market today, providing our customers with superior investment protection for any future needs!
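As a rough sanity check on how these numbers fit together, here is a minimal back-of-the-envelope sketch in Python. It assumes the chassis has 8 line-card slots (my assumption; the post above does not state the slot count), and it shows why only three of the six fabric cards are needed for full 16G line rate across 384 ports.

```python
# Back-of-the-envelope check of the fabric numbers quoted above.
# ASSUMPTION: 8 line-card slots per chassis (not stated in this post).
FC_PORT_SPEED_GBPS = 16         # 16 Gigabit Fibre Channel
LINE_RATE_PORTS = 384           # quoted line-rate port count per chassis
LINE_CARD_SLOTS = 8             # assumption
FABRIC_BW_PER_SLOT_GBPS = 1536  # 1.536 Tbps per slot with all six fabric cards
FABRIC_CARDS = 6

ports_per_slot = LINE_RATE_PORTS / LINE_CARD_SLOTS        # 48 ports per slot
needed_per_slot = ports_per_slot * FC_PORT_SPEED_GBPS     # 768 Gbps of line-rate traffic
per_fabric_card = FABRIC_BW_PER_SLOT_GBPS / FABRIC_CARDS  # ~256 Gbps per fabric card

cards_for_line_rate = needed_per_slot / per_fabric_card   # 3.0 fabric cards

print(f"{ports_per_slot:.0f} ports/slot x {FC_PORT_SPEED_GBPS}G "
      f"= {needed_per_slot:.0f} Gbps needed per slot")
print(f"Fabric cards needed for full line rate: "
      f"{cards_for_line_rate:.0f} of {FABRIC_CARDS}")
```

Under those assumptions, the other three fabric cards are exactly the headroom the post refers to: spare fabric bandwidth available for faster line cards later, without a forklift upgrade.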
The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard in our customers’ data centers, with a big increase in VM density. They are also moving toward a world of many clouds. And then there is the massive data growth: some studies show that data is doubling every two years, alongside increasing adoption of solid-state drives (SSDs). Several of our customers are also consolidating their data centers or forming mega data centers. All of these megatrends bring increasing challenges for the storage administrator, as the storage network becomes more critical and more of a strategic asset in the data center.
Take a look at this short video with Richard Darnielle (Director of Product Management for the MDS product lines) and me. Richard shares his insights on the megatrends that will shape next-generation storage networks.
Guess what? Once again, Cisco is here to help you on your journey to addressing these megatrends by raising the bar for storage networks. How, you ask?