In one of my earlier blogs, "How to get more SAN mileage…", I highlighted how one can deploy End-to-End FCoE by connecting a converged Director-class platform, like the Nexus 7000, directly to a converged access switch, like the UCS FI, in order to get the utmost agility. Well, this is exactly how ITOCHU Techno-Solutions Corporation (CTC), a cloud service provider, deployed its network to get significantly higher mileage.
CTC provides a wide range of IT services for business customers in Japan. The company’s Cloud Platform Group recently launched its innovative ElasticCUVIC shared private cloud service, which helps customers reduce infrastructure cost and management complexity. With large numbers of VMs, CTC wanted to simplify its data center architecture and IT management while optimizing scalability. The challenge was to deliver high-performance, easy-to-manage cloud services at scale.
The company evaluated several storage networking solutions and turned to Cisco for Fibre Channel over Ethernet (FCoE) solutions, which greatly simplify the infrastructure and management. CTC built its two newest data centers in Yokohama and Kobe with ultra-high performance and flexibility in mind. CTC implemented an End-to-End FCoE architecture using Cisco Nexus 7000 Series Switches, Cisco UCS servers, and FCoE connections between the switches, servers, and FCoE storage arrays.
With the converged FCoE architecture, ElasticCUVIC is enabling CTC customers to gain Read More »
Mileage (miles per gallon) is one of the important criteria when buying any automobile, and once bought, it is highly desirable to hit the maximum advertised mileage without significantly changing your driving habits or routes (highway vs. city mpg). Well, I have not been able to achieve that yet, so being a geek, I focused my attention on a different form of mileage (throughput per switch port) that interests me at work. So in this blog, I will explore a way to get more SAN mileage from the Cisco UCS FI (Fabric Interconnect) without significantly affecting the SAN admin's day-to-day operations.
Just a bit of background before we delve into the details: the I/O fabric between the UCS FI and the UCS Blade Server Chassis is a converged fabric running FCoE. The use of FCoE within the UCS fabric is completely transparent to the host operating system, and any Fibre Channel block storage traffic traverses this fabric as FCoE traffic. So the more than 20,000 UCS customers using block storage are already running FCoE at the access layer of the network.
Now, the key question is which technology, FC or FCoE, to use northbound on the FI uplink ports to connect to an upstream core switch for SAN connectivity. So, what are the uplink options? Well, the FI has Unified Ports, so the same uplink port can be used as either 8G FC or 10G FCoE. [Note that when using an FCoE uplink, a converged link is not a requirement; one can still use a dedicated FCoE link to carry pure SAN traffic.]
1) Bandwidth for Core Links: This is a very important aspect for the core part of the network. It is interesting to note that 10G FCoE provides almost 50% more throughput than 8G FC. This is because FC uses a different bit encoding and clock rate than Ethernet: 8G FC yields about 6.8G of throughput, while 10G FCoE yields close to 10G (after 1-2% Ethernet frame overhead).
2) Consistent Management Model: FCoE is FC technology with the same management and security model, so moving from FC to FCoE is a seamless transition for a SAN admin, with very minimal change in day-to-day operations. Moreover, this FCoE link carries dedicated SAN traffic without requiring any convergence with LAN traffic. In addition, if the UCS FI is running in NPV mode, then technically the FCoE link between the UCS FI and the upstream SAN switch does not constitute a Multi-Hop FCoE design, since the UCS FI does not consume a Domain ID. The bulk of the SAN configuration, such as zoning, happens only on the core SAN switch, maintaining the same consistent SAN operational model as with FC alone.
3) Investment Protection with Multi-protocol Flexibility: By choosing an FCoE uplink from the converged access layer, one can continue to use the upstream core SAN Director switch as-is, providing connectivity to existing FC storage arrays. Note that the Cisco MDS 9000 SAN Director offers multi-protocol flexibility, so one can interconnect FCoE SANs on the server side with FC SANs on the storage side.
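To illustrate the operational model in point 2 above: with the FI in NPV mode, zoning lives entirely on the upstream core switch, exactly as it would in an all-FC fabric. A minimal NX-OS-style sketch follows; the VSAN number, zone and zoneset names, and pWWNs are hypothetical placeholders, not values from this post:

```
! On the upstream core SAN switch only -- the NPV-mode FI needs no zoning config.
! VSAN, names, and pWWNs below are illustrative placeholders.
zone name ucs-esx01_to_array-spa vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01
  member pwwn 50:06:01:60:3c:e0:00:12

zoneset name fabric-A vsan 10
  member ucs-esx01_to_array-spa

zoneset activate name fabric-A vsan 10
```

Whether the server-facing side of that zone arrives over FC or dedicated FCoE, the SAN admin's zoning workflow on the core switch is unchanged.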
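The bandwidth comparison in point 1 above can be sketched with some back-of-the-envelope arithmetic. The line rates and encoding ratios below are standard published values for the two technologies (8G FC uses 8b/10b encoding at 8.5 GBaud; 10G Ethernet uses 64b/66b at 10.3125 GBaud), not figures from this post, and the 2% frame-overhead factor is the rough estimate mentioned above:

```python
# Back-of-the-envelope throughput comparison: 8G FC vs. 10G FCoE.

# 8G FC signals at 8.5 GBaud with 8b/10b encoding (8 data bits per 10 line bits).
fc_line_rate_gbaud = 8.5
fc_throughput = fc_line_rate_gbaud * 8 / 10          # 6.8 Gbps

# 10G Ethernet signals at 10.3125 GBaud with 64b/66b encoding.
eth_line_rate_gbaud = 10.3125
fcoe_raw = eth_line_rate_gbaud * 64 / 66             # 10.0 Gbps
fcoe_throughput = fcoe_raw * 0.98                    # assume ~2% Ethernet frame overhead

print(f"8G FC:    {fc_throughput:.2f} Gbps")
print(f"10G FCoE: {fcoe_throughput:.2f} Gbps")
print(f"FCoE advantage: {100 * (fcoe_throughput / fc_throughput - 1):.0f}%")
```

Even after frame overhead, the dedicated 10G FCoE uplink delivers in the neighborhood of 44-47% more usable bandwidth than the 8G FC alternative, which is where the "almost 50%" figure comes from.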
The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard, with a big increase in VM density. There is a move toward a world of many clouds. Then there is the massive data growth: some studies show that data is doubling every two years, alongside increased adoption of solid-state drives (SSDs). All of these megatrends demand new solutions in the SAN market. To meet these needs, Cisco is introducing the next generation of storage networking innovations with the new MDS 9710 Multilayer Director and the new MDS 9250i Multiservice Switch. These new multi-protocol, services-rich MDS innovations redefine storage networking with superior performance, reliability, and flexibility!
We are, once again, demonstrating Cisco’s extraordinary capability to bring to market innovations that meet our customer needs today and tomorrow.
For example, with the new MDS solutions, we are announcing 16 Gigabit Fibre Channel (FC) and 10 Gigabit Fibre Channel over Ethernet (FCoE) support. But guess what? These are just a couple of the many innovations we are introducing. In other words, we bring 16 Gigabit FC and beyond to our customers:
A NEW BENCHMARK FOR PERFORMANCE
We design our solutions with future requirements in mind. We want to create long term value for our customers and investment protection moving forward.
The switching fabric in the MDS 9710 is one example of this design philosophy. The MDS 9710 chassis can accommodate up to six fabric cards delivering:
1.536 Tbps per slot for Fibre Channel – 24 Tbps per chassis capacity
Only 3 fabric cards are required to support full 16G line rate capacity
Supports up to 384 Line Rate 16G FC or 10G FCoE ports
So there is room to grow to higher throughput in the future … without forklift upgrades
This is more than three times the bandwidth of any Director in the market today – providing our customers with superior investment protection for any future needs!
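The per-slot figure above can be sanity-checked with simple arithmetic. This hypothetical sketch assumes a 48-port 16G line card per slot and counts both directions (full duplex), which is how director-class "per slot" bandwidth is typically quoted; the assumptions are mine, not stated in the post:

```python
# Hypothetical sanity check of the per-slot bandwidth figure quoted above.
# Assumes a 48-port 16G FC line card and counts full-duplex bandwidth.

ports_per_card = 48      # assumption: 48 line-rate ports per line card
port_speed_gbps = 16     # 16G FC
full_duplex = 2          # count both transmit and receive directions

per_slot_gbps = ports_per_card * port_speed_gbps * full_duplex
print(f"Per-slot bandwidth: {per_slot_gbps / 1000:.3f} Tbps")   # 1.536 Tbps

total_ports = 384
print(f"Line cards for {total_ports} ports: {total_ports // ports_per_card}")
```

Under those assumptions, 48 ports × 16 Gbps × 2 works out to exactly the 1.536 Tbps per slot claimed, and 384 line-rate ports corresponds to eight fully populated line-card slots.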
The data center landscape has changed dramatically in several dimensions. Server virtualization is almost a de facto standard in our customers' data centers, with a big increase in VM density. They are also moving toward a world of many clouds. And then there is the massive data growth: some studies show that data is doubling every two years, alongside increasing adoption of solid-state drives (SSDs). Several of our customers are also either consolidating their data centers or forming mega data centers. All of these megatrends bring increasing challenges for the storage administrator, as the storage network becomes an ever more critical, strategic asset in the data center.
Take a look at this short video with Richard Darnielle (Director of Product Management for MDS Product lines) and me. Richard shares his insights on the mega trends that will shape the next-generation storage networks.
Guess what? Once again, Cisco is here to help you on your journey to addressing these megatrends by raising the bar for storage networks. How, you ask?
I was sitting in a room with a client the other day. Normally in these conference rooms with the mahogany tables and high-back leather chairs*, you have Cisco on one side of the table and the client on the other. That wasn't the case here, as the table was formica and the chairs were folding. Also in the room were two groups that had never spoken before except in rare cases: "The network is down!" or "Our hosts can't see their storage!" Yes, my friends, it was the LAN and SAN folks in the room. The topic of FCoE was in front of us, and the question was around their soon-to-be-deployed Nexus 5000 switching infrastructure. The discussion between the two parties over who would manage the Nexus 5000 reminded me of a scene from Ghostbusters… Read More »