
How to get more SAN mileage out of UCS FI?

 

Image Credit: Wikispeed.org

Mileage (miles per gallon) is one of the important criteria when buying a car, and once you have bought one, you want to hit the advertised maximum without significantly changing your driving habits or your routes (highway vs. city mpg). Well, I have not managed that yet, so, being a geek, I turned my attention to a different form of mileage that interests me at work: throughput per switch port. In this blog, I will explore a way to get more SAN mileage from the Cisco UCS FI (Fabric Interconnect) without significantly affecting the SAN admin's day-to-day operations.

Context:

Just a bit of background before we delve into the details: the I/O fabric between the UCS FI and the UCS Blade Server Chassis is a converged fabric running FCoE. The use of FCoE within the UCS fabric is completely transparent to the host operating system, and any Fibre Channel block storage traffic traverses this fabric as FCoE traffic. So a large number of the more than 20,000 UCS customers, those using block storage, are already running FCoE at the access layer of the network.

Choices:

Now, the key question is which technology, FC or FCoE, to use northbound on the FI uplink ports to connect to an upstream core switch for SAN connectivity. So what are the uplink options? The FI has unified ports, and the choice is to run the same uplink port as either 8G FC or 10G FCoE. (Note that when using an FCoE uplink, a converged link is not required; one can still use a dedicated FCoE link to carry pure SAN traffic.)

Observations:

1) Bandwidth for core links: This is a very important consideration for the core of the network. It is interesting to note that 10G FCoE provides almost 50% more throughput than 8G FC. This is because FC uses a different bit encoding and clock rate than Ethernet: 8G FC runs at 8.5 GBaud with 8b/10b encoding, yielding about 6.8 Gbps of usable throughput, while 10G FCoE uses 64b/66b encoding and delivers close to 10 Gbps (after a 1-2% Ethernet frame overhead), as the sketch below shows.
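To make the encoding math concrete, here is a minimal sketch in plain Python. The line rates and encodings are the standard published values; the 2% frame overhead is the approximation used above:

```python
def usable_gbps(line_rate_gbaud, data_bits, total_bits, overhead=0.0):
    """Usable throughput after bit encoding and frame overhead."""
    return line_rate_gbaud * (data_bits / total_bits) * (1 - overhead)

# 8G FC: 8.5 GBaud line rate with 8b/10b encoding -> 6.8 Gbps
fc_8g = usable_gbps(8.5, 8, 10)

# 10G Ethernet: 10.3125 GBaud with 64b/66b encoding, minus ~2% framing
fcoe_10g = usable_gbps(10.3125, 64, 66, overhead=0.02)

print(f"8G FC   : {fc_8g:.1f} Gbps usable")       # 6.8 Gbps
print(f"10G FCoE: {fcoe_10g:.1f} Gbps usable")    # ~9.8 Gbps
print(f"FCoE advantage: {fcoe_10g / fc_8g - 1:.0%}")  # ~44%, i.e. almost 50%
```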


2) Consistent management model: FCoE is FC technology with the same management and security model, so moving from FC to FCoE is a seamless transition for a SAN admin, with very minimal change to day-to-day operations. Moreover, this FCoE link carries dedicated SAN traffic without requiring any convergence with LAN traffic. In addition, if the UCS FI is running in NPV mode, then technically the FCoE link between the UCS FI and the upstream SAN switch does not constitute a multi-hop FCoE design: the UCS FI does not consume a domain ID, and the bulk of the SAN configuration (zoning, etc.) happens only on the core SAN switch, maintaining the same consistent SAN operational model as with plain FC.

3) Investment protection with multi-protocol flexibility: By choosing an FCoE uplink from the converged access layer, one can continue to use the upstream MDS core SAN director switch as-is, providing connectivity to existing FC storage arrays. Note that the Cisco MDS 9000 SAN director offers multi-protocol flexibility, so one can interconnect FCoE SANs on the server side with FC SANs on the storage side.

And, we have a winner…


Cisco UCS Outperforms HP and IBM Blade Servers on East-West Latency

Virtualization, private cloud, big data, HPC, and similar workloads have been steadily changing the landscape of data center architectures. Lower latency and higher-performing server-to-server (East-West) traffic have become key discussion points as customers look to modernize their infrastructures. Cisco specifically designed the UCS unified fabric for this type of traffic, creating a highly available infrastructure with reduced latency and unmatched consistency as the solution scales. Without providing any supporting data, HP and IBM have been incorrectly asserting that the Cisco UCS unified fabric would increase latency and slow blade-to-blade traffic. Cisco ran the tests, and the results were simply amazing.

Cisco UCS Outperforms HP Blade Servers on East-West Latency
Cisco UCS Outperforms IBM Flex System Blades on East-West Latency


Cisco UCS vs. IBM Flex System: Complexity and Cost Comparison

June 25, 2013 at 3:29 pm PST

Complexity and Cost Comparison: Cisco UCS vs. IBM Flex System is a report recently published by Principled Technologies.

They evaluated both the technology and the cost of each solution and found that a UCS solution is both less expensive to deploy and less complex to manage than an IBM Flex System.

Of all the ways Principled Technologies shows UCS to be a superior solution, I want to touch on just one: highly available and scalable management. A UCS management domain consists of a pair of Fabric Interconnects and supports up to 160 blade and/or rack servers. In contrast, IBM is limited to 54 blade servers plus a non-redundant Flex System Manager node. Quoting from the paper:

Because IBM Flex System Manager nodes do not failover automatically like the Cisco UCS solution, administrators must manually connect to a backup node and bring it online. Each target system has an OS agent that remains registered to the original FSM node and does not recognize the new FSM. Admins must manually unregister each of these agents from the failed node and then register the new FSM node. [page 7]
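To put those domain sizes in perspective, here is a back-of-the-envelope sketch in Python. The per-domain limits are the figures quoted above; the 1,000-server fleet size is a hypothetical of mine:

```python
import math

SERVERS = 1000        # hypothetical fleet size
UCS_PER_DOMAIN = 160  # servers per UCS domain (one pair of Fabric Interconnects)
IBM_PER_FSM = 54      # blades per IBM Flex System Manager node

print("UCS management domains needed:", math.ceil(SERVERS / UCS_PER_DOMAIN))  # 7
print("IBM FSM nodes needed:         ", math.ceil(SERVERS / IBM_PER_FSM))     # 19
```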

Read the full report to learn the many additional ways in which UCS is shown to be a superior solution, and why Cisco has leapt ahead of IBM and is now the #2 blade server vendor worldwide.¹

[SlideShare embed: Complexity and Cost Comparison: Cisco UCS vs. IBM Flex System, by Principled Technologies, from Cisco Data Center]

 

If you would like to learn more about how Cisco is changing the economics of the data center, I encourage you to review this presentation on SlideShare or my previous series of blog posts, Yes, Cisco UCS servers are that good.

1. Source: IDC Worldwide Quarterly Server Tracker, Q1 2013 Revenue Share, May 2013


NEW! Cisco Announces Highly Scalable Third Generation UCS Networking Fabric

Today Cisco announced an expanded portfolio of third-generation UCS networking products that improve data center scalability, performance, and agility with industry-leading capabilities. The announcement includes the following new products:

1. A new Fabric Interconnect (Cisco UCS 6296UP) that doubles the switching capacity of the data center fabric to improve workload density (from 960 Gbps to 1.92 Tbps; see the port-math sketch after this list), reduces end-to-end latency by 40 percent to improve application performance, and provides flexible unified ports to improve infrastructure agility and the transition to a fully converged fabric

2. A new Chassis I/O Module (Cisco UCS 2204XP) that offers enhanced resiliency and utilization with port channeling, and an 80 Gbps option, in addition to 160 Gbps, down to each chassis (from 80 Gb to 320 Gb to the blade) to handle workload bursts
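The headline capacity numbers fall out of simple port math. Here is a sketch, assuming 10 Gbps unified ports counted full duplex (which reproduces the figures above; port counts are the published maximums for each model):

```python
def switching_capacity_gbps(ports, port_speed_gbps=10, duplex=2):
    """Aggregate switching capacity: ports x speed x 2 (full duplex)."""
    return ports * port_speed_gbps * duplex

print(switching_capacity_gbps(48))  # UCS 6248UP:  960 Gbps
print(switching_capacity_gbps(96))  # UCS 6296UP: 1920 Gbps, i.e. 1.92 Tbps
```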



New Cisco UCS Networking Products announced at Cisco Live, 2011

On July 12, 2011, Cisco announced at Cisco Live an expanded portfolio of UCS networking products that improve data center agility, scalability, and performance with industry-leading capabilities. The announcement includes the following new products:

1. A new Fabric Interconnect (Cisco UCS 6248UP) that doubles the switching capacity of the data center fabric to improve workload density (from 520 Gbps to 1 Tbps), reduces end-to-end latency by 40 percent to improve application performance, and provides flexible unified ports to improve infrastructure agility and the transition to a fully converged fabric

2. A new Chassis I/O Module (Cisco UCS 2208XP) that doubles the bandwidth to the chassis (from 40 Gb to 80 Gb) to improve application performance and handle workload bursts (from 80 Gb to 320 Gb to the blade)

3. A new Virtual Interface Card (Cisco UCS VIC 1280) that quadruples the bandwidth to the server to improve application performance (from dual 10 Gb to dual 40 Gb) and doubles the number of virtual interfaces to improve virtual machine workload density (from 128 interfaces to 256 interfaces). It also offers customers a choice of hypervisor by expanding VM-FEX technology to Linux-based hypervisors (KVM on RHEL 6.1).
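The per-server numbers from item 3 are similarly simple arithmetic. A sketch, assuming the published VIC 1280 layout of 4 x 10 Gbps lanes presented to each of the two fabrics:

```python
FABRICS = 2           # dual fabric (A and B)
LANES_PER_FABRIC = 4  # VIC 1280: 4 x 10G lanes to each fabric
LANE_GBPS = 10

per_fabric_gbps = LANES_PER_FABRIC * LANE_GBPS  # 40 Gbps, the "dual 40Gb" above
total_gbps = FABRICS * per_fabric_gbps          # 80 Gbps aggregate per server

print(f"Per fabric: {per_fabric_gbps} Gbps")
print(f"Total: {total_gbps} Gbps per server")
```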
