
UCS M-Series System Link Technology: The converged infrastructure story.

It almost feels like this blog entry should start with "Once upon a time…" because it captures the journey of a young emerging technology and the powerful infrastructure tool it has become. The Cisco UCS journey starts with the tale of Unified Fabric and the Converged Network Adapter (CNA).

Most people think of Unified Fabric as the ability to put both Fibre Channel and Ethernet on the same wire between the server and the Fabric Interconnect or upstream FCoE switches. That is part of the story, but that part is as simple as putting a Fibre Channel frame inside of an Ethernet frame. What is the magic that makes this happen at the server level? Doesn't FCoE imply that the operating system itself would have to know how to present a Fibre Channel device in software and then encapsulate and send the frame across the Ethernet port? Possibly, but that would require FCoE software support in the OS, which would add CPU overhead and force end users to qualify new software drivers and compare their performance against existing hardware FC HBAs.
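To make the "frame inside a frame" idea concrete, here is a minimal sketch in C of the FCoE layout defined in FC-BB-5: a native FC frame carried between an Ethernet header tagged with the FCoE EtherType (0x8906) and a short trailer. The field layout is simplified for illustration and the program performs no real I/O; it just prints the sizes of each layer.

/* A minimal sketch of the FCoE (FC-BB-5) frame layout, for illustration
 * only. Field widths are simplified and no real I/O is performed. */
#include <stdint.h>
#include <stdio.h>

#define ETH_P_FCOE 0x8906   /* EtherType assigned to FCoE */

#pragma pack(push, 1)
struct eth_hdr {
    uint8_t  dst[6];        /* destination MAC (FCoE forwarder) */
    uint8_t  src[6];        /* source MAC (CNA port) */
    uint16_t ethertype;     /* 0x8906 marks the payload as FCoE */
};

struct fcoe_hdr {
    uint8_t  ver_rsvd[13];  /* 4-bit version plus reserved bits */
    uint8_t  sof;           /* encoded FC start-of-frame delimiter */
};

struct fc_hdr {             /* the unmodified 24-byte FC header */
    uint8_t  r_ctl;
    uint8_t  d_id[3];       /* destination FC address */
    uint8_t  cs_ctl;
    uint8_t  s_id[3];       /* source FC address */
    uint8_t  type;
    uint8_t  f_ctl[3];
    uint8_t  seq_id;
    uint8_t  df_ctl;
    uint16_t seq_cnt;
    uint16_t ox_id;
    uint16_t rx_id;
    uint32_t parameter;
};

struct fcoe_trailer {
    uint8_t  eof;           /* encoded FC end-of-frame delimiter */
    uint8_t  rsvd[3];
};
#pragma pack(pop)

int main(void) {
    /* The FC frame (header, payload, CRC) rides untouched between the
     * FCoE header and trailer; only the Ethernet wrapper is new. */
    printf("Ethernet hdr: %zu bytes\n", sizeof(struct eth_hdr));
    printf("FCoE hdr:     %zu bytes\n", sizeof(struct fcoe_hdr));
    printf("FC hdr:       %zu bytes\n", sizeof(struct fc_hdr));
    printf("FCoE trailer: %zu bytes\n", sizeof(struct fcoe_trailer));
    return 0;
}

The point of the layout is that the inner FC frame is untouched, which is why the hardware can do this transparently to the operating system.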

For UCS, the key to the success of converged infrastructure was due in large part to the very first Converged Network Adapters that were released. These adapters presented existing PCIe Fibre Channel and Ethernet endpoints to the operating system, requiring no new drivers and no new qualification from the perspective of the operating system and users. However, at the heart of this adapter was a Cisco ASIC that provided two key functions:

1.)  Present the physical functions for existing PCIe devices to the operating system without the penalty of PCIe switching.

2.)  Encapsulate Fibre Channel frames into Ethernet frames as they are sent to the northbound switch.

Converged Network Adapter

It is the second function that we often focus on, because that's the cool networking portion that many of us at Cisco like to talk about. But how exactly do we convince the operating system that it is communicating with a dual-port Intel Ethernet NIC and a dual-port 4-Gbps QLogic Fibre Channel HBA? These are the exact same drivers used for the actual Intel and QLogic cards, so there's got to be some magic there, right?

Well, yes and no. Let's start with the no. Presenting different physical functions (PCIe endpoints) on a physical PCIe card is nothing new; it's as simple as putting a PCIe switch between the bus and the endpoints. But like all switching technologies, a PCIe switch incurs latency, and it cannot encapsulate an FC frame into an Ethernet frame. That's where the magic comes into play. The original Converged Network Adapter contained a Cisco ASIC that sits on the PCIe bus between the Intel and QLogic physical functions. From the operating system's perspective the ASIC "looks" like a PCIe switch providing direct access to the Ethernet and Fibre Channel endpoints, but in reality it can move I/O in and out of the physical functions without incurring the latency of a switch. The ASIC also provides a mechanism for encapsulating FC frames into a specific Ethernet frame type to provide FCoE connectivity upstream.
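You can see this "nothing special to the OS" behavior from a Linux guest using nothing more than the stock sysfs PCI tree. The sketch below is plain C, Linux-only, and contains no Cisco-specific calls; it simply walks /sys/bus/pci/devices and prints each function's vendor, device, and class IDs. On a server with a CNA, the endpoints enumerate with the ordinary vendor IDs their drivers already expect.

/* Walk the standard Linux sysfs PCI tree and print each function's IDs.
 * Nothing here is Cisco-specific, which is exactly the point: the ASIC's
 * physical functions enumerate like any ordinary PCIe device. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

static void read_id(const char *dev, const char *attr, char *out, size_t len) {
    char path[512];
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/%s", dev, attr);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(out, (int)len, f))
            out[strcspn(out, "\n")] = '\0';   /* strip trailing newline */
        fclose(f);
    }
}

int main(void) {
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) { perror("opendir"); return 1; }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;
        char vendor[16] = "?", device[16] = "?", class_[16] = "?";
        read_id(e->d_name, "vendor", vendor, sizeof(vendor));
        read_id(e->d_name, "device", device, sizeof(device));
        read_id(e->d_name, "class", class_, sizeof(class_));
        printf("%s vendor=%s device=%s class=%s\n",
               e->d_name, vendor, device, class_);
    }
    closedir(d);
    return 0;
}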

The pure beauty of this ASIC is that we have evolved it from the CNA into the Virtual Interface Card (VIC). Traditional CNAs offer a limited number of Ethernet and FC ports to the system (two of each) based on the chipsets installed on the card. The Cisco VIC allows a variety of vNICs and vHBAs to be created on the card. The VIC not only virtualizes the PCIe switch, it virtualizes the I/O endpoint.

Cisco Virtual Interface Card

So in essence, what we have created with the Cisco ASIC that drives the VIC is a device that uses standard PCIe mechanisms to present an end device directly to the operating system. The ASIC also provides a hardware mechanism for receiving native I/O from the operating system and encapsulating or translating it where necessary, without any OS stack dependencies; native Fibre Channel encapsulated into Ethernet is one example.

At the heart of the UCS M-Series servers is the System Link Technology. It is this component that gives the compute nodes access to the shared I/O resources in the chassis. System Link Technology is the third generation of the technology behind the VIC and the fourth generation of Unified Fabric within the construct of Unified Computing. Its key function is the creation of a new PCIe physical function called the SCSI NIC (sNIC), which presents a virtual storage controller to the operating system and maps drive resources to a specific service profile within Cisco UCS.

System Link Technology

It is this innovative technology that gives each compute node within UCS M-Series its own virtual drive, carved out of the available physical drives within the chassis. This is accomplished using standard PCIe, not MR-IOV, so the operating system needs no special knowledge of any change in the PCIe frame format.
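The actual mapping lives inside the ASIC and UCS Manager, but the idea is easy to model. The toy sketch below is purely hypothetical (the struct and names are mine, not Cisco's): each service profile owns a virtual drive that is really just an extent, an offset and length, on a shared physical drive, and the sNIC presents only that extent to its compute node.

/* Toy model of the sNIC carving idea -- NOT Cisco's actual data
 * structures. Each compute node's service profile gets a virtual drive
 * that is really an extent on a shared physical drive in the chassis. */
#include <stdint.h>
#include <stdio.h>

struct virtual_drive {              /* hypothetical structure */
    const char *service_profile;    /* UCS service profile that owns it */
    int         physical_drive;     /* shared chassis drive it lives on */
    uint64_t    offset_lba;         /* where the carved extent starts */
    uint64_t    length_lba;         /* how big the extent is */
};

int main(void) {
    /* Two compute nodes sharing one physical drive, each seeing only
     * its own extent through its sNIC-presented storage controller. */
    struct virtual_drive map[] = {
        { "node-01-profile", 0, 0,         104857600 },
        { "node-02-profile", 0, 104857600, 104857600 },
    };
    for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++)
        printf("%s -> drive %d, LBA %llu..%llu\n",
               map[i].service_profile, map[i].physical_drive,
               (unsigned long long)map[i].offset_lba,
               (unsigned long long)(map[i].offset_lba + map[i].length_lba - 1));
    return 0;
}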

For a more detailed look at System Link Technology in the M-Series, check out the following white paper.

The important thing to remember is that hardware infrastructure is only part of the overall architectural design for UCS M-Series. The other key component is the ability to manage the virtual instantiations of the system components. In the next segment on UCS M-Series, Mahesh will discuss how UCS Manager rounds out the architectural design.


NetApp and Cisco Deliver Extreme Performance For Oracle Database

Guest post by Aaron Newcomb, Solutions Marketing Manager, NetApp

No one wants a distressed 2:00 a.m. phone call disturbing a good night's sleep. For IT managers and database administrators, that 2:00 a.m. call is typically bad news regarding the systems they support. Users in another region are not able to access an application. Customers are not placing orders because the system is responding too slowly. Nightly reporting is taking too long and impacting performance during peak business hours. When your business-critical applications running on Oracle Database are not performing at the speed of business, that creates barriers to customer satisfaction and competitiveness. NetApp wants to help break down those barriers and help our customers get a good night's sleep instead of worrying about the performance of their Oracle Database.

NetApp today unveiled a solution designed to address the need for extreme performance for Oracle Databases: FlexPod Select for High Performance Oracle RAC. This integrated infrastructure solution offers a complete data center infrastructure, including the networking, servers, storage, and management software you need to run your business 24×7, 365 days a year. Because NetApp and Cisco validate the architecture, you can deploy your Oracle Databases with confidence and in much less time than with traditional approaches. Built with industry-leading NetApp EF550 flash storage arrays and Cisco UCS B200 M3 Blade Servers, this solution can deliver the highest levels of performance for the most demanding Oracle Database workloads on the planet.

The system delivers more than one million IOPS of read performance for Oracle Database workloads at sub-millisecond latencies. This means faster response times for end users, improved database application performance, and more headroom to run additional workloads or consolidate databases. Not only that, but this pre-validated, pre-tested solution is based on a balanced configuration, so the infrastructure components you need to run your business work in harmony instead of competing for resources. The solution is built with redundancy in mind to reduce risk and allow for flexibility in deployment options. The architecture scales linearly, so you can start with a smaller configuration and grow as your business needs change, optimizing return on investment. If something goes wrong, the solution is backed by our collaborative support agreement, so there is no finger-pointing, only swift problem resolution.

So what would you do with one million IOPS? Build a new application that will respond to a competitive threat? Deliver faster results for your company? Increase the number of users and transactions your application can support without having to worry about missing critical service level agreements? If nothing else, imagine how great you will sleep knowing that your business is running with the performance needed for success.


#CiscoChampion Radio S1|Ep 31 #UCSGrandSlam

#CiscoChampion Radio is a podcast series featuring Cisco Champions as technologists. Today we're talking with Cisco Marketing Manager Bill Shields and Cisco Principal Engineer Jim Leach about our recent UCS launch. Amy Lewis (@CommsNinja) moderates, and AJ Kuftic and Chris Nickl are this week's Cisco Champion guest hosts.

Listen to the Podcast.

Learn about the Cisco Champions Program HERE.
See a list of all #CiscoChampion Radio podcasts HERE.

Cisco SME
Bill Shields, @hightechbill, Cisco Marketing Manager
Jim Leach, @JamesAtCisco, Cisco Principal Engineer

Cisco Champions
AJ Kuftic, @ajkuftic, Enterprise Engineer
Chris Nickl, @ck_nic, Cloud Infrastructure Architect


World Record Oracle Performance with Cisco UCS

Oracle OpenWorld is a show like no other, with over 60,000 IT professionals convening in San Francisco for a week of all things Oracle, Java, and more. Cisco has a full slate of activities planned, including demos and theater sessions on the many benefits of running your Oracle workloads on the Cisco Unified Computing System (UCS). We're also teaming with theCUBE to stream three days of live interviews with the industry's thought leaders and Oracle solution experts. All of our activities have a common theme: Unleashing Oracle Performance.

With more than 25 world-record benchmarks for Oracle workloads, Cisco has a proven history of delivering record-setting Oracle performance with each generation of server and processor technology. This week at Oracle OpenWorld, we're showcasing three recent world-record benchmarks for Oracle E-Business Suite and critical Java operations.

Oracle E-Business Suite Applications R12 (12.1.3) Payroll and Order-to-Cash Benchmarks

The Cisco UCS® B200 M4 Blade Server with the Intel® Xeon® processor E5-2600 v3 product family is the number-one server, with top results on the Oracle E-Business Suite Applications R12 benchmark. The Cisco UCS B200 M4 processed more than one million employees per hour on the Payroll Extra-Large Model Benchmark, outperforming the IBM Power System S824. UCS also set a world record on the Order-to-Cash workload, processing more than 11,000 more order lines per hour than the same server configured with previous-generation processors. The performance brief has all the details.



UCS M-Series Design Principles – Why bigger is not necessarily better!

Cisco UCS M-Series servers have been purpose-built to fit a specific need in the data center. The core design principles center on sizing the compute node to meet the needs of cloud-scale applications.

When I was growing up, I used to watch a program on PBS called 3-2-1 Contact most afternoons when I came home from school (yes, I've pretty much always been a nerd). There was an episode about size and efficiency that, for some reason, I have always remembered. It included a short film demonstrating the relationship between size and efficiency.

The plot goes something like this. Kid #1 says that his uncle's economy car, which gets a whopping 15 miles to the gallon (this was the 1980s), is more efficient than a school bus that gets 6 miles to the gallon. Kid #2 disagrees and challenges Kid #1 to a contest. But here's the rub: the challenge is to transport 24 children from the bus stop to school, about 3 miles away, on a single gallon of fuel. Long story short, the school bus completes the task in one trip, but the car has to make 8 trips and runs out of fuel before it finishes. So Kid #2 proves the school bus is more efficient.
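The arithmetic backs this up (assuming the car carries three children per trip): the bus burns 3 miles ÷ 6 mpg = 0.5 gallons for its single trip, while the car needs 8 round trips of roughly 6 miles each, close to 45 miles of driving, and 45 miles ÷ 15 mpg = 3 gallons, triple its one-gallon budget.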

The only problem with this logic is that we know that the school bus is not more efficient in all cases.

For transporting 50 people, a bus is very efficient, but if you need to transport 2 people 100 miles to a concert, the bus would be a bad choice. Efficiency depends on the task at hand. In the compute world, a task equates to the workload. Using a 1RU 2-socket E5 server for the distributed cloud-scale workloads that Arnab Basu has been describing would be equivalent to using a school bus to transport a single student. This is not cost-effective.

Thanks to hypervisors, we can run multiple workloads on a single server and achieve economies of scale. However, there is a penalty to building that type of infrastructure: added licensing costs, administrative overhead, and performance penalties.

Customers deploying cloud-scale applications are looking for ways to increase compute capacity without increasing cost and complexity. They need all-terrain vehicles, not school buses: small, cost-effective, easy-to-maintain resources that serve a specific purpose.

Many vendors entering this space are just making the servers smaller. Per the analogy above, smaller helps. But one thing we have learned from server virtualization is that there is real value in the ability to share infrastructure. With a physical server, the challenge becomes: how do you share components of the compute infrastructure without a hypervisor? Power and cooling are easy, but what about network, storage, and management? This is where M-Series expands on the core foundations of unified compute to provide a compute platform that meets the needs of these applications.

There are 2 key design principles in Unified Compute:

1.) Unified Fabric
2.) Unified Management

Over the next couple of weeks, Mahesh Natarajan and I will be describing how and why these 2 design principles became the cornerstone for building the M-Series modular servers.
