The Nexus 3548 with Algo Boost was announced last week and received a lot of positive buzz around this game-changing innovation. To follow up on Berna Devrim’s Introduction Blog, I am introducing a multipart series in which Cisco experts go into more specifics. As part 1 of the series, I recently had the opportunity to chat with Will Ochandarena about the latency enhancements. Will is a Senior Product Manager in the Server Access and Virtualization Business Unit; in this role, he is responsible for the Nexus 3548 switch and Cisco’s low-latency switching strategy.
GD: The Cisco Nexus 3548 switch with Algo Boost was announced on September 19th and received a lot of positive attention. Can you elaborate a little more on the latency that this switch can achieve? How does this benefit our financial customers?
WO: The custom switching ASIC in the Nexus 3548, codenamed Monticello, sets a new bar for switching latency. Our engineers worked tirelessly to eliminate unnecessary nanoseconds from the forwarding path, tweaking it down to as low as 190 nanoseconds (ns). Best of all, this latency is achieved even when we are doing full layer-2 and layer-3 switching, with features such as Network Address Translation (NAT) enabled. We actually went as far as to offer a few different switching modes, each with different latency and forwarding characteristics, in order to give our customers the most flexibility in their deployments.
In terms of the impact on our end customers, we consistently hear from companies in the financial community that switch latency has a direct impact on the profitability of their business. Trading firms -- as well as the exchanges and other participants -- gain significant business advantage if the supporting infrastructure enables them to acquire data and execute trades nanoseconds faster than the competition.
Read More »
Tags: Algo Boost, Algorithm Boost, data center, high performance computing, high performance trading, High Performance Trading Fabric, High-Frequency Trading, HPC, latency, Nexus 3000, Nexus 3500, Nexus 3548, Nexus 3K, ultra-low latency, Unified Fabric
We’re here at the MPI Forum in Vienna, where the Forum has just unanimously voted to accept the MPI-3.0 document.
This document caps a 4-year effort that started in January of 2008. MPI-3.0 clarifies many pending MPI-2.2 issues and adds some significant new user-level features to the standard:
Read More »
Tags: HPC, mpi, MPI-3.0
… While Delivering Superior Fabric Visibility!
Today, at the High Performance Computing for Wall Street event, we announced Cisco Algorithm Boost (Algo Boost) technology, a groundbreaking networking innovation, with numerous patents pending, that offers the highest speed and the richest visibility and monitoring capabilities in the networking industry. A true game changer, delivering competitive advantage to our customers!
Ideal for high performance trading, big data, and high performance computing environments, this new technology offers network access latency as low as 190 nanoseconds, more than 60% faster than other full-featured Ethernet switches. When your business success is determined by nanoseconds, this is a huge gain!
The first switch to integrate Cisco Algo Boost technology is the new Cisco Nexus 3548, a full-featured switch that extends Cisco’s leadership in networking by pairing performance and low latency with innovations in visibility, automation, and time synchronization. It is tightly integrated with the rich feature set of our Nexus Operating System, a proven operating system used in many of the world’s leading data centers, creating a truly differentiated offering.
So you may ask how we deliver this breakthrough offering that will change the game. Here is how…
Read More »
Tags: Algo Boost, Algorithm Boost, ASIC, Big Data, data center, high performance computing, high performance trading, High Performance Trading Fabric, High-Frequency Trading, HPC, latency, Nexus 3000, Nexus 3500, Nexus 3548, Nexus 3K, ultra-low latency, Unified Fabric
Most people’s reaction to hearing about the new MPI-3 non-blocking “barrier” collective is: huh?
Why on earth would you have a non-blocking barrier? The whole point of a barrier is to synchronize — how does it make sense not to block while waiting?
The key is re-phrasing that previous question: why would you block while waiting?
Read More »
Tags: HPC, mpi, MPI-3
In my last post, I described the Simple mode of Open MPI v1.7’s process affinity system.
The Simple mode is actually quite flexible, and we anticipate that it will meet most users’ needs. However, some users will need more flexibility. That’s what the Expert mode is for.
Before jumping into the Expert mode, though, let me describe two more features of the revamped v1.7 affinity system.
Read More »
Tags: HPC, hwloc, mpi, NUMA, Open MPI, process affinity