Today’s guest post is by Reese Faucette, one of my fellow usNIC team members here at Cisco.
I’m pleased to announce that this past Friday, Cisco contributed a usNIC-based provider to libfabric, the new API in the works from the OpenFabrics Interfaces Working Group.
(Editor’s note: I’ve blogged about libfabric before)
Yes, the road is littered with the bodies of APIs that were great ideas at the time (or not), but that doesn’t change the fact that neither Berkeley sockets nor Linux Verbs is really adequate as a cross-vendor, high-performance programming API.
Tags: HPC, libfabric, mpi, USNIC
As you probably already know, the MPI-3.0 document was published in September of 2012.
We even got a new logo for MPI-3. Woo hoo!
The MPI Forum has been busy working on both errata to MPI-3.0 (which will be collated and published as “MPI-3.1”) and all-new functionality for MPI-4.0.
The current plan is to finalize all errata and outstanding issues for MPI-3.1 in our December 2014 meeting (i.e., in the post-Supercomputing lull). This means that we can vote on the final MPI-3.1 document at the next MPI Forum meeting in March 2015.
MPI is sometimes criticized for being “slow” in development. Why on earth would it take 2 years to formalize errata from the MPI-3.0 document into an MPI-3.1 document?
The answer is (at least) twofold:
- This stuff is really, really complicated. What appears to be a trivial issue almost always turns out to have deeper implications that really need to be understood before proceeding. This kind of deliberate thought and process simply takes time.
- MPI is a standard. Publishing a new version of that standard has a very large impact; it shapes the course of many vendors, researchers, and users. Care must be taken to get that publication as correct as possible. Perfection is unlikely — as scientists and engineers, we absolutely have to admit that — but we want to be as close to fully correct as possible.
MPI-4 is still “in the works”. Big New Things, such as endpoints and fault-tolerant behavior, are still under active development. MPI-4 is still a ways off, so it’s a bit early to start making predictions about what will/will not be included.
Tags: HPC, mpi, MPI-3
In part 1 of this series, I discussed various peer-wise technologies and techniques that MPI implementations typically use for communication / computation overlap.
MPI-3.0, published in 2012, forced a change in the overlap game.
Specifically: most prior overlap work had been in the area of individual messages between a pair of peers. These were very helpful for point-to-point messages, especially those of the non-blocking variety. But MPI-3.0 introduced the concept of non-blocking collective (NBC) operations. This fundamentally changed the requirements for network hardware offload.
Let me explain.
Tags: HPC, mpi, MPI-3
I’ve mentioned computation / communication overlap before (e.g., here, here, and here).
Various types of networks and NICs have long had some form of overlap. Some offer better-quality overlap than others, from an HPC perspective.
But with MPI-3, we’re really entering a new realm of overlap. In this first of two blog entries, I’ll explain some of the various flavors of overlap and how they are beneficial to MPI/HPC-style applications.
Tags: HPC, mpi
A few months ago, I posted an entry entitled “HPC in L3”. My only point for that entry was to remove the “HPC in L3? That’s a terrible idea!” knee-jerk reaction that we old-timer HPC types have.
I mention this because we released a free software update a few days ago for the Cisco usNIC product that enables usNIC traffic to flow across UDP (vs. raw L2 frames). Woo hoo!
That’s right, sports fans: another free software update to make usNIC even better than ever. Especially across 40Gb interfaces!
Tags: HPC, HPC in L3, ip, mpi, UDP, USNIC