I’ve previously written about libfabric. Here are some highlights: libfabric is a set of next-generation, community-driven, ultra-low-latency networking APIs. The APIs are not tied to any particular networking hardware model. Cisco is …
Cisco is pleased to announce the intention to support the Intel MPI Library™ with usNIC on the UCS server and Nexus switch product lines, over the ultra-low-latency Ethernet and routable IP transports, at both 10GE and 40GE speeds. usNIC will be …
I’ve mentioned libfabric on this blog a few times: it’s a set of next-generation APIs that allow direct access to networking hardware (e.g., high-speed / low-latency NICs) from Linux userspace (kernel access is in the works). To give you a …
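To make that concrete, here’s a minimal sketch (mine, not from the original post) of what talking to libfabric from userspace looks like: it simply asks the library which providers are available. Error handling is minimal, and the provider names printed will depend on your hardware and libfabric installation.

```c
/* Minimal libfabric discovery sketch: ask the library which providers
 * (i.e., which userspace-accessible NICs/transports) are available.
 * Compile with: gcc discover.c -lfabric */
#include <stdio.h>
#include <rdma/fabric.h>

int main(void)
{
    struct fi_info *info, *cur;

    /* NULL hints: report every available provider/endpoint combination */
    int ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, NULL, &info);
    if (ret != 0) {
        fprintf(stderr, "fi_getinfo failed: %d\n", ret);
        return 1;
    }

    for (cur = info; cur != NULL; cur = cur->next) {
        printf("provider: %s, fabric: %s\n",
               cur->fabric_attr->prov_name,
               cur->fabric_attr->name);
    }

    fi_freeinfo(info);
    return 0;
}
```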
I’m stealing this text directly from Torsten Hoefler’s blog, because I think it’s directly relevant to many of this blog’s readers: Our book on “Using Advanced MPI” will appear in about a month; now is the time to pre-order …
It’s that time of year again: we’re at about T-2.5 weeks to the Supercomputing conference and trade show; SC’14 is in New Orleans, November 16-21. Are you going to get some tasty gumbo and supercharged computing power? If …
Today’s blog post is by Nathan Hjelm, a Research Scientist at Los Alamos National Laboratory and a core developer on the Open MPI project. The latest version of the “vader” shared memory Byte Transfer Layer (BTL) in the upcoming …
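As a hedged aside (not part of Nathan’s post): you would typically select the vader BTL at run time with an MCA parameter on the mpirun command line. Here, ./my_app is a placeholder for your own MPI executable.

```
# Force on-node point-to-point traffic through the vader shared-memory BTL
# ("self" handles a process sending to itself).
mpirun --mca btl vader,self -np 4 ./my_app
```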
Today’s guest post comes from Ralph Castain, a principal engineer at Intel. The bulk of this post is an email he sent explaining the concept of a “slot” in typical HPC schedulers. This is a little departure from the normal fare on …
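As a quick, hedged illustration of slots in practice (the hostnames below are hypothetical): in an Open MPI hostfile, slots=N tells the launcher how many processes it may place on each node, typically one per processor core.

```
# Hypothetical hostfile: two nodes, four slots each, so
#   mpirun --hostfile myhosts -np 8 ./my_app
# fills both nodes exactly.
node01 slots=4
node02 slots=4
```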
Today’s guest post is by Reese Faucette, one of my fellow usNIC team members here at Cisco. I’m pleased to announce that this past Friday, Cisco contributed a usNIC-based provider to libfabric, the new API in the works from OpenFabrics …
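As a hedged sketch of what using that provider might look like from an application’s point of view (illustrative only, not code from the contribution itself): libfabric lets you pass hints to fi_getinfo() asking for a specific provider by name.

```c
/* Hedged sketch: request only the "usnic" libfabric provider via hints.
 * Compile with: gcc usnic_query.c -lfabric */
#include <stdio.h>
#include <string.h>
#include <rdma/fabric.h>

int main(void)
{
    struct fi_info *hints = fi_allocinfo();
    struct fi_info *info;

    if (hints == NULL)
        return 1;

    /* Ask for the usNIC provider by name; fi_freeinfo() releases this */
    hints->fabric_attr->prov_name = strdup("usnic");

    int ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, hints, &info);
    if (ret != 0) {
        fprintf(stderr, "no usnic provider found: %d\n", ret);
        fi_freeinfo(hints);
        return 1;
    }

    printf("found provider: %s\n", info->fabric_attr->prov_name);
    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```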
As you probably already know, the MPI-3.0 document was published in September of 2012. We even got a new logo for MPI-3. Woo hoo! The MPI Forum has been busy working on both errata to MPI-3.0 (which will be collated and published as …
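If you’re wondering whether your own MPI installation implements MPI-3.0, here’s a small, hedged example using the standard version macros and MPI_Get_version(); an MPI-3.0 library reports version 3, subversion 0.

```c
/* Quick check of which MPI standard version the library implements.
 * Compile with: mpicc version.c && mpirun -np 1 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int version, subversion;

    MPI_Init(&argc, &argv);
    MPI_Get_version(&version, &subversion);   /* runtime value */
    printf("compile-time: MPI %d.%d, runtime: MPI %d.%d\n",
           MPI_VERSION, MPI_SUBVERSION, version, subversion);
    MPI_Finalize();
    return 0;
}
```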