MPI-3.1! …not quite yet

The MPI Forum met for our quarterly meeting last week in Portland, Oregon. The main goal of the meeting was to pass the MPI-3.1 standard into law.  MPI-3.1 contains a bunch of errata fixes to MPI-3.0, and a small number of new features.

A Farewell to LAM/MPI

With a little sadness, I note that LAM/MPI was officially retired recently. LAM/MPI’s hosting provider, Indiana University, decided not to renew the lam-mpi.org domain.  As of a few weeks ago, LAM/MPI’s web site is no more, and its domain is in the process of expiring. LAM/MPI was a highly popular implementation […]

Open MPI: behind the scenes

Working on an MPI implementation isn’t always sexy.  There’s a lot of grubby, grubby work that needs to happen on a continual basis to produce a production-quality MPI implementation that can be used for real-world HPC applications. Sure, we always need to work on optimizing short message latency. Sure, we need to keep driving MPI’s […]

MPI 3.1: coming soon to an implementation near you

The next MPI Forum meeting will be in Portland, OR, USA, in early March. One of the major topics on the agenda will be voting on the MPI 3.1 standard. You might be wondering what’s new in MPI-3.1. I’m glad you asked.
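
As a small taste, one of the minor additions that made it into MPI-3.1 is a pair of portable address-arithmetic helpers, MPI_Aint_add() and MPI_Aint_diff(). Here's a minimal sketch of how they might be used (illustrative only, not taken from the post above):

    /* Illustrative sketch (not from the post): MPI-3.1 adds MPI_Aint_add()
     * and MPI_Aint_diff() for portable arithmetic on MPI_Aint addresses,
     * e.g. when computing displacements for datatype or one-sided code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        double buf[16];
        MPI_Aint base, elem4, disp, back;

        MPI_Init(&argc, &argv);

        /* Absolute addresses of the buffer and of its 5th element */
        MPI_Get_address(&buf[0], &base);
        MPI_Get_address(&buf[4], &elem4);

        /* MPI-3.1: portable address arithmetic, no casts to plain integers */
        disp = MPI_Aint_diff(elem4, base);   /* displacement of buf[4]   */
        back = MPI_Aint_add(base, disp);     /* should equal elem4 again */

        printf("displacement = %ld, addresses match: %s\n",
               (long) disp, (back == elem4) ? "yes" : "no");

        MPI_Finalize();
        return 0;
    }

Compile with mpicc against an implementation that has reached the MPI-3.1 level of compliance.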

Tree-based launch in Open MPI (part 2)

In my prior blog entry, I described the basics of Open MPI’s tree-based launching system over ssh (yes, there are still some valid / good reasons for using ssh over a native job scheduler / resource manager’s parallel launch mechanisms…). That entry got a little long, so I split the rest of the discussion into […]
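
For a flavor of the underlying idea, here's a tiny hypothetical sketch (not Open MPI's actual code) of how a k-ary launch tree can be computed: each daemon derives its parent and children from its own ID, so it only has to ssh to its children rather than one node sshing to everyone. The fan-out value below is purely an assumption for illustration; real launchers tune it.

    /* Hypothetical sketch (not Open MPI's actual code): computing a node's
     * parent and children in a k-ary launch tree.  Node 0 is the root; each
     * node only launches (e.g., via ssh) its own children, so total launch
     * time grows with the depth of the tree rather than the node count. */
    #include <stdio.h>

    #define FANOUT 2   /* assumed fan-out, purely for illustration */

    static int parent_of(int id)
    {
        return (id == 0) ? -1 : (id - 1) / FANOUT;
    }

    static void print_children(int id, int num_nodes)
    {
        for (int c = id * FANOUT + 1;
             c <= id * FANOUT + FANOUT && c < num_nodes; ++c) {
            printf("  node %d would ssh-launch node %d\n", id, c);
        }
    }

    int main(void)
    {
        const int num_nodes = 7;

        for (int id = 0; id < num_nodes; ++id) {
            printf("node %d (parent %d):\n", id, parent_of(id));
            print_children(id, num_nodes);
        }
        return 0;
    }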

Tree-based launch in Open MPI

I’ve mentioned it before: the run-time systems of MPI implementations are frequently unsung heroes. A lot of blood, sweat, tears, and innovation goes into parallel run-time systems, particularly those that can scale to very large systems.  But they’re not discussed often, mainly because they’re not as sexy as ultra-low latency numbers, or other popular […]

Holiday wishes

As usual, in the post-Supercomputing / post-US-Thanksgiving-holiday lull, the work that we have all put off since we started ignoring it to prepare for Supercomputing catches up to us.  Inevitably, it means that my writing here at the blog falls behind in December.  Sorry, folks! To make up for that, here’s a little ditty I […]

libfabric support of usNIC in Open MPI

I’ve previously written about libfabric.  Here are some highlights:

- libfabric is a set of next-generation, community-driven, ultra-low-latency networking APIs
- The APIs are not tied to any particular networking hardware model
- Cisco is actively helping define, design, and develop the libfabric APIs as part of the community

My fellow team member Reese Faucette recently contributed a […]
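
For the curious, here's a minimal illustrative sketch (mine, not from the post, and assuming libfabric 1.x's fi_getinfo() API) of the kind of provider discovery libfabric offers; on a usNIC-capable UCS node, the usnic provider would be expected to show up in the list:

    /* Illustrative sketch (assumes libfabric 1.x; not from the post):
     * ask libfabric which providers/fabrics are available on this host.
     * On a usNIC-capable Cisco UCS node, the "usnic" provider would be
     * expected to appear in this list. */
    #include <stdio.h>
    #include <rdma/fabric.h>

    int main(void)
    {
        struct fi_info *info = NULL, *cur;
        int ret;

        /* No hints: report everything every provider can offer */
        ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, NULL, &info);
        if (ret != 0) {
            fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
            return 1;
        }

        for (cur = info; cur != NULL; cur = cur->next) {
            printf("provider: %-10s fabric: %s\n",
                   cur->fabric_attr->prov_name,
                   cur->fabric_attr->name);
        }

        fi_freeinfo(info);
        return 0;
    }

Compile with something like cc list_providers.c -lfabric, assuming the libfabric development headers are installed.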

usNIC support for the Intel MPI Library

Cisco is pleased to announce its intention to support the Intel MPI Library™ with usNIC on the UCS server and Nexus switch product lines over the ultra-low-latency Ethernet and routable IP transports, at both 10GE and 40GE speeds. usNIC will be enabled by a simple library plugin to the uDAPL framework included in […]