Cisco Blog: High Performance Computing Networking

MPI-3.1! …not quite yet

The MPI Forum met for our quarterly meeting last week in Portland, Oregon.

The main goal of the meeting was to pass the MPI-3.1 standard into law.  MPI-3.1 contains a bunch of errata from MPI-3.0, and a small number of new things.



MPI 3.1: coming soon to an implementation near you

The next MPI Forum meeting will be in Portland, OR, USA, in early March.

One of the major topics on the agenda will be voting on the MPI 3.1 standard.

You might be wondering what’s new in MPI-3.1.

I’m glad you asked.



As you probably already know, the MPI-3.0 document was published in September of 2012.

We even got a new logo for MPI-3.  Woo hoo!

The MPI Forum has been busy working on both errata to MPI-3.0 (which will be collated and published as “MPI-3.1”) and all-new functionality for MPI-4.0.

The current plan is to finalize all errata and outstanding issues for MPI-3.1 in our December 2014 meeting (i.e., in the post-Supercomputing lull).  This means that we can vote on the final MPI-3.1 document at the next MPI Forum meeting in March 2015.

MPI is sometimes criticized for being “slow” in development.  Why on earth would it take 2 years to formalize errata from the MPI-3.0 document into an MPI-3.1 document?

The answer is (at least) twofold:

  1. This stuff is really, really complicated.  What appears to be a trivial issue almost always turns out to have deeper implications that really need to be understood before proceeding.  This kind of deliberate thought and process simply takes time.
  2. MPI is a standard.  Publishing a new version of that standard has a very large impact; it decides the course of many vendors, researchers, and users.  Care must be taken to get that publication as correct as possible.  Perfection is unlikely — as scientists and engineers, we absolutely have to admit that — but we want to be as close to fully-correct as possible.

MPI-4 is still “in the works”.  Big New Things, such as endpoints and fault-tolerant behavior, are still under active development.  It’s still a ways off, so it’s a bit early to start making predictions about what will/will not be included.


Overlap of communication and computation (part 2)

In part 1 of this series, I discussed various pair-wise technologies and techniques that MPI implementations typically use for communication/computation overlap.

MPI-3.0, published in 2012, forced a change in the overlap game.

Specifically: most prior overlap work had been in the area of individual messages between a pair of peers.  These were very helpful for point-to-point messages, especially those of the non-blocking variety.  But MPI-3.0 introduced the concept of non-blocking collective (NBC) operations.  This fundamentally changed the requirements for network hardware offload.

Let me explain.
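In the meantime, here is a minimal sketch (not from the original post; the compute loop is just a stand-in for real application work) of the kind of overlap that an MPI-3.0 non-blocking collective such as MPI_Iallreduce makes possible:

    #include <mpi.h>

    /* Stand-in for application work that does not depend on the
       collective's result */
    static void compute_on_local_data(void)
    {
        volatile double x = 0.0;
        for (int i = 0; i < 1000000; ++i) {
            x += i * 0.5;
        }
    }

    int main(int argc, char *argv[])
    {
        double local = 1.0, global = 0.0;
        MPI_Request req;

        MPI_Init(&argc, &argv);

        /* Start the reduction, but do not block waiting for it */
        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* Useful work proceeds while the collective progresses --
           ideally offloaded to the network hardware */
        compute_on_local_data();

        /* Complete the collective before using "global" */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }

The interesting part is the gap between MPI_Iallreduce and MPI_Wait: the more of the collective the network hardware can progress on its own during that window, the more overlap the application actually gets.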



First public tools for the MPI_T interface in MPI-3.0

Today’s guest post is written by Tanzima Islam, Postdoctoral Researcher at Lawrence Livermore National Laboratory, and Kathryn Mohror and Martin Schulz, Computer Scientists at Lawrence Livermore National Laboratory.

The latest version of the MPI Standard, MPI 3.0, includes a new interface for tools: the MPI Tools Information Interface, or “MPI_T”.

MPI_T complements the existing MPI profiling interface, PMPI, and offers access to both internal performance information and runtime settings. It is based on the concept of typed variables that can be queried, read, and set through the MPI_T API.
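For a flavor of what that looks like in code, here is a minimal sketch (not from the guest post) that enumerates the control variables an implementation exposes through MPI_T; the names and the count are entirely implementation-specific:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int provided, num_cvar;

        /* MPI_T has its own init/finalize, independent of MPI_Init */
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        MPI_T_cvar_get_num(&num_cvar);
        printf("This implementation exposes %d control variables\n", num_cvar);

        for (int i = 0; i < num_cvar; ++i) {
            char name[256], desc[256];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype datatype;
            MPI_T_enum enumtype;

            /* Query the metadata for control variable i */
            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("  cvar %d: %s\n", i, name);
        }

        MPI_T_finalize();
        return 0;
    }

Performance variables (“pvars”) are read through an analogous set of calls, with the addition of a session object that scopes the measurements.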

