
MPI progress

In response to my blog post about the upcoming MPI-3 draft spec, Geoffrey Irving asked a follow-up question:

Can you describe the difference between the current situation and true background progression? Does the lack of background progression mean having to occasionally explicitly relinquish control to MPI in order to let one-sided operations proceed? Once true background progression is in place, would it involve extra threads and context switching, or use some other mechanism?

A great question. He asked it in the context of the new MPI-3 one-sided stuff, but it’s generally applicable to other MPI operations, too (even MPI_SEND / MPI_RECV).
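
For those who haven’t run into this issue before: many MPI implementations only advance outstanding operations when the application calls into the MPI library. A common workaround is to sprinkle MPI_Test (or similar) calls through your compute loop. Here’s a minimal C sketch of that pattern; do_some_work() is a hypothetical stand-in for your application’s computation, not a real MPI routine.

/*
 * A minimal sketch of manually "poking" the MPI progress engine.
 * Without true background progression, the MPI_Irecv below may only
 * advance when the application re-enters the MPI library (here, via
 * MPI_Test).  do_some_work() is a hypothetical placeholder for the
 * application's own computation.
 */
#include <mpi.h>

extern void do_some_work(void);

void overlap_compute_and_recv(void *buf, int count, int src)
{
    MPI_Request req;
    int done = 0;

    MPI_Irecv(buf, count, MPI_BYTE, src, 0, MPI_COMM_WORLD, &req);

    while (!done) {
        do_some_work();                            /* application computation */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* relinquish control to MPI
                                                      so the receive can progress */
    }
}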



MPI-3 voting: results

Last March’s MPI Forum meeting was the last meeting at which proposals could get a “formal reading” into MPI-3. Some were quite controversial. Some ended up being withdrawn before the next meeting.

This week’s Forum meeting in Japan saw the first vote (out of two) for each of the surviving proposals from the March meeting (see the full voting results here). Some continued to be quite controversial. Some didn’t survive their first votes (doh!). Others narrowly survived.

Here’s a summary of some of the users-will-care-about-these proposals, and how they fared:


The last new things in MPI-3

I know we’ve been talking about new MPI-3 things for forever.  But this is the last list of new things.

I promise.


I can say this with certainty because the Forum’s March meeting was the deadline for all new proposals to make it into the MPI-3 standard.  Anything else will have to be in MPI-<next> (where <next> may be 3.1, or 4, or …11.  Shrug).

Because of the deadline, we had a blizzard of proposals finally get into shape to be presented to the entire Forum.  Let’s talk about some of the more interesting ones…



New Fortran MPI bindings are “in”! And other MPI-3 stuff…

As of March 7, 2012, the new “use mpi_f08” bindings have been officially voted into the MPI-3 standard.

Woo hoo!!

A few other minor corrections made it into MPI-3 at the same meeting, but they’re boring / not worth discussing.

Worth discussing, however, are some proposals that passed their first (of two) formal votes to make it into MPI-3 at that same meeting:

Let’s give a few details on each of these…



The New MPI-3 Remote Memory Access (One Sided) Interface

Today we feature a deep-dive guest post from Torsten Hoefler, the Performance Modeling and Simulation lead of the Blue Waters project at NCSA, and Pavan Balaji, computer scientist in the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory (ANL) and a fellow of the Computation Institute at the University of Chicago.

Despite MPI’s vast success in bringing portable message passing to scientists on a wide variety of platforms, MPI has been labeled as a communication model that only supports “two-sided” and “global” communication. The MPI-1 standard, released in 1994, provided functionality for two-sided and group (collective) communication. The MPI-2 standard, released in 1997, added support for one-sided communication, or remote memory access (RMA), among other things. However, users have been slow to adopt these capabilities for a number of reasons, the primary ones being: (1) the model was too strict for several application behavior patterns, and (2) there were several missing features in the MPI-2 RMA standard. Bonachea and Duell put together a more-or-less comprehensive list of areas where MPI-2 RMA falls short. A number of alternative programming models, including Global Arrays, UPC, and CAF, have gained popularity by filling this gap.
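
For context, here’s roughly what the existing MPI-2-style RMA looks like in C: every process exposes a window of memory, and an origin process Puts data directly into a target’s window with no matching receive call on the target side. This is only an illustrative sketch of the fence-synchronized MPI-2 model (run it with at least 2 processes), not the new MPI-3 interface.

/*
 * Sketch of MPI-2-style one-sided communication: rank 0 Puts a value
 * directly into a window of memory exposed by rank 1, with no matching
 * receive on the target side.  The fences delimit the access epoch.
 * Run with at least 2 MPI processes.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, target_buf = 0, origin_val = 42;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int; only rank 1's window is actually used. */
    MPI_Win_create(&target_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);                  /* open the access epoch  */
    if (rank == 0)
        MPI_Put(&origin_val, 1, MPI_INT,    /* origin buffer          */
                1, 0, 1, MPI_INT, win);     /* target rank 1, disp 0  */
    MPI_Win_fence(0, win);                  /* close the access epoch */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", target_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}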

That’s where MPI-3 comes in.

