
Getting towards an MPI-3.0 draft

July 20, 2012 at 11:25 am PST

I’ve been a bit tardy with my blogging responsibilities of late, but only because I’ve been swamped with MPI stuff.  Honest!

This past week, the MPI Forum met in Chicago and had a huge text-merging party.  Specifically, we took all the MPI-3 proposals that had passed and actually merged their text into a single document.  We did this in parallel (get it?) by dividing up the tickets and chapters among all the meeting participants.  It was quite amazing to watch, actually.  :-)

The merges resulted in a few conflicts here and there, a probably-inevitable set of LaTeX issues, some “Hey, why isn’t the Subversion server responding?” complaints, and some last-minute “Hey, that doesn’t look quite right…”-isms.

All that being said, it actually was a highly successful week, and the MPI-3 document is looking to be in very, very good shape.  We fixed oodles of little problems, cleaned up bunches of typos, and generally smoothly merged all the proposals into a good-looking document.

We’ve still got a little more work to do, but the plan is to have a darn-near-complete MPI-3.0 draft put out to the public by the end of next week (i.e., around Friday, 27 July 2012).

Enjoy!



4 Comments.


  1. How long do you think it’ll be before MPI 3 implementations are fairly standard in production environments? What about OpenMPI in particular?


  2. July 25, 2012 at 12:32 pm

    I know that Open MPI and MPICH2 are both working towards MPI-3 compliance. At this point, the two have different levels of conformance to the major new items in what will be MPI-3.0.

    I predict it’ll still be quite a while before all of MPI-3 is fully, fully, fully implemented in the major MPI implementations. For example, while Open MPI is “working on it”, having true background progression (which is really required for some of the new MPI-3 one-sided stuff) is a fairly large architectural change for us, and it will take time. I don’t see this particular feature being done before next summer, at the earliest.
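    For context, here is a minimal sketch (mine, not Open MPI’s internals) of the kind of MPI-3 passive-target one-sided code where true background progression matters: rank 0 writes into rank 1’s window while rank 1 makes no matching MPI calls, so a polling-only implementation may stall the transfer until rank 1 happens to re-enter the library. Run with at least 2 processes.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* MPI-3 addition: allocate the window memory and create
               the window in a single call. */
            int *buf;
            MPI_Win win;
            MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                             MPI_COMM_WORLD, &buf, &win);
            *buf = -1;

            if (rank == 0) {
                int value = 42;
                /* Passive target: rank 1 makes no matching MPI call
                   here.  Without background progression, the put may
                   not advance until rank 1 enters the MPI library. */
                MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
                MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
                MPI_Win_unlock(1, win);   /* complete at the origin */
            }

            MPI_Barrier(MPI_COMM_WORLD); /* order the put before the read */
            if (rank == 1) {
                /* Lock/unlock our own window to synchronize it
                   before reading locally. */
                MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
                printf("rank 1 saw %d\n", *buf);
                MPI_Win_unlock(1, win);
            }

            MPI_Win_free(&win);
            MPI_Finalize();
            return 0;
        }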

    That being said, a bunch of other things are implemented in OMPI already:

    - New mpi_f08 Fortran module and revamped mpi Fortran module (for non-gfortran compilers)
    - Most of the MPI-3 one-sided stuff
    - Basic implementations of the non-blocking collectives (see the sketch after this list)
    - Bunches of other little things (e.g., the new MPI_REDUCE_LOCAL function)
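    As a concrete illustration of the non-blocking collectives item, here is a small sketch using standard MPI-3 calls (nothing Open MPI-specific): start an MPI_Iallreduce, leave room for overlapping local work, then wait for completion.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int sendval = rank, sum = 0;
            MPI_Request req;

            /* MPI-3: the non-blocking version of MPI_Allreduce */
            MPI_Iallreduce(&sendval, &sum, 1, MPI_INT, MPI_SUM,
                           MPI_COMM_WORLD, &req);

            /* ...overlap unrelated local computation here... */

            MPI_Wait(&req, MPI_STATUS_IGNORE);
            if (rank == 0) {
                printf("sum of ranks 0..%d = %d\n", size - 1, sum);
            }

            MPI_Finalize();
            return 0;
        }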

    We haven’t yet made a definitive list of all the MPI-3 things we’ve implemented so far in Open MPI; we will likely do so before the v1.7 release (scheduled for this fall).


  3. Thanks for the quick reply!

    I imagine this may be an involved question, but can you describe the difference between the current situation and true background progression? Does the lack of background progression mean having to occasionally explicitly relinquish control to MPI in order to let one-sided operations proceed? Once true background progression is in place, would it involve extra threads and context switching, or use some other mechanism?
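    To make my question concrete, here is a hypothetical sketch of what I mean by periodically relinquishing control to MPI (my guess at the pattern, not necessarily what Open MPI requires): the application drops into any MPI call now and then so a polling-based library can advance pending operations.

        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int flag;
            for (int step = 0; step < 1000; ++step) {
                /* ...one slice of the application's own computation... */

                /* Entering any MPI call (MPI_Iprobe is a common
                   choice) gives a polling-based implementation a
                   chance to progress pending one-sided and
                   non-blocking operations. */
                MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                           &flag, MPI_STATUS_IGNORE);
            }

            MPI_Finalize();
            return 0;
        }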


  4. July 26, 2012 at 1:27 pm

    Mmm… good questions!

    Let me make a separate blog entry to answer those. :-)
