
MPI-3.0 Draft 2 public comment period

August 3, 2012 at 5:17 am PST

Dear MPI user,

The MPI Forum is about to ratify MPI-3.0, a new version of the MPI standard.  As part of this process, we are soliciting feedback on the current draft standard.  The draft document can be found here:

http://meetings.mpi-forum.org/draft_standard/mpi3.0_draft_2.pdf

We are seeking the following feedback in particular:

  • Small requests/corrections that can be resolved before finishing 3.0.
  • Requests for clarification of unclear text or undefined behavior.
  • Previously undetected severe inconsistencies or bugs that may delay publication of the standard.
  • Wishes for additional enhancements or new functionality to be taken into account after 3.0 is published.

Please comment before Sep. 6, 2012 as follows:

  1. Subscribe to: http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi-comments
  2. Then send your comments to: mpi-comments@lists.mpi-forum.org

Messages sent from an unsubscribed e-mail address will not be considered.

Thank you in advance for your help.

Best regards,
The MPI Forum



2 Comments.


  1. Hi Jeff,

    I see the ABI definition stuff didn’t make it into the draft spec. There seemed to be a lot of interest in that back in 08(?) when the 3.0 effort started, and it sure would solve a headache that many HPC centres have. Can you say why it got dropped?

    Cheers,

    M


  2. September 17, 2012 at 9:40 am

    There are many reasons ABI didn’t make it. I agree — it would be a Very Good Thing — but unfortunately, ABI wouldn’t solve all the perceived problems, either.
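
    To make the problem concrete, here is a rough sketch (my own illustration, not from the draft) of why today's MPI libraries are not interchangeable at the binary level: the opaque handle types in mpi.h are defined differently by each implementation, so the same source compiles everywhere, but the resulting binary only works with the MPI it was built against.

        /* Roughly what the two major implementation families do today
         * (paraphrased, not copied from either mpi.h):
         *
         *   MPICH family:  typedef int MPI_Comm;
         *   Open MPI:      typedef struct ompi_communicator_t *MPI_Comm;
         *
         * This program is portable at the source level, but the object
         * code it produces passes handles of different sizes and values,
         * so it cannot simply be relinked against a different MPI. */
        #include <mpi.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank;
            /* MPI_COMM_WORLD is a compile-time constant whose
             * representation differs between implementations. */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            MPI_Finalize();
            return 0;
        }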

    For example, you’d also have to standardize mpiexec (mpirun). That would be a massive, massive undertaking — parallel run-time systems are very much still an active area of research. Getting all MPI implementations and OS/parallel machine vendors to agree on a common mpiexec would be an incredibly difficult task.

    There were other issues, such as ISVs being uncomfortable with their software being able to run against any arbitrary MPI implementation (because they only QA-check with MPI implementations X, Y, and Z).

    It also seemed that someone could write an open source “MorphMPI” layer (which a few groups have done): a single, common implementation of the MPI / PMPI APIs that, under the covers, loads a plugin for whichever real MPI implementation does the actual work (you still have the problem with mpiexec, however). There has been some moderate success here, but nothing has really taken off yet.
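
    To sketch what such a layer looks like (hypothetical code, not from any particular project): the shim exposes one fixed set of entry points to the application and, at startup, dlopen()s whichever real MPI implementation is selected, for example via an environment variable. Names like MORPH_MPI_BACKEND and Morph_Init are made up for this example.

        /* Hypothetical MorphMPI-style shim: the application links against
         * this stable layer; the real MPI is chosen at run time. */
        #include <dlfcn.h>
        #include <stdio.h>
        #include <stdlib.h>

        static void *backend = NULL;
        static int (*real_init)(int *, char ***) = NULL;

        int Morph_Init(int *argc, char ***argv)
        {
            /* Select the real MPI library via an environment variable. */
            const char *path = getenv("MORPH_MPI_BACKEND");
            if (path == NULL) {
                fprintf(stderr, "MORPH_MPI_BACKEND not set\n");
                return 1;
            }

            backend = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
            if (backend == NULL) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            /* Forward to the real MPI_Init; a full shim would also have to
             * translate handles, constants, and datatypes in every call. */
            real_init = (int (*)(int *, char ***)) dlsym(backend, "MPI_Init");
            if (real_init == NULL) {
                fprintf(stderr, "dlsym failed: %s\n", dlerror());
                return 1;
            }
            return real_init(argc, argv);
        }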

    …and so on.

    The laundry list was large, and there weren’t enough people who said “yes, I’d be willing to work on that.” So it got dropped. :-(
