MPI-3 voting: results
Last March’s MPI Forum meeting was the last chance to get a “formal reading” of proposals into MPI-3. Some were quite controversial. Some ended up being withdrawn before the next meeting.
This week’s Forum meeting in Japan saw the first vote (out of two) for each of the surviving proposals from the March meeting (see the full voting results here). Some continued to be quite controversial. Some didn’t survive their first votes (doh!). Others narrowly survived.
Here’s a summary of some of the users-will-care-about-these proposals, and how they fared:
- MPI3 Hybrid Programming: Proposal for Helper Threads: This proposal would have allowed MPI to pool multiple application threads to make progress on pending MPI work. It’s somewhat complicated, and has a long history. I don’t know exactly what happened (I wasn’t at the meeting), but this one failed its vote.
- Add Immediate versions of nonblocking collective I/O routines: This proposal added true non-blocking versions of some MPI-IO functions, like MPI_File_iread_all. There’s some controversy here, but it looks like this ticket failed its vote due to timing and too many organizations abstaining from the vote. Doh. 🙁
- Remove C++ Bindings: This is the so-called “The C++ bindings must die Die DIE” ticket. I’m generally in favor of it, but can see why some others are not. It passed.
- Move MPI-1 deprecated functions to new “Removed Interfaces” chapter: This ticket moves all the MPI functions that have been deprecated since the mid-’90s into their own “don’t use these functions any more!” chapter. MPI implementations can still provide these functions, but they will officially not be part of MPI-3. This ticket passed, but somewhat narrowly.
- Clarify MPI behavior when multiple MPI processes run in the same address space: This ticket would have clarified what happens when MPI processes are implemented as threads — something some MPI implementations have done in the past (and IBM wants to do in the future). It failed.
- User-Level Failure Mitigation: This is the (in)famous “fault tolerance” proposal that was introduced in the last meeting. It was quite controversial, and failed its vote in this week’s meeting. IMHO, this proposal deserves to be resurrected and continued for MPI-after-3.0; it contains a bunch of good ideas.
- Allocate a shared memory window: Users have long asked for MPI to play nice with shared memory. This ticket allows MPI to create a shared memory blob, and even use it with MPI one-sided semantics, if desired. It passed.
Remember that all tickets that passed their first votes must still pass a second vote before they’re (mostly) in MPI-3.
Then we’ll vote on each chapter in its entirety, and finally on the entire MPI-3 document. Then we’re done!
Wasn’t that easy?