In the January MPI Forum meeting, several proposals passed their second votes, meaning that they are “in” MPI-3. That being said, MPI-3 is not yet finalized (and won’t be for many more months), so changes can still happen. The proposals that passed are:
- Creating MPI_COMM_SPLIT_TYPE
- Making the C++ bindings optional
- Updating RMA (a.k.a., “one-sided”)
- Creating a new “MPIT” tools interface
I’ll describe each of these briefly below.
MPI_COMM_SPLIT_TYPE: This new API function splits a communicator according to type. The new predefined type MPI_COMM_TYPE_SHARED lets users split a communicator (e.g., MPI_COMM_WORLD) into sub-communicators composed of processes that can communicate via shared memory. Effectively, this makes it easy to create per-node communicators. Note, too, that upcoming MPI-3 proposals extend MPI_COMM_SPLIT_TYPE by adding new predefined types for other split patterns.
Optional C++ bindings: A single sentence has been added making the C++ bindings optional. That is, an implementation can choose not to provide the MPI C++ bindings. Fun fact: most people don’t know that the Fortran bindings have been optional since MPI-1, even though most MPI implementations provide them! It should be noted that another proposal is churning through the Forum to delete the C++ bindings entirely, which would leave MPI with official bindings only for C and Fortran. If it passes, deleting the C++ bindings would obviously make their optionality (is that a word?) moot.
Updated the RMA chapter: The RMA chapter has received a significant overhaul. It is unfortunately still quite complex and fairly subtle, but I am told by multiple people who were involved in the RMA revamp that many problems from the MPI-2 RMA definitions were fixed. There’s some new functionality, too, of course — but since I wasn’t involved in the RMA working group, I’m not going to try to describe the new stuff for fear of being incorrect. :-)
New “MPIT” interface: The Tools working group designed a new interface that tools can use to harvest information from a running MPI application. Debuggers, profilers, and correctness-checking frameworks can now query a much richer set of information from the innards of an MPI implementation. This information can be presented to a user to help them more deeply understand the run-time characteristics of their MPI application.