Cisco Blogs

MPI-2.2 is darn near done

- August 3, 2009

Torsten beat me to the punch last week (and insideHPC commented on it), but I’m still going to write my $0.02 about the MPI-2.2 spec anyway.

At last week’s MPI Forum meeting in Chicago (hosted at the beautiful Microsoft facility — gotta love those fruit+granola yogurt parfaits they serve!), we had the last round of 2nd votes on the MPI-2.2 specification. All changes and updates to MPI-2.1 are therefore closed. Woo hoo! All that remains is for us to integrate all the text that was voted on into a single, cohesive document, and then have a round of final votes at the next Forum meeting in Helsinki, Finland. These last votes in Helsinki are at least somewhat of a formality, but they ensure that we don’t make editing mistakes while transcribing all the proposals that passed into what will become the official MPI-2.2 standard document. A few MPI-2.2 proposals didn’t get resolved in time to make it into the final MPI-2.2 document (and we found at least one or two errors in the proposals that did pass), so we’ll be issuing a short MPI-2.2 errata document shortly after MPI-2.2 is published.

So what changes can users expect in the MPI-2.2 spec? Most of the changes are bug fixes or small evolutions — nothing “huge” is new. It is very important to note that any existing correct MPI-1 or MPI-2 code will still compile and run with an MPI-2.2 implementation with no changes. We fixed lots and lots of little things: mistakes in grammar, mistakes in code examples, inconsistencies in the text, references that had become obsolete since the mid-90’s, and so on. We also defined some behaviors that were previously undefined. As Ewing “Rusty” Lusk remarked when the Forum was re-convened 2+ years ago, “Standards work is grubby, grubby, grubby, grubby (but necessary!) work.”

Ok, I’m paraphrasing Rusty. But not much.

There were a small number of totally-new things introduced into MPI-2.2, but only when they were “small” (an admittedly subjective measure) and addressed a specific need. Torsten listed one of these in his post: the addition of a local reduction function. I’ll add a few other proposals, chosen more or less at random, to the list in Torsten’s entry:

  1. Deprecate the C++ bindings: Anyone on the Forum will tell you that this is one of my favorite proposals. As the original author of the C++ bindings, I am uniquely qualified to deprecate them. Note that this proposal deprecates the C++ bindings — it does not remove them! “Deprecating” is simply an indication that they may be removed in a future version of the MPI standard (I actually have an open proposal for MPI-3.0 to remove the C++ bindings — but that’s still a long way off!). The C++ bindings were a great idea that didn’t work out in practice. We intentionally designed the C++ MPI bindings to have a 1-to-1 correspondence to the C bindings — we’re standardizing bindings, after all, not a class library with additional functionality. The problem is that those C++ bindings didn’t offer a compelling enough reason for C++ programmers to use them over the C bindings. Indeed, one of the best recent examples of a C++ MPI class library (Boost.MPI) is implemented on the C bindings — not the C++ bindings. Additionally, the C++ bindings have required a large effort to maintain over the years. We don’t have many bona-fide C++ experts on the Forum (we have lots of C++ programmers, but few experts), making the job that much more difficult. Case in point: some errata to MPI-2.0, introduced in the late 1990’s, erroneously removed “const” from some of the C++ MPI handles. Whoops! Mistakes like this indicate to me that it is time for the C++ bindings to go; let the C++ community design new, cool, interesting, and genuinely much-more-useful class libraries than we can provide in standardized bindings.
  2. Fix MPI attribute examples: When we started the Open MPI project, I was the poor schlep who was tapped to implement the MPI attribute functionality. When reading the MPI spec to be sure that I was implementing the right stuff, I ran across some inconsistencies between the text and its examples that were tremendously confusing. I submitted errata back then, but we didn’t get a chance to fully address the more-subtle-than-expected set of issues until now. We replaced the two extremely terse (and erroneous!) examples with nine totally-explicit (and hopefully clear!) examples showing every possible combination of what can happen. Honestly, MPI attributes are minor functionality and this change probably won’t affect a huge number of people. But it certainly makes me feel better.
  3. Specify order of attribute delete callbacks: Quincey Koziol introduced this ticket to specify an ordering of the attribute deletion callbacks that MPI will invoke during MPI_FINALIZE. It’s such a small thing, yet so obvious — we should have done this from the beginning. This change will allow middleware to have transparent, deterministic cleanup when MPI shuts down. Huzzah to Quincey for shepherding this ticket through the process!
  4. Fortran -> Fortran 90: I mention this ticket just as an example of a small text update, but it’s one of those “it was right when we wrote it back in the mid-90’s” kinds of things. We made statements about Fortran in the MPI-2.x specs that weren’t really forward-looking — they addressed the state of Fortranedness back then. We slightly updated the wording to be a little less sensitive to the passage of time, indicated that we specifically mean “Fortran 90”, etc. I should note, too, that I’m on the MPI-3 Fortran Working Group — we have lots of delicious, yummy things coming for Fortran MPI programmers in MPI-3. We’ll be discussing them in detail and making a prototype implementation available at SC’09 this year, with the intent of actively seeking feedback from the Fortran community about the proposed changes.
  5. Add a local reduction function: Torsten already mentioned this one, but I wanted to throw in one extra point: it may be possible (and useful) for the MPI implementation to transparently offload single-process reductions onto GPGPUs (i.e., offloading may be more straightforward for a serial, single-process operation than for a parallel one). Nifty!
  6. Remove use of deprecated functions in examples: I don’t have a specific MPI-2.2 ticket to link to here because this issue spanned several. In short, we went through all the examples in MPI-2.2 and removed the use of all deprecated functions, replacing them with equivalent non-deprecated functions. I always found the use of deprecated functions in examples to be a disservice to programmers — it falsely gives the impression of “it’s deprecated, but go ahead and use it anyway.” Not so — they’re deprecated. Please don’t use them, so that we can remove them in a future version of the MPI specification!

This is just a sampling of the big and small changes we made, to give you a taste of the types of things that are deliberated on by the MPI Forum. Big things, small things… all are important.

Finally, it should be noted that MPI-2.2 was genuinely a team effort; it simply would not have been possible to do all this work individually. Not only was the work spread across many people, the end result also greatly benefited from the insights and viewpoints of a diverse set of experiences and backgrounds. The standard was (much) better for having undergone the iterative (and lengthy) process of debating and revising. Many, many thanks to all those who had the patience to endure!
