Cisco Blogs

The MPI C++ bindings are gone: what does it mean to you?

- October 19, 2012 - 3 Comments

Jeff Hammond at Argonne tells me that there’s some confusion in the user community about MPI and C++.  I explained how/why we got here in my first post; let Jeff (Hammond) and me now explain what this means to you.

The short version is: DON’T PANIC.

MPI implementations that provided the C++ bindings will likely continue to do so for quite a while.  I know that we have no intention of removing them from Open MPI any time soon, for example.  The MPICH guys have told me the same.

I’ll discuss below what this means both to applications that are written in C++ and to applications that use the MPI C++ bindings. First off, recognize that there are two different issues:

  1. Applications written in C++ that use the MPI C bindings.
  2. Applications that use the MPI C++ bindings.

Applications in the first category are completely unaffected by the removal of the MPI C++ bindings, and will continue to work exactly as they used to (and can actually work better than they used to; see below).

To be 100% clear: the MPI standard does not preclude supporting general C++ applications; MPI implementations that have supported C++ applications are free to continue to do so.  Don’t forget the famous adage that all C programs are also C++ programs by virtue of backwards compatibility (modulo a few minor corner cases in C99).  MPI has therefore always supported C++ programs via the C interface, which, in MPI-3, is now complete with respect to C++ types (see below).

Applications in the second category will likely continue to work in the immediate future because of a paragraph in the new MPI-3 chapter 16, entitled “Removed” interfaces, in section 16.2:

The C++ bindings were deprecated as of MPI-2.2. The C++ bindings are removed in MPI-3.0. The namespace is still reserved, however, and bindings may only be provided by an implementation as described in the MPI-2.2 standard.

This means that MPI-3 implementations can still include their existing MPI-2.2 C++ bindings.

As mentioned above, I imagine that most MPI implementations will continue to do so for some time. Hence, MPI applications using the C++ bindings will be ok for the near future.

That being said, if these applications want to keep working in the future, they should endeavor to convert their C++ MPI function calls to C MPI function calls. The conversion is not difficult (the C++ bindings were pretty much a 1:1 mapping to the C bindings, after all), but it can be tedious.  Perhaps someone will invent an automatic conversion tool.  Or perhaps your application only uses a few MPI C++ binding calls, or its MPI usage is restricted to a small portion of your overall application, in which case the conversion will be fairly easy.

Another scenario that should be considered for conversion is a C++-bindings-using application that starts using MPI-3 features.  In this case, it might be good to convert to using the C bindings, just for consistency (remember that MPI-3 functions only have C/Fortran bindings — no C++ bindings).

A small (but vocal) group of MPI developers who actively use the C++ bindings in their applications are quite annoyed with the Forum for removing the C++ bindings. And they have a right to be.

These users also brought to light a critical oversight in the existing C and Fortran bindings: MPI datatypes for some C++ types were missing from MPI-2.2.  An MPI-3 proposal added several MPI datatypes (e.g., MPI_CXX_FLOAT_COMPLEX) to support these C++ basic datatypes.  This proposal was passed, and is included in the final version of MPI-3.0.

If it’s some small consolation, know that there was a LOT of Forum debate over a long period of time about the C++ bindings.  Indeed, it is safe to characterize the C++ MPI datatype addition proposal as a direct result of this prolonged, vigorous debate.

Not everyone is happy with the outcome of this debate, but:

  • There is a viable path forward, in both the near and long term, for MPI applications that use the C++ bindings.
  • MPI C/Fortran support for C++ types is now better than it used to be.

Note, too, that some MPI users are already discussing a C++ interface that can do type inference instead of requiring an MPI_Datatype argument.  This is very definitely the type of discussion that should be occurring in the MPI C++ community.  Abstraction and exploiting native-language features are Good Things.

Additionally, it is well-known that Boost.MPI supports only MPI-1, so there’s plenty of room for someone to extend/complete Boost.MPI, or even develop a whole new interface.

Sidenote: extending Boost.MPI concepts (e.g., serialization) to include one-sided communication will likely require active-message support, which is not natively provided by MPI-3.  That being said, active messages are still being discussed by the Forum.  As such, a new C++ MPI interface might be timely if the Forum ever adds active-message functionality to the MPI standard.

As I mentioned in my prior post, the Forum’s bias is that we’ll continue working on C bindings and leave a higher-level C++ class library to third parties.  Never say “never”, but that’s the current thinking.

Comments


  1. Hi Jeff -- Could you clarify what you said about active messages being needed if Boost.MPI wanted to support one-sided communication? My understanding is that one-sided communication works as long as you aren't using a custom MPI_Op. Thus, MPI_Put and MPI_Get should be fine; it's just MPI_Accumulate and its ilk that pose issues. Is that correct? I actually wouldn't mind if one-sided communication were only supported for certain C++ types, as long as there is a traits class that tells me whether it's supported. I'm sad that we made the decision a while back not to use Boost.MPI. We had our reasons at the time (dealing with a certain compiler's inability to build Boost), but I do think we could have handled that better.

    • My understanding is that serialization must be symmetric with deserialization when used. In MPI send-recv, this is not a problem, since the Boost.MPI recv call can invoke the deserialization after MPI_Recv. However, in RMA, this is not possible because the target is passive. Looking at the Boost.MPI source, I see the following:

      /**
       * INTERNAL ONLY
       *
       * We're sending a type that does not have an associated MPI
       * datatype, so it must be serialized then sent as MPI_PACKED data,
       * to be deserialized on the receiver side.
       */
      template<typename T>
      void send_impl(int dest, int tag, const T& value, mpl::false_) const;

      The "serialized then sent as MPI_PACKED data, to be deserialized on the receiver side" part is what can't happen with RMA: there is no way to call MPI_Unpack on a passive target. Perhaps there is a way to do this all with MPI datatypes; I don't know, though, because I am not a C++ guru, nor do my applications require exotic types. I don't see any problem with your restricted usage of RMA (MPI datatypes but not Boost.Serialization objects), but I'm not sure the Boost.MPI community does. If there are to be new MPI C++ bindings in the standard some day in the future, I have to assume they will be conservative regarding types, such that they can be implemented in a straightforward way on top of the MPI communication features available at the time.

      • Thanks Jeff for taking a look at the Boost innards for me :) It seems like the Boost folks could just expose deserialization. Boost.MPI already depends explicitly on Boost's serialization facility. Once you synch up the window, there's a perfectly valid buffer of MPI_PACKED waiting for you to deserialize. There's nothing terribly exotic about that interface, though it would force them to distinguish between types that require serialization and types that don't. Another thing they could do would be to register a callback on the window for synch operations. The callback automatically deserializes when the user requests a synch. For C++ types T that require serialization, this would use a hidden array of char that would be the actual window. This would be tricky because you would have to know the offset into the window on both sides. Should I go talk to the Boost.MPI folks?