Cisco Blogs

Do you use the MPI C++ bindings?

May 15, 2010 - 6 Comments

Do you use the MPI C++ bindings in real-world MPI applications?

I’m not talking about using the MPI C bindings in C++ MPI applications (e.g., calling a C binding such as MPI_Send() from C++ code).  I’m talking about writing substantial C++ MPI applications that use the MPI C++ bindings (such as MPI::COMM_WORLD.Send()).



Do you do that?  Post a comment below and let me know.
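For concreteness, here is a minimal sketch of the two styles in one C++ program. The C calls are the standard MPI C bindings; the second style uses the MPI-2 C++ bindings (deprecated in MPI-2.2), assuming your MPI implementation still ships them. Only one style can actually execute per run, since MPI may be initialized at most once per process:

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
    // Style 1: the MPI C bindings, called from C++ code.
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Finalize();

    // Style 2: the MPI-2 C++ bindings -- namespaced classes and methods.
    // (Commented out: MPI can only be initialized once per process.)
    // MPI::Init(argc, argv);
    // int rank = MPI::COMM_WORLD.Get_rank();
    // MPI::Finalize();

    return 0;
}
```

Note that the C++ bindings are a thin veneer: MPI::COMM_WORLD.Get_rank() maps one-to-one onto MPI_Comm_rank(), which is part of why many C++ developers saw little reason to use them.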

I ask because there is some confusion in the MPI Forum as to exactly how many people use the MPI C++ bindings — and whether we should un-deprecate them.

Specifically, during the MPI-2.2 process, the MPI Forum put out a call asking if anyone was using the MPI C++ bindings.  We could not find any sizable MPI applications that used the C++ bindings.  Anecdotal / unsubstantiated evidence suggested that they were barely used (if at all) because C++ developers felt that the C++ bindings didn’t add enough C++ features to make them worthwhile.  Hence, they just used the C bindings, or used a higher-level class library (like Boost.mpi).

Sidenote: higher level class libraries like Boost.mpi are great. I’m all in favor of them.  Sadly, not all software is portable to all platforms — for example, Boost.mpi requires fairly advanced C++ compiler features which are not available on all modern HPC platforms.  Bummer!  For this (and other) reasons, the Forum has explicitly rejected standardizing such class libraries.  Another good reason not to standardize them is that these class libraries add definitions and semantics above and beyond what is specified in the MPI documents.  If we standardize these class libraries, we’d therefore be standardizing different behavior in different languages — it would effectively be a new standard.  An explicit goal of the MPI specification is that, to the greatest extent possible, MPI should provide the same behavior in every language.

Because the Forum couldn’t find any C++ bindings users (and because of a few other reasons), the MPI C++ bindings were deprecated in MPI-2.2.  They weren’t removed, mind you — just deprecated.  “Deprecated” is a very specific term that means two things:

  1. The bindings might be removed in a future version of the MPI specification document.
  2. No C++ bindings will be introduced for new MPI-3 functions.

In December 2010, the Forum conducted the MPI user survey, which I’ve mentioned in several prior blog entries.  One of the questions was:

When answering the following question, please remember that C++ MPI applications can use the C++ and/or C MPI bindings.  Do you have any MPI applications that are both written in C++ and use the MPI C++ bindings?

AUTHOR’S NOTE: Per Jed Brown’s comments (see below), the rest of this blog entry contained an incorrect statistical analysis.  I have removed the rest of this entry on May 20, 2010 and submitted a new entry with a corrected analysis.  Thanks for keeping me honest, Jed!



  1. I think that Open MPI ships with Solaris these days.  Open MPI is a much more recent / modern implementation of MPI than LAM/MPI (I can say that because I was the primary maintainer for LAM/MPI for years; several years ago, we shifted all focus to Open MPI).  As for whether it’s good for algo trading, it depends on what your requirements are.  Open MPI will take advantage of whatever low-latency network you might have (iWARP, Myricom MX, InfiniBand, etc.), for example.  But it depends on what kind of message patterns you want to effect, whether your machines are reliable or not, etc.

  2. Hi, I installed LAM/MPI on my Solaris 10 system.  I want to use MPI for an algo-trading parallel computing system.  Can I use it for algo trading?  What are the advantages & disadvantages of MPI for an algo-trading implementation?  Sanjai

  3. MPI is a general-purpose inter-process communication (IPC) tool.  It is usually used in parallel computing / high-performance computing environments.  Implementations of the MPI specification typically emphasize maximizing bandwidth and minimizing latency, but also strive to be efficient in a number of other metrics as well.  MPI is a little different than other IPC mechanisms because, among other things, it features a robust set of “collective” communication operations (e.g., broadcast, scatter, gather, barrier, etc.) in addition to a rich set of point-to-point communication operations.  That being said, these collective operations are one of the reasons that fault tolerance is typically a bit lacking in MPI implementations (Open MPI will kill an entire parallel job if one of its processes dies, for example).  In addition to reading the MPI spec itself (available from the MPI Forum), there are many books written about MPI.  You might want to browse through Amazon to see what’s available to find more information about MPI.

  4. Hi, this is Sanjai Kumar.  I want to know where MPI is extensively used: where it can be applied, and where it can’t.  Regards, Sanjai Kumar

  5. Jed is, of course, entirely correct.  He has been schooling me in email and on the phone; I will post a corrected set of stats shortly.  Many thanks, Jed!

  6. Jeff, the analysis leading to 7% and 10% is flawed: to offer statistics about group 1, you must also select group 1 from the set of respondents.  This will give a somewhat higher number, perhaps also around 20%.  That said, I doubt that the C++ bindings are in any way simplifying the code, or would be difficult to replace with C calls.  FWIW, I was a fan of the deprecation and hope it remains, even if only to liberate C++ libraries from feeling obliged to include the C++ headers and cope with the SEEK_* mess (which every implementation handles slightly differently, though recent MPICH2 and Open MPI do very well).  Marginal reductions in your work, compile time, and binary size are other bonuses.
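Jed’s point about conditioning on the right subpopulation can be illustrated with made-up numbers (these are purely illustrative, not the actual survey tallies): to report what fraction of C++-application users use the C++ bindings, you must divide by the number of respondents who have C++ applications, not by all respondents.

```python
# Hypothetical survey tallies -- illustrative only, not the real survey data.
respondents = 1000
cxx_app_users = 400        # respondents with C++ MPI applications
cxx_binding_users = 80     # of those, respondents using the MPI C++ bindings

# Flawed: dividing by all respondents understates usage within the group.
naive = cxx_binding_users / respondents            # 0.08 -> "8%"

# Correct: condition on the group the statistic is actually about.
conditioned = cxx_binding_users / cxx_app_users    # 0.20 -> "20%"

print(f"naive: {naive:.0%}, conditioned: {conditioned:.0%}")
```

With these numbers the flawed calculation reports 8% while the correctly conditioned figure is 20%, which matches the direction of Jed’s correction: the conditioned number is necessarily at least as large.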