Recently, a reader asked me about how MPI defines its global constants.
More specifically, the user was asking how MPI defines its interactions with languages other than C and Fortran (i.e., the two officially-supported language bindings).
This is a good question, and has implications on both the MPI standards documents and various MPI implementations. Let’s dive in.
First, let’s discuss the “how does MPI define its global symbols?” part of the question.
In the MPI-3.1 document, section 2.5.4 defines MPI’s named constants. All of them are guaranteed to be link-time constants, but not necessarily compile-time constants. There are basically three categories:
- Compile-time constants. These can be used in initialization expressions and wherever a compile-time constant is required, such as array lengths and C switch and Fortran select/case statements. Specifically: the values of these symbols do not change during execution. Examples include MPI_VERSION and MPI_MAX_PROCESSOR_NAME (see the sketch after this list).
- Link-time symbols that may change value between MPI initialization and completion. These symbols can be used in initialization expressions, but you must ensure that the initialization occurs after MPI_INIT[_THREAD]. All MPI pre-defined handles fall in this category, such as MPI_COMM_WORLD and MPI_INT.
- Link-time symbols that cannot be used in initialization expressions. These primarily affect access to specific symbols in Fortran, such as MPI_BOTTOM and MPI_STATUS_IGNORE.
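Here is a minimal C sketch of the practical difference between the first two categories. It assumes only a conforming mpi.h; the variable names are just for illustration:

```c
#include <mpi.h>
#include <stdio.h>

/* Category 1: a compile-time constant may size an array or appear in a
 * static initializer. */
static char proc_name[MPI_MAX_PROCESSOR_NAME];

/* Category 2: pre-defined handles are only link-time constants, so defer
 * reading them until after MPI has been initialized. */
static MPI_Comm my_comm;

int main(int argc, char **argv)
{
    int len;

    MPI_Init(&argc, &argv);
    my_comm = MPI_COMM_WORLD;   /* safe: MPI is now initialized */
    MPI_Get_processor_name(proc_name, &len);
    printf("Hello from a process on %s\n", proc_name);
    MPI_Finalize();
    return 0;
}
```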
Reading between the lines, you can infer that MPI is only defining behavior, not a specific implementation. That is, MPI does not define what the type of an MPI handle is; it just says, for example, that MPI_COMM_WORLD is of type MPI_Comm. MPI_Comm itself is then defined by each MPI implementation, not by the standard.
To be totally clear about that point: MPI does not define an ABI.
Here are two quick facts that exemplify the above assertion:
- Compile-time constants may have different values in different MPI implementations (e.g., MPI_MAX_PROCESSOR_NAME is not the same in Open MPI and MPICH).
- C MPI handles may be a pointer (e.g., Open MPI) or an integer (e.g., MPICH), as sketched below.
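To make the second fact concrete, here is a rough sketch of how the two implementations declare the same type in fundamentally incompatible ways. The declarations are paraphrased, not copied, from the respective mpi.h headers, and the ILLUSTRATE_OPEN_MPI macro is just a device for this example:

```c
#if defined(ILLUSTRATE_OPEN_MPI)
/* Open MPI style: a handle is a pointer to an opaque structure, and
 * MPI_COMM_WORLD resolves to the address of a global object in libmpi. */
typedef struct ompi_communicator_t *MPI_Comm;
#else
/* MPICH style: a handle is a plain integer, and MPI_COMM_WORLD is an
 * integer constant defined in mpi.h. */
typedef int MPI_Comm;
#endif
```

An application compiled against one of these headers clearly cannot exchange handles with a library built against the other.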
A few years ago, the MPI Forum created an ABI working group to see if we could resolve the issues and define, once and for all, an ABI specification that could apply to all MPI implementations. On the surface, there are several things that would need to be covered:
- Symbol types (e.g., MPI_VERSION must be an int in C and an INTEGER in Fortran)
- Symbol values (e.g., for MPI version X, MPI_MAX_PROCESSOR_NAME must be 32)
- Symbol names (e.g., MPI_ANY_SOURCE must be a symbol of exactly that name in C)
- Library names (e.g., the C MPI library must be named libmpi)
However, not only is it difficult to come up with a least-common-denominator set of definitions that satisfies the above four categories, but there are also more subtle, complicated issues that an MPI ABI specification would have to address:
- C++ and Fortran compilers have no standardized symbol-mangling algorithm. It is therefore impossible for MPI to mandate what symbols will exist in non-C environments. Even in C-only environments, multiple compilers on the same platform sometimes have different calling conventions and/or bootstrapping symbols that can conflict (or simply differ), and even compilers that maintain ABI compatibility with the GNU C compiler sometimes have bugs.
- There is no commonality between the runtime systems of MPI implementations. How you launch MPI processes in one implementation commonly has little relation to how you launch MPI processes in another implementation. The cross product of runtime systems on different platforms and runtime systems in different MPI implementations makes the determination of a least-common-denominator set of requirements and standardization a nightmare, at best.
- There are no standardized wire protocols between MPI implementations. Hence, if you run your MPI job with Open MPI on some servers and MPICH on others, there’s no guarantee that they would be able to interoperate (hint: they won’t).
- Similarly, there are no standardized algorithms for MPI’s more complicated operations, such as collectives and IO-based operations.
I realize that some of the above bullets digress into interoperability, but the line between ABI and interoperability is quite thin. Once you have an ABI, it's a fairly small jump to assume that the mpiexec from MPI implementation A should be able to launch an MPI job with an app linked against the libmpi from MPI implementation B (hint: that doesn't work because of the lack of interoperability between MPI implementation runtimes).
The point is that binary compatibility between MPI implementations comprises (much) more than just an ABI. An MPI ABI still wouldn't solve the other problems (such as launcher interoperability), and therefore doesn't really gain much for the end user.
Regardless, the above bullets can be summed up in a specific, intentional goal of the MPI standards:
MPI prescribes an API and the behavior of that API. MPI defines what happens, but not how it happens.
It's also worth noting that MPI is implemented over a hugely heterogeneous set of hardware and software platforms. Each of these platforms has unique hardware and software features that can be used to optimize MPI operations in different ways. It is critical to give MPI implementations the freedom to exploit those features. By definition, such optimizations can (and do) conflict with the least-common-denominator requirements that an ABI (and/or interoperability) would impose.
Put differently: the MPI standard intentionally allows performance optimizations that preclude the possibility of an ABI.
All this being said, it should be noted that MPI applications are source code compatible. You can re-compile a correct MPI app with any MPI implementation and it will work just fine. That's a hugely important feature: users can take their apps to entirely different environments, recompile them, and run.
MPI just doesn’t support binary compatibility.
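For example, a plain, implementation-agnostic program like the sketch below compiles and runs unchanged whether you build it with Open MPI's or MPICH's mpicc wrapper; only the resulting binary is tied to a particular implementation:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```

Build and launch it with the same implementation's wrapper compiler and mpiexec; just don't expect the executable produced by one implementation to run under another.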
These are not necessarily reasons that everyone likes, but they are reasons why the MPI Forum has decided not to support an ABI.
In my next post, I’ll discuss how MPI defines its interactions with languages other than C and Fortran.
Thanks a lot Jeff! That clarifies some points! Can’t wait for your next post!
Glad it helped! Sorry it took so long to post; I'm already working on part 2 of this post and should have it ready by the end of this week.
Thanks Jeff, that definitely clarifies some problems I have been having; moving on to part 2 now.