I gave two mini-talks during my speaking slot, the first of which was entitled: Crazy ideas about revamping MPI_INIT and MPI_FINALIZE.
This question is inspired by the fact that the “count” parameter to MPI_SEND and MPI_RECV (and friends) is an “int” in C, which is typically a signed 4-byte integer, meaning that its largest positive value is 2³¹ − 1, or about 2 billion.
However, this is the wrong question.
The right question is: can MPI send and receive messages with more than 2 billion elements?
After a metric ton of work by the entire community, Open MPI has released version 1.7.5.
Among the zillions of minor updates and enhancements are two major new features:
- MPI-3.0 conformance
- OpenSHMEM support (Linux only)
See this post on the Open MPI announcement list for more details.
Now that we’re just starting into the MPI-3.0 era, what’s next?
The MPI Forum is still having active meetings. What is left to do? Isn’t MPI “done”?
Nope. MPI is an ever-changing standard to meet the needs of HPC. And since HPC keeps changing, so does MPI.
Jeff Hammond at Argonne tells me that there’s some confusion in the user community about MPI and C++. I explained how/why we got here in my first post; let Jeff (Hammond) and me now explain what this means to you.
The short version is: DON’T PANIC.
MPI implementations that provided the C++ bindings will likely continue to do so for quite a while. I know that we have no intention of removing them from Open MPI any time soon, for example. The MPICH guys have told me the same.
I’ll discuss below what this means to both applications that are written in C++, and applications that use the MPI C++ bindings.