Way back in the MPI-2.2 timeframe, a proposal was introduced to add the C keyword “const” to all relevant MPI API parameters. The proposal was discussed at great length. The main idea was twofold:
- Provide a stronger semantic statement about which parameter contents MPI could change, and which it should not. This mainly applies to user choice buffers (e.g., the choice buffer argument in MPI_SEND).
- Be friendlier to languages that use const (or const-like constructs) more heavily than C does. The original proposal was actually from Microsoft, whose goal was to provide higher-quality C# MPI bindings.
Additionally, the official MPI C++ bindings (not yet deprecated at the time) had included const since the mid-1990s, so why not include it in the C bindings?
Tags: HPC, mpi, MPI-3
The count parameter exists in many MPI API functions: MPI_SEND, MPI_RECV, MPI_TYPE_CREATE_STRUCT, etc. In conjunction with the datatype parameter, the count parameter is often used to effectively represent the size of a message. As a concrete example, the language-neutral prototype for MPI_SEND is:
MPI_SEND(buf, count, datatype, dest, tag, comm)
The buf parameter specifies where the message is in the sender’s memory, and the count and datatype arguments indicate its layout (and therefore size).
Since MPI-1, the count parameter has been an integer (int in C, INTEGER in Fortran). This meant that the largest count you could express in a single function call was 2^31 − 1, or about 2 billion. Since MPI-1 was introduced in 1994, machines — particularly commodity machines used in parallel computing environments — have grown. 2 billion began to seem like a fairly arbitrary, and sometimes distasteful, limitation.
The MPI Forum recently passed ticket #265, formally introducing the MPI_Count type to alleviate the 2-billion limitation.
Tags: HPC, mpi, MPI-3
Today we feature a guest post from Torsten Hoefler, the Performance Modeling and Simulation lead of the Blue Waters project at NCSA, and Adjunct Assistant Professor in the Computer Science department at the University of Illinois at Urbana-Champaign (UIUC).
I’m sure everybody has heard about network topologies, such as 2D or 3D tori, fat-trees, Kautz networks, and Clos networks. It can be argued that even multi-core nodes (if run in “MPI everywhere” mode) form a separate “hierarchical network”. And you have probably also wondered how to map your communication onto such network topologies in a portable way.
MPI has offered support for such optimized mappings since the old days of MPI-1. The process topology functionality is probably one of the most overlooked useful features of MPI. We have to admit that it had some issues and was clumsy to use, but it was finally fixed in MPI-2.2.
Tags: HPC, MPI-3
Here’s some MPI quick-bites for this week:
- The MPI_MPROBE proposal was voted into MPI-3 a few weeks ago. Yay! (see this quick slideshow for an explanation of what MPI_MPROBE is)
- The Hardware Locality project just released hwloc v1.2. This new version now includes distance metrics between objects in the topology tree. W00t!
- Support for large counts looks good for getting passed into MPI-3; it’s up for its first formal reading at the upcoming Forum meeting.
- The same is true for the new MPI-3 one-sided stuff; it, too, is up for its first formal reading at the upcoming Forum meeting (they haven’t sent around their new PDF yet, but they will within a week or so — stay tuned here for updates).
- Likewise, the new Fortran-08 bindings are up for their first Forum presentation next meeting. We solved all of the outstanding Fortran issues with the F77 and F90 bindings… with the possible exception of non-blocking communication code movement. :-( That one is still being debated with the Fortran language standardization body — it’s a complicated issue!
- Finally — the new MPI tools interface chapter is up for a first formal reading, too.
That’s a lot of first formal readings in one meeting…
Tags: HPC, hwloc, mpi, MPI Forum, MPI-3