Cisco Blog > High Performance Computing Networking

The MPI C++ Bindings

What a strange position I find myself in: the C++ bindings have become somewhat of a divisive issue in the MPI Forum.  There are basically 3 groups in the Forum:

  1. Those who want to keep the C++ bindings deprecated.  Meaning: do not delete them, but do not add any C++ bindings for new MPI-3 functions.
  2. Those who want to un-deprecate the C++ bindings.  Meaning: add C++ bindings for all new MPI-3 functions.
  3. Those who want to delete the C++ bindings.  Meaning: kill.  Axe.  Demolish.  Remove.  Never speak of them again.

Let me explain.


Shared Receive Queues

In my last post, I talked about the so-called eager RDMA optimization, and its effects on resource consumption vs. latency optimization.

Let’s talk about another optimization: shared receive queues.

Shared receive queues are not a new idea, and certainly not exclusive to MPI implementations.  They’re a way for multiple senders to send to a single receiver while only consuming resources from a common pool.


MPI tradeoffs: space vs. time

@brockpalen asked me a question on Twitter:

@jsquyres [can you discuss] common #MPI implementation assumptions made for performance and/or resource constraints?

Good question.  MPI implementations are full of trade-offs between performance and resource consumption.  Let’s discuss a few easy ones.


More MPI-3 newness: const

Way back in the MPI-2.2 timeframe, a proposal was introduced to add the C keyword “const” to all relevant MPI API parameters.  The proposal was discussed at great length.  The main idea was twofold:

  • Provide a stronger semantic statement about which parameter contents MPI could change, and which it should not.  This mainly applies to user choice buffers (e.g., the choice buffer argument in MPI_SEND).
  • Be more friendly to languages that use const(-like constructs) more than C.  The original proposal was actually from Microsoft, whose goal was to provide higher quality C# MPI bindings.

Additionally, the (not deprecated at the time) official MPI C++ bindings have had const since the mid-1990s — so why not include them in the C bindings?


New things in MPI-3: MPI_Count

The count parameter exists in many MPI API functions: MPI_SEND, MPI_RECV, MPI_TYPE_CREATE_STRUCT, etc.  In conjunction with the datatype parameter, the count parameter is often used to effectively represent the size of a message.  As a concrete example, the language-neutral prototype for MPI_SEND is:

MPI_SEND(buf, count, datatype, dest, tag, comm)

The buf parameter specifies where the message is in the sender’s memory, and the count and datatype arguments indicate its layout (and therefore size).

Since MPI-1, the count parameter has been an integer (int in C, INTEGER in Fortran).  This meant that the largest count you could express in a single function call was 2³¹, or about 2 billion.  Since MPI-1 was introduced in 1994, machines — particularly commodity machines used in parallel computing environments — have grown.  2 billion began to seem like a fairly arbitrary, and sometimes distasteful, limitation.

The MPI Forum just recently passed ticket #265, formally introducing the MPI_Count datatype to alleviate the 2B limitation.
