
Process Affinity in OMPI v1.7 (part 2)

September 11, 2012 at 5:00 am PST

In my last post, I described the Simple mode of Open MPI v1.7's process affinity system.

The Simple mode is actually quite flexible, and we anticipate that it will meet most users’ needs. However, some users will need more flexibility. That’s what the Expert mode is for.

Before jumping into the Expert mode, though, let me describe two more features of the revamped v1.7 affinity system.


Process Affinity in OMPI v1.7 (part 1)

September 7, 2012 at 11:32 am PST

In my last post, I mentioned that we just finished a complete revamp of the Open MPI process affinity system, and provided only a few details as to what we actually did.

I did link to an SVN commit message, but I'll wager that few readers, if any, actually read it. :-)

Much of what is in the Open MPI v1.6.x series is the same as what Ralph Castain described in a prior blog post.  I’ll describe below what we changed for the v1.7 series.


Taking MPI Process Affinity to the Next Level

August 31, 2012 at 1:33 pm PST

Process affinity is a hot topic. With commodity servers getting more and more complex internally (think: NUMA and NUNA), placing and binding individual MPI processes to specific processor, cache, and memory resources is becoming quite important for delivered application performance.

MPI implementations have long offered options for laying out MPI processes across the resources allocated for the job. Such options typically include round-robin schemes by core or by server node. Additionally, MPI processes can be bound to individual processor cores (or even to entire sockets).

Today caps a long-standing effort by Josh Hursey, Terry Dontje, Ralph Castain, and me (all developers in the Open MPI community) to revamp the processor affinity system in Open MPI.

The first implementation of the Location Aware Mapping Algorithm (LAMA) for process mapping, binding, and ordering has been committed to the Open MPI SVN trunk.  LAMA provides a whole new level of processor affinity control to the end user.


MPI: Messages, Not Streams

August 27, 2012 at 6:39 am PST

Periodically, new MPI developers get confused about MPI because they’re coming from environments where they’re used to dealing with streams for inter-process communication: TCP sockets, bi-directional POSIX pipes, etc.

Streams are simple flows of bytes. For example, when you write a 32-byte buffer down a TCP socket, it's just an in-order sequence of bytes. When the receiver tries to read that data, it may get some, all, or none of those 32 bytes, depending on timing.

MPI presents a simpler abstraction to applications: the application receives nothing until an entire incoming message has arrived.

Let me explain.
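As a quick illustration (this is my own minimal sketch, not code from the post), the receiver below posts a single MPI_Recv for 32 integers, and the call completes only when the whole message has arrived; there is no partial-read loop the way there is with recv() on a TCP socket.

/* Minimal sketch: MPI's whole-message semantics vs. a byte stream.
 * Rank 0 sends 32 ints; rank 1's MPI_Recv returns only once the entire
 * message is available -- never "some of the bytes."
 * Compile with mpicc and run with 2 processes (e.g., mpirun -np 2 ./a.out). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, buf[32];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (0 == rank) {
        for (int i = 0; i < 32; ++i) {
            buf[i] = i;
        }
        MPI_Send(buf, 32, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (1 == rank) {
        MPI_Status status;
        int count;

        /* Blocks until the complete 32-int message has been received. */
        MPI_Recv(buf, 32, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        printf("Received a complete message of %d ints\n", count);
    }

    MPI_Finalize();
    return 0;
}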


Which to use: tags or communicators?

August 20, 2012 at 7:21 am PST

A common question from new MPI developers is: which should I use to separate my messages — tags or communicators?

If you didn't already know, MPI offers two key abstractions for message passing:

  • Message delineation. If you're a TCP sockets programmer, you're used to receiving streams of bytes. For example, if you try to receive 32 bytes, you might get only 17, meaning that you have to loop around and try again to receive the remaining 15 bytes. MPI doesn't have streams; MPI only has atomic messages. For example, if you send 16 integers, the receiver will receive 16 integers (not 15, not 17; the receiver gets all 16 integers at once).
  • Message separation.  As mentioned in the first sentence, MPI offers two key mechanisms for separating messages: tags and communicators.  We’ll dive into both in this blog post; I’ll explain the differences and when you might want to use one over the other.
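As a hedged preview (my own minimal sketch, not code from the post), the example below shows both mechanisms side by side: two messages kept apart by tag on MPI_COMM_WORLD, and a third kept in a completely separate matching context via MPI_Comm_dup, so it can never match the other receives even though it reuses the same tag.

/* Minimal sketch: separating messages by tag vs. by communicator.
 * Run with 2 processes (e.g., mpirun -np 2 ./a.out).  The tag names and
 * the "libcomm" communicator are illustrative, not from the post. */
#include <mpi.h>
#include <stdio.h>

#define TAG_DATA    1
#define TAG_CONTROL 2

int main(int argc, char *argv[])
{
    int rank, data = 42, ctrl = 7, lib_msg = 99;
    MPI_Comm libcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* A duplicated communicator is a separate matching context: sends on
     * libcomm can only match receives posted on libcomm. */
    MPI_Comm_dup(MPI_COMM_WORLD, &libcomm);

    if (0 == rank) {
        /* Same communicator, different tags. */
        MPI_Send(&data, 1, MPI_INT, 1, TAG_DATA, MPI_COMM_WORLD);
        MPI_Send(&ctrl, 1, MPI_INT, 1, TAG_CONTROL, MPI_COMM_WORLD);
        /* Same tag as the first send, but a different communicator. */
        MPI_Send(&lib_msg, 1, MPI_INT, 1, TAG_DATA, libcomm);
    } else if (1 == rank) {
        int a, b, c;

        MPI_Recv(&a, 1, MPI_INT, 0, TAG_DATA, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&b, 1, MPI_INT, 0, TAG_CONTROL, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&c, 1, MPI_INT, 0, TAG_DATA, libcomm, MPI_STATUS_IGNORE);
        printf("data=%d control=%d library=%d\n", a, b, c);
    }

    MPI_Comm_free(&libcomm);
    MPI_Finalize();
    return 0;
}

Tags are cheap and fine for separating message types within one module; a dedicated communicator is the safer choice when independent pieces of code (such as a library) must not interfere with each other's message matching.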
