Process affinity is a hot topic. As commodity servers grow more and more complex internally (think: NUMA and NUNA), placing and binding individual MPI processes to specific processor, cache, and memory resources is becoming increasingly important to delivered application performance.
MPI implementations have long offered options for laying out MPI processes across the resources allocated to the job. Such options typically include round-robin schemes by core or by server node. Additionally, MPI processes can be bound to individual processor cores (or even entire sockets).
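To make the two classic round-robin layouts concrete, here's a toy sketch in plain C (not Open MPI's actual mapping code; the node and core counts are made up for illustration) that computes where each rank would land under a "by core" versus a "by node" scheme:

```c
#include <stdio.h>

/* Toy illustration of two round-robin layouts for 8 ranks on
 * 2 nodes with 4 cores each (made-up numbers, not real code). */
int main(void)
{
    const int num_nodes = 2, cores_per_node = 4, num_ranks = 8;

    for (int rank = 0; rank < num_ranks; rank++) {
        /* "By core": fill all cores of node 0, then move to node 1, ... */
        int bycore_node = rank / cores_per_node;
        int bycore_core = rank % cores_per_node;

        /* "By node": deal ranks out across nodes like playing cards */
        int bynode_node = rank % num_nodes;
        int bynode_core = rank / num_nodes;

        printf("rank %d: by-core -> node %d core %d, by-node -> node %d core %d\n",
               rank, bycore_node, bycore_core, bynode_node, bynode_core);
    }
    return 0;
}
```

The "by core" scheme packs neighboring ranks onto the same node (good for communication locality between adjacent ranks), while the "by node" scheme spreads them out (good for per-rank memory bandwidth).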
Today caps a long-standing effort among Josh Hursey, Terry Dontje, Ralph Castain, and me (all developers in the Open MPI community) to revamp the processor affinity system in Open MPI.
The first implementation of the Location Aware Mapping Algorithm (LAMA) for process mapping, binding, and ordering has been committed to the Open MPI SVN trunk. LAMA provides a whole new level of processor affinity control to the end user.
Tags: HPC, hwloc, mpi, NUMA, Open MPI, process affinity
In prior blog posts, I’ve talked about the implications of registered memory for both MPI applications and implementations.
Here’s another fun implication that was discovered within the last few months by Nathan Hjelm and Samuel Gutierrez out at Los Alamos National Laboratory: registered memory imbalances.
As an interesting side note: as far as we can tell, no other MPI implementation attempts either to balance registered memory across MPI processes or to handle the performance implications of grossly imbalanced registered memory consumption.
Let’s review a few key points before defining what registered memory imbalances are.
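(For readers new to the term: on OpenFabrics-style networks, "registered memory" is memory that has been pinned and registered with the network adapter so that the NIC can DMA to and from it directly, typically via the verbs ibv_reg_mr() call. Here's a minimal sketch of what registration looks like; it assumes a protection domain pd has already been created via ibv_alloc_pd(), and it is emphatically not Open MPI's actual registration code.)

```c
#include <stdlib.h>
#include <infiniband/verbs.h>   /* link with -libverbs */

/* Pin and register a buffer with the HCA so the NIC can DMA to/from it.
 * Assumes "pd" was created earlier with ibv_alloc_pd() on an opened device.
 * Every byte registered this way counts against the process's total
 * registered (pinned) memory. */
static struct ibv_mr *register_buffer(struct ibv_pd *pd, size_t len)
{
    void *buf = malloc(len);
    if (buf == NULL) {
        return NULL;
    }

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (mr == NULL) {
        free(buf);   /* registration failed; release the buffer */
    }
    return mr;
}
```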
Tags: HPC, mpi, NUMA, RDMA
Today we feature a deep-dive guest post from Ralph Castain, Senior Architect in the Advanced R&D group at Greenplum, an EMC company.
Jeff is lazy this week, so he asked that I provide some notes on the process binding options available in the Open MPI (OMPI) v1.5 release series.
First, though, a caveat: the binding options in the v1.5 series are pretty much the same as in the prior v1.4 series. However, future releases (beginning with the v1.7 series) will have significantly different options that provide a broader array of controls. I won’t address those here, but will do so in a later post.
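In the meantime, regardless of which binding options you pass to mpirun, you can always ask the OS from inside each MPI process what actually happened. Here's a minimal, Linux-specific sketch (not part of Open MPI; compile it with your MPI wrapper compiler, e.g. mpicc):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    cpu_set_t mask;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask Linux which processor cores this process is allowed to run on */
    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        char list[4096] = "";
        int pos = 0;
        for (int c = 0; c < CPU_SETSIZE && pos < (int) sizeof(list) - 16; c++) {
            if (CPU_ISSET(c, &mask)) {
                pos += snprintf(list + pos, sizeof(list) - pos, "%d ", c);
            }
        }
        printf("Rank %d is allowed on core(s): %s\n", rank, list);
    } else {
        printf("Rank %d: could not query affinity\n", rank);
    }

    MPI_Finalize();
    return 0;
}
```

If every rank reports the full set of cores, no binding happened; if each rank reports a single core, you got per-core binding.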
Tags: HPC, hwloc, mpi, NUMA, Open MPI, process affinity, processor affinity
In the vein of awesome software releases (ahem…), Hardware Locality (hwloc) v1.2.1 has been released. As the “.1” implies, this is a bug-fix release that cleans up a bunch of little things that crept into the 1.2 series. A full list of the newsworthy items can be found here.
But more awesome than that is the fact that hwloc 1.3rc1 has also been released. The hwloc 1.3 series brings in some major new features, listed below.
Tags: HPC, hwloc, mpi, NUMA
There was a great comment chain on my prior post (“Unexpected Linux Memory Migration”), which brought out a number of good points. Let me clarify a few things from my post:
- My comments were definitely about HPC-style applications, which are admittedly a small subset of the applications that run on Linux. It is probably fair to say that the OS’s treatment of memory affinity will be just fine for most (non-HPC) applications.
- Note, however, that Microsoft Windows and Solaris do retain memory affinity information when pages are swapped out. When the pages are swapped back in, if they were bound to a specific locality before swapping, they are restored to that same locality. This is why I was a bit surprised by Linux’s behavior.
- More specifically, Microsoft Windows and Solaris seem to treat memory locality as a binding decision — Linux treats it as a hint.
- Many (most?) HPC applications are designed not to cause paging. However, at least some do. A side point of this blog is that HPC is becoming commoditized — not everyone is out at the bleeding edge (meaning: some people willingly violate the “do not page” HPC mantra and are willing to give up a little performance in exchange for the other benefits that swapping provides).
To be clear, Open MPI has a few cases where it has very specific memory affinity needs that almost certainly fall outside the realm of just about any OS’s default memory placement scheme. My point is that other applications may have similar requirements, too, particularly as core counts go up and communication between threads / processes on different cores becomes more common.
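For the curious, here is roughly what that kind of explicit placement looks like with hwloc's memory binding API. This is just an illustrative sketch using hwloc 1.x-era calls (and an arbitrary 64 MB allocation), not Open MPI's actual code: allocate a buffer and ask for it to be physically placed on the first NUMA node.

```c
#include <stdio.h>
#include <hwloc.h>   /* link with -lhwloc */

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* Use the first NUMA node if there is one; otherwise the whole machine */
    hwloc_obj_t node  = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NODE, 0);
    hwloc_obj_t where = node ? node : hwloc_get_root_obj(topo);

    /* Allocate 64 MB and ask for it to be bound to that node's memory */
    size_t len = 64 * 1024 * 1024;
    void *buf = hwloc_alloc_membind(topo, len, where->cpuset,
                                    HWLOC_MEMBIND_BIND, 0);
    if (buf == NULL) {
        fprintf(stderr, "membind allocation failed (or is unsupported)\n");
        hwloc_topology_destroy(topo);
        return 1;
    }

    /* ... put latency- or bandwidth-sensitive data structures in buf ... */

    hwloc_free(topo, buf, len);
    hwloc_topology_destroy(topo);
    return 0;
}
```

The same call family (hwloc_set_area_membind() and friends) can also bind memory that was allocated elsewhere, which is closer to what an MPI implementation has to do with user buffers.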
Tags: HPC, hwloc, Linux, mpi, NUMA, process affinity