Process affinity is a hot topic. With commodity servers getting more and more complex internally (think: NUMA and NUNA), placing and binding individual MPI processes to specific processor, cache, and memory resources is becoming quite important in terms of delivered application performance.
MPI implementations have long offered options for laying out MPI processes across the resources allocated for the job. Such options typically included round-robin schemes by core or by server node. Additionally, MPI processes can be bound to individual processor cores (and even sockets).
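As a concrete illustration, the round-robin and binding schemes described above are typically expressed as mpirun command-line options along these lines (a sketch only: exact option names differ between Open MPI release series, and `./my_mpi_app` is a placeholder application name):

```shell
# Newer-style Open MPI syntax: separate "map" and "bind" controls.

# Lay out ranks round-robin by core, binding each rank to its core:
mpirun --map-by core --bind-to core -np 8 ./my_mpi_app

# Lay out ranks round-robin across server nodes instead:
mpirun --map-by node --bind-to core -np 8 ./my_mpi_app

# Bind each rank to a whole socket rather than a single core:
mpirun --map-by socket --bind-to socket -np 8 ./my_mpi_app

# Print where each rank actually landed:
mpirun --report-bindings -np 8 ./my_mpi_app
```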
Today caps a long-standing effort among Josh Hursey, Terry Dontje, Ralph Castain, and me (all developers in the Open MPI community) to revamp the processor affinity system in Open MPI.
The first implementation of the Location Aware Mapping Algorithm (LAMA) for process mapping, binding, and ordering has been committed to the Open MPI SVN trunk. LAMA provides a whole new level of processor affinity control to the end user.
Tags: HPC, hwloc, mpi, NUMA, NUNA, Open MPI, process affinity
It’s finally out! The Architecture of Open Source Applications, Volume II, is now available in dead tree form (PDFs will be available for sale soon, I’m told).
Additionally, all content from the book will also be freely available on aosabook.org next week sometime (!).
But know this: all royalties from the sales of this book go to Amnesty International. So buy a copy; it’s for a good cause.
Both volumes 1 and 2 are excellent educational material for seeing how other well-known open source applications have been architected. What better way to learn than to see how successful, widely-used open source software packages were designed? Even better, after you read about each package, you can go look at the source code itself to further grok the issues.
Tags: HPC, mpi, Open MPI, open source
Today we feature a deep-dive guest post from Ralph Castain, Senior Architect in the Advanced R&D group at Greenplum, an EMC company.
Jeff is lazy this week, so he asked that I provide some notes on the process binding options available in the Open MPI (OMPI) v1.5 release series.
First, though, a caveat. The binding options in the v1.5 series are pretty much the same as in the prior v1.4 series. However, future releases (beginning with the v1.7 series) will have significantly different options providing a broader array of controls. I won’t address those here, but will do so in a later post.
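For a taste of the v1.4/v1.5-series syntax Ralph describes, the layout and binding controls look roughly like this (a sketch; consult the mpirun(1) man page for your exact release, and note that `./my_mpi_app` is a placeholder):

```shell
# v1.4/v1.5-era style: separate switches for process layout and binding.

# Round-robin ranks by core, or by node:
mpirun -np 8 -bycore ./my_mpi_app
mpirun -np 8 -bynode ./my_mpi_app

# Bind each rank to a single core, or to a whole socket:
mpirun -np 8 -bind-to-core ./my_mpi_app
mpirun -np 8 -bind-to-socket ./my_mpi_app

# Show the resulting bindings when the job launches:
mpirun -np 8 -bind-to-core -report-bindings ./my_mpi_app
```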
Tags: HPC, hwloc, mpi, NUMA, Open MPI, process affinity, processor affinity
Let me tell you a reason why open source and open communities are great: information sharing.
Let me explain…
I am Cisco’s representative to the Open MPI project, a middleware implementation of the Message Passing Interface (MPI) standard that facilitates big number crunching and parallel programming. It’s a fairly large, complex code base: Ohloh says that there are over 674,000 lines of code. Open MPI is portable to a wide variety of platforms and network types.
However, supporting all the things that MPI is supposed to support, and providing the same experience on every platform and network, can be quite challenging. For example, a user posted a problem to our mailing list the other day about a specific feature not working properly on OS X.
Tags: HPC, mpi, MPICH2, Open MPI, open source
I’m sure most everyone has heard already, but the K supercomputer has been upgraded and now reaches over 10 petaflops. Wow!
10.51 petaflops, actually, so if you round up, you can say that they “turned it up to 11.” Ahem.
We’ll actually have Shinji Sumimoto from the K team speaking during the Open MPI BOF at SC’11. Rolf vandeVaart from NVIDIA will also be discussing their role in Open MPI during the BOF.
We have the 12:15-1:15pm timeslot on Wednesday (room TCC 303); come join us to hear about the present status and future plans for Open MPI.
Tags: HPC, Open MPI, Supercomputing