- MPI_ESP(): The “do what I meant, not what my code says” function. It serves as a hint to the MPI implementation that the executing code is likely incorrect, and that the implementation should instead do whatever it thinks the programmer really intended.
- MPI_Encourage(): A watered-down version of MPI_Progress().
- MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again. Good for benchmarks.
Have you ever wondered how an MPI implementation picks network paths and allocates resources? It’s a pretty complicated (set of) issue(s), actually.
An MPI implementation must tread the fine line between performance and resource consumption. If the implementation chooses poorly, it risks poor performance and/or the wrath of the user. If the implementation chooses well, users won’t notice at all — they silently enjoy good performance.
It’s a thankless job, but someone’s got to do it.
If ever I doubted that MPI was good for the world, I think that all I would need to do is remind myself of this commit that I made to the Open MPI source code repository today. It was a single-character change — changing a 0 to a 1. But the commit log message was Tolstoyan in length:
- 87 lines of text
- 736 words
- 4225 characters
Go ahead — read the commit message. I double-dog dare you.
That tome of a commit message both represents several months of on-and-off work on a single bug and details the hard-won knowledge that was required to understand why changing a 0 to a 1 fixed it.
I just ran across a great blog entry about SGE debuting topology-aware scheduling. Dan Templeton does a great job of describing the need for processor topology-aware job scheduling within a server. Many MPI jobs fit exactly within his description of applications that have “serious resource needs” — they typically require lots of CPU and/or network (or other I/O). Hence, scheduling an MPI job intelligently not only across the network, but also across the resources inside each server, is pretty darn important. It’s all about location, location, location!
Particularly as core counts in individual servers are going up.
Particularly as networks get more complicated inside individual servers.
Particularly if heterogeneous computing inside a single server becomes popular.
Particularly as resources are now pretty much guaranteed to be non-uniform within an individual server.
These are exactly the reasons that, even though I’m a network middleware developer, I spend time with server-specific projects like hwloc — you really have to take a holistic approach in order to maximize performance.
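To make the “location, location, location” point a little more concrete, here is a minimal sketch of the kind of thing topology-aware middleware can do with hwloc: discover the server’s layout and bind the current process to a specific core. This is illustrative only — it is not SGE’s or Open MPI’s actual logic, a real scheduler would choose the core based on locality to the NIC, NUMA node, or the job’s other processes, and hwloc’s API details may vary slightly between releases.

```c
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    /* Discover what this server looks like: sockets, caches, cores, threads */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Pick a core.  (A real scheduler would pick one based on locality,
       not simply the first core it finds.) */
    hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
    if (core != NULL && hwloc_set_cpubind(topology, core->cpuset, 0) == 0) {
        printf("Bound this process to core #%u\n", core->logical_index);
    }

    hwloc_topology_destroy(topology);
    return 0;
}
```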
(This blog entry was co-written by Brice Goglin and Samuel Thibault from the INRIA Runtime Team.)
We’re pleased to announce a new open source software project: Hardware Locality (or “hwloc”, for short). The hwloc software discovers and maps the NUMA nodes, shared caches, and processor sockets, cores, and threads of Linux/Unix and Windows servers. The resulting topological information can be displayed graphically or conveyed programmatically through a C language API. Applications (and middleware) that use this information can optimize their performance in a variety of ways, including tuning computational cores to fit cache sizes and utilizing data locality-aware algorithms.
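As a rough illustration of what the C API looks like (a sketch only — function and object-type names here follow current hwloc releases and may differ slightly in older versions), a program can load the local topology, count cores and hardware threads, and walk up the object tree to see the caches and sockets that contain a given core:

```c
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    /* Discover and load the topology of the machine we're running on */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* How many cores and hardware threads (PUs) did we find? */
    printf("%d cores, %d hardware threads\n",
           hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE),
           hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU));

    /* Walk upward from the first core to see what contains it:
       caches, socket, NUMA node, and finally the whole machine */
    hwloc_obj_t obj = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
    for (; obj != NULL; obj = obj->parent)
        printf("  %s\n", hwloc_obj_type_string(obj->type));

    hwloc_topology_destroy(topology);
    return 0;
}
```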
hwloc actually represents the merger of two prior open source software projects:
- libtopology, a package for discovering and reporting the internal processor and cache topology in Unix and Windows servers.
- Portable Linux Processor Affinity (PLPA), a package for solving Linux topological processor binding compatibility issues.