I just ran across a great blog entry about SGE debuting topology-aware scheduling. Dan Templeton does a nice job of describing the need for processor topology-aware job scheduling within a server. Many MPI jobs fit exactly within his description of applications that have “serious resource needs” — they typically require lots of CPU and/or network (or other I/O). Hence, scheduling an MPI job intelligently, not only across the network between servers but also across the networks and resources inside each server, is pretty darn important. It’s all about location, location, location!
Particularly as core counts in individual servers are going up.
Particularly as networks get more complicated inside individual servers.
Particularly if heterogeneous computing inside a single server becomes popular.
Particularly as resources are now pretty much guaranteed to be non-uniform within an individual server.
These are exactly the reasons that, even though I’m a network middleware developer, I spend time with server-specific projects like hwloc — you really have to take a holistic approach in order to maximize performance.
Tags: HPC, hwloc, mpi, NUMA, NUNA
(This blog entry was co-written by Brice Goglin and Samuel Thibault from the INRIA Runtime Team.)
We’re pleased to announce a new open source software project: Hardware Locality (or “hwloc”, for short). The hwloc software discovers and maps the NUMA nodes, shared caches, and processor sockets, cores, and threads of Linux/Unix and Windows servers. The resulting topological information can be displayed graphically or conveyed programmatically through a C language API. Applications (and middleware) that use this information can optimize their performance in a variety of ways, including tuning computational kernels to fit cache sizes and utilizing data locality-aware algorithms.
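For a taste of what the C API looks like, here’s a minimal sketch that loads the topology and counts a few object types. It assumes a reasonably recent hwloc, where the NUMA level is named HWLOC_OBJ_NUMANODE (the earliest releases called it HWLOC_OBJ_NODE):

```c
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    /* Discover the topology of the machine we are running on */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Count objects at a few interesting levels of the topology tree */
    int numas = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_NUMANODE);
    int cores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
    int pus   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);

    printf("NUMA nodes: %d, cores: %d, hardware threads: %d\n",
           numas, cores, pus);

    hwloc_topology_destroy(topology);
    return 0;
}
```

Compile with something like gcc topo.c $(pkg-config --cflags --libs hwloc). The same API also lets you walk the topology tree and bind processes or threads to specific cores.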
hwloc actually represents the merger of two prior open source software projects:
- libtopology, a package for discovering and reporting the internal processor and cache topology in Unix and Windows servers.
- Portable Linux Processor Affinity (PLPA), a package for working around compatibility issues in Linux’s processor affinity interfaces.
Tags: HPC, mpi, NUMA, process affinity
Everything old is new again — NUMA is back!
With NUMA going mainstream, high-performance software — MPI applications and otherwise — might need to be re-tuned to maintain its current performance levels.
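To make that concrete, consider the classic first-touch effect: on Linux, a page of memory is typically committed on the NUMA node of the thread that first writes it. Initializing an array serially from one thread can therefore strand all of its pages on one node, leaving every other core with remote accesses. Here’s a minimal C/OpenMP sketch of the re-tuning (array size and loop bodies are just illustrative):

```c
#include <stdlib.h>

#define N (64 * 1024 * 1024)

int main(void)
{
    double *a = malloc(N * sizeof(*a));

    /* First touch: initialize in parallel with the same static
     * schedule the compute loop will use, so each page is committed
     * on the NUMA node of the thread that will actually use it. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 0.0;

    /* Compute loop: same schedule, so accesses stay mostly local */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 2.0 * a[i] + 1.0;

    free(a);
    return 0;
}
```

Note that this only pays off if the threads are pinned to cores (e.g., via OMP_PROC_BIND), which is exactly the kind of topology information hwloc exposes.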
A less-acknowledged aspect of HPC systems is the multiple levels of networks that data must traverse to get from MPI process A to MPI process B. This heterogeneous, multi-level network is going to become more important (again) to your applications’ overall performance, especially as per-server core counts increase.
That is, it’s not only going to be about the bandwidth and latency of your “Ethermyriband” network. It’s also going to be about the network (or networks!) inside each compute server.
A Cisco colleague of mine (hi Ted!) previously coined a term that is quite apropos for what HPC applications now need to target: it’s no longer just about NUMA — the NUMA interconnect is only one of the networks involved.
Think bigger: the issue is really about Non-Uniform Network Access (NUNA).
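One way to see NUNA for yourself is a simple ping-pong between two MPI ranks, run with different placements: both ranks on the same socket, on different sockets in the same server, and on different servers. A minimal sketch (message size and iteration count are arbitrary; rank placement is assumed to be controlled through your MPI launcher’s binding options):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[4096] = {0};
    const int iters = 10000;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 and rank 1 bounce a message back and forth */
    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("average round trip: %0.2f usec\n",
               elapsed / iters * 1e6);

    MPI_Finalize();
    return 0;
}
```

Run it a few times with different process placements and watch the round-trip time change, even though “the network” (in the traditional sense) did not.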
Tags: HPC, mpi, NUMA, NUNA, process affinity