Core counts are going up. Cisco’s C460 rack-mount server series, for example, can have up to 32 Nehalem EX cores. As a direct result, we may well be returning to the era of running more than one MPI job per server. This has long been true on “big iron” parallel resources, but commodity Linux HPC clusters have tended towards the one-MPI-job-per-server model in recent years.
Because of this trend, I have an open-ended question for MPI users and cluster administrators: how do you want to bind MPI processes to processors? For example: what kinds of binding patterns do you want? How many hyperthreads / cores / sockets do you want each process to bind to? How do you want to specify what process binds where? What level of granularity of control do you want / need? (…and so on)
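To make those questions concrete, here is a minimal sketch of the kinds of controls I have in mind, using Open MPI’s mpirun options from the v1.4/v1.5 series as one example (./my_app and the process counts are placeholders):

```shell
# Bind each MPI process to its own core, mapping processes round-robin by core:
mpirun -np 8 --bycore --bind-to-core --report-bindings ./my_app

# Or bind each process to a whole socket, mapping round-robin by socket:
mpirun -np 4 --bysocket --bind-to-socket --report-bindings ./my_app
```

The --report-bindings option prints where each process actually landed, which is handy for checking that you got the pattern you asked for.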
We are finding that every user we ask seems to have slightly different answers. What do you think? Let me know in the comments below.
Tags: HPC, mpi, NUMA, process affinity
Representing over a year of research, development, and testing, the Open MPI team is extremely pleased to release version 1.5. Read the full announcement here. Version 1.5 is chock-full of new features and countless little enhancements. We hope you’ll enjoy it!
Open MPI v1.4.3 was also released just a few days ago. It’s mainly a bug-fix release that increases the stability of the time-tested v1.4 series.
Some of you may be wondering, “Why the heck would they put out a point release and then a major new revision within a few days of each other?”
How fortunate that you ask! Let me explain…
The following is an abbreviated list of my favorite concepts and specific functions that never made it into an official version of the MPI specification:
- MPI_ESP(): The “do what I meant, not what my code says” function. The function is intended as a hint to the MPI implementation that the executing code is likely incorrect, and the implementation should do whatever it feels that the programmer really intended it to do.
- MPI_Encourage(): A watered-down version of MPI_Progress().
- MPI_Alltoalltoall(): Every process sends to every other process, and then, just to be sure, everyone sends to everyone else again. Good for benchmarks.
Tags: humor, mpi
Just a digression from the normal technical talk here… We finally launched the new Cisco blogs site. Woo hoo!!
Most importantly, I wanted to let you all know that the landing page and RSS feed URLs have both changed. There are HTTP redirects in place for both (which I noticed this morning caused a bunch of old RSS entries to be marked as “new” — oops), but just in case you need to know them:
You don’t need to update your bookmarks / RSS readers, but you might want to anyway just because all the cool kids are doing it.
Tags: blog, blogs, social media
Have you ever wondered how an MPI implementation picks network paths and allocates resources? It’s actually a pretty complicated set of issues.
An MPI implementation must tread the fine line between performance and resource consumption. If the implementation chooses poorly, it risks poor performance and/or the wrath of the user. If the implementation chooses well, users won’t notice at all — they silently enjoy good performance.
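As a small illustration of the knobs involved, here is how a user can override Open MPI’s transport selection with MCA parameters (./my_app is a placeholder; normally the implementation picks the best available paths on its own):

```shell
# Restrict Open MPI to the loopback, shared-memory, and TCP transports,
# instead of letting it choose from everything that is available:
mpirun -np 4 --mca btl self,sm,tcp ./my_app
```

The fact that most users never need to touch parameters like these is exactly the point: when the implementation chooses well, nobody notices.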
It’s a thankless job, but someone’s got to do it.
Tags: HPC, mpi, RDMA