Arguably, one of the biggest weaknesses of MPI is its lack of resilience — most (if not all) MPI implementations will kill an entire MPI job if any individual process dies. This is in contrast to the reliability of TCP sockets, for example: if a process on one side of a socket suddenly goes away, the OS tears down the connection and the peer is notified (a subsequent read returns EOF or an error), so the surviving process can react and carry on.
This lack of resilience is not entirely the fault of MPI implementations; the MPI standard itself lacks some critical definitions about behavior when one or more processes die.
I’ve seen users make many different kinds of MPI programming mistakes. Some are common newbie mistakes; others are common intermediate-level mistakes. Still others are incredibly subtle bugs buried deep in application logic that took sophisticated debugging tools to track down (race conditions, buffer overflows, etc.).
In 2007, I wrote a pair of magazine columns listing 10 common MPI programming mistakes (see this PDF for part 1 and this PDF for part 2). Indeed, we still see users asking about some of these mistakes on the Open MPI users’ mailing list.
What mistakes do you see your users making with MPI? How can we — the MPI community — better educate users to avoid these kinds of common mistakes? Post your thoughts in the comments.
We just finished up another MPI Forum meeting earlier this week, hosted at the Cisco node 0 facility in San Jose, CA. A lot of the working groups are making tangible progress and bringing their work back to the full Forum for review and discussion. Sometimes the working group reports are accepted and moved forward towards standardization; other times, the full Forum provides feedback and guidance, and then sends the working group back to committee to keep hashing out details. This is pretty typical stuff for a standards body.
This week, we had a first vote (out of two total) on the MPI_MPROBE proposal. It passed the vote, and will likely pass its next vote in March, meaning that it will become part of the MPI 3.0 draft standard.
MPI_MPROBE closes an important race condition vulnerability. In a multithreaded MPI process, a message found by MPI_PROBE on one thread can be matched and received by an MPI_RECV on a different thread before the probing thread posts its own receive. MPI_MPROBE fixes this by removing the matched message from the matching queue and returning a handle for it; only a matched receive (MPI_MRECV) on that specific handle can receive the message, so no other thread can steal it.
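As a rough illustration, here is a sketch of the matched-probe pattern for receiving a message of unknown size, using the MPI_Mprobe / MPI_Mrecv calls from the proposal (the function name `recv_unknown_size` and the use of MPI_INT are just for this example):

```c
/* Sketch: thread-safe receive of a message of unknown size using the
   matched probe from the MPI_MPROBE proposal (MPI-3). */
#include <mpi.h>
#include <stdlib.h>

void recv_unknown_size(int source, int tag, MPI_Comm comm)
{
    MPI_Message msg;
    MPI_Status status;
    int count;

    /* Unlike MPI_Probe, MPI_Mprobe removes the matched message from
       the matching queue, so a concurrent MPI_Recv on another thread
       cannot intercept it. */
    MPI_Mprobe(source, tag, comm, &msg, &status);
    MPI_Get_count(&status, MPI_INT, &count);

    int *buf = malloc(count * sizeof(int));

    /* MPI_Mrecv can only match the message handle returned by
       MPI_Mprobe -- the race window is gone. */
    MPI_Mrecv(buf, count, MPI_INT, &msg, &status);

    /* ...use buf... */
    free(buf);
}
```

With the older MPI_PROBE + MPI_RECV idiom, the gap between those two calls is exactly where another thread's receive could sneak in; the message handle eliminates that gap.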
Here’s a poll for readers: is your MPI IPv6-ready?
Many of you may not be using IP-based MPI network transports, but as HPC becomes more and more commoditized, IP-based MPI implementations may actually start gaining in importance. Not on ultra-high-end systems, of course. But you’d be surprised how many 4-, 8-, and 16-node Ethernet-based clusters are sold these days… particularly as core counts are increasing — a 16-node Westmere cluster is quite powerful!
Owners of such systems are typically running ISV-based MPI applications, or other “canned” parallel software. Most of them don’t use InfiniBand or other high-speed interconnect — they just use good old Ethernet with TCP as the underlying transport for their MPI.