Cisco Blog > High Performance Computing Networking

Open MPI v1.5.2 released

We’re very pleased to release Open MPI version 1.5.2 today.  The v1.5 series is our “feature development” series; this release includes lots of tasty new features; see the full announcement here.

Here’s an abbreviated list of new features:

  • Now using Hardware Locality (hwloc) for affinity and topology information
  • Added ummunotify support for OpenFabrics-based transports.  See the README for more details.
  • Added the OMPI_Affinity_str() optional user-level API function (i.e., the “affinity” MPI extension).  See the Open MPI README for more details.
  • Added support for ARM architectures.
  • Updated ROMIO from MPICH v1.3.1 (plus one additional patch).
  • Updated the Voltaire FCA component with bug fixes and new functionality, including support for FCA version 2.1.
  • Added new “bfo” PML that provides failover on OpenFabrics networks.
  • Added the MPI_ROOT environment variable in the Open MPI Linux SRPM for customers who use the BPS and LSF batch managers.
  • Added Solaris-specific chip detection and performance improvements.
  • Added more FTB/CIFTS support.
  • Added the btl_tcp_if_seq MCA parameter to select a different ethernet interface for each MPI process on a node.  This parameter is only useful with virtual ethernet interfaces on a single network card (e.g., when using virtual interfaces to give each process dedicated hardware resources on the NIC).
  • Added new mtl_mx_board and mtl_mx_endpoint MCA parameters.
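
MCA parameters such as btl_tcp_if_seq can be set on the mpirun command line with the --mca flag. A hypothetical invocation might look like the following; the interface names are placeholders for illustration, not real configuration:

```shell
# Assign each local MPI process its own virtual ethernet interface,
# round-robin, via the comma-delimited btl_tcp_if_seq list.
# (eth0.1 .. eth0.4 are assumed names -- substitute your own.)
mpirun -np 4 --mca btl_tcp_if_seq eth0.1,eth0.2,eth0.3,eth0.4 ./my_mpi_app
```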


Unexpected Linux memory migration

I learned something disturbing earlier this week: if you bind memory in Linux to a particular NUMA node and that memory is later paged out, it loses that binding when it is paged back in.

Yowza!

Core counts are going up, and server memory networks are getting more complex; we’re effectively increasing the NUMA-ness of memory.  The specific placement of your data in memory is becoming (much) more important; it’s all about location, Location, LOCATION!

But unless you are very, very careful, your data may not be in the location that you think it is — even if you thought you had bound it to a specific NUMA node.
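
One defensive tactic, sketched below as my own suggestion rather than anything prescribed by Linux: pin the buffer with mlock(2) so its pages can never be paged out in the first place, and therefore can never lose a NUMA binding you applied earlier (e.g., via libnuma's numa_alloc_onnode() or mbind(2), not shown here).

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Allocate a buffer and pin it in physical memory with mlock(2).
 * Pinned pages cannot be swapped out, so a NUMA binding applied to
 * them cannot be silently discarded on page-in.
 * Returns the buffer, or NULL on failure. */
static void *alloc_pinned(size_t len)
{
    void *buf = malloc(len);
    if (buf == NULL) {
        return NULL;
    }
    if (mlock(buf, len) != 0) {
        /* Most common cause: RLIMIT_MEMLOCK is too low (see ulimit -l). */
        perror("mlock");
        free(buf);
        return NULL;
    }
    return buf;
}
```

Note that mlock'ed memory counts against RLIMIT_MEMLOCK, so pinning large application buffers usually requires raising that limit first.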


Making MPI survive process failures

Arguably, one of the biggest weaknesses of MPI is its lack of resilience — most (if not all) MPI implementations will kill an entire MPI job if any individual process dies.  Contrast this with TCP sockets, for example: if the process on one side of a socket suddenly goes away, the peer just gets an error on its now-stale socket; the rest of the application keeps running.

This lack of resilience is not entirely the fault of MPI implementations; the MPI standard itself lacks some critical definitions about behavior when one or more processes die.

I talked to Joshua Hursey, Postdoctoral Research Associate at Oak Ridge National Laboratory and a leading member of the MPI Forum’s Fault Tolerance Working Group, to find out what is being done to make MPI more resilient.


MPI Programming Mistakes

I’ve seen many users make lots of different kinds of MPI programming mistakes.

Some are common newbie mistakes.  Others are common intermediate-level mistakes.  Still others are incredibly subtle mistakes buried deep in program logic that took sophisticated debugging tools to figure out (race conditions, buffer overflows, etc.).

In 2007, I wrote a pair of magazine columns listing 10 common MPI programming mistakes (see this PDF for part 1 and this PDF for part 2).  Indeed, we still see users asking about some of these mistakes on the Open MPI users’ mailing list.
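
As one illustration (a sketch of my own, not necessarily one of the mistakes from those columns): the classic head-to-head send deadlock, where two ranks both call MPI_Send before either posts a receive.  Whether it hangs depends on whether the implementation buffers the message internally, which makes it an insidious "works on my machine, hangs on yours" bug.

```c
#include <mpi.h>

#define N (1 << 20)  /* large enough to defeat "eager" internal buffering */

static int sendbuf[N], recvbuf[N];

/* Run with exactly 2 processes. */
int main(int argc, char **argv)
{
    int rank, size, peer;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    peer = (rank == 0) ? 1 : 0;

    if (size == 2) {
        /* BUGGY version: both ranks send first, then receive.  If the
         * messages are too large to buffer internally, both MPI_Send
         * calls block and the job deadlocks:
         *
         *   MPI_Send(sendbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD);
         *   MPI_Recv(recvbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD,
         *            MPI_STATUS_IGNORE);
         */

        /* CORRECT version: let MPI pair the send and receive safely. */
        MPI_Sendrecv(sendbuf, N, MPI_INT, peer, 0,
                     recvbuf, N, MPI_INT, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

Nonblocking sends (MPI_Isend) or a rank-dependent send/receive ordering are equally valid fixes.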

What mistakes do you see your users making with MPI?  How can we — the MPI community — better educate users to avoid these kinds of common mistakes?  Post your thoughts in the comments.


MPI Forum Roundup

We just finished up another MPI Forum meeting earlier this week, hosted at the Cisco node 0 facility in San Jose, CA.  A lot of the working groups are making tangible progress and bringing their work back to the full forum for review and discussion.  Sometimes the working group reports are accepted and moved forward towards standardization; other times, the full Forum provides feedback and guidance, and then sends the working group back to committee to keep hashing out details.  This is pretty typical stuff for a standards body.

This week, we had a first vote (out of two total) on the MPI_MPROBE proposal.  It passed the vote, and will likely pass its next vote in March, meaning that it will become part of the MPI 3.0 draft standard.

MPI_MPROBE closes an important race condition: in a multithreaded MPI process, a message matched by MPI_PROBE can be consumed by a receive in another thread before the probing thread posts its own MPI_RECV.  MPI_MPROBE instead removes the matched message from the matching queue and returns a handle to it, so only a corresponding MPI_MRECV can complete it.
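
A sketch of the interface in use, based on the proposal as it was headed into MPI-3.0 (so treat the exact signatures as subject to change at the time of this writing).  The usual reason to probe at all is to receive a message whose size is not known in advance:

```c
#include <mpi.h>
#include <stdlib.h>

/* Receive a message of unknown size without the probe/recv race.
 * MPI_Mprobe matches AND dequeues the message, handing back a handle
 * (MPI_Message) that only MPI_Mrecv can complete -- so no other
 * thread in this process can steal the message out from under us. */
void recv_unknown_size(int src, int tag, MPI_Comm comm)
{
    MPI_Message msg;
    MPI_Status status;
    int count;

    MPI_Mprobe(src, tag, comm, &msg, &status);   /* match + dequeue  */
    MPI_Get_count(&status, MPI_BYTE, &count);    /* learn the length */

    char *buf = malloc(count);
    MPI_Mrecv(buf, count, MPI_BYTE, &msg, &status);  /* complete it  */

    /* ... use buf ... */
    free(buf);
}
```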
